Difference between revisions of "Release and support cycle"
Revision as of 12:12, 29 August 2011
The latest STS version documented on this Wiki is 3.4.1 (see Category:Joomla! 3.4). The latest LTS version documented on this Wiki is 2.5.28 (see Category:Joomla! 2.5).
Components Banners Banners
Revision as of 08:16, 13 January 2013 by Tom Hutchison (Talk | contribs)
- Name. The name of the Banner. Editing Option - 'click' on the name to open the Banner for editing.
- Purchase Type. The purchase type of the banner. This is used to indicate how the banner client purchased the display time.
Difference between revisions of "Components Weblinks Links Edit"
Revision as of 13:37, 11 July 2010
Contents
Overview
This screen lets you create new Web Links and edit existing ones.
How to Access
Navigate to the Web Links Manager and create a new Web Link or open an existing one for editing.
- URL. The URL of the Web Link.
- State. TBD
- The order in which Links are displayed can also be changed in the Web Links Manager.
- Access Level. Who has access to this item. Current options are:
- Public: Everyone has access
- Registered: Only registered users have access
- Special: Only users with author status or higher have access
- You can change an item's Access Level by clicking on the icon in the column.
- Language. TBD
JQuery/order
order($columns)
Defined in
libraries/joomla/database/query.php
Importing
jimport( 'joomla.database.query' );
Source Body
public function order($columns)
{
    if (is_null($this->_order)) {
        $this->_order = new JQueryElement('ORDER BY', $columns);
    } else {
        $this->_order->append($columns);
    }
    return $this;
}
Examples
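No user-contributed examples survive on this page, so the following is a minimal hypothetical sketch based only on the method body above; how the query object is instantiated is an assumption, not taken from the original page.

jimport('joomla.database.query');

$query = new JQuery();          // hypothetical instantiation of the query class
$query->order('ordering ASC');  // the first call creates the ORDER BY element
$query->order('title DESC');    // later calls append: ORDER BY ordering ASC, title DESC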
Difference between revisions of "Upgrading 1.6.5 to 1.7"
Revision as of 19:49, 29 April 2013
Contents
Updating Joomla! 1.6 Intro
IMPORTANT!
If you are not currently on Joomla! 1.6.5/1.6.6, use Upgrading 1.6 from an existing 1.6x version to update your site to 1.6.5/1.6.6, then resume the process of updating to 1.7
- OR
If you are still using Joomla! 1.5, please see these instructions. Follow these simple instructions next time you upgrade — now it's time to keep your Joomla website up-to-date and safe!
Update using the Installation Manager AND the database as needed.
Manual Update - Uploading:
Congratulations, your site is now updated to Joomla 1.7.
Difference between revisions of "Help system"
Revision as of 20:34, 8 August 2012
This document describes the help system as implemented in Joomla 1.
Introduction to the Joomla help system:
- Read more about how the help system works.
- Read more about setting up your own help server.
- Read more about using local help files.
- Read more about translating and localising help screens.
- Read more about how the administrator help area works.
- Read more about adding help support to Joomla extensions.
Timezone form field type
The timezone form field type provides a drop-down list of time zones. An example field definition:
<field name="mytimezone" type="timezone" default="-10" label="Select a timezone" description="" />
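A component could then read the saved selection back from its parameters. A minimal hypothetical sketch — the $params object and where it comes from are assumptions, not part of the original page:

$timezone = $params->get('mytimezone', '-10'); // returns the stored value, falling back to the field default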
Difference between revisions of "Code 08004"
Revision as of 08:43, 25 March
The extension described here is a powerful constituent relationship management system designed for not-for-profit/nongovernmental organizations. It currently integrates with Joomla! 1.0.x and 1.5 in legacy mode, and the 2.1 version will be native to Joomla! 1.5.
Difference between revisions of "Extensions Module Manager Syndication Feeds"
Revision as of 14:29, 11 December 2013
Contents
Description
This Module creates an RSS Feed link for the page. This allows a User to create a newsfeed for the current page. An example is shown below.
How to Access
To 'add' a new Syndication Feeds module or 'edit' an existing Syndication Feeds module, navigate to the Module Manager:
- Select Extensions → Module Manager from the drop-down menu.
Screenshot
Details
- Title: Module must have a title
- Module: Smart Syndication Module that creates a Syndicated Feed for the page where the Module is displayed.
- Display Text: (Yes/No). If set to 'Yes', text will be displayed next to the icon
- Text: If 'Display Text' is activated, the text entered here will be displayed next to the icon along with the RSS Link. If this field is left empty, the default text displayed will be picked from the site language ini file.
- Feed Format: (RSS 2.0/Atom 1.0). Select the format for the Syndication Feed.
Quick Tips
Not all menu item types actually provide an RSS feed. The feed link will be displayed on Category Blog and Featured Articles menu item types (probably others). On pages for which no feed is available, this module will not display.
Help screens
Revision as of 14:40, 23 January 2010 by Chris Davenport (Talk | contribs)
- if at all it should be called just "joomla.help.15", no screen, like its siblings: joomla.credits, joomla.glossary, joomla.support, joomla.whatsnew10, joomla.whatsnew15 --CirTap (talk • contribs) 13:42, 14 May 2008 (EDT)
I looked for the description of this feature and didn't find the expected answer here: Screen.menus.edit.15 --Embeix 03:46, 7 August 2008 (EDT)
Hi. Not sure what you are referring to. I don't see this layout in 1.5.5. Mark Dexter 11:53, 7 August 2008 (EDT)
Difference between revisions of "Installing a template" From Joomla! Documentation Revision as of 07:23, 28 June 2013 (view source)Tom Hutchison (Talk | contribs)m (RVer usage)← Older edit Latest revision as of 18:27, 17 September 2013 (view source) Tom Hutchison (Talk | contribs) m (update) (4 intermediate revisions by 2}} Latest revision as of 18:27, 17 September | https://docs.joomla.org/index.php?title=Installing_a_template&diff=cur&oldid=101156 | 2015-06-30T06:20:37 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "Getting Started with Joomla!"
Revision as of 13:00, 14 February 2011
Contents
- 1 Background
- 2 Summary
- 3 Setting up a Joomla! site
- 4 Looking after a Joomla! site
- 5 Take Joomla! beyond the basics
- 6 Further Information
- 7 Index to the documents in this series
This series of documents introduces Joomla! to people who have not previously used it. This is an introduction to the series and aims to help you know how best to use them.
Background
Which versions are covered?
There are two versions in common use:-
- Version 1.5 is well established and widely used.
- Version 1.6 was released early in 2011.
There are many similarities between the versions but they are presented here to newcomers to Joomla! in separate documents in order to avoid confusion. This is the introduction to version 1.5. There is a separate series for version 1.6.
Who is it written for?
The series is for anyone new to Joomla!.
- Use Joomla! 1.5 on your own computer - things you need to know
These are hands-on documents to familiarise you with the background to setting up a new site. There is emphasis on thinking about the content and appearance before doing the mechanics of creating a site.
The mechanics of setting up a new site
Difference between revisions of "Components Tags Manager Options"
Revision as of 05:46, 9 May 2013
How to Access
- Click the Global Configuration button in the Control Panel and click the Tags button on left side panel, or
- Select Components → Tags → Options from the drop-down menus.
Description
Used to set global configuration options for the Tags component.
Screenshot
Details
Permissions
- Configure. Allows users in the group to edit the options of this extension.
- Access Administration Interface. Allows users in the group to access the administration interface for this extension.
- Create. Allows users in the group to create any content in this extension.
- Delete. Allows users in the group to delete any content in this extension.
- Edit. Allows users in the group to edit any content in this extension.
- Edit State. Allows users in the group to change the state of any content in this extension.
Developing a MVC Component/Creating an Administrator Interface
Contents
- 1 Introduction
- 2 Site / Admin
- 3 MVC pattern interaction
- 3.1 Example roll-out
- 3.2 Mapping the example to a new to develop Joomla! component
- 3.3 Essential Interaction Parameters
- 3.4 How to add interaction
- 4 Conclusion
- 5 Articles in this Series
- 6 Contributors
Introduction
In the first three tutorials, we have developed a MVC component that retrieves its data from a table in the database. Currently, there is no way to add data to the database except to do it manually using another tool. In the next articles of this tutorial, we will develop an administrator section for our component which will make it possible to manage the entries in the database.
This article, Part 4 - Creating an Administrator Interface, will be an article with no new source code for our Hello component, but we will go into some basic details of the MVC pattern. In the frontend solution (site section) we have developed the first part of our component. The frontend solution is based upon default Controllers, Views and Templates, and you were taken by the hand to trust the default handling of the code. This is going to change in the backend or administration section of our Hello component.
Site / Admin
Joomla! is a content management system. The frontend is used for presenting the users with content and allowing certain logged-in users to manipulate the content; the backend is responsible for administrating the website framework (structuring / managing / controlling / maintaining). This division into site content and administration is fundamental to the Joomla! architecture.
Entrance points
Browse your live site at the service provider (or on your own server) with a terminal program and do some browsing on your site after installing the frontend com_hello component. If you have clicked through the right sub-directories you may have noticed that our Hello component is to be found twice:
<root>/components/com_hello
<root>/administrator/components/com_hello
These two sub-directories link to the previously explained site content and administration. Administrator interactions explicitly go via the administrator sub-directory; guest or registered users will enter via the default entrance on the root:
<root>/index.php
Administrator users will have to log in before reaching the administration section. The same Controller, View and Model naming can (and sometimes must) be used as in the site section.
MVC pattern interaction
Example roll-out
For explanation purposes an easy-to-grasp example will be used. A library has the main function of lending books to registered users. Simply laid out, three tables are applicable:
- users
- books
- relation
Let's keep it all very simple. The users are addressed by Id and Name. The books are identified by Id and Title, and the relation contains both Ids and the date of lending.
The example is carefully chosen and will help in explaining the Joomla! architecture in more detail. The administrative side of our Hello component is not even that complex, with only one table to manage. After the explanation in this chapter it should be easy to follow the tutorial in the succeeding chapters.
Mapping the example to a new-to-develop Joomla! component
The library lends books to registered users only (hint: if ($user->registered) {} ).
Just like our frontend Hello component, for this library component only the default view is being used in the frontend. It lists the relational table, left joining the other two tables to obtain a human-readable list of lent books; a sketch of such a query follows the sample output below.
Alice | One flew over ... | 12 aug 2009 Alice | Celeb talk | 12 aug 2009 Mark | Joomla! | 15 aug 2009
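A minimal sketch of the query behind such a list. The table and column names are assumptions based on the example tables above (only idu, idb and idr appear later in this article):

$sql = "SELECT u.name, b.title, r.date"
     . " FROM #__library_relation AS r"
     . " LEFT JOIN #__library_users AS u ON u.id = r.idu"   // resolve the user name
     . " LEFT JOIN #__library_books AS b ON b.id = r.idb";  // resolve the book title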
With respect to the administration part it is important to understand that we have one default and three dedicated views, each controlling three tasks:
- <Component name default view>
- User administration
  - Add
  - Change
  - Delete
- Book administration
  - Add
  - Change
  - Delete
- Relation administration
  - Add
  - Change
  - Delete
Controller
Models
Views
Separate views are of course needed for the user, book and relation administration. Sharing template parts needs to be defined somewhere in the first lines of your file, and you have to make sure that the returned path is correct:

$pathToGeneral = strchr(dirname(__FILE__), dirname($_SERVER['SCRIPT_NAME']));
$pathToGeneral = str_replace(dirname($_SERVER['SCRIPT_NAME']), '.', $pathToGeneral);
$pathToGeneral = $pathToGeneral . "/../../general/";   // <-- returning path from current position
...
<?php require_once $pathToGeneral . 'navigation.header.php'; ?>
<P>Do something
<?php require_once $pathToGeneral . 'navigation.footer.php'; ?>
Note: Giving the forms logical names instead of the default naming is of course handy when having a lot of different views. Having that many default forms could make you easily lose oversight. On the other hand, the view.html.php cannot be renamed, and you are always forced to look at the directory name for the view name you are working in.
Essential Interaction Parameters
Everything is in place:
- main and dedicated controllers;
- main and dedicated models;
- different views and their forms.
Just one big question remains: How to use them!
Three parameters are mandatory for letting Joomla! do its job:
- option
- controller
- task
These three parameters are almost self explaining ;). The option part when developing a component is easy: always assign your component name to it. For component development consider this one as a constant; of course the Joomla! engine has more options than only components.
The controller and task parameters can be manipulated within your component any way you want.
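A minimal sketch (Joomla! 1.5 request API) of how a component can read these two parameters; the default values shown are assumptions:

$controller = JRequest::getCmd('controller', '');  // dedicated controller, if any
$task       = JRequest::getCmd('task', 'display'); // task to hand to that controller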
How it all works together
Looking at the simplified MVC picture of Joomla!, the logical flow of interaction goes the following way (a code sketch follows the list):
- What is my entrance point?
- The Joomla! engine discovers a logged-in administrator and sets the entrance point to <root>/administrator/index.php; otherwise it will go to the <root>/index.php entrance.
- What option is requested?
- The Joomla! engine reads the option variable and discovers that a component named <componentname> is requested. The entrance point becomes: <root>/administrator/components/com_<componentname>/<componentname>.php
- Is there need to access a dedicated controller?
- The Joomla! engine reads the controller variable and discovers that a dedicated controller is required: <root>/administrator/components/com_<componentname>/controllers/<dedicatedcontroller>.php
- Is there a task that needs to be addressed?
- The identified controller is handed the task value as parameter.
- The Model is derived from the controller and instantiated.
- The View and Form are set in the controller and initiated to become active.
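A hypothetical sketch of a component entry point wiring these steps together; the file, class and component names are illustrative, not taken from this article:

// <root>/administrator/components/com_library/library.php

// Load the base controller.
require_once JPATH_COMPONENT.DS.'controller.php';

// If a dedicated controller is requested, load it from the controllers/ directory.
$controllerName = JRequest::getCmd('controller');
if ($controllerName) {
    require_once JPATH_COMPONENT.DS.'controllers'.DS.$controllerName.'.php';
}

// Instantiate the (main or dedicated) controller and let it perform the requested task.
$classname  = 'LibraryController'.$controllerName;
$controller = new $classname();
$controller->execute(JRequest::getCmd('task'));
$controller->redirect();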
How to add interaction
Within HTML there are two common ways to interact with Joomla!:
- reference to a link
- form submission post / get
Link reference for the Joomla! engine
Again the activating tasks and resulting task definitions come into play. Most activating tasks are initiated by a link. In the case of the library example, the site section overview could be copied to the admin section, and all cells could become links for editing the specific database element.
The first row could be programmed in a loop in the template containing the following code:
$link = JRoute::_('index.php?option=com_library&controller=users&task=edit&cid=' . $row->idu);
echo "<td><a href=\"" . $link . "\">Alice</a></td>";

$link = JRoute::_('index.php?option=com_library&controller=books&task=edit&cid=' . $row->idb);
echo "<td><a href=\"" . $link . "\">One flew over ...</a></td>";

$link = JRoute::_('index.php?option=com_library&controller=relation&task=edit&cid=' . $row->idr);
echo "<td><a href=\"" . $link . "\">12 aug 2009</a></td>";
Within each clickable field the three mandatory parameters can be identified, plus one user parameter for handling the correct index in the controller task. These parameters are separated by '&'. Do not use spaces in your variables; this might screw up parameter handling in certain browsers.
[Alice] | [One flew over ...] | [12 aug 2009]
[Alice] | [Celeb talk] | [12 aug 2009]
[Mark] | [Joomla!] | [15 aug 2009]
Posting Form Data to the Joomla! Engine
After being initiated by an activating task, an input form view might be the result; a sketch of such a form follows below. Why use Post? Of course one can still look at the source code, but using Post instead of Get keeps the parameters out of sight, and this eliminates the first 90% of the earth's population. The other reason is more technical: simply explained, method="post" can contain more (complex) data.
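The snippet itself did not survive in this copy of the article; the following is a hypothetical reconstruction of such a form, with illustrative field values — note the three mandatory parameters travelling as hidden fields:

<form action="index.php" method="post">
    <!-- the three mandatory Joomla! parameters -->
    <input type="hidden" name="option" value="com_library" />
    <input type="hidden" name="controller" value="users" />
    <input type="hidden" name="task" value="save" />
    <!-- hypothetical payload field -->
    <input type="text" name="name" value="" />
    <input type="submit" value="Save" />
</form>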
Remark: In some developed components you may notice that developers have also added the view parameter. This is bad programming, since the controller(s) should take care of the view and the form.
Conclusion
The use of the three mandatory parameters and the different access points has been clarified. Let's do something with this knowledge and extend the Hello component to the administrative section in the succeeding articles of this tutorial.
Difference between revisions of "JRegistry::merge"
From Joomla! Documentation
Revision as of 20: /> | https://docs.joomla.org/index.php?title=API17:JRegistry::merge&diff=next&oldid=57514 | 2015-06-30T05:40:03 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "Creating an Authentication Plugin for Joomla"
Revision as of 11:56, 14 January 2011
A Google search on 'joomla onauthenticate example' returns this tutorial, so I suspect many people tasked with configuring a Joomla! 1.5 system to authenticate a user login against a database external to Joomla's will no doubt arrive here and use the code presented as a starting point to implementing their own auth plugin. At least that is what happened to me!
In this tutorial it says that the example is based on example.php, which can be found in the plugins/authentication directory of your Joomla installation.
After scratching my head and wondering if I could be any dumber than I already am, I noted the following differences between the code in the tutorial and the code in the plugins/authentication/example.php file in my Joomla 1.5.13 installation.
In fact, the code is so different that it may explain why I have had so much trouble implementing the simple process of authenticating against a table in another MySQL database on the same machine.
Would someone be so kind as to confirm that these differences are problematic or deconfuse me on this issue? Please bear in mind that I'm a perl guy drowning in php.
There are two issues:
1. The constructors for the plugin are defined differently.
Tutorial:
function plgAuthenticationMyauth(& $subject) {
    parent::__construct($subject);
}
example.php:
function plgAuthenticationExample(& $subject, $config) {
    parent::__construct($subject, $config);
}
2. The definitions of the parameters that onAuthenticate expects are different.
Tutorial:
example.php:
/**
 * This method should handle any authentication and report back to the subject
 *
 * @access public
 * @param array $credentials Array holding the user credentials
 * @param array $options Array of extra options
 * @param object $response Authentication response object
 * @return boolean
 * @since 1.5
 */
function onAuthenticate( $credentials, $options, &$response )
Invalid link
The link to is not displayed correctly.
Fixed. Thanks. Chris Davenport 23:45, 16 November 2009 (UTC)
Possible error in code
This section looks like it contains an error.
If I am not completely mistaken, the last else will override the "User does not exist" error, as there is no return in that if block?
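A minimal sketch of the control flow being discussed (hypothetical code, not the tutorial's actual listing; JAUTHENTICATE_STATUS_FAILURE is the 1.5-era constant):

if (!$result) {
    $response->status        = JAUTHENTICATE_STATUS_FAILURE;
    $response->error_message = 'User does not exist';
    return; // without this early return, the code below runs anyway and can overwrite the error
}
// ... password verification would continue here ...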
Difference between revisions of "JControllerForm::postSaveHook"
JControllerForm::postSaveHook
Description
Function that allows child controller access to model data after the data has been saved.
protected function postSaveHook(JModel &$model, $validData = array())
See also
JControllerForm::postSaveHook source code on BitBucket
Class JControllerForm
Subpackage Application
- Other versions of JControllerForm::postSaveHook
Examples
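A minimal hypothetical sketch (Joomla! 1.7-era API) of a child controller overriding this hook; the component name, class name and method usage are illustrative:

class FoosControllerFoo extends JControllerForm
{
    protected function postSaveHook(JModel &$model, $validData = array())
    {
        // Runs after the model has saved; $validData holds the validated form data.
        // Assumes the model exposes getItem(), as admin models typically do.
        $item = $model->getItem();
        // ... react to the saved record here ...
    }
}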
Glossary
This is a contents index for the Glossary page.
Note that pages added to this category will not appear on the Glossary page. See Talk:Glossary for instructions for adding glossary items.
Pages in category ‘Glossary’
The following 72 pages are in this category, out of 100 total.
Difference between revisions of "Components Messaging Inbox"
Revision as of 22:30, 4 August 2012
Contents
How to Access
Select Components → Messaging → Read Private Messages from the drop-down menu on the back-end of your Joomla! 1.6 installation.
- New. Creates a new message to be sent. A window called Write Private Message will open.
- Mark As Read. Marks the selected messages as Read. Select messages by checking the message's checkbox.
- Mark As Unread. Marks the selected messages as Unread. Select messages by checking the message's checkbox.
- Trash. Deletes the selected messages. Select messages by checking the message's checkbox.
- My Settings. Opens a pop-up window that allows you to change messaging settings.
- Options Opens a pop-up window that allows you to manage User Groups for this component.
Configuring Dispatcher Threads and Order Policy for Event Distribution
By default, GemFire uses multiple dispatcher threads to process region events simultaneously in a gateway sender queue for distribution between sites, or in an asynchronous event queue for distributing events for write-behind caching. With serial queues, you can also configure the ordering policy for dispatching those events.
By default, a gateway sender queue or asynchronous event queue uses 5 dispatcher threads per queue. This provides support for applications that have the ability to process queued events concurrently for distribution to another GemFire site or listener. If your application does not require concurrent distribution, or if you do not have enough resources to support the requirements of multiple dispatcher threads, then you can configure a single dispatcher thread to process a queue.
- Using Multiple Dispatcher Threads to Process a Queue
- Performance and Memory Considerations
- Configuring the Ordering Policy for Serial Queues
- Examples: Configuring Dispatcher Threads and Ordering Policy for a Serial Gateway Sender Queue
Using Multiple Dispatcher Threads to Process a Queue
When multiple dispatcher threads are configured for a parallel queue, GemFire simply uses multiple threads to process the contents of each individual queue. The total number of queues that are created is still determined by the number of GemFire members that host the region.
When multiple dispatcher threads are configured for a serial queue, GemFire creates an additional copy of the queue for each thread on each member that hosts the queue. To obtain the maximum throughput, increase the number of dispatcher threads until your network is saturated.
The following diagram illustrates a serial gateway sender queue that is configured with multiple dispatcher threads.
Performance and Memory Considerations
- Queue attributes are repeated for each copy of the queue that is created for a dispatcher thread. That is, each concurrent queue points to the same disk store, so the same disk directories are used. If persistence is enabled and overflow occurs, the threads that insert entries into the queues compete for the disk. This applies to application threads and dispatcher threads, so it can affect application performance.
- The maximum-queue-memory setting applies to each copy of the serial queue. If you configure 10 dispatcher threads and the maximum queue memory is set to 100MB, then the total maximum queue memory for the queue is 1000MB on each member that hosts the queue.
Configuring the Ordering Policy for Serial Queues
- key (default). All updates to the same key are distributed in order. GemFire preserves key ordering by placing all updates to the same key in the same dispatcher thread queue. You typically use key ordering when updates to entries have no relationship to each other, such as for an application that uses a single feeder to distribute stock updates to several other systems.
- thread. All region updates from a given thread are distributed in order. GemFire preserves thread ordering by placing all region updates from the same thread into the same dispatcher thread queue. In general, use thread ordering when updates to one region entry affect updates to another region entry.
- partition. All region events that share the same partitioning key are distributed in order. Specify partition ordering when applications use a PartitionResolver to implement custom partitioning. With partition ordering, all entries that share the same "partitioning key" (RoutingObject) are placed into the same dispatcher thread queue.
You cannot configure the order-policy for a parallel event queue, because parallel queues cannot preserve event ordering for regions. Only the ordering of events for a given partition (or in a given queue of a distributed region) can be preserved.
Examples: Configuring Dispatcher Threads and Ordering Policy for a Serial Gateway Sender Queue
- cache.xml configuration
<cache>
  <gateway-sender id="NY" parallel="false"
    remote-distributed-system-id="1"
    enable-persistence="true"
    disk-store-name="gateway-disk-store"
    maximum-queue-memory="200"
    dispatcher-threads="7"
    order-policy="key"/>
</cache>
- Java API configuration
Cache cache = new CacheFactory().create();
GatewaySenderFactory gateway = cache.createGatewaySenderFactory();
gateway.setParallel(false);
gateway.setPersistenceEnabled(true);
gateway.setDiskStoreName("gateway-disk-store");
gateway.setMaximumQueueMemory(200);
gateway.setDispatcherThreads(7);
gateway.setOrderPolicy(OrderPolicy.KEY);
GatewaySender sender = gateway.create("NY", "1");
sender.start();
- gfsh:
gfsh>create gateway-sender --id="NY" --parallel=false --remote-distributed-system-id="1" --enable-persistence=true --disk-store-name="gateway-disk-store" --maximum-queue-memory=200 --dispatcher-threads=7 --order-policy="key"
- cache.xml configuration
<cache>
  <async-event-queue id="sampleQueue" persistent="true"
    disk-store-name="async-disk-store"
    parallel="false"
    dispatcher-threads="7"
    order-policy="key">
    <async-event-listener>
      <class-name>MyAsyncEventListener</class-name>
    </async-event-listener>
  </async-event-queue>
</cache>
- Java API configuration
Cache cache = new CacheFactory().create();
AsyncEventQueueFactory factory = cache.createAsyncEventQueueFactory();
factory.setPersistent(true);
factory.setDiskStoreName("async-disk-store");
factory.setParallel(false);
factory.setDispatcherThreads(7);
factory.setOrderPolicy(OrderPolicy.KEY);
AsyncEventListener listener = new MyAsyncEventListener();
AsyncEventQueue sampleQueue = factory.create("customerWB", listener);
- gfsh:
gfsh>create async-event-queue --id="sampleQueue" --persistent=true --disk-store="async-disk-store" --parallel=false --dispatcher-threads=7 --order-policy="key" --listener=MyAsyncEventListener --listener-param=url#jdbc:db2:SAMPLE --listener-param=username#gfeadmin --listener-param=password#admin1
Deploying SIP Server for GIR
Contents
Genesys Interaction Recording (GIR) needs SIP Server for routing, call control and to initiate the recordings.
The following steps describe how to deploy and configure SIP Server for GIR, and how to configure the DNs for GIR.
SIP Server
- Install and configure SIP Server as described in the SIP Server Deployment Guide.
- In addition to the configuration described in the deployment guide, set the following SIP Server options:
VoIP Service DN
- Create a new MSML DN object and add the following parameters to the General tab:
- Number = The name of the MSML Server
- Type= Voice over IP Service
- Add the following parameters to the Annex tab of the new DN:
Agent DN
On the Agent's DN, set the following parameter:
- enable-agentlogin-presence to true
Agent Login
If you want to record your agent's screen during After Call Work (ACW) time, on the Annex of the Agent Login object, in the [TServer] section, set the wrap-up-time in seconds; for example, set wrap-up-time=10.
Activity Reporting
Tasks
Provides methods for retrieving information about task resources. Tasks are long running operations – for example, when you issue a purge request, the API creates a task object for it and provides a handle to the task. Since a cache purge can take a relatively long time, you can look at currently running tasks, get the task ID and handle, and check the status of the purge request. Potentially any API request that changes something (PUT, POST, DELETE) might return a task. This will be explicit in the API's method definition.
A single task object has the following general structure.
"last_updated": "2018-03-01T21:19:41.427Z",
"error_info": {
"details": [
{
"message": "string"
}
],
"message": "string",
"type": "string"
},
"end_time": "2018-03-01T21:19:41.427Z",
"begin_time": "2018-03-01T21:19:41.427Z",
"progress": 0,
"id": "string",
"complete": true,
"uri": "string",
"operation": "string",
"user": {
"id": "string",
"uri": "string"
},
"status": "string",
"target": {
"id": "string",
"uri": "string"
}
}
The following table describes the contents of a task object:
An error_info object can contain the following fields:
GET /tasks
Returns tasks that have been updated since a given date
This method can be used to page through list of tasks that have been updated after a specified date. The specified date should be in ISO-8601 format yyyy-MM-dd'T'HH:mm:ss.SSSZ; for example, 2015-01-31T:02:25:37:455 for January 31, 2015 at approximately 2:25, with millisecond granularity.
Parameters
Response Messages
Example requests:
GET /tasks?since=2015-02-01
GET /tasks?since=2015-02-01&count=20
GET /tasks?since=2015-02-01&count=20&page=2
GET /tasks/{task_id}
Retrieves information about a specified task.
You can disable the Office Online integration on your account. When you disable Office Online, you will no longer be able to preview and edit office documents in the Jottacloud web application.
Go to Connected apps settings to disable Office Online. Simply click the Disable button to disable the feature.
Note: if you disable the Office Online integration, we will not display previews of office documents shared with public links.
Walkthrough: Receiving and Putting Away in Basic Warehousing
In Microsoft Dynamics NAV, in basic warehousing where your location is set up to require put-away processing but not receive processing, you use the Inventory Put-away window to record and post put-away and receipt information for your inbound source documents. The inbound source document can be a purchase order, sales return order, inbound transfer order, or production order with output that is ready to be put away.
This walkthrough requires Microsoft Dynamics NAV with the CRONUS International Ltd. demonstration database installed.
Make yourself a warehouse employee at SILVER location by following these steps:
In the Search box, enter Warehouse Employees, and then choose the related link.
Choose the User ID field, and select your own user account in the Users window.
In the Location Code field, enter SILVER.
Select the Default field.
Story
Ellen, the warehouse manager at CRONUS International Ltd., sets up SILVER warehouse for basic put-away handling, where warehouse workers process individual inbound orders according to predefined bin structures. Alicia, the purchasing agent at CRONUS, creates a purchase order for loudspeakers from vendor 10000, and John, the warehouse worker, receives and puts the items away.
To set up the location
In the Search box, enter Locations, and then choose the related link.
Open the SILVER location card.
Select the Require Put-away check box.
Proceed to set up a default bin for the two item numbers to control where they are put away.
On the Navigate tab, in the Location group, choose Bins.
Select the first row, for bin S-01-0001, and then on the Home tab, in the Bins group, choose Contents.
Notice in the Bin Content window that item LS-75 is already set up as content in bin S-01-0001.
On the Home tab, in the New group, choose New.
Select the Fixed and the Default fields.
In the Item No. field, enter LS-81.
Creating the Purchase Order
Purchase orders are the most common type of inbound source document.
To create the purchase order
In the Search box, enter Purchase Orders, and then choose the related link.
On the Home tab, in the New group, choose New. Create a purchase order for vendor 10000 with two lines: 10 units of item LS-75 and 30 units of item LS-81.
On the Actions tab, in the Release group, choose Release.
The delivery of loudspeakers from vendor 10000 has arrived at SILVER warehouse, and John proceeds to put them away.
Receiving and Putting the Items Away
In the Inventory Put-away window, you can manage all inbound warehouse activities for a specific source document, such as a purchase order.
To receive and put the items away
In the Search box, enter Inventory Put-aways, and then choose the related link.
On the Home tab, in the New group, choose New.
Select the Source Document field, and then select Purchase Order.
Select the Source No. field, select the line for the purchase from vendor 10000, and then choose the OK button.
Alternatively, on the Actions tab, in the Functions group, choose Get Source Document, and then select the purchase order.
On the Actions tab, in the Functions group, choose Autofill Qty. to Handle.
Alternatively, in the Qty. to Handle field, enter 10 and 30 respectively on the two inventory put-away lines.
On the Actions tab, in the Posting group, choose Post, select Receive, and then choose the OK button.
The 40 loudspeakers are now registered as put away in bin S-01-0001, and a positive item ledger entry is created reflecting the posted purchase receipt.
See Also
Other Resources
Inventory Put-away
Location Card
How to: Put Items Away with Inventory Put-aways
How to: Set Up Basic Warehouses with Operations Areas
How to: Move Components to an Operation Area in Basic Warehousing
How to: Pick for Production in Basic Warehousing
How to: Move Items Ad Hoc in Basic Warehousing
Design Details: Inbound Warehouse Flow
Business Process Walkthroughs
Asset transfer
- Click the “transfer” command shown next to your assets
- Fill in transfer details such as the sending and receiving address, transfer amount, username and password, then click “confirm transfer”. You may adjust your transfer fee. The minimum miner reward for each transaction is 0.0001 ETP. Your transaction is likely to be confirmed more quickly if you set a higher miner fee.
One lab is assigned to one core. Thus, a (parallel) job with eight labs has eight processor cores allocated to it.
TBA take from old FAQ:
In order to submit multiple Matlab batch jobs, simply repeat this section for each job.
PCT provides an API that allows programmatic job submission.
5. Import the cluster configuration for the cluster you are running on:
If all validation stages succeed, then you are ready to submit jobs to MDCS.
Note: Also supports all keyword arguments supported by kombu.Producer.publish().
default_retry_delay = 180
Default time in seconds before a retry of the task should be executed. 3 minutes by default.
delay(*args, **kwargs)
Star argument version of apply_async(). Does not support the extra options enabled by apply_async().
rate_limit = None
None (no rate limit), '100/s' (hundred tasks a second), '100/m' (hundred tasks a minute), '100/h' (hundred tasks an hour).
reject_on_worker_lost = None
Even if acks_late is enabled, the worker will acknowledge tasks when the worker process executing them abruptly exits or is signaled (e.g., KILL/INT, etc.).
Setting this to true allows the message to be re-queued instead, so that the task will execute again by the same worker, or another worker.
Warning: Enabling this can cause message loops; make sure you know what you're doing.
replace(sig)
Replace this task, with a new task inheriting the task id.
track_started = False
If enabled the task will report its status as 'started' when the task is executed by a worker. Disabled by default as the normal behavior is to not report that level of granularity. Tasks are either pending, finished, or waiting to be retried.
Having a 'started' status can be useful for when there are long running tasks and there's a need to report what task is currently running.
The application default can be overridden using the task_track_started setting.
<Listener className="org.apache.geode.modules.session.catalina.PeerToPeerCacheLifecycleListener" locator="localhost[10334]" />
and the following entry within the <context> tag in the context.xml file:
<Manager className="org.apache.geode.
Microsoft IT implements a custom claims provider (case study)
Applies to: SharePoint Server 2010
Microsoft IT (MSIT) successfully designed and implemented a claims-based security model using Microsoft SharePoint Server 2010 for an internal human resources portal.
In this article
A flexible security model
SharePoint claims
Components of the human resources portal security model
Additional resources
A flexible security model
The human resources portal supports about 90,000 unique identities using a custom claims provider to implement compound claims and claims augmentation. By default, SharePoint Server 2010 supports simple, disjunctive claims that can only use the OR operator between assertions. To use compound claims that are conjunctive and disjunctive (supporting the use of the OR and AND operators between assertions), a custom claims provider is required. Claims augmentation was also a requirement to enable the combination of user data from an internally accessible data repository (Active Directory Domain Services) with information from external data repositories such as SAP. The custom claims provider was designed to explicitly secure and target specific content to each uniquely identified user. The custom claims provider was also required to support a feature called View Portal As. View Portal As enables the temporary delegation of one user’s identity to another user. This allows the user with the delegated identity to view the human resources portal as another user. The View Portal As feature is restricted to a subset of administrators who are working on the human resources portal, and is not available to most users who can access the portal.
SharePoint claims
SharePoint Server 2010 provides the benefits of simple claims-based authentication, which includes a secure identity management system that enables the configuration and management of user authentication, authorization, and auditing. Claims are made up of identity assertions that are encapsulated in security tokens that are granted to authenticated users, which enable users to access resources within SharePoint Server 2010 Web applications. Identity assertions are attributes that are associated with users. Assertions can include a user name, a role, an employee ID, and a variety of other attributes that can be used to determine authorization and permission levels for access to SharePoint Server 2010 Web application resources. Security tokens are created and managed by a Security Token Service (STS) that acts as an identity provider. An STS encapsulates a collection of assertions, based on attributes specified by a policy, into a security token that can then be used to authenticate and authorize a user. To create a security token, the STS must be able to locate valid credentials for a user in an attribute store. Active Directory Domain Services can be used as an attribute store for an STS.
Components of the human resources portal security model
To provide manageable, searchable, and secure access for about 90,000 users, the design of the human resources portal includes the implementation of the following components:
SharePoint groups
Audience targeting
FAST Search
SharePoint groups
To manage a deployment that includes about 90,000 users and an implementation that has to handle hundreds of compound claims, the design of the security model for the human resources portal includes SharePoint groups. A SharePoint group is a logical container for SharePoint Server 2010 users. By implementing SharePoint groups, administrators can create meaningful group names to identity collections of claim values. In addition, People Picker supports name resolution for SharePoint groups. Another reason to implement SharePoint groups is that SharePoint groups enable administrators to dynamically manage authentication and the use of the OR operator to combine multiple claims at the level of a SharePoint group.
Audience targeting
Enabling administrators to filter and scope content for specified groups of users is important in a deployment that supports about 90,000 users. The human resources portal design team used the User Profile Service Application to implement SharePoint groups for audience targeting. However, when the design team created a SharePoint group that contains claims from a custom claims provider, the design team did not assign a default permission level to the SharePoint group. When you create SharePoint groups, use the groups for security, and do not assign a permission level when you create them, the groups are automatically assigned a Limited Access permission level. The target audience processor ignores SharePoint groups that are created in this way. To resolve this issue, the design team discovered that if it assigned the Read level permission to a SharePoint group that contains claims from a custom claims provider, and then removed the Read level permission, the target audience processor correctly processed the SharePoint group. You only need to perform this procedure on one SharePoint group within the inheritance tree. You do not need to perform this procedure on every SharePoint group that contains a claim from a custom claims provider.
FAST Search
To provide content discoverability and a high level of search performance across a deployment that includes a very large number of users and network resources, the portal design team used Microsoft FAST Search Server 2010 for SharePoint hosted in a federated environment. The FAST Search Server 2010 for SharePoint deployment included the following components:
Query Search Service Applications
A content Search Service Application
A content source
The design team learned that custom claim type mappings must be registered in the FAST Search Server 2010 for SharePoint farm to enable search security trimming, and that any custom claims provider used in the human resources portal must also be deployed in the FAST Search Server 2010 for SharePoint farm. The design team also learned that you must confirm that the claim type mappings are registered, and that all claim IDs are the same across all of the SharePoint Server 2010 farms for the same claim type value pair. In addition, the design team learned that the following conditions must be met to ensure that the human resources portal is searchable from other intranet portals within the enterprise:
All other intranet portals that render human resources portal content are built on SharePoint Server 2010 and use claims-based Web applications and the Windows authentication method.
All other intranet portals that render content from the human resources portal are querying the same FAST Search index.
All other intranet portals that render content from the human resources portal have the custom claims provider for the human resources portal installed. Also, the custom claims provider must be installed in the same sequence as it was installed on the human resources portal.
The following diagram (MSIT SharePoint claims network diagram) illustrates the physical architecture for the MSIT human resources portal built on SharePoint Server 2010 and FAST Search Server 2010 for SharePoint, using a custom claims provider.
Additional resources
For more detailed information about implementing this deployment scenario, see Custom Claims-Based Security in SharePoint 2010.
SharePoint Server 2010 claims-based authentication is built on the Windows Identity Foundation (WIF), which is a set of .NET Framework classes. Claims-based authentication relies on standards such as WS-Federation and WS-Trust. For more information about WIF, see Windows Identity Foundation Simplifies User Access for Developers.
For prescriptive guidance about how to implement a custom claims provider, see the following TechNet and MSDN articles:
People Picker and claims provider planning (SharePoint Server 2010)
People Picker overview (SharePoint Server 2010)
Configure People Picker (SharePoint Server 2010)
Custom claims providers for People Picker (SharePoint Server 2010)
Claims Walkthrough: Writing Claims Providers for SharePoint 2010
How to: Create a Claims Provider
SPClaimProvider Class (SharePoint 2010)
Audience and content targeting planning (SharePoint Server 2010)
FAST Search Server 2010 for SharePoint
Create and set up the Content Search Service Application (FAST Search Server 2010 for SharePoint)
Create and set up the Query Search Service Application (FAST Search Server 2010 for SharePoint)
Enable queries from Microsoft SharePoint Server (FAST Search Server 2010 for SharePoint)
Create a FAST Search Center site (FAST Search Server 2010 for SharePoint)
Claims-Based Identity Term Definitions
How to Manage Conflicting Records for Configuration Manager Clients
Applies To: System Center Configuration Manager 2007, System Center Configuration Manager 2007 R2, System Center Configuration Manager 2007 R3, System Center Configuration Manager 2007 SP1, System Center Configuration Manager 2007 SP2
Microsoft System Center Configuration Manager 2007 uses the hardware ID to attempt to identify clients that might be duplicates and alert you to the conflicting records. For example, if you have to reinstall a computer, the hardware ID would be the same but the GUID used by Configuration Manager 2007 might be changed.
You can configure a site-wide setting to tell Configuration Manager 2007 whether to automatically create new records when it detects duplicate hardware IDs, or to allow you to decide when to merge, block, or create new client records. If you decide to manually manage duplicate records, you must manually resolve the conflicting records.
In addition to manually resolving conflicting records at the site level, you can resolve conflicts centrally in the entire hierarchy, if the entire branch of the hierarchy is configured to manually resolve conflicting records.
Important
If the entire branch of the hierarchy is not set to manually resolve conflicting records, the conflicting records will not appear in the parent site Configuration Manager 2007 console.
To change the site-wide settings for managing conflicting records
In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Site Management / <site code> - <site name>.
Right-click <site code> - <site name> and then click Properties.
On the Advanced tab, click either Manually resolve conflicting records or Automatically create new client records for duplicate hardware IDs.
Click OK.
To manually resolve conflicting records
In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Computer Management / Conflicting Records.
In the results pane, select one or more conflicting records.
In the Actions pane, select one of the following:
Merge to combine the newly detected record with the existing client record, creating one unified record.
New to create a new record for the conflicting client record.
Block to create a new record for the conflicting client record, but mark it as blocked.
See Also
Tasks
How to Block Configuration Manager Clients
Reference
Conflicting Records
Site Properties: Advanced Tab
For additional information, see Configuration Manager 2007 Information and Support.
To contact the documentation team, email [email protected]. | https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2007/bb693963(v=technet.10) | 2019-04-18T16:54:49 | CC-MAIN-2019-18 | 1555578517745.15 | [] | docs.microsoft.com |
Contents IT Service Management Previous Topic Next Topic Task SLA table Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Task SLA table The Task SLA [task_sla] table stores Task SLA records for the SLAs attached to particular tasks. For each task, attached SLAs are accessible in a related list on the Task's form. Figure 1. Task SLA table The SLA form for a task shows further details: Figure 2. Task SLA form Stage values The following Stage values are defined: In progress Cancelled Paused Completed Note: The Breached stage value is also available for systems either using the 2010 SLA engine, or running in compatibility mode. Timing information The Timings fields on the Task SLA contain the crucial information powered by the SLA Engine: Table 1. Task SLA Time-Based Fields Field Description Start time The time the SLA was started. Stop time The time the SLA ended. Breach time The time the SLA will breach, adjusted for business pause duration (for task SLAs with a schedule specified) or pause duration (for task SLAs with no schedule).Note: Breach time is the same as Planned end time. Actual Elapsed Time Time between start time and now (minus pause duration). Actual Elapsed Percentage Percentage of total SLA that has elapsed (minus pause duration). Actual Time Left Time remaining until SLA breach. Business Elapsed Time Time within the specified schedule between start time and now (minus pause duration). Business Elapsed Percentage Percentage of total SLA that has elapsed within the specified schedule (minus pause duration). Business Time Left Time within the schedule remaining until SLA breach. Original breach time The date/time the SLA would breach, as calculated when the SLA is first attached.Note: You may have to configure the form to see this field. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/madrid-it-service-management/page/product/service-level-management/reference/r_TaskSLATable.html | 2019-04-18T16:51:40 | CC-MAIN-2019-18 | 1555578517745.15 | [] | docs.servicenow.com |
Gradle 2.2 delivers some nice general features and improvements, as well as profound new capabilities for dependency management.
The addition of arbitrary “Component Selection Rules” and the modelling of module “replacement” continues to push the state of the art in JVM dependency management. Selection rules allow extremely fine grained, custom, conflict resolution strategies. Support for declaring module replacements allows Gradle to consider modules that have different identities but that conflict in some way during conflict resolution. This can be used to avoid ending up with duplicate copies of libraries at different versions due to their published coordinates changing over time or due to merging into other libraries entirely.
Support for the SonarQube code quality management platform has significantly improved in this release. The Sonar Runner no longer runs in the build process, which allows more control over its execution (e.g. memory settings) and the use of arbitrary versions of the Sonar Runner. This will allow leveraging of new Sonar features without updates to the plugin and more control over how Gradle integrates with Sonar.
The new support for “text resources”, added to the code quality plugins (e.g. the Checkstyle plugin), opens up new possibilities for sharing configuration/settings for code quality checks. More generally, support for “text resources” opens up new possibilities for obtaining and/or generating text to be used in the build process, typically as a file. While only in use by the code quality plugins at this release, this new mechanism will be leveraged by other tasks and plugins in future versions of Gradle.
Gradle 2.1 previously set the high watermark for contributions to Gradle with contributions by 18 different contributors. This release raises that high watermark to contributions by 23 different contributors. Thank you to all who have contributed and helped to make Gradle an even better build system.
We hope you enjoy Gradle 2.2.
Fileobjects representing relative paths
Here are the new features introduced in this Gradle release.
Fine tuning the dependency resolution process is even more powerful now with the use of component selection rules. These allow custom rules to be applied whenever multiple versions of a component are being evaluated. Using such rules, one can explicitly reject a version that might otherwise be accepted by the default version matching strategy.
configurations { conf { resolutionStrategy { componentSelection { // Accept the newest version that matches the dynamic selector // but does not end with "-experimental". all { ComponentSelection selection -> if (selection.candidate.group == 'org.sample' && selection.candidate.name == 'api' && selection.candidate.version.endsWith('-experimental')) { selection.reject("rejecting experimental") } } // Rules can consider component metadata as well // Accept the highest version with a branch of 'testing' or a status of 'milestone' all { ComponentSelection selection, IvyModuleDescriptor descriptor, ComponentMetadata metadata -> if (descriptor.branch != 'testing' && metadata.status != 'milestone') { selection.reject("does not match branch or status") } } // Rules can target specific modules // Reject the 1.1 version of org.sample:api withModule("org.sample:api") { ComponentSelection selection -> if (selection.candidate.version == "1.1") { selection.reject("known bad version") } } } } } } dependencies { conf "org.sample:api:1.+" }
See the User Guide section on component selection rules for further information.
It is now possible to declare that a certain module has been replaced by some other. An example of this happening in the real world is the replacement of the Google Collections project by Google Guava. By making Gradle aware that this happened, Gradle can consider that these modules are the same thing when resolving conflicts in the dependency graph. Another common example of this phenomenon is when a module changes its group or name. Examples of such changes are
org.jboss.netty -> io.netty,
spring -> spring-core and there are many more.
Module replacement declarations can ship with as part of custom Gradle plugins and enable stronger and smarter dependency resolution for all Gradle-powered projects in the enterprise.
This new incubating feature is described in detail in the User Guide.
dependencies { modules { module("com.google.collections:google-collections") { replacedBy("com.google.guava:guava") } } }
The Sonar Runner Plugin has been improved to fork the Sonar Runner process, whereas in previous Gradle versions the runner was executed within the build process. This was problematic is it made controlling the environment (e.g. JVM memory settings) for the runner difficult and meant the runner could destabilize the build process. Importantly, because the Sonar Runner process is now forked, the version of Sonar Runner to use can now be configured in the build allowing choice of the version of Sonar Runner to use.
The
sonar-runner plugin defaults to using version 2.3 of the runner. Upgrading to a later version is now simple:
apply plugin: "sonar-runner" sonarRunner { toolVersion = "2.4" // Fine grained control over the runner process forkOptions { maxHeapSize = '1024m' } }
This feature was contributed by Andrea Panattoni.
Various improvements were made to the ability to configure a native tool chain from cross-compilation. These improvements should make easier to use Gradle to compile for a target platform other than the host.
These improvements include:
NativeToolChaintype now has an
eachPlatform(Action<NativePlatformToolChain>)method, to allow fine-grained customization of a particular tool chain on a per-platform basis.
NativePlatformToolChain.getPlatform()allows tool chain customization logic access to the target platform.
model { toolChains { gcc(Gcc) { eachPlatform { tc -> if (tc.platform.name == "arm") { cCompiler.executable = 'gcc-arm' } } } } }
Previous versions of Gradle have supported building x86 binaries using GCC on Windows. This Gradle release adds initial support for building x64 binaries using GCC on Windows.
When using the
idea plugin, it is now possible to specify the version control system to configure IDEA to use when importing the project.
apply plugin: "idea" idea { project { vcs = "Git" } }
Note: This setting is only respected when the project is opened in IDEA using the
.ipr (and associated) files generated by the
./gradlew idea task. It is not respected when the project is imported into IDEA using IDEA's import feature.
This feature was contributed by Kallin Nagelberg.
The location of the local Maven repository can now be controlled by setting the system property
maven.repo.local to the absolute path to the repo. This has been added for parity with Maven itself. This can be used to isolate the maven local repository for a particular build, without changing the location of the
~/.m2/settings.xml which may contain information to be shared by all builds.
This feature was contributed by Christoph Gritschenberger.
The OpenShift PaaS environment uses a proprietary mechanism for discovering the binding address of the network interface. Gradle requires this information for inter process communication. Support has been added for this environment which now makes it possible to use Gradle with OpenShift.
This feature was contributed by Colin Findlay.
When importing an Ant build it is now possible to specify an alternative name for tasks that corresponds to the targets of the imported Ant build. This can be used to resolve naming collisions between Ant targets and existing Gradle tasks (GRADLE-771).
To do so, supply a transformer to the [
ant.importBuild()] method that supplies the alternative name.
apply plugin: "java" // adds 'clean' task ant.importBuild("build.xml") { it == "clean" ? "ant-clean" : it }
The above example avoids a name collision with the clean task. See the section on importing Ant builds in the Gradle Userguide for more information.
This feature was contributed by Paul Watson.
In previous Gradle versions, sharing external configuration files across builds (e.g. to enforce code quality standards) was difficult. To support this use case, a new
TextResource abstraction was introduced.
TextResources are created using factory methods provided by
project.resources.text. They can be backed by various sources such as inline strings, local text files, or archives containing text files. A
TextResource backed by an archive can then be shared across builds by publishing and resolving the archive from a binary repository, benefiting from Gradle's standard dependency management features (e.g. dependency caching).
Gradle's code quality plugins and tasks are the first to support
TextResource. The following example shows how a Checkstyle configuration file can be sourced from different locations:
apply plugin: "checkstyle" configurations { checkstyleConfig } dependencies { // a Jar/Zip/Tar archive containing one or more Checkstyle configuration files, // shared via a binary repository checkstyleConfig "com.company:checkstyle-config:1.0@zip" } checkstyle { // affects all Checkstyle tasks // sourced from inline string config = resources.text.fromString("""<module name="Checker">...</module>""") // sourced from local file config = resources.text.fromFile("path/to/file.txt") // sourced from a task that produces a single file (and declares it as output) config = resources.text.fromFile(someTask) // sourced from shared archive config = resources.text.fromArchiveEntry(configurations.checkstyleConfig, "path/to/archive/entry.txt") }
Over time,
TextResource will be leveraged by more existing and new Gradle APIs.
The submission process for Gradle plugins is currently a work in progress, and upcoming versions of Gradle will provide a fully automated publishing process for plugins. Since we are not quite there yet, we are happy that there is the 3rd-party plugindev plugin that highly facilitates packaging and publishing of plugins. Thus, for the time being, we recommend to use the
plugindev plugin. You can learn here about how to use it.
plugins { id 'nu.studer.plugindev' version '1.0.3' } group = 'org.example' version = '0.0.1.DEV' plugindev { pluginImplementationClass 'org.example.gradle.foo.FooPlugin' pluginDescription 'Gradle plugin that does foo.' pluginLicenses 'Apache-2.0' pluginTags 'gradle', 'plugin', 'foo' authorId 'johnsmith' authorName 'John Smith' authorEmail '[email protected]' projectUrl '' projectInceptionYear '2014' done() } bintray { user = "$BINTRAY_USER" key = "$BINTRAY_API_KEY" pkg.repo = 'gradle-plugins' }.
In previous Gradle releases, it was possible to modify a configuration whose child has been resolved. This, however, leads to confusing behaviour because each configuration is resolved once and the result reused. This means that the changes made to the parent will never be used by the child configuration.
This behaviour is now deprecated and will become an error in Gradle 3.0.
In the example below, both
filesMatching blocks will now match against the source path of the files under
from. In previous versions of Gradle, the second
filesMatching block would match against the destination path that was set by executing the first block.
task copy(type: Copy) { from 'from' into 'dest' filesMatching ('**/*.txt') { path = path + '.template' } filesMatching ('**/*.template') { // will not match the files from the first block anymore path = path.replace('template', 'concrete') } }
The
org.gradle.api.tasks.ant.AntTarget task implementation adapts a target from an Ant build to a Gradle task and is used when Gradle imports an Ant build.
In previous Gradle versions, it was somewhat possible to manually add tasks of this type and wire them to Ant targets manually. However, this was not recommended and can produce surprising and incorrect behaviour. Instead, the
ant.importBuild() method should be used to import Ant build and to run Ant targets.
As of Gradle 2.2, manually added
AntTarget tasks no longer honor target dependencies. Tasks created as a result of
ant.importBuild() (i.e. the recommended practice) are unaffected and will continue to work.
The Sonar Runner plugin now forks a new JVM to analyze the project. Projects using the Sonar Runner Plugin should consider setting explicitly the memory settings for the runner process.
Existing users of the
sonar-runner plugin may have increased the memory allocation to the Gradle process to facilitate the Sonar Runner. This can now be reduced.
Additionally, the plugin previously mandated the use of version 2.0 of the Sonar Runner. The default version is now 2.3 and it is configurable. If you require the previous default of 2.0, you can specify this version via the project extension.
sonarRunner { toolVersion = '2.0' }
In previous Gradle versions it was possible to use
afterEvaluate {} blocks to configure tasks added to the project by
"maven-publish",
"ivy-publish" and Native Language Support plugins. These tasks are now created after execution of
afterEvaluate {} blocks. This change was necessary to continue improving the new model configuration. Please use
model {} blocks instead for that purpose, e.g.:
model { tasks.generatePomFileForMavenJavaPublication { dependsOn 'someOtherTask' } }
The version of Groovy that the CodeNarc plugin uses while analyzing Groovy source code has changed in this Gradle release. Previously, the version of Groovy that Gradle ships with was used. Now, the version of Groovy that the CodeNarc tool declares as a dependency is used.
The CodeNarc implementation used by the CodeNarc plugin is defined by the
codenarc dependency configuration, which defaults to containing the dependency
"org.codenarc:CodeNarc:0.21". This configuration is expected to provide all of CodeNarc's runtime dependencies, including the Groovy runtime (which it does by default as the CodeNarc dependency depends on
"org.codehaus.groovy:groovy-all:1.7.5"). This should have no impact on users of the CodeNarc plugin. Upon first use of the CodeNarc plugin with Gradle 2.1, you may see Gradle downloading a Groovy implementation for use with the CodeNarc plugin.
Please note that any generally applied dependency rules targeted at Groovy (e.g. changing
groovy-all to
groovy-core or similar) will now affect the CodeNarc plugin whereas they did not previously.
The classes of the (incubating) Sonar Runner Plugin have moved from the package
org.gradle.api.sonar.runner to
org.gradle.sonar.runner.
If you were depending on these classes explicitly, you will need to update the reference.
TargetedPlatformToolChainwith
GccPlatformToolChainand
VisualCppPlatformToolChain.
PlatformConfigurableToolChainto
GccCompatibleToolChain.
target()or
eachPlatform()should be used instead.
ExecutableBinary: use
NativeExecutableBinaryinstead.
org.gradle.nativeplatform.sourcesetto
org.gradle.language.nativeplatform
org.gradle.language.nativebaseto
org.gradle.language.nativeplatform
Nativeprefix to existing
Platform,
ToolChain,
ToolChainRegistryand
PlatformToolChaintypes
NativeComponentSpec.getBinaries()to return
DomainObjectSet<BinarySpec>
NativeComponentSpec.getNativeBinaries()to return
DomainObjectSet<NativeBinarySpec>
org.gradle.language.jvm.ResourceSetto
JvmResourceSet
org.gradle.api.jvm.ClassDirectoryBinarySpecto
org.gradle.jvm.ClassDirectoryBinarySpec
org.gradle.language.jvm.artifact.JavadocArtifactto
org.gradle.language.java.artifact.JavadocArtifact.
Using the internal convention mapping feature for one of the following properties will no longer have an effect:
Fileobjects representing relative paths
A
File object that represents a relative path and is used to configure one of the following properties will now be interpreted relative to the current project, rather than relative to the current working directory of the Gradle process:
Note that this only affects files created with
new File("relative/path") (which is not recommended), but not files created with
project.file("relative/path").
We would like to thank the following community members for making contributions to this release of Gradle.
maven.repo.localsystem property
Actionoverloads project
project.exec()and
project.javaexec()
readelfwhen parsing output in integration tests
.as decimal separator
'sonar-runner'plugin
IdeaModulemodel to mark generated source directories
We love getting contributions from the Gradle community. For information on contributing, please see gradle.org/contribute.
Known issues are problems that were discovered post release that are directly related to changes made in this release. | https://docs.gradle.org/2.2/release-notes.html | 2017-04-23T13:57:30 | CC-MAIN-2017-17 | 1492917118707.23 | [] | docs.gradle.org |
Attention
This is copied verbatim from the old IPython wiki and is currently under development. Much of the information in this part of the development guide is out of date.
IPython does all of its development using GitHub. All of our development information is hosted on this GitHub wiki. This page indexes all of that information. Developers interested in getting involved with IPython development should start here.
Core Documents
Lab Meetings
Development Policies
Other Information
A template for new Python files in IPython: template.py | http://jupyter.readthedocs.io/en/latest/development_guide/index.html | 2017-04-23T13:51:23 | CC-MAIN-2017-17 | 1492917118707.23 | [] | jupyter.readthedocs.io |
Frequently asked questions¶
The following notes answer common questions, and may be useful to you when using webcolors.
What versions of Python are supported?¶
On Python 2, webcolors supports and is tested on Python 2.7. On Python 3, webcolors supports and is tested on Python 3.3, 3.4 and 3.5. It is expected that webcolors 1.7 will also work without modification on Python 3.6 once it is released.
These Python version requirements are due to a combination of factors:
- On Python 2, only 2.7 still receives official security support from Python’s development team. Although some third parties continue to provide unofficial security support for earlier Python 2 versions, the fact remains that Python 2.6 and earlier have reached their official end-of-life and their use should not be encouraged. On Python 3, 3.0, 3.1 and 3.2 have similarly reached end-of-life and no longer receive security support.
- On Python 3, 3.3 was the first widely-adopted release, and also introduced features (not present in earlier Python 3 releases) which greatly simplify the process of consistently handling strings in both Python 2 and Python 3 within the same codebase.
How closely does this module follow the standards?¶
As closely as is practical (see below regarding floating-point values), within the supported formats; the webcolors module was written with the relevant standards documents close at hand. See the conformance documentation for details.
Why aren’t
rgb_to_rgb_percent() and
rgb_percent_to_rgb() precise?¶
This is due to limitations in the representation of floating-point numbers in programming languages. Python, like many programming languages, uses IEEE floating-point, which is inherently imprecise for some values.
This imprecision only appears when converting between integer and
percentage
rgb() triplets.
To work around this, some common values (255, 128, 64, 32, 16 and 0)
are handled as special cases, with hard-coded precise results. For all
other values, conversion to percentage
rgb() triplet uses a
standard Python
float, rounding the result to two decimal places.
See the conformance documentation for details on how this affects testing.
Why aren’t HSL values supported?¶
In the author’s experience, actual use of HSL values on the Web is
extremely rare; the overwhelming majority of all colors used on the
Web are specified using sRGB, through hexadecimal color values or
through integer or percentage
rgb() triplets. This decreases the
importance of supporting the
hsl() construct.
Additionally, Python already has the colorsys module in the
standard library, which offers functions for converting between RGB,
HSL, HSV and YIQ color systems. If you need conversion to/from HSL or
another color system, use
colorsys.
Why not use a more object-oriented design with classes for the colors?¶
Representing color values with Python classes would introduce overhead for no real gain. Real-world use cases tend to simply involve working with the actual values, so settling on conventions for how to represent them as Python types, and then offering a function-based interface, accomplishes everything needed without the addtional indirection layer of having to instantiate and serialize a color-wrapping object.
Keeping a simple function-based interface also maintains consistency with Python’s built-in colorsys module, which has the same style of interface for converting amongst color spaces.
Note that if an object-oriented interface is desired, the third-party colormath module does have a class-based interface (and rightly so, as it offers a wider range of color representation and manipulation options than webcolors).
How am I allowed to use this module?¶
The webcolors module is distributed under a three-clause BSD
license. This is an
open-source license which grants you broad freedom to use,
redistribute, modify and distribute modified versions of
webcolors. For details, see the file
LICENSE in the source
distribution of webcolors.
I found a bug or want to make an improvement!¶
The canonical development repository for webcolors is online at <>. Issues and pull requests can both be filed there. | http://webcolors.readthedocs.io/en/1.7/faq.html | 2017-04-23T13:46:38 | CC-MAIN-2017-17 | 1492917118707.23 | [] | webcolors.readthedocs.io |
Getting started¶
Example inventory¶
The
debops.ferm role is part of the default DebOps playbook an run on
all hosts which are part of the
[debops_all_hosts] group. To use this
role with DebOps it's therefore enough to add your host to the mentioned
host group (which most likely it is already):
[debops_all_hosts] hostname
Example playbook¶
Here's an example playbook which uses the
debops.ferm role:
--- - name: Manage firewall using ferm hosts: [ 'debops_all_hosts', 'debops_service_ferm' ] become: True roles: - role: debops.ferm tags: [ 'role::ferm' ] | https://docs.debops.org/en/latest/ansible/roles/ansible-ferm/docs/getting-started.html | 2017-04-23T13:59:38 | CC-MAIN-2017-17 | 1492917118707.23 | [] | docs.debops.org |
pywinauto.tests.asianhotkey¶
Asian Hotkey Format Test
What is checked
This test checks whether the format for shortcuts/hotkeys follows the standards for localised Windows applications. This format is {localised text}({uppercase hotkey}) so for example if the English control is “&Help” the localised control for Asian languages should be “LocHelp(H)”
How is it checked
After checking whether this control displays hotkeys it examines the 1st string of the control to make sure that the format is correct. If the reference control is available then it also makes sure that the hotkey character is the same as the reference. Controls with a title of less than 4 characters are ignored. This has been done to avoid false positive bug reports for strings like “&X:”.
When is a bug reported
A bug is reported when a control has a hotkey and it is not in the correct format. Also if the reference control is available a bug will be reported if the hotkey character is not the same as used in the reference
Bug Extra Information
This test produces 2 different types of bug: BugType: “AsianHotkeyFormat” There is no extra information associated with this bug type
BugType: “AsianHotkeyDiffRef”
There is no extra information associated with this bug type
Is Reference dialog needed
The reference dialog is not needed. If it is unavailable then only bugs of type “AsianHotkeyFormat” will be reported, bug of type “AsianHotkeyDiffRef” will not be found.
False positive bug reports
There should be very few false positive bug reports when testing Asian software. If a string is very short (eg “&Y:”) but is padded with spaces then it will get reported.
Test Identifier
The identifier for this test/bug is “AsianHotkeyTests” | https://pywinauto.readthedocs.io/en/latest/code/pywinauto.tests.asianhotkey.html | 2017-04-23T13:45:15 | CC-MAIN-2017-17 | 1492917118707.23 | [] | pywinauto.readthedocs.io |
THEANETS¶
The
theanets package is a deep learning and neural network toolkit. It is
written in Python to interoperate with excellent tools like
numpy and
scikit-learn, and it uses Theano to accelerate computations when possible
using your GPU. The package aims to provide:
- a simple API for building and training common types of neural network models;
- thorough documentation;
- easy-to-read code;
- and, under the hood, a fully expressive graph computation framework.
The package strives to “make the easy things easy and the difficult things possible.” Please try it out, and let us know what you think!
The source code for
theanets lives at,
the documentation lives at, and announcements
and discussion happen on the mailing list.
Quick Start: Classification¶
Suppose you want to create a classifier and train it on some 100-dimensional data points that you’ve classified into 10 categories. No problem! With just a few lines you can (a) provide some data, (b) build and (c) train a model, and (d) evaluate the model:
import climate import theanets from sklearn.datasets import make_classification from sklearn.metrics import confusion_matrix climate.enable_default_logging() # Create a classification dataset. X, y = make_classification( n_samples=3000, n_features=100, n_classes=10, n_informative=10) X = X.astype('f') y = y.astype('i') cut = int(len(X) * 0.8) # training / validation split train = X[:cut], y[:cut] valid = X[cut:], y[cut:] # Build a classifier model with 100 inputs and 10 outputs. net = theanets.Classifier([100, 10]) # Train the model using SGD with momentum. net.train(train, valid, algo='sgd', learning_rate=1e-4, momentum=0.9) # Show confusion matrices on the training/validation splits. for label, (X, y) in (('training:', train), ('validation:', valid)): print(label) print(confusion_matrix(y, net.predict(X)))
Layers¶
The model above is quite simplistic! Make it a bit more sophisticated by adding a hidden layer:
net = theanets.Classifier([100, 1000, 10])
In fact, you can just as easily create 3 (or any number of) hidden layers:
net = theanets.Classifier([ 100, 1000, 1000, 1000, 10])
By default, hidden layers use the relu transfer function. By passing a tuple
instead of just an integer, you can change some of these layers to use different
activations:
maxout = (1000, 'maxout:4') # maxout with 4 pieces. net = theanets.Classifier([ 100, 1000, maxout, (1000, 'tanh'), 10])
By passing a dictionary instead, you can specify even more attributes of each
layer, like how its parameters are initialized:
# Sparsely-initialized layer with large nonzero weights. foo = dict(name='foo', size=1000, std=1, sparsity=0.9) net = theanets.Classifier([ 100, foo, (1000, 'maxout:4'), (1000, 'tanh'), 10])
Specifying layers is the heart of building models in
theanets. Read more
about this in Specifying Layers.
Regularization¶
Adding regularizers is easy, too! Just pass them to the training method. For instance, you can train up a sparse classification model with weight decay:
# Penalize hidden-unit activity (L1 norm) and weights (L2 norm). net.train(train, valid, hidden_l1=0.001, weight_l2=0.001)
In
theanets dropout is treated as a regularizer and can be set on many
layers at once:
net.train(train, valid, hidden_dropout=0.5)
or just on a specific layer:
net.train(train, valid, dropout={'foo:out': 0.5})
Similarly, you can add Gaussian noise to any of the layers (here, just to the input layer):
net.train(train, valid, input_noise=0.3)
Optimization Algorithms¶
You can optimize your model using any of the algorithms provided by downhill
(SGD, NAG, RMSProp, ADADELTA, etc.), or additionally using a couple of
pretraining methods specific to neural networks.
You can also make as many successive calls to
train() as you like. Each call can include different
training algorithms:
net.train(train, valid, algo='rmsprop') net.train(train, valid, algo='nag')
different learning hyperparameters:
net.train(train, valid, algo='rmsprop', learning_rate=0.1) net.train(train, valid, algo='rmsprop', learning_rate=0.01)
and different regularization hyperparameters:
net.train(train, valid, input_noise=0.7) net.train(train, valid, input_noise=0.3)
Training models is a bit more art than science, but
theanets tries to make
it easy to evaluate different training approaches. Read more about this in
Training a Model.
Quick Start: Recurrent Models¶
Recurrent neural networks are becoming quite important for many sequence-based tasks in machine learning; one popular toy example for recurrent models is to generate text that’s similar to some body of training text.
In these models, a recurrent classifier is set up to predict the identity of the
next character in a sequence of text, given all of the preceding characters. The
inputs to the model are the one-hot encodings of a sequence of characters from
the text, and the corresponding outputs are the class labels of the subsequent
character. The
theanets code has a
Text
helper class that provides easy encoding and decoding of text to and from
integer classes; using the helper makes the top-level code look like:
import numpy as np, re, theanets chars = re.sub(r'\s+', ' ', open('corpus.txt').read().lower()) txt = theanets.recurrent.Text(chars, min_count=10) A = 1 + len(txt.alpha) # of letter classes # create a model to train: input -> gru -> relu -> softmax. net = theanets.recurrent.Classifier([ A, (100, 'gru'), (1000, 'relu'), A]) # train the model iteratively; draw a sample after every epoch. seed = txt.encode(txt.text[300017:300050]) for tm, _ in net.itertrain(txt.classifier_batches(100, 32), momentum=0.9): print('{}|{} ({:.1f}%)'.format( txt.decode(seed), txt.decode(net.predict_sequence(seed, 40)), 100 * tm['acc']))
This example uses several features of
theanets that make modeling neural
networks fun and interesting. The model uses a layer of
Gated Recurrent
Units to capture the temporal dependencies in
the data. It also uses a callable to provide data to the model, and takes
advantage of iterative training to sample an output from the model after each
training epoch.
To run this example, download a text you’d like to model (e.g., Herman
Melville’s Moby Dick) and save it in
corpus.txt:
curl > corpus.txt
Then when you run the script, the output might look something like this (abbreviated to show patterns):
used for light, but only as an oi|pr vgti ki nliiariiets-a, o t.;to niy , (16.6%) used for light, but only as an oi|s bafsvim-te i"eg nadg tiaraiatlrekls tv (20.2%) used for light, but only as an oi|vetr uob bsyeatit is-ad. agtat girirole, (28.5%) used for light, but only as an oi|siy thinle wonl'th, in the begme sr"hey (29.9%) used for light, but only as an oi|nr. bonthe the tuout honils ohe thib th (30.5%) used for light, but only as an oi|kg that mand sons an, of,rtopit bale thu (31.0%) used for light, but only as an oi|nsm blasc yan, ang theate thor wille han (32.1%) used for light, but only as an oi|b thea mevind, int amat ars sif istuad p (33.3%) used for light, but only as an oi|msenge bie therale hing, aik asmeatked s (34.1%) used for light, but only as an oi|ge," rrermondy ghe e comasnig that urle (35.5%) used for light, but only as an oi|s or thartich comase surt thant seaiceng (36.1%) used for light, but only as an oi|s lot fircennor, unding dald bots trre i (37.1%) used for light, but only as an oi|st onderass noptand. "peles, suiondes is (38.2%) used for light, but only as an oi|gnith. s. lited, anca! stobbease so las, (39.3%) used for light, but only as an oi|chics fleet dong berieribus armor has or (40.1%) used for light, but only as an oi|cs and quirbout detom tis glome dold pco (41.1%) used for light, but only as an oi|nht shome wand, the your at movernife lo (42.0%) used for light, but only as an oi|r a reald hind the, with of the from sti (43.0%) used for light, but only as an oi|t beftect. how shapellatgen the fortower (44.0%) used for light, but only as an oi|rtucated fanns dountetter from fom to wi (45.2%) used for light, but only as an oi|r the sea priised tay queequings hearhou (46.8%) used for light, but only as an oi|ld, wode, i long ben! but the gentived. (48.0%) used for light, but only as an oi|r wide-no nate was him. "a king to had o (49.1%) used for light, but only as an oi|l erol min't defositanable paring our. 4 (50.0%) used for light, but only as an oi|l the motion ahab, too, and relay in aha (51.0%) used for light, but only as an oi|n dago, and contantly used the coil; but (52.3%) used for light, but only as an oi|l starbuckably happoss of the fullies ti (52.4%) used for light, but only as an oi|led-bubble most disinuan into the mate-- (53.3%) used for light, but only as an oi|len. ye?' 'tis though moby starbuck, and (53.6%) used for light, but only as an oi|l, and the pequodeers. but was all this: (53.9%) used for light, but only as an oi|ling his first repore to the pequod, sym (54.4%) used for light, but only as an oi|led escried; we they like potants--old s (54.3%) used for light, but only as an oi|l-ginqueg! i save started her supplain h (54.3%) used for light, but only as an oi|l is, the captain all this mildly bounde (54.9%)
Here, the seed text is shown left of the pipe character, and the randomly sampled sequence follows. In parantheses are the per-character accuracy values on the training set while training the model. The pattern of learning proceeds from almost-random character generation, to producing groups of letters separated by spaces, to generating words that seem like they might belong in Moby Dick, things like “captain,” “ahab, too,” and “constantly used the coil.”
Much amusement can be derived from a temporal model extending itself forward in this way. After all, how else would we ever think of “Pequodeers,” “Starbuckably,” or “Ginqueg”?!
User Guide¶
- Installation
- Package Overview
- Creating a Model
- Training a Model
- Using a Model
- Advanced Topics
- More Information
Examples¶
API Documentation¶
- Models
- Layers
- Activation Functions
- Loss Functions
- Regularizers
- Trainers
- Utilities | http://theanets.readthedocs.io/en/stable/ | 2017-04-23T13:41:52 | CC-MAIN-2017-17 | 1492917118707.23 | [] | theanets.readthedocs.io |
:
CGI
MultiPartFilter..
Throttling
WelcomeFilter
This is another serlvet which also implements the Filter interface. This forwards any servlets to the necessary welcome display of an application when serviced.
DefaultServlet | http://docs.codehaus.org/pages/viewpage.action?pageId=54132 | 2014-12-18T16:30:53 | CC-MAIN-2014-52 | 1418802767274.159 | [] | docs.codehaus.org |
Difference between revisions of "Bug Squad/Contacts"
From Joomla! Documentation
Bug Squad
Revision as of 23:40, 18 November 2013
- Joomla! Bug Squad Co-Coordinators
- Mark Dexter and Nick Savov
- PLT Contact
- Nick Savov
- To request to join the Bug Squad
- Mark Dexter
- Tracker Team Leaders
- Elin Waring
- Coding Team Leaders
- Nick Savov
- Testing Team Leaders
- to be announced
- Automated Testing Team Leaders
- to be announced | https://docs.joomla.org/index.php?title=Bug_Squad/Contacts&diff=105101&oldid=84377 | 2015-10-04T07:58:29 | CC-MAIN-2015-40 | 1443736672923.0 | [] | docs.joomla.org |
Changes related to "What is the profile plugin?"
← What is the profile plugin?
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130728053231&target=What_is_the_profile_plugin%3F | 2015-10-04T08:23:34 | CC-MAIN-2015-40 | 1443736672923.0 | [] | docs.joomla.org |
use project variables.
Specify a third party repository in
composer.json
For this example, consider that there are several packages we want to install from a private repository hosted at
my-private-repos.example.com. List that repository in
composer.json.
{ "repositories": [ { "type": "composer", "url": "" } ] }
Set project variable credentials
Now set the composer authentication key and password as project variables. That can be done through the UI or via the command line, like so:
platform project:variable:set env:composer_key abc123 --no-visible-runtime platform project:variable:set env:composer_password abc123 --no-visible-runtime
The
env: prefix will make those variables appear as their own Unix environment variables, which makes the next step easier. The optional
--no-visible-runtime flag means the variable will only be defined during the build hook, which offers slightly better security.
The variable names used here are arbitrary, as long as they match what's in the build hook below.
Use a custom Composer command
You'll need to run your own custom Composer command, so first disable the default composer build mode. Add or modify the following in
.platform.app.yaml:
build: flavor: "none"
Then in your build hook set the authentication information for the remote server, using the environment variables defined above. Then run
composer install as normal.
hooks: build: | set -e composer config http-basic.my-private-repos.example.com $COMPOSER_KEY $COMPOSER_PASSWORD composer install --no-dev --prefer-dist --no-progress --no-interaction --optimize-autoloader
The specific switches in the
composer install line are what Platform.sh would run by default, but you can customize that as needed. They important part is the
composer config line, which tells composer to use the key and password from the environment when connecting to
my-private-repos.example.com, and to do so using
http-auth. (See the Composer documentation for other possible authentication options.)
From here on, everything should proceed as normal. Composer will download all listed packages from whatever repositories are appropriate, authenticating against them as needed, build the application, and then it will deploy as normal. Because the variables are defined above to not be visible at runtime they will not appear at all in the running application.. | https://docs.platform.sh/tutorials/composer-auth.html | 2017-02-19T18:43:56 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.platform.sh |
Protector of the Hearth
Written by NekoLLX
Rewrite by Umgestalten
Based on a photomanip at
Protector of the Hearth, Part 1
The place was a warm suburban landscape, a place considered both the safest and the most dangerous in America. While a person may sleep soundly in their bed, protected by the police and the neighborhood watch, they also, at the same time, let their guard down against those determined to take what is theirs.
This story began in a house not unlike many others, a house with two children, a seven-year-old boy and a nine-year-old girl, and two parents. The mother, Anne, was a housewife, a fit and athletic little minx with long blonde hair. The father, her husband John, a production engineer, was at work and not expected home until morning. It was Anne's time, one of those rare moments that she had all to herself. The children were asleep in their beds, the house was clean, and all of her other numerous chores were done. Outside, the moon shone as a thin silvery crescent in the night sky.
Anne felt safe as she stepped out of the shower, clad only in a bath towel wrapped tightly around her body that glistened with moisture. The serenity of the moment was soon broken when, as she crossed the living room, the faintest of sounds made her pause and shiver. She stood and listened intently, but heard nothing. 'Just my imagination,' she thought.
She was moving slowly toward the master bedroom, located on the other side of the living room, when another noise made her jump. It was a metal jingling noise. She paused again and moved toward the noise. As the back kitchen door came into view she drew in a sharp breath. The knob was turning violently from one side to another.
'Burglars!' snapped into her mind. 'Damn it, John! I told you we needed an alarm or at least a guard dog. What should I do?'
Anne trembled as she moved cautiously toward the phone, now paranoid of every sound she made. She lifted the receiver. Nothing. She hit the reset a few times, but still no dial tone. 'They cut the line,' she thought as a shiver of fear coursed through her. The towel she had wrapped around herself slipped from her trembling grip and hit the floor with a light sound, leaving her standing naked and exposed in the living room.
There was a snapping sound and the rending of wood as the door creaked open. Anne bolted off to the side and crouched out of direct view.
"Are you sure they don't have a dog?" an unfamiliar voice said.
"Positive," came the answer.
"Yeah, right! You said that about the last 3 houses."
Anne swore silently, 'If only we had a dog to scare them off,' she thought. 'If only we had a…' and her mind froze. 'No, the idea is silly,' she thought as her gaze drifted up toward the kids' room, then back to the shattered doorframe and the sound of approaching footsteps. She pushed away the fear and gathered up all of her courage. Her mouth was slow to open as her fear-tightened throat squeaked out a small, "Arf!"
The footsteps stopped. "What was that?" she heard a voice ask.
"Sounded like a mouse. Maybe a neighbor's dog."
A low rumble of anger arose from deep in Anne's throat. It gained in volume and lowered to a bass range. Anne barked again, but this time she put all of her anger and maternal instincts to protect her children behind it, "Rrrrrrroooofff! Rufff! Rrrrrrrrrr, rrrrufff!"
The robbers stepped back audibly. "What the hell was that? You said there was no dog!"
Anne rose up slowly onto her haunches. She moved slowly on her hands and knees and barked again, continuing with a deep resounding growl. In the back of her mind she felt a pricking sensation run over her body but dismissed it as goose flesh or pins and needles. It was uncomfortable but only slightly distracting, and she pushed it out of her mind.
She became single-minded as she barked again and again. Her lips twisted and pulled back, baring her blunt human teeth threateningly at the menace that was just beyond her sight. Anne was getting so deeply into her part that she never noticed the strands of golden hair that pushed through the skin of her arms and back.
The posturing as a dog was becoming easier, and she shifted from her knees to her feet. It felt more natural to stand on all fours as her shoulder blades and pelvis reformed, and she found it easier to move and take a more aggressive stance. She never noticed that the length of her legs had become the same as her arms, or that she was standing on the balls of her feet, or that her fingers had curled under her hands.
Her mind was so concentrated on the threat to her offspring that she neither felt nor noticed as her face pushed out and began to form a muzzle. She didn't know that her nose had darkened to black and its skin roughened, but she did notice the scents of the intruders as they assailed her more sensitive olfactory nerves, and instinct told her it was the enemy. Her lips twisted and pulled back again, but now they revealed four lengthening pointed canine teeth, sharp incisors and foamy saliva that was expelled with every guttural growl she made.
Her curled fingers receded into her hands and formed into toe-like structures. Her fingernails darkened and curled under from the sides until they were dark curved claws. Her thumb shrank until it became almost nonexistent, leaving only a dark curved dewclaw. Her feet and toes were undergoing a similar metamorphosis, and her hands and feet formed into four paws, making every step and movement easier and easier to make.
As her back arched, the end of her spine shifted position, curled upwards, and then extended slowly. As it became longer, the golden hairs that now covered her back, arms and legs moved up the length of her growing tail. At the same time, the other end of her spine also extended, giving her a much longer neck.
Emboldened by sensations of strength and a need to protect her territory, she bounded out from her concealment. As Anne appeared in the doorway she let loose with a low guttural growl, her now large canines bared as her lips twisted and quivered. Her ears twitched and grew as they pushed through her hair and then flopped to the sides of her head.
Anne's back arched and the stiff golden hairs rose along her spine. The muscles in her legs grew taut and her new claws dug into the carpet as she leapt into the kitchen and bounded towards the intruders.
"Shit! It's a monster!"
"Let's get the fuck out of here!" the men shouted as they turned and bolted for the door.
Anne gave chase. She could smell their fear as her fangs nipped at the pants cuff of one of the pair, tearing the fabric even as the men vanished out the door and into the night with Anne in close pursuit. The changes continued as she stood there on all four paws, barking into the evening at the retreating burglars.
Once convinced that they were gone, Anne turned around and trotted contentedly toward the house. She had a sensation of accomplishment and victory and she wrapped herself in that feeling as she padded over to the couch in the center of the living room.
Hopping onto it, she turned around three times in a circle before settling into a tight ball on the plush surface with her elongated snout lying on her left hip. She took in a deep breath and let out a low contented "woof" before her dark eyes slowly closed.
As she slept, the changes continued to progress over her body, shifting and changing until all that remained on the couch was a large female Golden Retriever.
* * * * *
Protector of the Hearth, Part 2
When John arrived home, the kids heard his car pull into the driveway and met him at the door, all excited. He had no idea what it was all about but went along with being dragged into the living room by a small hand pulling on each of his hands.
"We were very quiet and didn't wake her up, Daddy," His daughter, Evelyn said in an excited whisper.
"I was quiet too, Daddy," Nathan said quietly but proudly.
Just as they rounded the couch, John saw the large Golden Retriever curled up in a ball sleeping there.
"Daddy, can we please keep her!" Evelyn pleaded.
It was the familiar sound of her daughter's voice that roused Anne from her contented slumber. She opened her eyes, and as her vision cleared, she stared in shock down her long muzzle and saw her hind legs, paws, rump and tail. Her body felt strange. She had no idea what had happened to her.
"Look you scared her," Nathan said.
She raised her head and turned toward her husband, looking up into his concerned face as he spoke. "You know I don't want a dog in our house," he said sternly.
"But Daddy, Pleeeeease. Can't we please keep her?" Both children said in unison.
John sighed and looked defeated. 'He always was a big softie. That's what makes him such a good mate,' Anne thought. 'Mate? Where did that come from?' she wondered.
'He's more than just that. He's the father of my children,' she continued.
Then an aberrant thought crossed her mind, and Anne cocked her head to one side while looking critically at John. 'Still, there may be better mates out there.' Anne quickly shook her head to clear out her adulterous thoughts.
"We'll let her stay for a couple weeks, if no one claims her in that time I'll think about it."
"Yea!" the children cheered.
John moved to a nearby phone and lifted the receiver; it was still dead. "Strange. The phone is dead. Wonder what's up with that?" he said to no one in particular.
Anne hopped down from the couch, her paws landing softly on the carpet. She stretched her body out with her forepaws way out in front of her, claws curled down and digging into the carpet, her head down and her rump thrust up in the air. She threw back her head and let out a long yawn and whine with her mouth wide open, exposing her glistening white canines and long tongue.
Then she stood up, did an all-over shake and padded over to where John was standing. With each step she was beginning to understand what had happened to her, but she needed a mirror to confirm her thoughts.
John put down the phone and went into the kitchen. The back door was still wide open, and as he looked around critically he saw the shattered door jamb and broken lock. The children and Anne followed behind him silently and watched as John bent down and picked up a scrap of cloth from the floor and then stepped outside. They followed him out.
Anne paused, catching her reflection in the glass. Or rather, the reflection of a brown-eyed, golden-haired retriever. She barked again in surprise.
Just outside the door, John stopped to pick up a discarded kitchen knife from the porch floor. "This isn't one of ours," he mumbled out loud.
He turned back to Anne. "Did you…" but he was cut off as Anne came over and rubbed up against his leg. John smiled as he ran his hand through her coat and scratched behind her ears. "Good girl!" he said, and Anne's tail wagged excitedly.
Then John became serious and told the kids to stay inside while he looked around outside. Anne followed him as he walked around the house. It wasn't long before he found the cut phone line on the side of the house and knew for sure there had been some big trouble the night before.
John took out his cell phone and dialed 911. A short time later there were several squad cars in his driveway and an unmarked car at the curb. For several hours it was like a scene from a television cop show. John and the kids were questioned, alibis and times were checked, fingerprints and photos were taken, but in the end Anne was still missing and there were no answers.
The detectives said it looked like a burglary and abduction and promised they would do all that they could. There would be an investigation, and they would follow any leads they found, but John knew by their tone and body language that they didn't have much hope. Besides, he knew that there were no fingerprints in the house or on the knife, and the piece of cloth that Anne had torn off was the only hard evidence they had.
John was devastated but knew he had to put up a strong front for the kids' sake. He thanked the police as they left him and the kids alone and headed for their cars. He turned to where the kids were standing with devastated faces on either side of Anne. "That's all we can do right now. They will find Mommy," he said, trying to convince them as much as himself.
He moved back towards the kitchen and asked, "Are you kids hungry? You need to have some breakfast. How about you, girl? Are you hungry? Want something to eat?" he added, looking down at Anne.
Anne's stomach rumbled slightly at his words and she nodded her head. John looked at her and thought, 'Huh, it's like she understood what I said and was answering me. Weird.'
He searched around the cupboards and in the fridge and then stood and sighed. He turned to the kids. "Ok guys, get washed up. It looks like I need to make something for our little hero and ourselves."
The kids ran off to get cleaned up while Anne sat looking up at her husband as he began to heat some water for oatmeal. He had a look of worry on his face. "Anne, where are you?" he said, half under his breath.
John found a container of leftover beef stew in the fridge and got a bowl from the cupboard. After warming it slightly in the microwave, he placed it on the floor and called Anne over. "Here, girl. Here's some really good stew for you. Don't get used to it, though. After I go shopping it's strictly dog food for you."
Anne moved quickly to the bowl and sniffed at it. 'That smells really good. I'm half starved,' she was thinking as she tentatively stuck out her tongue and licked at the stew in the bowl. She tried several different approaches to the task of eating the stew from a bowl without a spoon or fork and soon found that the best way was to just open her mouth, grab some of it, and then chew and swallow. She found that a lot of chewing wasn't all that necessary. It seemed that a few quick chomps of her new molars were enough to mash it down for swallowing. She also noticed that her dog throat could handle much larger pieces than she was used to.
In a matter of moments, Anne had licked the bowl clean and was licking the last of the stew from her muzzle. It came to her as a completely instinctual movement and she was amused that she could now lick the end of her nose and down under her chin. 'This is so weird but at the same time it feels so completely normal,' she thought. When she finished she looked up at John and wagged her tail. She felt the need to thank him so she gave out with a, "Woof, ruff, woof," and licked the back of his hand.
That afternoon, John called his boss to explain what was going on and ask for some time off. His boos told him he had just seen it on the news and to not worry about work. "I was going to call you," he said. "You have over a month of vacation time built up so take it. We'll survive and I know where to find you if I have any questions. You take care of things at home and let us know if we can help," his boss told him.
John thanked him and closed his cell phone. His next call was to a security company to have an alarm system installed as soon as possible. Then it was a quick trip to the grocery store for food that he could prepare and dog food for Anne. When they arrived home, he opened the door and called out, "Lucy, I'm home," and Anne came running to the door to greet them.
"Her name's not Lucy, it's Happy," Evelyn said.
"Is not," Nathan countered. "It's Woof!"
"No it's not. That's a stupid name. Her name is Happy!" Evelyn stated matter-of-factly.
"Ok, that's enough, kids. She came when I called Lucy so that's what her name will be. That's the end of the argument!" John stated parentally.
At dinner that first night, they had microwave TV dinners. John bought a lot of TV dinners when they went shopping. He wasn't helpless in the kitchen but was very close to it. At first, he tried to eat his dinner in the living room so he could watch TV but Anne was having none of that. She sat in the doorway blocking him and shook her head.
John just looked at her, retreated back into the kitchen and sat at the table with the kids. 'Having Lucy around is almost like Anne being here,' he thought as he remembered Anne's rules.
Over the next two weeks things settled into a sort of routine. Friends and neighbors did their best to help John out with the things that Anne usually took care of like his neighbor, Rose that made dinners for them so they could give the microwave a break.
John was thanking Rose for the 5th time as his cute red-haired neighbor handed him another casserole to feed the kids and him. The newly christened Lucy watched from the other side of the kitchen. It had been two weeks and she still hadn't shown any signs of changing back to Anne. 'Honestly John, I've never met a man so helpless in the kitchen,' Lucy was thinking as she watched him accept the most recent donation from Rose.
John was expressing his appreciation to Rose for her generosity when she said, "Any word on…" asked Rose, changing the subject.
"No, no sign of her at all. The police have called several times but never had a thing to report other than they may have a lead on who broke in but it hasn't gone anywhere yet. It's been two weeks and nothing at all. No sign of her," John sighed.
"Well thanks again," he said to her.
Rose hugged John before she turned and went out the door. John turned and saw Anne sitting by the kitchen door. Setting the covered Casserole dish down, he opened a cupboard door and pulled out a small bag as he called out for the kids to come into the kitchen. The kids came in from the next room and came up to John and Anne. John reached into the bag and pulled out a red collar. Dangling from it was a tag that jingled as he moved. It was a bone shaped gold colored tag with "Lucy" and their phone number engraved on it.
"Welcome to the family Lucy," John said smiling as he fastened the colar around Anne's neck. "Good girl."
After that, life for John, the kids and Lucy started to settle into a sort of routine. John made an appointment at the veterinarian's for a checkup and to get all of her shots. Lucy was not amused but figured that it was only to be expected given her current situation.
Bowls were purchased with "Lucy" printed on them and a place was set aside in the kitchen for her to eat. John bought a large dog bed mattress for her to sleep on and it was placed at the foot of his bed. It started out in the kitchen but she dragged it into the master bedroom every night until John just gave up.
* * * * *
Protector of the Hearth Part 3
Anne, who was now called Lucy, was keeping a crisp pace ahead of and to the left of John as they walked down the sidewalk for their after dinner walk. There was just enough tension to give the lead a firm line. Lucy had come to really enjoy their nightly walks around the neighborhood. She had no idea why but at some level, it was fun and oh so much more interesting to go for a walk now than it had been as Anne. There were smells in the air that she never noticed before. She could tell what everyone was having for dinner and she knew which houses had dogs and which had cats. Most of the time she could even tell if a neighbor was home or not just by the smell in the air.
Lucy glanced back over her shoulder at John and thought, 'He really is a wonderful man. I love these walks alone with him. I'm so lucky to have him for a master. Ahh, I mean owner. No, he's my husband… isn't he? Yes. He's my husband. But I'm a dog now. Can he still be my husband? Maybe he is my master now. No, he is my husband and I have to find out how to change back no matter how much I enjoy my life as it is now.'
Lucy settled back into a brisk walking pace and her mind was pulled back to the odors wafting through the evening air. She had just stopped and was squatting by the curb to relive herself when suddenly all thoughts of her status in life and situation were forgotten. She had detected a new scent and it was strong and had a strange effect on her. She looked around for the source but at first saw nothing. Then, just up ahead she spotted a beautiful Golden Retriever just rounding the corner on the other side of the street followed by his owner. Her instincts told her it was a male and she needed to go meet the newcomer to the neighborhood.
Her tail began to wag wildly and John was surprised at the normally docile canine's sudden firm tug on the lead. The combination of the suddenness and the hard pull on the lead caused it to slip from John's grip and Lucy bounded across the street. The other dog's owner was startled as Anne came running up to his dog and began to check him out.
The male dog immediately did the same to Lucy. They both met nose to nose first and established a level of compatibility and then he turned to sniff under Lucy's tail. She obligingly held it straight up stiff as a flagpole for him. At the same time she checked both his rear and underneath. She noticed that his penis was beginning to poke its way out of its furry sheath and felt a compulsion to give it a lick. She had just gotten started when John caught up to her, reclaimed the leash and gave it a firm tug. Lucy was not happy about being so forcefully pulled away but dropped her head and slouched as she sat next to John.
"Sorry, she's normally so well behaved, it caught me off guard!" John apologized.
"No harm done, your dog and Ben seem to be getting along just fine," The other man responded.
As John and the man introduced themselves and started a conversation of small talk, Lucy and Ben started their own interaction, "Your master must take good care of you, your in great shape," Ben barked, whined and low growled happily.
"Thanks!" Lucy responded. "But he's not my master, he's my husband, er… mate."
"Your mate is a human? How strange."
"Well, I was human…once," Lucy answered.
"We'll you're not anymore," Ben replied with obvious double meaning.
"Yeah, I know."
"You need a real mate, like me. One of your own kind. Don't you want to have a litter?"
Lucy let out a low growl, "I have a litter, I mean children, a boy and a girl."
"Humans?" Ben questioned incredulously.
"Yes, of course. What else?"
Just then, John pulled on Lucy's lead and broke up their conversation. "We're heading back home now girl. Come on Lucy," he said.
Ben looked back over his shoulder at Lucy and said, "When your time comes and you want a real litter, come find me. I'm just around the corner and up the block. You'll know my territory by my scent." Then he barked a final goodbye to the really fine looking bitch and her master as they walked off in the opposite direction.
Lucy barked an enthusiastic response, "I will!" before catching her own words and correcting herself, "I mean…I'm, I'm married!"
Lucy's mind was in a bit of a turmoil as they walked the several blocks home. 'What's happening to me? Every day that goes by I'm enjoying being a dog more and more and thinking less and less about returning to being Anne. I love John but it's a different kind of love than it was before. He's kind to me and takes good care of me and I love the time we spend together like these walks. I've even come to really enjoy playing fetch in the back yard with him and the kids. But then there's Ben… He's such a fine looking stud and he stirs a feeling in me that I've never felt before. It's something like I used to feel for John. Damn! I'm so confused,' she thought.
After they got home Lucy went to the kitchen and munched silently on the dry kibble in her bowl. She had mastered the art of eating without utensils over the last two weeks and now never gave a thought to the fact that she was eating and drinking from dog dishes on the kitchen floor. She also loved it when John or the kids would drop scraps on the floor and she woofed them down without Anne's usual concern for dirt and germs. She also caught herself sitting and begging at John's feet at the table or while he was preparing food. She felt she had mastered 'looking cute' in order to get a scrap of 'people food.'
When Lucy took the time to think about it, she was concerned that she seemed to be slipping more and more into the contented life of a dog. There were larger and larger gaps of time that went by where she never gave a thought to, or even considered, her previous human existence. She was a dog, all dog in not only body, but in her mind too. And that was what caused her the most concern. She had come to love her life as Lucy but couldn't cope with the thought of losing her human mind and memories. Yet, during those times when they were lost to her, she never noticed or missed them.
Lucy's thoughts wandered, 'So, if I did lose all of my human mind, I would still be happy and contented and never miss it. Would I just be a dog. Maybe it wouldn't be as bad as I imagine. The only mental turmoil I have is with my human mind and thoughts. Yes, I'm completely content as a dog. All I have to do is just let go. Quit fighting to retain something that I no longer seem to have a use for.' And so, Lucy let go for the moment and returned to her food. She didn't know why it tasted so good but it did and the canned dog food the master gave her for dinner every night was just fantastic. She found that she could consume an entire can of food in mere moments.
Meanwhile, after their walk, John had retreated to the master bedroom for a nap while the kids waited for Lucy to finish snacking on her kibble. Once done, they went out to the back yard and called to Lucy. She hastened to the doggy door that John had installed in the wall next to the kitchen door and ran out into the back yard where Nathan and Evelyn were playing with a worn out old baseball. Locking her gaze on it as he tossed it to Evelyn she playfully yipped and bounded first over to Evelyn and then back to Nathan as Evelyn tossed the ball back to him. Finally, Lucy caught the ball in mid air between them and took off with it.
Lucy teasingly played 'Keep Away' with the ball waving it temptingly by Nathan's hand and then running away when he reached for it. When the kids appeared to be getting frustrated with Lucy's game she would 'accidentally' drop the ball and they would recover it and the game began anew.
Back and forth they played for a good half-hour before the kids tired. As Evelyn and Nathan headed for the house, Lucy took a few minutes to make a pit stop before going inside again. She found the two kids watching TV in the living room and it was apparent that they would both be sound asleep at any moment.
With a soft bite, Lucy tugged on Nathan's sleeve and then did the same to Evelyn's. She guided them both up the stairs to their rooms and supervised their change into pajamas. She made sure Nathan was comfortable in bed and the covers were over him. Then with a gentle lick on the cheek she headed out the door. Standing on her hind legs she turned off the light and nudged the door to a partially open position.
After doing the same for Evelyn, Lucy padded down the stairs to the back door and nosed it shut, waiting to hear the click of the latch. She then stood on her hind legs again and with her nose, threw the dead bolt closed. She sat in front of the door for a few minutes and sighed, 'At least I can still do the things that are important to me,' she thought.
Lucy had noticed that the door to master bedroom was open a crack so, padding silently back upstairs she made her way to the room where her husband slept still fully clothed. With a soft nudge of her nose the door to his room opened and she carefully crawled up onto the bed.
"I truly do miss your warm body my mate," she woofed softly as she snuggled up carefully against his back and lay beside him. She felt a strange need, a longing for something and there was a gentle itch just below her tail. She took a deep breath and let it out with a sigh and fell asleep.
* * * * *
Protector of the Hearth Part 4
John rolled over in the bed early in the morning, his hand landing on the supple skin of a woman. His eye bolted wide open as he stared in shock and awe at the sleeping form of his wife lying naked beside him, Lucy's collar still around her neck.
"Anne! You're alive!" he shouted as he sat up in the bed.
Anne's eyes bolted wide open then and she looked up into John's eyes and then down at her own body as she shook with disbelief.
"Where were you? Why didn't you call? And why are you wearing Lucy's collar?" John shouted at her excitedly.
Anne blinked a few times as her brain slowly woke up and tried to adjust, "I, I'm human again?"
"What do you mean you're 'human again'? Where were you? I was so scared and worried. And the kids…"
Anne rolled close to John and laid an arm across his legs and held him tightly, "Calm down and I'll try to explain," she said softly as she laid her head in his lap.
And then she began to tell her story of the last two weeks. When she finished John was dead silent. He put his hand on her forehead with a concerned look, "You don't have a fever."
"It wasn't a hallucination. Damn it John, why can't you believe me. I've told you everything that happened for the last two weeks. Everything you and the kids did. How would I know that if I wasn't right here living in this house? Think about it," she said with frustration and anger in her voice.
"You must have been really traumatized. You should get some rest. You'll feel better after you get some food and rest," John said trying to appear calm.
A deep guttural canine growl rumbled up from Anne's throat, "I'm not crazy! God, that dog Ben was more accepting of me than you are!"
Rolling off John, she got to her hands and knees and hoped down from the bed, "You're supposed to be my husband, my mate. You should trust me. You should believe me," she said looking up at him from her position on hands and knees on the floor.
"But honey…" John cried as he choked back his tears and stared down at her. "Anne, please… Please, we can talk this out," he begged.
Anne turned her back to him and then looked over her shoulder and bared her teeth, her long curved canines exposed by her viciously curled lips. As John stared at her dumbfounded, her eyes shifted from green to brown and the whites almost disappeared.
John could only watch in horror and disbelief as her beautiful face pushed out into a muzzle and the golden hairs started to cover her body. He cringed as her ears moved to the top of her head, elongated and then flopped down to the side of her head.
Anne continued to shift into the now familiar form of Lucy, as her tail pushed out from the end of her spine and the long golden hairs covered it. He glanced down to see that her arms and hands had returned to being fore legs and paws and that her legs were now hindquarters.
She had changed from Anne into Lucy right before his eyes and now sat on her haunches at his feet, her long golden fur flowing over her body. John stared at her, knowing full well what he had just witnessed. The large Golden Retriever sitting at his feet was actually his wife Anne.
"My God Anne, it's true. Everything you said, it was all true!"
Anne, now back in the form of Lucy rose to all fours, growled and snarled at him with a viciousness that scared the hell out of him and then she bounded out of the room. He just sat there scared and trying to come to grips with the reality that his wife had just changed into a dog, a Golden Retriever, right before his eyes.
He rose up from the bed and hurried to follow after her but as he reached the bottom of the stairs he heard the sound of her doggy door swinging shut. John watched her vanish around the side of the house towards the front yard.
He gave chase calling both of her names, "Anne, Lucy, come back. I'm sorry. Please come back," he pleaded. As John reached the front yard, he was only able to catch the briefest glimpse of her as she ran full out down the street, bounding away fast on her four paws. He knew pursuit was hopeless and his shoulders slumped as he gave up, defeated and returned to the house. His only hope was that she would come back later.
Lucy stormed down the streets snarling and growling, 'I can't believe him! Ok I understand that a pet were dog is a bit hard to believe but I'm his wife. He should have at least given it a chance. But no, he just dismissed it, thought I was sick or crazy. He wouldn't even listen to me. Hell, he didn't even humor me. What a damn jerk!'
Winded, Lucy slowed to catch her breath and noticed a scent that wafted past her nose as she rested. Her mood changed, She alerted and her tail started waging. 'I know that scent. It's Ben. I must be near his territory.'
With her nose to the breeze she followed the scent as it grew stronger. When she was sure she had found the source she sniffed, nose to the ground, around the mailbox post and then a tree in the yard. The final proof came when she found a fly-covered pile near the front steps, "Yes! This is Ben's." she happily barked.
A moment later, Ben bounded around from the back of the house and came up to her. "Lucy, you found me." he said as he approached. "This is great. I was hoping I would see you again."
Ben began the standard dog greeting procedures and had just reached her rear when he suddenly stopped, sniffed and said to himself, 'Oh yes! This bitch is in heat. That's why she came looking for me. She needed a stud like me to satisfy her. Well no problem there. I can scratch that itch for her.'
At the same time, Lucy was also involved with her instinctual greeting of Ben when she noticed that there was more than just a bit of the tip of his penis showing this time. When she sniffed underneath him and gave it a lick she thought. 'Wow. He's really happy to see me.'
Ben gave a soft groan as she licked him and began to give her swollen nether regions a good tongue massage. Lucy felt the attention that Ben was giving her and her tail stiffened and rose quickly to allow him to continue uninterrupted.
'Oh damn that feels good,' she thought as her shoulders slumped and her pelvis rose higher. "Oh please don't stop," she whined as he pulled away momentarily.
Ben answered, "Oh I'm not even stopping, bitch," as he gave a small hop with his front end and landed square on her back and wrapped his front legs and paws around her chest.
She growled, bared her fangs and looked back at him as he bit down gently on the back of her neck. "What the hell are you doing?" she asked what was soon almost a rhetorical question as she felt his long stiff penis probing her rear end. "Oh! Oh shit! That's not why I came here. I didn't want this," she moaned as he penetrated her.
"Oh yes you did. Even if you didn't know it, this is why you are here and this is why you became so upset. Don't worry though, I'll make it better, you'll see,"
Lucy soon forgot all but the moment. She felt him penetrate her and then the rapid jackhammer movement of Ben's hips as he pounded into her faster, harder and deeper. Then she felt as if something else, something larger was trying to get inside and soon did. As the large bulge of his knot passed her opening and lodged deep inside her vagina she knew what it was.
Ben's speed increased and soon he spent himself, filling her vagina and uterus with his seed. Then he relaxed on her back and she knew he was done. She felt as if something she needed to do had just been accomplished. A mental and physical itch had been scratched.
As Ben recovered, he moved one leg over her back and slid his paws to the ground. They were still held together by the knot of his penis buried deeply in her vagina and now stood tail to tail locked in lust.
That's how Ben's master found them in his front yard. "Ben! What have you done?" he shouted, even though he knew very well what the answer was. Even if they hadn't been firmly locked together, the look of complete satisfaction on both their faces would have told him what had just transpired.
He ran to the house and returned with two leads that he hooked to their collars and then waited. Lucy looked over her shoulder at Ben and woofed, "I can't believe you just did that to me."
"It ain't over yet Lucy. Not until the fat knot shrinks," woof, woof, woof, he laughed.
After a few more minutes, Lucy felt an easing of the tightness inside her vagina and Ben slipped out. Ben moved around to face Lucy and licked her muzzle, "Sorry about that but when I smell a hot bitch like you I just lose all control."
"You behaved like an animal!" Lucy responded.
"I am an animal. And so are you, Lucy," he answered.
"Yes. I know I am now but I changed back into a human and my mate acted like I was crazy." She whined.
"Yeah, that's a human for you. They don't believe anything you tell them unless you beat them over the head with it." He said gruffly.
"I really missed him. I missed sleeping with him. I missed talking to him. And he acted like I was insane. I was so pissed and frustrated I just ran out of the house. That's how I wound up here."
"Any regrets?" Ben asked.
"No. Not now. None at all."
"So forget your human mate. He doesn't understand you, I do. I'm your mate now. He's just your owner and master, nothing more."
Lucy nipped playfully at Ben, "No. Don't be silly. I married him. He's my mate."
"When you were human, he was your mate. You're no longer human so he is no longer your mate. That life for you is gone now. You have a new and better life and I'm you mate now. You'll be much happier this way, believe me."
She shook her head but what Ben had just said and her feelings for him only grew stronger. She knew that he was right, she was no longer human, and it didn't bother her as she thought about it.
Lucy grunted and playfully nipped at him and he nipped back at her. 'Yes. He is my mate, at least for now,' she thought and then she sat down, twisted around and started to clean their mess from her rear end. 'Mmmm, now this is one big plus about being a dog,' she thought to herself as she licked.
* * * * *
Protector of the Hearth Part 5
As John came into the house, Ben's owner said, "There she is," indicating the two dogs cuddled up lying next to each other in the corner of the room. "I found the two of them knotted together out in my front yard for all the world to see," he said chuckling. "Aren't they cute together? A regular couple those two are."
John glared down at his dog/wife lying next to the neighbor's retriever. His was quite upset but kept it below the surface. He knew he'd only look the fool if he said anything in front of his neighbor. "Thanks. I really appreciate the phone call and you taking care of her for me. I hope she wasn't too much trouble."
"Not at all. I'm quite used to it. I used to breed Golden Retrievers but a year ago my female was killed in an accident. She ran into the street and was hit by a car two weeks ago. I noticed on her tag that you dog's name is Lucy. That's what I called my dog. Her registered name was Lucinda. Ben is from her last litter."
John listened to the story and then said, "I'm sorry to hear about your dog. That's really too bad. I'm glad Lucy didn't get hit when she ran away. I guess I better get her home now."
Approaching slowly, leash in hand, John tried to hook up Lucy. She was still angry with him and looked up, growled and bared her fangs. The Neighbor approached John and pulled him back, "Let me. Dogs tend to get really testy right after they mate."
John gritted his teeth and smiled, "Yeah, they sure do. You go ahead."
"Hey don't take it so personal."
"Not take it personal when my dog just got herself fucked by your dog in your front yard?" he almost shouted. "Sorry but it is very personal to me. Especially when she growls at me like that."
"That's what they do. You haven't had her fixed, have you?"
"No," John stammered. "I haven't had her that long. She just appeared one day and we kept her."
"Well then you can pretty much count on her having a litter of puppies in a few months. Ben is a pretty sure fire stud."
The thought suddenly occurred to John that his wife was most likely going to bear a litter of puppies. "Oh shit! That's all I need."
"I might be able to help you out, John. I have connections for registered Golden Retriever pups."
"But I don't have any papers on Lucy."
"Yes, but I have the papers on Lucinda. We can use them. Lucy is obviously a purebred Golden and has excellent conformation. Her pups should be real show dogs and would bring in a lot of money. I'll take pick of the litter for the stud fee."
John shook his head and paused for a moment. He couldn't believe he was standing there having a conversation with a neighbor about breeding his wife and selling her offspring. "Yeah. Well I'll have to think about it. I'm pretty upset right now."
"No problem. I understand. You take Lucy home and let her settle down a bit and you think about what I said."
John took a firm grip on his wife's lead and started to pull her away. Lucy growled a couple of times and struggled a bit against the leash but soon gave in and followed him out of the neighbor's house and back home again.
Once they were home John led his semi-resisting dog and/or wife into their room and secured the door. He took a deep breath and slowly started in, "What were you thinking? Running off like that, not listening to me when I told you to stop, and then… and then you get yourself fucked by the neighbor's dog out in his front yard where everyone we know was able to see you. On top of that, when I get there you're laying on the floor sleeping with another dog. Were married you bitch!"
Lucy bared her fangs growling back "Yeah that's right! I'm a bitch, I'm a female dog, and you couldn't accept that. Some fucking mate you turned out to be. God, how could I have been so blind about you."
John just shook his head as his wife barked, snarled and whined at him. "Speak English, damn it! I can't understand you but I know you're saying something."
Lucy snorted, "When I did you just disregarded anything I said."
The two just stared at each other for several moments.
John dropped to a sitting position on the bed, head in hands, "God, this is all my fault. I had my wife back and I acted like she was crazy. It's no wonder she'd rather be a dog," John sighed.
Lucy started to growl but stopped and tilted her head to the side as she heard what John said and thought, 'Oh John, what have we done? I don't know how I came back to you and then I think my anger at your reaction caused me to return to being a dog. Now I don't know how to get back to being your wife again. I'm so sorry. Now here we are, both of us blaming ourselves and neither of us knowing how any of this really happened. Where do we go from here?'
The voice of the kids filtered into both of their thoughts. Lucy stood on all fours and her tail began to wag when she heard them. She walked over to the bedroom door and whined. John looked up and smiled, "Guess we both need a break, don't we girl? Ok, you go play with the kids. We'll talk more civilly later," he said as he opened the door and let her out. "Go on girl, go play with the kids."
Lucy bounded out the door as he opened it and he could hear her claws clacking on the wooden steps as she ran down to the children. He sat back down on the bed and bemoaned, "Oh Anne, I'm afraid that I've lost you for good this time."
Lucy went down and played with the kids and John joined them later when he felt up to it.
* * * * *
Protector of the Hearth Part 6
Several more weeks passed and things began to settle into a routine. John was sitting in the living room one night after the kids went to bed and Lucy was sleeping at his feet. "You're not coming back again are you Anne?" he said to her. She never budged or acknowledged that he was talking. Then he said, "Anne is gone isn't she Lucy?"
With that she lifted her head and looked up at him. He stared down at her and repeated himself, "Anne's gone now isn't she Lucy? She's never coming back to me."
Lucy looked at his sad face and thought, 'What do I do? If I acknowledge that I understand him then he will think that Anne still exists, but she doesn't. I'm Lucy now.' Lucy knew what was happening. She knew John was her husband and the children were her kids but something was happening to her. She knew she was slowly changing mentally to match her new body and life.
Her thoughts were in a turmoil as she tried to maintain the outer appearance of a typical dog. She stared up at him with an attentive but empty stare and cocked her head to one side as she continued to think, 'My human mind seems to leave me for longer and longer periods of time. It's as if I don't need it anymore. I don't need John as a mate anymore either because I have Ben. And I don't need the kids as my children because I'm carrying a litter of my own and will soon have puppies to take care of. My only real responsibilities now are as a dog and loyal family pet, and I like it that way. It's a comfortable life. I don't want to give him any false hopes that I will return because I don't think I will,' she was thinking. As she continued to stare at him she opened her mouth and started to pant just a little and let her long pink tongue loll out the side of her mouth.
John looked down sadly at the beautiful Golden Retriever that was once his loving wife, "Oh Anne, or Lucy, whichever you are, I love you. I wish you understood what I'm saying. I don't know that I will ever forgive myself for driving you back into that form. I wish I knew how to change you back. I wish I knew what caused the change in the first place. I miss Anne, my wife."
Lucy's thoughts and reasoning searched for answers or solutions but there were none, 'My only regret is the sadness and pain that John and the kids feel because I'm gone, especially John. I can see that the kids are adjusting well and soon will have moved on past my disappearance and that is good, but John is another story. He still talks to me every night and whenever we're alone as if I was still Anne. He believes that by not accepting the fact that I am most likely going to live out my life as a dog that I will somehow change back into Anne. But even if I knew how to change back into Anne I don't know that I would. I don't think I want to be Anne anymore. I like being Lucy. I am Lucy. Anne is as dead to me as she is to the rest of the world.'
Just then, the phone rang and John answered it. It was his boss, "John. How's it going? I got your message to call, any word on Anne?"
John answered, "No. nothing new. The police have concluded that whoever broke in abducted her and more than likely killed her. They figure that her body is buried somewhere in a shallow unmarked grave. The courts have officially declared her dead and issued a death certificate." John said, barely able to get the words out.
"Oh. Damn, I'm sorry, John. Is there anything that I can do?" his boss responded.
"Yeah. You talked about using me as a private consultant so I can work from home and take care of the kids too. Any progress on that thought?" John asked.
"If you want it that way, it's a done deal. I have it all approved. The head shed doesn't want to lose you and they are more than happy to have you as a private contractor. I'll have the contracts over to you in the morning."
After a short further conversation, John hung up and looked back down at Lucy. "Well, girl, looks like you're stuck with me at home full time now."
Lucy looked up at him and thought, 'I hope this works out for him and he gets over mourning the loss of Anne. The sooner he looks at me as he would any normal dog the better. I'm tired of fighting my emotions. I wonder if I'll ever be able to be just Lucy, the Golden Retriever?'
As the days and weeks passed it became apparent to everyone around that Anne was gone. The official court findings helped but it was still a rough transition for John and the kids to finally come to grips with the fact that she was never coming back to them.
John knew better but he could hardly disagree with everyone. He looked down at Lucy and said, "I can hardly try to tell them that my wife, Anne is actually alive and well and living at home as a pregnant Golden Retriever named Lucy. They would have me fitted for a jacket with extra long sleeves and the kids would wind up in foster care. You would most likely wind up in the Animal Shelter. I can't have that so this is the way it will have to be. I'll never have any interest in another woman because as far as I'm concerned my wife is still here and I'm still married to her."
Lucy mentally shook her head and thought, 'Won't he ever give up?'
John was able to set himself up as a private consulting firm. This allowed him to continue working at his job, still be at home most of the time for the kids and to set his own schedule. It also allowed him to take in work from other businesses that needed his expertise. In the long run, his salary actually tripled but it wasn't all a bed of roses along the way.
John played the part of the bereaved husband and did what he needed to do to maintain that part. Anne's life insurance paid off on her accidental death and he put it in a trust for the kids education. John never expected to see Anne again and so he did his best to move on with life. He treated Lucy more and more as the dog she appeared to be in public but at night, in the privacy of their bedroom, he continued talked to her as if she was still Anne. Sometimes he thought that he could see a light of understanding in her expressions and hoped that she knew what he was saying but he was never sure. It seemed that as time went by, Lucy became more and more, just a dog.
After a few months had passed, Lucy had her litter. The kids loved them. They were so cute and just as well behaved as Lucy. It was hard, but when they were six weeks old John turned them over to his neighbor and they were sold. He had arranged all of the registration paperwork just as he said he would and they brought a good price. The money was added to the kid's education fund.
Because the pups were from quality stock, according to their paperwork, they were well accepted and did amazingly well at all the dog shows. It was often said that they almost seemed to be as smart as a human.
One night, shortly after Lucy weaned her first litter, she and John were up in the master bedroom. He looked down at her and said, "I'm so sorry Anne, won't you ever come back to me. Even for a short time?"
Lucy's eyes sparkled, she tilted her head and barked softly. John knew that something was different about her right away and hope sprang into his heart as he saw subtle changes taking place.
Lucy closed her eyes and breathed slowly. Her front paws lengthened and the dewclaw digit moved down as her paws stretched out into a fur covered cross between hands and a paws. Slowly her hindquarters altered until she was able to stand up awkwardly on her back paws. Her shoulders broadened and shifted as eight breasts running down her chest in two parallel rows swelled out. The golden fur on her head lengthened and became a long golden mane that cascaded down her back like a waterfall until it almost touched her tail. She shifted her posture as her hips altered some more and she was able to settle down on her paws.
John stood staring at her in shock. Then he slowly moved towards her when it seemed that the changes had stopped or at least slowed. He stood in front of this combination, this hybrid of a Golden Retriever and a human woman. A woman that was his wife Anne.
"Anne, you're coming back to me!" he almost shouted. "I've missed you so much, Anne. I love you and need you so much. The kids need their mother."
He watched as her muzzle shortened a bit and stopped. She still had a muzzle but it was wider and she still had her large canine teeth. "I'm not Anne anymore. I'm Lucy. I've missed you all in a way too, but I've also been here with you in another way. When I returned before you just couldn't accept the fact that I had really become a dog. Now you want your loving human mate back, your docile housewife but that isn't going to happen. Your rejection of me the last time caused me to retreat back into being a dog and now I'll never be able to return to you as a human and your wife again.
"No. Don't say that. Please, I will accept everything you tell me. Please, we need you to return to us as Anne, my wife and the mother of our children."
"That can't happen. Not now, it's too late for that. Besides, I'm much more than that now and I feel so much better as I am. I love my life as a dog and I don't want to change back. I've come to realize that I wouldn't change back even if I could and I'm sure that it's not possible anymore."
"But you are changing, look at yourself. You're almost there, just a little more."
"John, you're refusing to accept me as I am again. This is as far as I go. Accept me and I may be able to return to this form from time to time. Reject me and I will return to being your dog Lucy and you will never see me like this again. There will never again be the slightest hint of Anne ever again. You can have this one night with me my mate, my lover, my husband. Accept me like this or not at all," Lucy cry-growled at him.
"Oh Lucy, anything you say. I accept you in any form. Just hold me! I love you," John said as he wrapped his arms around her fur-covered body.
They made love that night and fell asleep in each other's arms. John awoke the next morning to find Lucy the dog lying on the other side of the bed. From that day on, Lucy was no more than a normal Golden Retriever. He never again saw the spark of Anne's presence in her eyes or expression and never again noticed any human traits in her actions.
Lucy lived the life of a normal dog and never let John know that Anne was still occasionally there. She felt that the only way he would be able to move on was if Anne was gone completely and all that remained was Lucy.
Lucy never knew why or how but she did return in her hybrid form every once in awhile and they spent wonderful nights together but it was always Lucy, never Anne.
At her request, John arranged for Lucy to be bred with Ben many more times. She seemed to enjoy it. She was a good mother and had many more fine litters.
Their kids matured and rose to the task of taking care of Lucy. They also did well at caring for John and the house. Evelyn and Nathan learned to cook out of self preservation. Both of the children were great at watching over Lucy's puppies whenever she had a litter. The kids grew older and their love for Lucy grew with them. John never did completely get over the loss of his wife but his time with Lucy and his love for her helped.
The End | http://docs-lab.com/submissions/128/protector-of-the-hearth | 2017-02-19T18:42:49 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs-lab.com |
BES - Modules - The NetCDF Handler
1 Kinds of files the handler will serve
There are several versions of the netCDF software for reading and writing data, and with those different versions it's possible to make several different kinds of data files. For the most part, netCDF strives to maintain compatibility so that any older file can be read using any newer version of the library. To ensure that the netCDF handler can read (almost) any valid netCDF data file, you should make sure to use the latest version of the netCDF library when you build or install the handler.
However, as of netCDF 4, there are some new data model components in netCDF that are hard to represent in DAP2 (hence the 'almost' in the preceding paragraph). If the handler, as of version 3.10.x, is linked with netCDF 4.1.x or later, you will be able to read any netCDF file that fits the 'classic' model of netCDF (as defined by Unidata's documentation) which essentially means any file that uses only data types present in the netCDF 3.x API but with the addition that these files can employ both internal compression and chunking.
The new data types present in the netCDF data model present more of a challenge. However, as of version 3.10.x, the Hyrax data handler will serve most of the new cardinal types and the more commonly used 'user defined types'.
1.1 Mappings between NetCDF version 4 data model and DAP2 data types
All of the cardinal types in the netCDF 4 data model map directly to types in DAP2 except for the following:
- NC_BYTE
- There is no 'signed byte' type in DAP2 so these map to an unsigned byte or signed Int16, depending on the value of the option NC.PromoteByteToShort (see below where the configuration parameters are described).
- NC_CHAR
- There is no 'character' type in DAP2 so these map to DAP Strings of length one. Arrays of N characters in netCDF map to arrays of N-1 Strings in DAP
- NC_INT64, NC_UINT64
- DAP2 does not support 64-bit integers (this will be added soon to the next version of the protocol).
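To see how a given file's types come across, it can help to compare the netCDF header with the DAP2 objects Hyrax serves for it (the hostname and path below are hypothetical):

# netCDF view of the file's types
ncdump -h sample.nc
# DAP2 view: append .dds (structure) or .das (attributes) to the dataset URL
curl http://localhost:8080/opendap/data/nc/sample.nc.dds
curl http://localhost:8080/opendap/data/nc/sample.nc.das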
1.1.1 Mappings for netCDF 4's User Defined types
In the netCDF documentation, types such as Compound (which is effectively C's struct type), etc., are called User Defined types. Unlike the cardinal types, netCDF 4's user defined types don't always have a simple mapping to DAP2's types. However, the most important of the user defined types, NC_COMPOUND, does map directly to DAP2's Structure. Here's how the user defined types are mapped by the handler as of version 3.10:
- NC_COMPOUND
- This maps directly to a DAP2 Structure. The handler works with both compound variables and attributes. For attributes, the handler only recognizes scalar and vector (one-dimensional) compounds. For variables, scalar and array compounds are supported, including compounds within compounds and compounds with fields that are arrays.
- NC_VLEN
- Not supported
- NC_ENUM
- Supported so long as the 'base type' is not a 64-bit integer. We add extra attributes to help the downstream user. We add DAP2_OriginalNetCDFBaseType with the value NC_ENUM and DAP2_OriginalNetCDFTypeName with the name of the type from the file (Enums in netCDF are user-defined types, so they have names set by the folks who wrote the file). We also add two attributes that provide information about the integral values and their names (e.g., Clear = 0, Cumulonimbus = 1, Stratus = 2, ..., Missing = 255): DAP2_EnumValues and DAP2_EnumNames.
- NC_OPAQUE
- This type is mapped to an array of Bytes (so the scalar NC_OPAQUE becomes a one-dimensional array in DAP2). If a netCDF file contains an array (with M dimensions) of NC_OPAQUE vars, then the DAP response will contain a Byte array with M+1 dimensions. In addition, the handler adds an attribute DAP2_OriginalNetCDFBaseType with the value NC_OPAQUE and DAP2_OriginalNetCDFTypeName with the name of the type from the file to the Byte variable so that savvy clients can see what's going on. Even though the DAP2 object for an NC_OPAQUE is an array, it cannot be subset (but arrays of NC_OPAQUEs can be subset with the restriction that the M+1 dimensional DAP2 Byte array can only be subset in the original NC_OPAQUE's M dimensions).
1.1.2 NetCDF 4's Group
The netCDF handler currently reads only from the root group.
2 Configuration parameters
2.1 IgnoreUnknownTypes
When the handler reads a type that it does not recognize, it will normally signal an error and stop processing. Setting this parameter to true will cause it to silently ignore the unknown type (an error message may be written to the bes log file).
Accepted values: true,yes|false,no, defaults to false.
Example:
NC.IgnoreUnknownTypes=true
2.2 ShowSharedDimensions
Include shared dimensions as separate variables. This feature is included to support older clients based on the netCDF library. Some versions of the library depend on the shared dimensions appearing as variables at the 'top' of the file.
Clients that announce to the server that they understand newer versions of the DAP (3.2 and up) won't need these extra variables, while older ones likely will. In the 3.10.0 version of the handler, the DAP version that clients announce they can accept will determine how the handler responds, unless this parameter is set, in which case the value set in the configuration file will override that default behavior.
Accepted values: true,yes|false,no, defaults to false.
Example:
NC.ShowSharedDimensions=false
2.3 PromoteByteToShort
This option first appears in Hyrax 1.8; version 3.10.0 of the netcdf_handler.
Note: Hyrax version 1.8 ships with this turned on in the netcdf handler's configuration file, even though the default for the option is off.
Use this option to promote DAP2 Byte variables and attributes to Int16, noting that Byte is unsigned and Int16 is signed, so this is a way to preserve the sign of netCDF's signed Byte data type.
For netcdf4 files, this option behaves the same except that NC_OPAQUE variables are externalized as DAP Bytes regardless of the option's value; their Byte attributes, on the other hand, are promoted to Int16 when the option is true.
Backstory: In NetCDF the Byte data type is signed while in DAP2 it is unsigned. For data (i.e., variables) this often makes no real difference because byte data are often read from the network and dumped into an array where their sign is interpreted (correctly or not) by the client software - in other words, byte data are often a special case. However, this is, strictly speaking, wrong. In addition, and maybe more importantly, with attributes the values are interpreted by the server and represented in ASCII (and sent to the client as text), so the sign is interpreted by the server and the resulting text is converted into a binary value by the client; the simple trick of letting the default C types handle the value's sign won't work. One way around this incompatibility is to promote Byte in DAP2 to Int16, which is a signed type.
Accepted values: true,yes|false,no, defaults to false, the server's original behavior.
Example:
NC.PromoteByteToShort=true | http://docs.opendap.org/index.php/BES_-_Modules_-_The_NetCDF_Handler | 2017-02-19T19:04:29 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.opendap.org |
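For reference, the three options collect into a single fragment of the handler's section of the BES configuration (the file location varies by install; the values here are just an illustration):

NC.IgnoreUnknownTypes=false
NC.ShowSharedDimensions=false
NC.PromoteByteToShort=true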
CheapStix
CheapStix is a small project to build a VM using Ubuntu on VMware to deliver Hyrax. The VM will be copied onto cheap memory sticks and given out. It would be great if the stuff was ready for the ESDSWG meeting which starts on 21 Oct. 2008. This project was funded by NASA.
See also: Using Virtual Machines to Serve Data
Steps:
- Find out a place to get memory sticks at a decent price
- How much/many? ~$11.50ea for 100 @ 2GB
- With a logo? + 0.99ea for each color after two
- Sent in for quotes from usb-depot.com (Styles: London and Chicago look good). Cost is ~$11.50ea plus 0.99ea for each color more than two if we get 100 2GB sticks. Lead-time of about a week
- What about Chicago in Red or London in Silver?
- Also requested quote from usb-flashdrive.com
- Other sources: customusb.com
- Build the VM
- Special dist of Ubuntu or not
- Ubuntu: Gets us experience with Debian packages and maybe support for building them in Makefiles
- CentOS gets us something we can use on our rented VMs, maybe
- VMware virtual appliance?
1 Update
29 Oct 2008 The project was completed on the 17th in time for the meeting and we presented a poster on the idea there.
I used usb-flashdrive.com and ordered 100 2GB Twister 1 drives in Blue with a two-color logo. Cost, including shipping, was $10.25 ea. I've checked the logo (which is a vector graphics file, .ai) into the SVN graphics project.
2 Building the VMs
I built two VMs, both using Ubuntu 8.04 (Hardy). The first one uses the JeOS distribution of Hardy and does not contain a build environment or a GUI in its distribution form while the second does contain both of those things. However, I built the server binaries using a third version of the VM - JeOS with a build environment.
While doing this, make liberal use of the 'snapshot' feature of WS. I mention how I used it to go 'back in time' to a smaller VM, but I actually used it much more than to make just one snapshot (see the picture below at right).
There is also a bug somewhere in Ubuntu and/or VMware involving network persistence. There's a subsection below on a fix for that.
2.1 Load a development environment and build & test Hyrax
Here's the procedure:
- Get VMware Workstation - I did all of the work in Workstation since it's much easier to build and fiddle with the VM in Workstation than in Server, and the two are mutually exclusive.
- Download the ISO for JeOS - I got this from the main Ubuntu site.
- Crank up VM Workstation, make a new VM and tie the ISO image to the CDROM drive using the VM's configuration options. Also, turn on shared folders since they simplify transfer of stuff between the VM and the host.
- Boot the VM - There's an option to create an initial user. I used jimg and then had a hassle changing it to opendap later. Either way, it's set to use sudo and you do most of this as root (i.e., sudo su)
- Install VMware tools - Use the menu in Workstation (WS) and then in the VM as root:
cp -a /media/cdrom0/VMwareTools*.gz /tmp
cd /tmp
tar -xzf VMwareTools*.gz
cd vmware-tools-distrib
./vmware-install.pl
- Make a snapshot of the VM at this point so we can come back to this state (before we load up the VM with the development tools) later.
- Now use apt-get to install stuff we need to build the server (i.e., sudo apt-get install <package>)
- build-essential
- libcurl3, curl, libcurl4-openssl-dev - I may have used libcurl3-dev and not needed curl...
- libxml2-dev
- subversion, autoconf, automake, libtool, flex, bison - probably not needed if you use the tar.gz distributions
- wget (curl works too, but I like wget)
- libreadline-dev
- libnetcdf-dev, libhdf4g-dev, libhdf5-serial-dev
- sun-java6-jre - this you need to run tomcat. Don't use apt-get to get tomcat 5.5; I describe how to get Tomcat 6 later on.
- Use subversion to get the Hyrax server software (or use the tar.gz dists from the web page)
- Build the BES:
- libdap 3.8.2 - configure using --prefix (I built it in my own directory once and then again in /opt/Hyrax-1.4.2, which is where the code is on the distributions). Once built, add $prefix/bin to PATH and always use --prefix with configure. (A condensed build sketch follows this procedure.)
- BES 3.6.2
- dap-server 3.8.5, netcdf_handler 3.7.9, hdf4_handler 3.7.9, freeform 3.7.9, hdf5 1.2.3 - use make install and make bes-conf
- Add bes user and group (vi /etc/group and then useradd -g bes -d /dev/null -s /bin/false bes)
- Edited bes.conf to use the bes user/group. I added (mkdir) /opt/Hyrax-1.4.2/log and set the log file to /opt/Hyrax*/log/bes.log
- chown/chgrp bes /opt/Hyrax-1.4.2/ - so the log file works and may otherwise be needed.
- Worked: the BES starts and I can talk to it with bescmdln
- Get Tomcat - the tomcat 5.5 code available from apt-get is hosed. Instead, download Tomcat from Apache (wget .../apache-tomcat-6.0.18.tar.gz)
- Unpack Tomcat in /opt - I made a sym link from /opt/tomcat to /opt/apache-tomcat-...
- Get the OLFS 1.4.2 webapp.jar from the web page, open it and cp the opendap.war into /opt/apache-tomcat-.../webapps
- Start Tomcat - You need to set JAVA_HOME for this to work (export JAVA_HOME=/usr/lib/jvm/java-6-sun)
- Worked: the BES was already running and the make bes-conf targets had installed sample data and modified the bes.conf file to match.
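The condensed build sketch mentioned above - a sketch of the pattern only, with tarball names assumed from the version numbers listed (run the install and user-creation steps as root):

prefix=/opt/Hyrax-1.4.2
export PATH=$prefix/bin:$PATH
# Same configure/make pattern for libdap first, then the BES, then each handler
tar -xzf libdap-3.8.2.tar.gz && cd libdap-3.8.2
./configure --prefix=$prefix
make && make install && cd ..
# Create the bes group and user the server runs as
groupadd bes
useradd -g bes -d /dev/null -s /bin/false bes
# Let the bes user write its log
mkdir -p $prefix/log && chown bes:bes $prefix/log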
2.2 Shrink the JeOS VM
There is a trick I used to get the JeOS VM small: Load development packages to build the server, then go back to a snapshot before anything was loaded except VMware Tools and load only those runtime (not development) packages needed. For example, the dist VM uses libnetcdf4 instead of libnetcdf4-dev which was needed to build the server.
- Run make clean in the Hyrax source dirs or remove the source.
- Use tar to package the binary build - the entire /opt/Hyrax-1.4.2 tree
- Copy to the shared folder directory to put that tar on the host OS - the shared folder directory mounts under /mnt when VMware Tools is running.
- Now, shut down the VM and revert to the earlier snapshot where VMware tools had just been installed but nothing else was done. I did this by copying the VM directory on the host and then using the revert option in WS, but I think the copy was not really necessary so long as you take snapshots of any state to which you want to return.
- Boot the snapshot2 VM
- Copy the tar file from the shared folder to /opt and expand.
- Add /opt/Hyrax-1.4.2/bin to PATH
- Use apt-get to add libcurl3 and libxml2
- Now the client getdap (which was built/installed as part of the libdap build) works
- Starting The BES
- added the bes user: Use vi to add the bes group to /etc/group, then useradd -g bes -d /dev/null -s /bin/false bes
- apt-get libhdf4g, libhdf5-serial-1.6.5-0, libnetcdf4
- BES worked - start and then test with bescmdln
- Get Tomcat working
- apt-get sun-java6-jre
- Get tomcat 6.0.18.tar.gz (using wget)
- Expand that tar.gz file in /opt. I made a symlink from /opt/tomcat to /opt/apache...
- Set JAVA_HOME to /usr/lib/jvm/java-6-sun (note that it's not exactly the same as the package name)
- Copy opendap.war to tomcat's webapps
- Set CATALINA_HOME to /opt/tomcat - I don't know if this was necessary
- Start tomcat (assuming the BES is still running from before)
- Test with getdap
- Hyrax works
Now we have a small VM with a working Hyrax and only the packages needed to run the code - take a snapshot.
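For reference, the Tomcat install above condenses to the following (run as root; the archive URL is an assumption, so verify it against Apache's archive layout):

cd /opt
wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.18/bin/apache-tomcat-6.0.18.tar.gz
tar -xzf apache-tomcat-6.0.18.tar.gz
ln -s /opt/apache-tomcat-6.0.18 /opt/tomcat
export JAVA_HOME=/usr/lib/jvm/java-6-sun
cp opendap.war /opt/tomcat/webapps
/opt/tomcat/bin/startup.sh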
2.2.1 The udev hack
I found that sometimes the VM would start in WS with networking broken. I don't see a pattern, but looking at the network devices using ifconfig -a, eth0 is hosed (it does not say 'UP'). To fix it, cd to /etc/udev/rules.d and edit the file 70-persistent-net.rules to remove the line about eth0 and edit the line for eth1, replacing 'eth1' with 'eth0'. Restart udev and networking using the eponymous scripts in /etc/init.d. Now ifconfig should show eth0 as 'UP'.
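In command form (file path as on Ubuntu 8.04; adjust if your release differs):

cd /etc/udev/rules.d
vi 70-persistent-net.rules    # delete the eth0 line; change 'eth1' to 'eth0' in the remaining line
/etc/init.d/udev restart
/etc/init.d/networking restart
ifconfig eth0                 # should now report UP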
2.3 Package the VM for distribution
Now set the VM so that the image will prompt the first person to start it for a password. Since we're distributing the VM with a known username and password, this provides some assurance that the password really will be changed. This is also a good time to make permanent the changes to PATH and JAVA_HOME. Lastly, really shrink the VM.
Edit /etc/bash.bashrc so that it calls /opt/init/initial-config.sh unless that has already been called (use a semaphore file)
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export CATALINA_HOME=/opt/tomcat
export hyrax_prefix=/opt/Hyrax-1.4.2
export PATH=$hyrax_prefix/bin:$PATH

if [ ! -e /etc/opt/initial-config-run ]; then
    /opt/init/initial-config.sh
    sudo touch /etc/opt/initial-config-run
fi
I added the environment variable stuff here, too.
Here's the shell script it runs:
#!/bin/sh
#
# Let's change the user's password
echo "Thank you for using the OPeNDAP Hyrax appliance"
echo "For the security of the appliance, we need you to change this user password now."
passwd
# You can add here any first user login actions that you require
Since the opendap user can run sudo, we could use sudo passwd opendap in place of passwd. The former would be slicker since the initial user would not be asked to type their password (which they have just entered and is well known) to then change their password.
2.3.1 Last steps
Remove anything that smacks of ssh. This means that if any ssh packages were loaded, remove them using apt-get --purge remove and also look for .ssh directories in login accounts. It's important to not distribute VMs with this stuff...
Now, shrink the disk files - to get the VM to fit in the smallest space without using compression, we need to do the following: Zero fill the disks (files, really) and then use a VMware tool to 'shrink' them. As root:
cat /dev/zero > zero.fill sync sleep 1 sync rm -f zero.fill
Stop the VM (shutdown) and in the host OS run vmware-vdiskmanager -k <disk file master> (there are likely a bunch of 'vmdk' files, the 'master' file is named something like Ubuntu JeOS-000008-cl1.vmdk and is relatively small. Ignore the ones name ...s001.vmdk.
I used zip to compress the resulting VM (which is a directory). I could not figure out how to run zip from the command line, but Fedora/Gnome has a decent 'Create Archive...' option. I chose zip because I know windows users have it.
3 Running the VM
Use VM server to run the VM.
3.1 Data Access
To access data, the VM will need to either have a copy of the data on its local disk or use a network share. Let's assume the latter.
On the host OS, export the share using NFS or Samba. It's a good idea to export the share as read-only. On the VM use apt-get to add samba or nfs and configure it to mount the remote share. Edit /opt/Hyrax-1.4.2/etc/bes/bes.conf so that it uses the remote share as the DataRoot directory.
3.2 Network Access
NB: I made these changes to the RSS copy of the VM only. jimg 09:59, 12 February 2009 (PST)
VMware Server has extensive documentation on this topic.
The current VM built with Ubuntu JeOS uses Bridged networking and a static IP, based on work/feedback from Marty Brewer at RSS, Inc. The static IP number is 192.168.0.100 and the DNS servers are set to the servers provided by the OpenDNS project. You may be able to use it as is. However, here are the files you need to edit to change these settings:
In /etc/network/interfaces change the IP number. Here's what the file looks like now:
#
In /etc/resolv.conf change the IP numbers of the two nameserver lines to local names servers if you don't want to use the OpenDNS servers.
The VM supports both bridged and NAT networking options and you can switch between these modes in the VMware Server Settings dialog. | http://docs.opendap.org/index.php/CheapStix | 2017-02-19T19:06:12 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.opendap.org |
No art submitted yet
The car pulled up as close to the building she could get it. The parking lot was fairly busy already, plenty of people joining in for the lunch rush in the middle of the week. Most places weren’t busy on Wednesday, but most places weren’t this high quality. Word got out right away that this new brand was amazingly delicious, like had top quality burgers. The hard part was due to the uniforms, about half the country wasn’t going to be getting a branch anytime soon. Luckily, Sarah’s state had legalized transformatives by the time she was a freshman, and she was fresh out of highschool now. And she was also hungry for money, and the restaurant paid six bucks above minimum wage even for starting employees. So of course, she had to do it. She’d worked at restaraunts before, so she had experience, and she was decently easy to talk with so she nailed the interview, now just came the training, and that was clearly going to be the hard part…
Just walking right in the door, there was a patron of the restaurant leaning out of her seat, her face gorged on a massive cock. Sarah nearly backed right out the door the moment she saw it. Just some girl, holding her long hair back, going to town on this thick horse cock. The woman who possessed said cock was done up in a cute little uniform, with a small skirt, a slightly open blouse with a nametag, and a thong rolled up down to her knees so she could let her meat do the talking. The woman was massively busty, and had long blonde hair and horse ears. She shot a wink to Sarah and stuck her tongue out. Sarah’s blush ripened redder than an apple, so she just looked away and sunk into the building to find the manager.
“Aha! There you are!” A woman said, coming out from behind the counter. The manager was a tall woman whose face stretched out into a muzzle and had fur all over her. She had pointed ears, whiskers, and a long tail, and needless to say huge breasts and a bulge stuffing the front of her tight skirt. “Sarah, glad you could make it in! Welcome to Thank God I’m Futa! Are you ready for your training?”
“Y-yup.” Sarah said, still trying to get used to the sight of her manager. The whole transformative thing was a new concept for most in her town, though they say the people up east in all the big cities were so used to it that it was almost mundane. “So, are those two back there really just kinda…?”
“Hmm? Oh, you mean Lynne over there?” The manager asked, looking over Sarah at the horse girl. “Of course~ That is a big part of our appeal is giving customers exactly what they want! Though, making a girl change her form costs a little extra… But yes, yes, as I’m sure you’re aware everyone who works for us is required to take some little itty pills and become a dick girl, as per uniform regulations. We appeal to a very hungry audience. Hungry for our burgers, and our cocks~”
Sarah nodded, finding the implications of the eatery’s apparent catch phrase appropriate. Everyone in the place was to some extent very into the sort of people who’d be serving them. The shy guy sitting in the corner nibbled on his burger, but when a cock was shoved towards him he accepted it happily in his hands and jerked off the hyena girl serving him. A group of girls came in and quickly picked out a waitress and got themselves a table and ordered fries for everyone. There was even a bar off to the side with a shark girl spreading her legs on the table, letting her twin dragon dicks with fins air out for the two patrons sipping cheap beer and suckling salty shark cock. Sarah’s hands awkwardly darted for her loins, pushing her skirt back against her. She tried to imagine a bulge there, just something taking up that space time and time, but she couldn’t. It was too odd, too out there. She shook her head out of it and nodded back. “Of course, miss…”
“Olaine.” The manager said, grinning. “Now, your interview was pretty good. We got a few people with more experience and who were already futas, but you had better availability. The other girls were busy strippers, go figure. However, you had said that you were a little concerned with the whole transformative bit. Now, that is understandable, and a big reasoning behind implanting our chain here! We hope to help people grow more accustomed with the things possible with transformatives, as well as discover exactly who they want to be. Up north they even have things called dick stores where you can just buy any change you want without doing the mail order stuff we have to go through. Here’s hoping soon enough this store can start serving futa pills along with fries.” Olaine said, her tail swaying as she talked on, leading Sarah through the double doors into the back. Sarah was frisked past a kitchen worked by a busy bunch of girls. Some were down on the floor sucking each other off real quick so they could get back to work, while others were well and in the flow of frying, washing, and serving. Olaine led Sarah back around a turn, and into a sizeable office.
Tucked in the corner on the office table was a neatly folded uniform shirt, a cute skirt, and a thong. As well, there was a neat tray of about four pills. “Three futanari pills, and a cleanser.” Olaine explained. “Should an emergency come up. We know there are rare conditions where a dick may not be best.”
“Yeah.” Sarah nodded, easily thinking up two or five, or maybe even a hundred. She was still on the fence of it. Holding the little pill container wasn’t much assuring. It was like a little day organized container, except three slots were labeled “Down to fuck” and the last one was “Gotta up and stop.” Sarah grabbed the clothes and started moving to the door to change, until she remembered where she was working. Olaine eyed her oddly, so Sarah went back into the center of the room, and started stripping. Olaine audibly purred as the brown haired girl pulled up her shirt, and tugged down her pants and panties. Sarah reached for the thong, but Olaine stopped her.
“No, no, keep it off. I’d like to see you take your pill now. I personally oversee all the growths.”
Hard to argue with that. Or with the huge paycheck. Sarah popped open one of the slots and slipped the pill in her mouth. It didn’t have much of a flavor, which was both good and bad depending on how Sarah looked at it. The effects set in almost right away. There was a quick burst near her crotch of something that felt like fire, but there was nothing really burning. Then without warning, a growth just jutted forward from her crotch, growing and growing. Sarah gasped and moaned. Her eyes rolled back like the most mind blowing orgasm was wracking her. She quaked and shivered through the whole thing, drool coming out of the corners of her mouth. Through squinted eyes she could nearly see Olaine rising from her desk, undoing her blouse, and wrapping both of her huge cat tits around Sarah’s newly grown cock. Sarah squealed, the soft feeling around her newly grown sensitive dick was too much. She could feel the tits rise up and down, constantly massaging back and forth against it, her new thing pulsating as it thickened up and grew out longer still.
There was a hot feeling, like she was squirting lava, and when Sarah opened her eyes again, she had a massive cock with a weird bulge near the base, and she had just splattered her manager with enough cream to cover a dozen cinnamon rolls. “Oops?” Sarah said, her first thought after she came down from her sky high orgasm.
“Mmm.” Olaine said, purring as she cleaned her face off. “Seems your genes had an interesting effect on the pill.” She said, squeezing the cock again. “You’ve got a dog cock from a plain set of pills. And a lot more cum than usual~ Which is just an effect from re-activating some of your male genes. This pretty good~ Damn, its long too. You’re going to end up a favorite, I can just tell!” Olaine grinned, rising up from her knees, her tits still splattered in cum. “Now, we’ll need to get some more exotic traits for you.”
“Exotic?” Sarah asked. “I-I have a dog dick!”
“No offense, but that is considered boring by a lot of our clients~” Olaine said. “If people wanted boring, they’d just go to Futa Queen and chew on foreskin, flappy disappointment, and fake grill marks. We are a high up establishment, and you’re going to be serving, so that means you gotta look the part. So, we’ll let you pick from this…” Olaine said, reaching on her counter to grab a decent sized tablet. Sarah looked at it and instantly saw a big, like, endless list of changes she could pick from. The variety was a little much, like being let loose with a blank check at a supermarket. There were things she never even considered sexually, a picture of a trillion animals with a percentage slider, cartoon characters somehow, and on and on and on. Sarah kept sliding along, trying to find something she could settle for, or even better feel excited for. This would be a big gig, and if she got a lot of hours it’d be easier to not have to reset every night. “You get to pick five things.” Olaine said, tapping Sarah on the head.
Five things?! She didn’t even know where to start with one! Did her dick count, or was that just number zero? “H-how about this busty one?”
“A fine choice.” Olaine said, tapping her chest proudly. “Most people go for that on first, not that there is anything wrong with flat chicks~” The cat girl mused, twisting one of Sarah’s nipples. Sarah gasped, sliding a little away. Olaine went back to her desk and slipped out a container of her own from the drawer, passing it over to Sarah. “Here ya go. Give it a shot, see how ya like it.” Sarah turned it over in her hand before deciding to take the plunge. She opened wide, and swallowed the pill.
Right away she was pretty glad she didn’t opt to put on her clothes before taking the pills. This new one was even quicker acting than the futanari pill was. Her hips thudded forward, expanding out to take up more space, and her flat chest spilled forward with two, perky breasts that kept swelling until she was in the upper echelon of double D’s. Sarah stared down at her body and how quickly she went from being the flattest girl in her school to have the absolute perfect body. Her breasts were big and filled her hands. Her hips were wide and her ass bouncy. She kept cupping all over herself like she was her very own long lost lover. No doubt, if her lips could move around (which was an option for a pill) she’d probably kiss herself too. Sarah settled after all the wiggling to look back at the tablet, a little more excited to look for some more changes, one hand taking breaks between swipes to cup her breast and massage her nipple.
“Hmm, what about this fox one?” Sarah asked, showing it over to Olaine. “I mean, I don’t really want to be a full on fox, because I’m kinda not completely a furry or whatever you call it, but Io always thought the ears and tails were really cute… and the noises they make… and I saw some good art online before and in my favorite video games.” She said, trying to hide her blush, but Olaine just had a big grin on her face.
“Relax hun~ Before this job, I was a crazy cat lady. Now I’m a little more literal~ We encourage everyone; pick something you really, really like. And lucky for you, we do have just the thing.” Olaine said, slipped to the corner to a little machine. Sarah didn’t know what she was doing, but she only had to tape a few buttons on the display and hit a switch. The thing looked like some dull espresso machine, but instead of coffee it spilled out a single bean shaped bill. Olaine lifted it and brought it over to Sarah. “By the way, don’t tell anyone that we already brought over our custom pill machine. We’d get in a little bit of trouble until some more laws are passed.”
“That thing can just make any pill right on the spot?” Sarah asked. She knew that after cancer was cured, things got a little wild, but she didn’t know at what point all of these crazy genetics altering stuff was. She really regretted sleeping through science class, because science was really kicking her ass. The cat woman only nodded, her tail swaying back and forth behind her. Sarah swallowed the pill, this one actually have a little bit of a rank taste, but that faded quickly, and what took its place was that sweet hot sensation that Sarah could see herself getting addicted to.
Every transformation was a burst of erotic fires that overtook her. Her ears grew in and her cock spewed cum all over the floor. Her tail grew in, swaying in the air with the fur growing along it, and her pussy dripped with her juices. “O-oh shiiit~” She gasped.
“Feels good, doesn’t it?” Olaine nodded.
“N-no!” Sarah gasped, recovered from her transformation high. “I just remembered, there was a fox girl in my graduating class! Like, not even one, five of them! I-I kind of want to be a bit more unique… Is there a way I could grow a few more tails?”
“Of course, though how many more exactly?”
“Like, eight more so I have nine?”
“A nine tails!” Olaine grinned. “Like the Pokemon, right?”
“Well, more like the Japanese thing…” Sarah replied, watching the woman head back over to the machine, plug in a few little things, and instantly getting the exact pill to do the exact thing she wanted. Sarah guzzled it down right away, jerking her cock off in tandem as her excitement grew for the sensation. And as always, it didn’t disappoint. Her cock spewed a long strand, hitting one of the walls of the office. A cluster of tails grew from behind her, one after the other. Each one growing plucked a string of Sarah like she was an instrument, and so she came along to it nine times.
Olaine seemed a little less thrilled about the cum in her office, but she still had to go down and pet the girl on the head, praising her for all the hard work she was already putting in. “Nine shots, more than enough to kill anything that moves… Better be careful with that hun, we can’t be putting down splash zones in the dining room.”
“S-sorry.” Sarah cooed. “Just got a little carried away.” She recovered from her pose down on her knees and stood all the way up, stretching and clicking things back into place after her little round. “A-alright, so only two more, right? Then can I get something for like, white hair with blue streaks?”
“I mean we did get you dicks and tails with no problem, so a dye job should be doable~” Olaine shrugged, plugging some things in, and getting a blue pill out from the contraption. Sarah took it, though was disappointed that the effects were a lot more subdued. Her hair turned white with blue tips, and her tails took on a similar look all over. Sarah held her tails in her hands, holding them happily, combing her fingers through the newly recolored fur.
“Alright, so only one more. I need to get weird with this one…” Sarah mused. All she knew for sure, was that she needed to do something to get popular with the customers, and make sure she kept the job. She had big plans for colleges, and she really needed the money, really really bad. So what could she do to ensure she would keep the job for a good long time? She wracked her brain, thinking carefully about it. What did she like when she went out to eat? What would make it perfect? “I got it! But it is a little weirder than the rest of the stuff I’ve thought up.”
“Well, let’s hear it!” Olaine said, a big smile on her face, encouraging as she could be. “What did you think of?”
Sarah got up from the ground and whispered it in Olaine’s ears. Her cat ears perked up, standing straight. “Oh that’s brilliant. Yes, yes! We can do that!” The cat girl said, going over and making the pill. She handed it over to Sarah, and patted her on the back. “Now, for your training. There is a floor manager out there already, and a few employees. We have a table set up with someone who was told you’re new and stuff. We usually sit down and explain everything so damn slow, but you already have worked at restaurants, and you don’t have to worry about explaining the specials. We only have two specials. Your cock, and bofa.”
“Bofa?” Sarah asked, turning the pill over in her hand.
“Bofa your nuts! Now get out there and get some orders!”
“Yes mam!” Sarah said, sliding on her TGIF branded thong up her fat ass and her shirt over her wobbly tits. She slammed the pill down, and wet out.
The customer waited at his table, idly looking around, waiting for his trainee to get the knots out of her performance. It was an easy job, and the food would be free in exchange for helping the girl learn how things at Thank God I’m Futa work. The floor manager came over to him once, making sure everything was still alright. Suddenly, from out of the back she came. “Hi, my name’s Sarah~” The kitsune girl said, plopping the menu down in front of him. While she did so, one of her two huge tits gently tapped against his cheek. The front of her shirt was slightly cold and wet, and her breasts were probably the biggest he’d seen in the store. Instinct took over, and he knew his job was to gauge her comfort with her job, so he peeled up her shirt.
One nipple was brown as chocolate, the other pink as a strawberry. He brought his lip up to one and suckled, Sarah gasping cutely and excitedly, like giggles from a performer losing her nerves. He sipped and his eyes went wide. Ice cream. Her tits were leaking milkshakes. He gasped, starting to suck with a lot more enthusiasm. With breasts so warm it was a bit of a surprise to get something so cold, but he wasn’t complaining. He sucked and sucked, and dragged the nipple against his teeth. He slammed both tits together and sucked as hard as he could on the twins, mixing the strawberry and the chocolate together. When he was out of breath, he stopped and sat back against his chair, gasping along as the woman regained her composure.
“I’m glad you like them, sir~” She grinned, crossing her legs cutely. “Now, what else can I get ya to drink?”
“That.” He said, pointing at her tits again. “Milk yourself into a cup. In front of me please.” He said, trying to regain composure but he was already sweating like a politician with money on the table.
“Mind if I help?” Another waitress asked, slipping a tray of shake glasses beneath Sarah’s orbiting tits. The waitress’s name tag read “Lynne”, and she was already slipping her horsecock inside of Sarah, her little thong the easiest thing to just slip to the side.
Sarah moaned, the horse girl’s fingers wrapping around her breasts and milking her. She wasn’t expecting another waitress to gang up on her, but then again it was just the feel of the place. No doubt, after all the pills, most of the waitresses were even hornier than the customers. Sarah shivered, trying to help Lynne aim her breasts to mostly hit the glasses, and also to avoid bumping into anything. Sarah slapped her hands against the table, though she had to be careful. With the power of the thrusts the table was sent wobbling, and inevitably some of the milkshake was missing the mark of the half a dozen glasses arranged just above her tits. Lynne milked quickly with skilled hands. Squirt after squirt of runny cream came spilling out. Sarah’s fat ass wobbled as she was bounced in and out of, her body wired and configured to perfectly take any size. At first it stung, but she adapted lightning quick. Her tails swayed in the air, tickling Lynne teasingly as the horse girl devastated the girl in front of her.
The further they went, the more that Sarah was spilling, all of the thrusting doing a good job to help her out. The cream filled a few glases, than a few more. Then finally, Lynne blasted off inside of Sarah. She nonchalantly slipped her cock out, stuffing it back up inside her dress, some of her cum runnily staining the front of her shirt. She giggled, and scooted off to make sure her tables were doing fine. “Phew~” Sarah shivered. “Alright, there’s your drink. Now, is there anything else I can help you with?”
“How much does your dick cost?”
“It comes with the purchase of any meal~” | http://docs-lab.com/submissions/1319/eat-at-dick-burgers | 2017-02-19T18:41:21 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs-lab.com |
Introduction
Gemini
Survey
Please complete our API Use Survey to help us improve your experience using the Gemini AP
REST API
WebSocket Feed
wss://api.sandbox.gemini.com
Documentation
Create your account
Go to the sandbox site to register for a test account to begin trading.
- use the website to get comfortable trading on Gemini
- use the API to validate your trading systems before deploying them against the real Gemini exchange
Your account will automatically be credited with test BTC, USD, and ETH. You may use these funds to trade, both through the web site and through the API.
Gemini's sandbox site does not support either depositing or withdrawing your test funds, which can only be used to trade on the sandbox exchange.
If you have any issues, or if you need to adjust your balances (to test insufficient funds handling, for example), contact [email protected].
Rate Limits
To prevent abuse, Gemini imposes rate limits on incoming requests as described in the Gemini API Agreement.
For public API entry points, we limit requests to 120 requests per minute, and recommend that you do not exceed 1 request per second.
For private API entry points, we limit requests to 600 requests per minute, and recommend that you not exceed 5 requests per second.
Use WebSocket APIs
Polling Order Status API endpoints may be subject to rate limiting: Gemini recommends using WebSocket API to get market data and realtime information about your orders and trades.
Batch cancels
If you want to cancel a group of orders, instead of making multiple Cancel Order requests, use one of Gemini's batch cancel endpoints:
Then use your Order Events WebSocket subscription to watch for notifications of:
- Order Events: Cancelled followed by Order Events: Closed
- under certain circumstances, a Order Events: Cancel Rejected event containing a
reasonfield explaining why the exchange could not fulfil your cancel request
Requests
There are two types of APIs below, the public ones and the private ones, which are subdivided into order placement, order status, and account status.
Public API invocation
Public APIs are accessible via GET, and the parameters for the request are included in the query string.
Private API invocation
To walk through the process of generating a private API invocation, we start with the request json itself
// The comments are for documentation purposes only as the JSON // spec does not permit comments. { // The generic elements, required for all POSTS "request": "/v1/order/status", "nonce": 123456, // Request-specific data "order_id": 18834 }
Whitespace is ignored by the server, and may be included if desired. The hashes are always taken on the base64 string directly, with no normalization, so whatever is sent in the payload is what should be hashed, and what the server will verify.
~$ base64 << EOF > { > "request": "/v1/order/status", > "nonce": 123456, > > "order_id": 18834 > } > EOF ewogICAgInJlcXVlc3QiOiAiL3YxL29yZGVyL3N0YXR1cyIsCiAgICAibm9uY2UiOiAxMjM0NTYs CgogICAgIm9yZGVyX2lkIjogMTg4MzQKfQo=
b64 = base64.b64encode("""{ "request": "/v1/order/status", "nonce": 123456, "order_id": 18834 } """) b64 = ='
In this example, the
api_secretis
1234abcd
echo -n =' | openssl sha384 -hmac "1234abcd" (std
hmac.new("1234abcd", b64, hashlib.sha384).hexdig'
The final request will look like this. The linebreaks are added for clarity, your http library may or may not put them in.
POST /v1/order/status Content-Type: text/plain Content-Length: 0 X-GEMINI-APIKEY: mykey X-GEMINI-PAYLOAD:ewogICAgInJlcXVlc3QiOiAiL3YxL29yZGVyL3N 0YXR1cyIsCiAgICAibm9uY2UiOiAxMjM0NTYsCgogICAgIm9yZGV yX2lkIjogMTg4MzQKfQo= X-GEMINI-SIGNATURE: 337cc8b4ea692cfe65b4a85fcc9f042b2e3f 702ac956fd098d600ab15705775017beae402be773ceee10719f f70d710f
Authentication
Gemini uses API keys to allow access to private APIs. You can obtain these by logging on and creating a key in Settings/API. This will give you both an "API Key" that will serve as your user name, and an "API Secret" that you will use to sign messages.
All requests must contain a nonce, a number that will never be repeated and must increase between requests. This is to prevent an attacker who has captured a previous request from simply replaying that request. We recommend using a timestamp at millisecond or higher precision. The nonce need only be increasing with respect to the session that the message is on.
Sessions
A single account may have multiple API keys provisioned. In this document, we'll refer to these as "sessions". All orders will be recorded with the session that created them. The nonce associated with a request needs to be increasing with respect to the session that the nonce is used on.
This allows multithreaded or distributed trading systems to place orders independently of each other, without needing to synchronize clocks to avoid race conditions.
In addition, some operations (such as
Cancel All Session Orders) act on the orders associated with a specific session.
Require Heartbeat
When provisioning a session key you have the option of marking the session as "Requires Heartbeat". The intention here is to specify that if connectivity to the exchange is lost for any reason, then all outstanding orders on this session should be canceled.
If this option is selected for a session, then if the exchange does not receive a message for 30 seconds, then it will assume there has been an interruption in service, and cancel all outstanding orders. To maintain the session, the trading system should send a heartbeat message at a more frequent interval. We suggest at most 15 seconds between heartbeats.
The heartbeat message is provided for convenience when there is no trading activity. Any authenticated API call will reset the 30 second clock, event if explicit heartbeats are not sent.
This feature is often referred to as "Cancel on Disconnect" on connection-oriented exchange protocols.
Payload
The payload of the requests will be a JSON object, which will be described in the documentation below. Rather than being sent as the body of the POST request, it will be base-64 encoded and stored as a header in the request.
All of them will include the request name and the nonce associated with the request. The nonce must be increasing with each request to prevent replay attacks.
Roles
Example of error response due to API key missing a role
{ "result":"error", "reason":"MissingRole", "message":"To access this endpoint, you need to log in to the website and go to the settings page to assign one of these roles [FundManager] to API key wujB3szN54gtJ4QDhqRJ which currently has roles [Trader]" }
Gemini uses a role-based system for private API endpoints so that you can separate privileges for your API keys.
By assigning different roles to different API keys, you can create
- one API key that can trade, and
- another API key that can withdraw BTC to a whitelisted address
You can configure which roles are assigned to your API keys by logging in to the Gemini Exchange website and going to API Settings to configure your API keys.
If you try to access an endpoint that requires a role you did not assign to your API key, you will get back a response with:
403status
- a JSON response body with
reasonset to
MissingRole, and
messageexplaining what role you need to add to your API key to use this endpoint
See Error Codes for more information about API error responses.
Trader
Assigning the Trader role to an API key allows this API key to:
- check balances
- place and cancel orders
- check the status of orders
- see all active orders
- see your trade history and volume
Fund Manager
Assigning the Fund Manager role to an API key allows this API key to
- check balances
- request new cryptocurrency deposit addresses
- withdraw cryptocurrency funds to whitelisted addresses
Endpoint summary
Here's a summary of which role you need to assign to your API key to use each endpoint in the API:
Client Order ID
Order Event subscription Accepted event showing client_order_id
[ { "type" : "accepted", "order_id" : "372456298", "event_id" : "372456299", "client_order_id": "20170208_example", "api_session" : "AeRLptFXoYEqLaNiRwv8", "symbol" : "btcusd", "side" : "buy", "order_type" : "exchange limit", "timestamp" : "1478203017", "timestampms" : 1478203017455, "is_live" : true, "is_cancelled" : false, "is_hidden" : false, "avg_execution_price" : "0", "original_amount" : "14.0296", "price" : "1059.54" } ]
Order Status endpoint for the same order, showing client_order_id
{ "avg_execution_price": "0.00", "client_order_id": "20170208_example", "exchange": "gemini", "executed_amount": "0", "id": "372456298", "is_cancelled": false, "is_hidden": false, "is_live": true, "order_id": "372456298", "original_amount": "14.0296", "price": "1059.54", "remaining_amount": "14.0296", "side": "buy", "symbol": "btcusd", "timestamp": "1478203017", "timestampms": 1478203017455, "type": "exchange limit", "was_forced": false }
Client order ID is a client-supplied order identifier that Gemini will echo back to you in all subsequent messages about that order.
Although this identifier is optional, Gemini strongly recommends supplying
client_order_id when placing orders using the New Order endpoint.
This makes it easy to track the Order Events: Accepted and Order Events: Booked responses in your Order Events WebSocket subscription.
Visibility
Your client order ids are only visible to the Gemini exchange and you. They are never visible on any public API endpoints.
Uniqueness
Gemini recommends that your client order IDs should be unique per trading session.
Allowed characters
Your client order ids should match against this PCRE regular expression:
[:\-_\.#a-zA-Z0-9]{1,100}.
Data Types
The protocol description below will contain references to various types, which are collected here for reference
Symbols and minimums
Symbols are formatted as
CCY1CCY2 where prices are in
CCY2 and quantities are in
CCY1, as this table makes explicit:
Public APIs
Symbols
import urllib2 base_url = "" # or, for sandbox # base_url = "" response = urllib2.urlopen(base_url + "/symbols") print(response.read())
$ curl ""
[ "btcusd", "ethusd", "ethbtc" ]
This endpoint retrieves all available symbols for trading
HTTP Request
GET
URL Parameters
None
Response
An array of supported symbols
Ticker
import urllib2 base_url = "" # or, for sandbox # base_url = "" response = urllib2.urlopen(base_url + "/pubticker/btcusd") print(response.read())
$ curl ""
{ "ask": "977.59", "bid": "977.35", "last": "977.65", "volume": { "BTC": "2210.505328803", "USD": "2135477.463379586263", "timestamp": 1483018200000 } }
This endpoint retrieves information about recent trading activity for the symbol.
HTTP Request
GET
URL Parameters
None
Response
The response will be an object
The volume field will contain information about the 24 hour volume on the exchange. The volume is updated every five minutes based on a trailing 24-hour window of volume. It will have three fields
Current Order Book
curl ""
import urllib2 base_url = "" # or, for sandbox # base_url = "" response = urllib2.urlopen(base_url + "/book/btcusd") print(response.read())
The above command returns JSON structured like this:
{ "bids": [ /* bids look like asks */ ], "asks": [ { "price": "822.12", // Note these are sent as strings "amount": "12.1" // Ditto }, ... ] }
This will return the current order book, as two arrays, one of bids, and one of asks
HTTP Request
GET
URL Parameters
If a limit is specified on a side, then the orders closest to the midpoint of the book will be the ones returned.
Response
The response will be two arrays
The bids and the asks are grouped by price, so each entry may represent multiple orders at that price. Each element of the array will be a JSON object
Trade History
curl "(date -d 2014-01-01 +%s)"
import urllib2 import datetime date = datetime.date(2015,1,1).strftime("%s") base_url = "" # or, for sandbox # base_url = "" response = urllib2.urlopen(base_url + "/trades/btcusd?since=%s" % date) print(response.read())
The above command returns JSON structured like this:
[ { "timestamp": 1420088400, "timestampms": 1420088400122, "tid": 155814, "price": "822.12", "amount": "12.10", "exchange": "gemini", "type": "buy" }, ... ]
This will return the trades that have executed since the specified timestamp.
Timestamps are either seconds or milliseconds since the epoch (1970-01-01). See the Data Types section about
timestamp for information on this.
Each request will show at most 500 records.
If no
since or
timestamp is specified, then it will show the most recent trades; otherwise, it will show the most recent trades that occurred after that timestamp.
HTTP Request
GET
URL Parameters
Response
The response will be an array of JSON objects, sorted by timestamp, with the newest trade shown first.
Current auction
import urllib2 base_url = "" # or, for sandbox # base_url = "" response = urllib2.urlopen(base_url + "/auction/btcusd") print(response.read())
$ curl ""
Before an auction opens
{ "closed_until_ms": 1474567602895, "last_auction_price": "629.92", "last_auction_quantity": "430.12917506", "last_highest_bid_price": "630.10", "last_lowest_ask_price": "632.44", "next_auction_ms": 1474567782895 }
After an auction opens
{ "last_auction_eid": 109929, "last_auction_price": "629.92", "last_auction_quantity": "430.12917506", "last_highest_bid_price": "630.10", "last_lowest_ask_price": "632.44", "next_auction_ms": 1474567782895, "next_update_ms": 1474567662895 }
After an indicative price has been published
{ "last_auction_eid": 110085, "most_recent_indicative_price": "632.33", "most_recent_indicative_quantity": "151.93847124", "most_recent_highest_bid_price": "633.26", "most_recent_lowest_ask_price": "633.83", "next_auction_ms": 1474567782895, "next_update_ms": 1474567722895 }
After the last indicative price has been published
{ "last_auction_eid": 110239, "most_recent_indicative_price": "632.305", "most_recent_indicative_quantity": "151.93847124", "most_recent_highest_bid_price": "633.24", "most_recent_lowest_ask_price": "633.54", "next_auction_ms": 1474567782895, "next_update_ms": 1474567782895 }
After the auction runs
{ "closed_until_ms": 1474567828771, "last_auction_price": "632.375", "last_auction_quantity": "243.21166049", "last_highest_bid_price": "633.13", "last_lowest_ask_price": "633.54", "next_auction_ms": 1474568008771 }
HTTP Request
GET
URL Parameters
None
Response
Response in an object with the following fields:
Auction history
curl "(date -d 2016-08-22 +%s)"
import urllib2 import datetime date = datetime.date(2015,1,1).strftime("%s") base_url = "" # or, for sandbox # base_url = "" response = urllib2.urlopen(base_url + "/trades/auction?since=%s" % date) print(response.read())
The above command returns JSON structured like this:
[ { "auction_id": 3, "auction_price": "628.775", "auction_quantity": "66.32225622", "eid": 4066, "highest_bid_price": "628.82", "lowest_ask_price": "629.48", "auction_result": "success", "timestamp": 1471902531, "timestampms": 1471902531225, "event_type": "auction" }, { "auction_id": 3, "auction_price": "628.865", "auction_quantity": "89.22776435", "eid": 3920, "highest_bid_price": "629.59", "lowest_ask_price": "629.77", "auction_result": "success", "timestamp": 1471902471, "timestampms": 1471902471225, "event_type": "indicative" }, ... ]
This will return the auction events, optionally including publications of indicative prices, since the specific timestamp.
Timestamps are either seconds or milliseconds since the epoch (1970-01-01). See the Data Types section about
timestamp for information on this.
Each request will show at most 500 records.
If no
since or
timestamp is specified, then it will show the most recent events. Otherwise, it will show the oldest auctions that occurred after that timestamp.
HTTP Request
GET
URL Parameters
Response
The response will be an array of JSON objects, sorted by timestamp, with the newest event shown first.
Order Placement APIs
New Order
Only limit orders are supported through the API at this point. All orders on the exchange are "Good Until Cancel" - they will stay active until either completely filled or cancelled.
If you wish orders to be automatically cancelled when your session ends, see the require heartbeat section, or manually send the cancel all session orders message.
// As before, the comments here are for documentation only; you may not put // any comments in the submitted JSON as per the JSON spec { // Standard headers` "request": "/v1/order/new", "nonce": <nonce>, // Request-specific items "client_order_id": "20150102-4738721", // A client-specified order token "symbol": "btcusd", // Or any symbol from the /symbols api "amount": "34.12", // Once again, a quoted number "price": "622.13", "side": "buy", // must be "buy" or "sell" "type": "exchange limit", // the order type; only "exchange limit" supported "options": ["maker-or-cancel"] // execution options; may be omitted for a standard limit order }
On success, this will return a JSON response that looks like
{ // These are the same fields returned by order/status "order_id": "22333", "client_order_id": "20150102-4738721", "symbol": "btcusd", "price": "34.23", "avg_execution_price": "34.24", "side": "buy", "type": "exchange limit", "timestamp": "128938491", "timestampms": 128938491234, "is_live": true, "is_cancelled": false, "executed_amount": "12.11", "remaining_amount": "16.22", "original_amount": "28.33" }
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
Order execution options
Note that
options is an array. If you omit
options or provide an empty array, your order will be a standard
limit order - it will immediately fill against any open orders at an equal or better price, then the remainder of
the order will be posted to the order book.
If you specify more than one option (or an unsupported option) in the
options array, the exchange will reject your order.
The available limit order
options are:
Result
Result will be the fields included in Order Status
Cancel Order
This will cancel an order. If the order is already canceled, the message will succeed but have no effect.
{ // Standard headers "request": "/v1/order/order/cancel", "nonce": <nonce>, // Request-specific items "order_id": 12345 }
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
Response
Result of
/order/status for the canceled order. If the order was already canceled, then the request will have no effect and the status will be returned.
Cancel All Session Orders
This will cancel all orders opened by this session.
This will have the same effect as heartbeat expiration if "Require Heartbeat" is selected for the session.
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
Response
The response will be a JSON object with the single field "result" with value "true"
Cancel All Active Orders
This will cancel all outstanding orders created by all sessions owned by this account, including interactive orders placed through the UI.
Typically Cancel All Session Orders is preferable, so that only orders related to the current connected session are cancelled.
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
Response
The response will be a JSON object with the single field "result" with value "true"
Order Status APIs
Order Status
Gets the status for an order
{ // Standard headers "request": "/v1/order/status", "nonce": <nonce>, // Request-specific items "order_id": 1234 // The order id from the response to the order/new request }
On success, this will return a JSON response that looks like
{ "order_id": "1234" "client_order_id": "20150102-4738721", "symbol": "btcusd", "exchange": "gemini", "price": "34.23", "avg_execution_price": "34.24", "type": "exchange limit", "timestampms": 12847817234, "timestamp": "12847817", "is_live": true, "is_cancelled": false, "executed_amount": "12.11", "remaining_amount": "16.22", "original_amount": "28.33" }
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
Response
Get Active Orders
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
This takes no parameters other than the general ones
Reponse
An array of the results of
/order/status for all your live orders.
Get Past Trades
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
Reponse
The response will be an array of trade information items.
Break Types
In the rare event that a trade has been reversed (broken), the trade that is broken will have this flag set. The field will contain one of these values
Get Trade Volume
Roles
The API key you use to access this endpoint must have the Trader role assigned. See Roles for more information.
HTTP Request
Parameters
Reponse
The response will be an array of up to 30 days of trade volume for each symbol.
- Each row represents 1d of trading volume. All dates are UTC dates.
- Partial trading volume for the current day is not supplied.
- Days without trading volume will be omitted from the array.
Fund Management APIs
Get Available Balances
This will show the available balances in the supported currencies
Roles
The API key you use to access this endpoint must have the Trader or Fund Manager role assigned. See Roles for more information.
HTTP Request
Parameters
No fields are expected in the request, other than the request name and the nonce
Response
An array of elements, with one block per currency
New Deposit Address
Sample payload
{ "request" : "/v1/deposit/btc/newAddress", "nonce" : "12345", "label" : "optional test label" }
New BTC deposit address
{ "currency" : "BTC", "address" : "n2saq73aDTu42bRgEHd8gd4to1gCzHxrdj", "label" : "optional test label" }
New ETH deposit address
{ "currency" : "ETH", "address" : "0xA63123350Acc8F5ee1b1fBd1A6717135e82dBd28", "label" : "optional test label" }
This will create a new cryptocurrency deposit address with an optional label.
Roles
The API key you use to access this endpoint must have the Fund Manager role assigned. See Roles for more information.
HTTP Request
where
:currency is a supported cryptocurrency, e.g.
btc or
eth.
Parameters
Response
An object with the following fields:
Withdraw Crypto Funds To Whitelisted Address
Sample payload for BTC withdrawal
{ "request":"/v1/withdraw/btc", "nonce":"12345", "address":"mi98Z9brJ3TgaKsmvXatuRahbFRUFKRUdR", "amount":"1" }
Sample successful BTC withdrawal response
{ "destination":"mi98Z9brJ3TgaKsmvXatuRahbFRUFKRUdR", "amount":"1", "txHash":"6ca7340d4dd0af1892f11d88d2b40d4846332d36b91ef60bbb844904de250941" }
Sample payload for ETH withdrawal
{ "request":"/v1/withdraw/eth", "nonce":"12345", "address":"0xA63123350Acc8F5ee1b1fBd1A6717135e82dBd28", "amount":"2.34567" }
Sample successful ETH withdrawal response
{ "address":"0xA63123350Acc8F5ee1b1fBd1A6717135e82dBd28", "amount":"2.34567", "txHash":"123abc1234" }
Error when account does not have whitelists enabled
{ "result":"error", "reason":"CryptoAddressWhitelistsNotEnabled", "message":"Cryptocurrency withdrawal address whitelists are not enabled for account 24. Please contact [email protected] for information on setting up a withdrawal address whitelist." }
Error when account has whitelists enabled but this specific address is not whitelisted
{ "result":"error", "reason":"CryptoAddressNotWhitelisted", "message":"'moXiuoPh6oe2edoFQvxbyxQZgiYkNzjXZ9' is not a whitelisted BTC address. Please contact [email protected] for information on adding addresses to your withdrawal whitelist." }
Before you can withdraw cryptocurrency funds to a whitelisted address, you need three things:
- cryptocurrency address whitelists needs to be enabled for your account
- the address you want to withdraw funds to needs to already be on that whitelist
- an API key with the Fund Manager role added
Contact [email protected] for information on setting up whitelists and adding addresses. See Roles for more information on how to add the Fund Manager role to the API key you want to use.
Roles
The API key you use to access this endpoint must have the Fund Manager role assigned. See Roles for more information.
HTTP Request
where
:currency is a supported cryptocurrency, e.g.
btc or
eth.
Parameters
Response
An object with the following fields:
Session APIs
Heartbeat
This will prevent a session from timing out and canceling orders if the require heartbeat flag has been set. Note that this is only required if no other private API requests have been made. The arrival of any message resets the heartbeat timer.
HTTP Request
Parameters
No fields are expected in this request other than the request name and the nonce
Response
The response will be a JSON object with a single field "result" with value "true"
Error Codes
If a response is in error, then the HTTP response code will be set to reflect this, and a JSON body will be returned that will contain information about the failure.
HTTP Error Codes
Error payload
{ "result": "error", "reason": "BadNonce", "message": "Out-of-sequence nonce <1234> precedes previously used nonce <2345>" }
In the event of an error, a non-200 error code will be returned, and the response body will be a json object with three fields:
result, which will always be "error"
reason, which will be one of the strings listed in the table below
message, a human-readable English string indicating additional error information.
Difference from Bitfinex
In order to make it easy for existing market participants to use Gemini, we have attempted to be compatible with Bitfinex.com where possible. Differences are highlighted below. If you encounter other compatibility-breaking differences, please contact us at [email protected].
Stats API
This has been removed.
Authentication
The headers have been renamed to
X-GEMINI-APIKEY,
X-GEMINI-PAYLOAD, and
X-GEMINI-SIGNATURE
Lends and Margin Trading
The APIs relating to offers, lends, and margin orders have been removed.
Order Types
The only order type supported through the order/new API are limit orders.
Hidden Orders
Auction-only orders are the only type of undisplayed liquidity.
Replace Order Removed
There is no "Replace Order" API. Instead, cancel and place a new order. There is no way to maintain time priority across an operation like this.
Sessions and Require Heartbeats
These are features specific to Gemini. See sessions for more information.
Timestamp
We recommend that you use the
timestampms field if you are implementing against our API. The
timestamp field is included for compatibility with Bitfinex, but the behavior is inconsistent; sometimes it's an integer, sometimes it has a decimal point, and other times it is returned as a string.
Revision History | https://docs.gemini.com/rest-api/ | 2017-02-19T18:34:38 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.gemini.com |
Managing Organisations and Users
Chances are that you are not the only one needing to access your logs. Luckily, elmah.io offers great features in order to manage the users in your organisation and to specify who should be allowed access to what. In order to manage access, you will need to know about the concepts of users and organisations.
A user represents a person wanting to access one or more logs. Each user has its own login using username/password or a social provider of choice. A user can be added to one or more organisations. Each user has an access level within the organisation as well as an access level on each log. The access level on the organisation and the logs doesn't need to be the same.
An organisation is a collection of users and their role inside the organisation. You will typically only need a single organisation, representing all of the users in your company needing to access one or more logs on elmah.io. Your elmah.io subscription is attached your organization and everyone with administrator access to the organization, will be able to manage the subscription.
Adding users to an organisation
To assign users to a log, you will need to add them to the organisation first. When hovering the organisation name in either the left menu or on the dashboard, you will see a small gear icon. When clicking the icon, you will be taken to the organisation settings page:
At first, the user creating the organisation will be the only one in the list. To add a new user to the list, input the user's email or name in the textbox below Add new user. The dropdown will show a list of users on elmah.io matching your query.
Each user needs to sign up on elmah.io before being visible in the Add new user list.
When the new user is visible in the dropdown, click the user and select an access level. The chosen access level decides what the new user is allowed to do inside the organisation. Read users are only allowed to view the organisation, while Administrator users are allowed to add new users and delete the entire organisation and all logs beneath it. The access level set for the user in the organisation, will become the user's access level on all new logs inside that organisation as well. Let's add a new user to the organisation:
To change the access level on an already added user, click one of the grouped buttons to the right of the user's name. Changing a user's access level on the organisation won't change the users access level on each log. To delete a user from the organisation, click the red delete button to the far right.
When a user is added to an organisation, the user will automatically have access to all new logs created in that organisation. For security reasons, a new user added to the organisation, will not have access to existing logs in the organisation. To assign the new user to existing logs, assign an access level on each log shown beneath the user. The list of logs can be opened by clicking the dropdown button to the right of the user.
Awarding a user Administrator on a log, doesn't give them Administrator rights on the organisation.
To assign a user to all logs, click the None, Read, Write or Administrator buttons in the table header above the list of logs.
This article was brought to you by the elmah.io team. elmah.io is the best error management system for .NET web applications. We monitor your website, alert you when errors start happening and help you fix errors fast. | https://docs.elmah.io/managing-organisations-and-users/ | 2017-11-18T00:31:19 | CC-MAIN-2017-47 | 1510934804125.49 | [array(['../images/organisation_settings.png', 'Organisation Settings'],
dtype=object)
array(['../images/add_user_to_org.png', 'Add User to Organisation'],
dtype=object) ] | docs.elmah.io |
Using Latency and Weighted Resource Record Sets in Amazon Route 53 to Route Traffic to Multiple Amazon EC2 Instances in a Region (Ohio) region and you want to distribute requests across all three IPs evenly for users for whom US East (Ohio) is the appropriate region. Just one Amazon EC2 instance is sufficient in the other regions, although you can apply the same technique to many regions at once.
To use latency and weighted resource record sets in Amazon Route 53 to route traffic to multiple Amazon EC2 instances in a region
Create a group of weighted resource record sets for the Amazon EC2 instances in the region. Note the following:
Give each weighted resource record set the same value for Name (for example,
us-east.example.com) and Type.
For Value, specify the value of one of the Elastic IP addresses.
If you want to weight the Amazon EC2 instances equally, specify the same value for Weight.
Specify a unique value for Set ID for each resource record set.
If you have multiple Amazon EC2 instances in other regions, repeat Step 1 for the other regions. Specify a different value for Name in each region.
For each region in which you have multiple Amazon EC2 instances (for example, US East (Ohio)), create a latency alias resource record set. For the value of Alias Target, specify the value of the Name field (for example,
us-east.example.com) that you assigned to the weighted resource record sets in that region.
For each region in which you have one Amazon EC2 instance, create a latency resource record set. For the value of Name, specify the same value that you specified for the latency alias resource record sets that you created in Step 3. For Value, specify the Elastic IP address of the Amazon EC2 instance in that region.
For more information about creating resource record sets, see Creating Resource Record Sets by Using the Amazon Route 53 Console. | http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/TutorialLBRMultipleEC2InRegion.html | 2017-11-18T01:22:19 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.aws.amazon.com |
Keywords: Mental Health Notes for TRB-TET exam, study material in tamil for TRB-TET examinations, Mental health of child, TNTET Examination, History of India, National Affairs, TRB,Tamil Nadu Teachers Recruitment Board Exam, Tamil QuizTNPSC Previous Question, Important questions, General Knowledge for All competitive Exam, latest notification,Latest Current Affairs,, Applied science question, Model Question Papers,Tamil Books,TET previous question paper, Tamil Notes, TNPSC online exam, TRB exam question, ,Tamil textbooks, General Tamil Question and answer, Chemistry Questions, Recruitment Notification, Exam Notification,Biology quizTNPSC Model Question Paper, TET English, Indian Economics Question, TET syllabus, Tamil Objective questions, , Tamil GK, Physics Questions,TET news, TET Exam study materials, TET Science,Geography Questions, ,TNPSC News, TET Model Papers, | http://docs.tnpscquestionpapers.com/2013/08/mental-health-tet-trb-materials-tamil.html | 2017-11-18T01:01:28 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.tnpscquestionpapers.com |
Postgres database details
Remote Postgres database
This is a bit annoying to set up, but you can configure Zulip to use a dedicated postgres server by setting the REMOTE_POSTGRES_HOST variable in /etc/zulip/settings.py, and configuring Postgres certificate authentication (see the Postgres documentation on SSL/TLS connections and libpq SSL support for how to set this up and deploy the certificates) to make the DATABASES configuration in zproject/settings.py work (or override that configuration).
If you want to use a remote Postgresql database, you should configure the information about the connection with the server. You need a user called “zulip” in your database server. You can configure these options in /etc/zulip/settings.py (the below descriptions are from the Postgresql documentation):
- REMOTE_POSTGRES_HOST: Name or IP address of the remote host
- REMOTE_POSTGRES_SSLMODE: SSL Mode used to connect to the server, different options you can use are:
- disable: I don’t care about security, and I don’t want to pay the overhead of encryption.
- allow: I don’t care about security, but I will pay the overhead of encryption if the server insists on it.
- prefer: I don’t care about encryption, but I wish to pay the overhead of encryption if the server supports it.
- require: I want my data to be encrypted, and I accept the overhead. I trust that the network will make sure I always connect to the server I want.
- verify-ca: I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server that I trust.
- verify-full: I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server I trust, and that it’s the one I specify.
Then you should specify the password of the user zulip for the database in /etc/zulip/zulip-secrets.conf:
postgres_password = xxxx
Finally, you can stop your database on the Zulip server via:
sudo service postgresql stop sudo update-rc.d postgresql disable
In future versions of this feature, we’d like to implement and document how to the remote postgres database server itself automatically by using the Zulip install script with a different set of puppet manifests than the all-in-one feature; if you’re interested in working on this, post to the Zulip development mailing list and we can give you some tips.
Debugging postgres database issues¶
When debugging postgres issues, in addition to the standard
pg_top
tool, often it can be useful to use this query:
SELECT procpid,waiting,query_start,current_query FROM pg_stat_activity ORDER BY procpid;
which shows the currently running backends and their activity. This is similar to the pg_top output, with the added advantage of showing the complete query, which can be valuable in debugging.
To stop a runaway query, you can run
SELECT pg_cancel_backend(pid int) or
SELECT pg_terminate_backend(pid int) as the ‘postgres’
user. The former cancels the backend’s current query and the latter
terminates the backend process. They are implemented by sending SIGINT
and SIGTERM to the processes, respectively. We recommend against
sending a Postgres process SIGKILL. Doing so will cause the database
to kill all current connections, roll back any pending transactions,
and enter recovery mode.
Stopping the Zulip postgres database¶
To start or stop postgres manually, use the pg_ctlcluster command:
pg_ctlcluster 9.1 [--force] main {start|stop|restart|reload}
By default, using stop uses “smart” mode, which waits for all clients to disconnect before shutting down the database. This can take prohibitively long. If you use the –force option with stop, pg_ctlcluster will try to use the “fast” mode for shutting down. “Fast” mode is described by the manpage thusly:. If this still does not help, the postmaster process is killed. Exits with 0 on success, with 2 if the server is not running, and with 1 on other failure conditions. This mode should only be used when the machine is about to be shut down.
Many database parameters can be adjusted while the database is running. Just modify /etc/postgresql/9.1/main/postgresql.conf and issue a reload. The logs will note the change.
Debugging issues starting postgres¶
pg_ctlcluster often doesn’t give you any information on why the database failed to start. It may tell you to check the logs, but you won’t find any information there. pg_ctlcluster runs the following command underneath when it actually goes to start Postgres:
/usr/lib/postgresql/9.1/bin/pg_ctl start -D /var/lib/postgresql/9.1/main -s -o \ '-c config_file="/etc/postgresql/9.1/main/postgresql.conf"'
Since pg_ctl doesn’t redirect stdout or stderr, running the above can give you better diagnostic information. However, you might want to stop Postgres and restart it using pg_ctlcluster after you’ve debugged with this approach, since it does bypass some of the work that pg_ctlcluster does.
Postgres Vacuuming alerts¶
The
autovac_freeze postgres alert from
check_postgres is
particularly important. This alert indicates that the age (in terms
of number of transactions) of the oldest transaction id (XID) is
getting close to the
autovacuum_freeze_max_age setting. When the
oldest XID hits that age, Postgres will force a VACUUM operation,
which can often lead to sudden downtime until the operation finishes.
If it did not do this and the age of the oldest XID reached 2 billion,
transaction id wraparound would occur and there would be data loss.
To clear the nagios alert, perform a
VACUUM in each indicated
database as a database superuser (
postgres).
See for more details on postgres vacuuming. | http://zulip.readthedocs.io/en/1.6.0/prod-postgres.html | 2017-11-18T00:51:55 | CC-MAIN-2017-47 | 1510934804125.49 | [] | zulip.readthedocs.io |
r.connect(options) → connection
Create a new connection to the database server. The keyword arguments are:
host: host of the RethinkDB instance. The default value is
localhost.
port: the driver port, by default
28015.
db: the database used if not explicitly specified in a query, by default
test.
user: the user account to connect as (default
admin).
password: the password for the user account to connect as (default
'', empty).
timeout: timeout period in seconds for the connection to be opened (default
20).
ssl: a hash of options to support SSL connections (default
None). Currently, there is only one option available, and if the
ssloption is specified, this key is required:
ca_certs: a path to the SSL CA certificate.
If the connection cannot be established, a
ReqlDriverError exception will be thrown..
The RethinkDB Python driver includes support for asynchronous connections using Tornado and Twisted. Read the asynchronous connections documentation for more information.
Note: Currently, the Python driver is not thread-safe. Each thread or multiprocessing PID should be given its own connection object. (This is likely to change in a future release of RethinkDB; you can track issue #2427 for progress.)
Example: Open a connection using the default host and port, specifying the default database.
conn = r.connect(db='marvel')
Example: Open a new connection to the database.
conn = r.connect(host='localhost', port=28015, db='heroes')
Example: Open a new connection to the database, specifying a user/password combination for authentication.
conn = r.connect(host='localhost', port=28015, db='heroes', user='herofinder', password='metropolis')
Example: Open a new connection to the database using an SSL proxy.
conn = r.connect(host='localhost', port=28015, ssl={'ca_certs': '/path/to/ca.crt'})
Example: Use a
with statement to open a connection and pass it to a block. Using this style, the connection will be automatically closed when execution reaches the end of the block.
with r.connect(db='marvel') as conn: r.table('superheroes').run(conn)
Couldn't find what you were looking for?
© RethinkDB contributors
Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. | http://docs.w3cub.com/rethinkdb~python/api/python/connect/ | 2017-11-18T00:30:51 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.w3cub.com |
When prompted with "FINISH SETTING UP ALL FUNNEL STEPS TO USE FUNNELS, there are a few steps to follow:
First of all, open the Funnel in Question.
Below the list of Funnel Steps is a Dropdown labeled: OTHER FUNNEL STEPS
Select a template for each page that does not have one set:
Once you have selected a template, you can either choose to keep the page, or delete it:
After you have set a template / deleted the Funnels in question, you will no longer receive this prompt and your page can go live.
Did this answer your question? | http://docs.clickfunnels.com/page-editor/troubleshooting/other-funnel-steps-what-does-the-clickfunnels-finish-setting-up-all-funnel-steps-to-use-funnel-warning-mean | 2017-11-18T00:32:09 | CC-MAIN-2017-47 | 1510934804125.49 | [array(['https://downloads.intercomcdn.com/i/o/30018650/51c91f9635f7ec2f48011159/OFS4.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/30018663/5360a627b6a3febde7663491/OFS.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/30018707/be5a488a32c5d7a5fb46e047/OFS2.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/30018734/fc786c6aeba44d959c328d4d/OFS3.png',
None], dtype=object) ] | docs.clickfunnels.com |
The Jira integration provides DivvyCloud the ability to create Jira tasks, and is compatible with all DivvyCloud resources. As an example, you can create a Jira task if a bot locates an Instance with SSH open to the world.
Follow the steps below to configure and leverage the integration.
1. Update DivvyCloud - In the DivvyCloud console, click on Integrations on the left side of the screen. Then click edit and input your username, password, and server to access your Jira account.
Local User Jira
Currently we only support local Jira users.
Once this is done you can leverage the DivvyCloud Jira action within your Bot configurations.
2. Resulting Tasks - Once you run your bot and it matches resources, it will create Jira tasks in your Jira console. Log in to your Jira account to see the new tasks. | https://docs.divvycloud.com/docs/jira-integration | 2019-02-15T22:01:11 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.divvycloud.com |
After reading Hampton Sides’ GHOST STORIES: THE FORGOTTEN EPIC STORY OF WORLD WAR II’S MOST DRAMATIC MISSION that deals with the treatment of American POWs by the Japanese during World War II it fosters the bizarre wonderment about people’s inhumanity toward people. Hampton Sides, the author of numerous books that include IN THE KINGDOM OF ICE and HELLHOUND ON HIS TRAIL, concentrates on the January, 1945 rescue of 513 American and British POWs from the hellish Japanese POW camp at Cabanatuan in the Philippines. Sides has done a significant amount of research interviewing survivors, those that rescued them, and mined the memoirs and secondary material dealing with this amazing operation. Sides immediately sets the tone of his narrative by describing through Private First Class Eugene Nielson’s eyes the mass burning of POWs on Palawan Island by the Japanese. The goal was to burn alive 150 POWs, of which, after a number escaped, eleven survived.
After General Douglas MacArthur had landed on the island of Leyte he dispatched General Walter Krueger, the Commander of the US Sixth Army toward Manila. As his forces neared the city of Cabanatuan he came across Major Robert Lapham who led a band of Filipino insurgents against the Japanese. Krueger learned there were roughly 500 POWs, many survivors of the Bataan Death March and Corregidor, remaining in the Cabanatuan camp. Lapham also learned there were 8-9,000 Japanese soldiers around the city. Army intelligence understood Japanese contempt for POWs in general and feared that the remainder of these men who would suffer a horrible death at the hands of the Japanese if nothing was done. With 27% of all POWs killed by the Japanese, Krueger needed little convincing to attempt a rescue mission, an action that forms the basis of Sides intimate and at times horrific narrative.
(After the successful US Army Ranger liberation of POWs from the Cabanatuan camp)
Sides introduces all the major characters involved in the mission from Lt. Colonel Henry Mucci, the Commander of the Ranger Battalion that would carry out the rescue, Captain Robert Prince, the assault commander and the man who implemented the strategy needed, to Dr. Ralph Emerson Hibbs who did his best to keep the POWs alive. American soldiers had no concept of the Japanese cultural view of surrender. They had never been trained in the concept or how to behave as a POW. Since the Japanese culture saw surrender as cowardice and dishonorable their treatment of those who did surrender was appalling.
Sides structures the narrative by alternating chapters between the plight of the POWs from their capture, the Bataan Death March, their treatment at Camp O’Donnell, to their incarceration at Cabanatuan; with the training and implementation of the Army Ranger assault on the camp, and the resulting freeing of the POWs. The Japanese Commander, Lt-General Masaharu Homma actually believed that 25,000 POWs could be taken to Cabanatuan. He believed that they could march to the camp, however he had little knowledge of their health and strength, and that the prisoner figure was closer to 100,000 resulting in a murderous calamity.
(US Army Ranger, Capt. Robert Prince)
Sides does a superb job describing the recruitment and training of the Army Rangers. He provides a number of character profiles of the men and allows the reader to feel as if they know them. They would move out on January 28, 1945 along with their Filipino allies, without whom the mission would have been doomed. These Filipinos led by Captains Eduardo Joson and Juan Pajota knew the topography of the region as well as having important insights into Japanese strategy. Side’s offers intimate details of the inhuman conditions that existed at Cabanatuan. The POWs lacked food leading to malnutrition and starvation, suffered beheadings, bayoneting, and torture and human cruelty that was unimaginable. Sides takes us back to 1942 and describes the three years of captivity. Food became an obsession to the point where POWs actually traded recipes, and perhaps their happiest moment occurred on Christmas day, 1942 when Red Cross packages arrived. For the POWs, who had learned to rely on themselves during the Great Depression “self-reliance” became their mantra as “stealing, hoarding and scheming” dominated their behavior. The key for the Rangers was to complete the rescue before the Japanese killed all of their prisoners. The Rangers were “flying blind” because no amount of training could have prepared them for what they were about to attempt.
(Lt. Colonel Henry Mucci)
As the narrative progresses Sides introduces many important individuals. One of the most interesting was Clara Fuentes, a.k.a. “High Pockets,” a.k.a. Madame Isubaki, a.k.a. Claire Phillips, an American spy who ran a night club that was a clearing house for information and used the proceeds of her business to supply medicine, clothing and whatever supplies could be smuggled into the camps. Her story was one of the many amazing ones that Sides offers.
Sides places the reader next to the Army Rangers as they crawl a good part of the thirty miles to reach their target. We witness the thought processes of them. The rescuers were appalled at what they saw, in particular the condition of the POWs as many were emaciated and sickly. What is interesting is that once the escape takes place and the men have to march miles and miles to freedom they take on a different persona as their pride is somewhat restored and they dig deep down and find strength and emotions that they thought that the Japanese had beaten out of them.
(Many of the American soldiers rescued from the Cabanatuan POW Camp in 2/1945)
Sides follows the narrative with an epilogue that touches the heart as he describes the voyage on the USS Anderson through enemy waters to return to the United States and a hero’s welcome. Sides then summarizes how a number of the US Army Rangers and the men they freed lived the remainder of their lives. GHOST WARS is a triumph of the human spirit that I recommend to all.
(American POWS liberated from the Cabanatuan camp in the Philippines in 2/1945) | https://docs-books.com/2016/09/08/ghost-stories-the-forgotten-epic-story-of-world-war-iis-most-dramatic-mission-by-hampton-sides/ | 2019-02-15T21:36:25 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs-books.com |
Datastore 2.3.0 User Guide Execute a query Queries are executed in background and you can execute other queries while waiting for the results. In the history panel, the query icon shows that the query is in progress. While the query execution is in progress, the same user cannot execute it again until the results are displayed. However, another user can execute the query if it is not shared. Execute the query from the main screen: Select queryTo display results in Right click query in list and select :ExecuteExecute in panel 2P1 or P2Double click query in query tree of navigator paneP1 by defaultFrom P1/P2 header comboThe panel that contains the combo box If some of the arguments or values are missing, the Query Wizard prompts you to:Enter the required values.Click OK. If your query requests records that are located in a partition that has been archived or purged, your query is rejected and a warning message is displayed. A dialog window displays the list of the partitions that are needed for the query.You can copy this list to the clipboard and send it to your administrator for him to mount the corresponding partitions. When the execution has ended, the results are displayed in the selected result panel if they are available right away. However if it is a long duration query, a notification is displayed and a link provided to access the execution results. Related Links | https://docs.axway.com/bundle/Datastore_230_UserGuide_allOS_en_HTML5/page/Content/UserGuide/Datastore/Tasks/Datastore_tasks/Query_Wizard/t_query_exec.htm | 2019-02-15T20:54:52 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.axway.com |
Visit Mattermost to create webhook integration
First, you'll need to set up a new incoming webhook in your team's Mattermost configuration. An incoming webhook is a method for Mattermost to receive incoming messages to be posted to your Mattermost team from external services.
Add your Mattermost webhook to Polyaxon deployment config
Now you can add your Mattermost's webhook to the integrations' section:
integrations: mattermost: - url: url1 - url: url2
More automation with Zapier
You can also go further and connect other popular Polyaxon integrations to Mattermost using Zapier. | https://docs.polyaxon.com/integrations/mattermost/ | 2019-02-15T22:03:34 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.polyaxon.com |
The two main ways of reducing the size of the Player are by making a proper Release build within Xcode and by changing the Stripping Level within Unity.
It is expected that final release builds are made using the Xcode command Product > Archive. Using this command ensures that build is made with release configuration and all the debug symbols are stripped. After issuing this command, Xcode switches to the Organizer window Archives tab. For guidelines on how to calculate app size and other size-reducing tips, see Apple’s Technical Q&A on Reducing the size of my App.
Note: We recommend you have some small extra margin for error when aiming for an over-the-air download limit (which currently is 150MB).
Activate the size optimizations for Mono scripting backendA framework that powers scripting in Unity. Unity supports three different scripting backends depending on target platform: Mono, .NET and IL2CPP. Universal Windows Platform, however, supports only two: .NET and IL2CPP. More info
See in Glossary builds by stripping work in the following way:
Strip assemblies level: the scriptsA piece of code that allows you to create your own Components, trigger game events, modify Component properties over time and respond to user input in any way you like. More info
See in Glossary’.
Note that Micro mscorlib is a heavily stripped-down version of the core library. Only those items that are required by the Mono runtime in Unity remain. Best practice for using micro mscorlib is not to use any classes or other features of .NET that are not required by your application. GUIDs are a good example of something you could omit; they can easily be replaced with custom made pseudo GUIDs and doing this would result in better performance and app size.
Refer to documentation on managed bytecode stripping with IL2CPP for more information
Note: Tt can sometimes be difficult to determine which classes are getting stripped in error even though the application requires them. You can often get useful information about this by running the stripped application on the simulator and checking the Xcode console for error messages.
An empty project would take less than 22 MB in the App Store if all the size optimizations were turned off. With code stripping, the empty sceneA Scene contains the environments and menus of your game. Think of each unique Scene file as a unique level. In each Scene, you place your environments, obstacles, and decorations, essentially designing and building your game in pieces. More info
See in Glossary with just the main cameraA component which creates an image of a particular viewpoint in your scene. The output is either drawn to the screen or captured as a texture. More info
See in Glossary can be reduced to less than 12 MB in the App Store (zipped and DRM attached)..
2018–06–14 Page amended with limited editorial review
2017–14–06 - Upated Stripping with IL2CPP section
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/Manual/iphone-playerSizeOptimization.html | 2019-02-15T22:10:53 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.unity3d.com |
Semaphores¶
A semaphore is a kernel object that implements a traditional counting semaphore.
Concepts¶
Any number of semaphores can be defined. Each semaphore is referenced by its memory address.
A semaphore has the following key properties:
- A count that indicates the number of times the semaphore can be taken. A count of zero indicates that the semaphore is unavailable.
- A limit that indicates the maximum value the semaphore’s count can reach.
A semaphore must be initialized before it can be used. Its count must be set to a non-negative value that is less than or equal to its limit.
A semaphore may be given by a thread or an ISR. Giving the semaphore increments its count, unless the count is already equal to the limit.
A semaphore may be taken by a thread. Taking the semaphore decrements its count, unless the semaphore is unavailable (i.e. at zero). When a semaphore is unavailable a thread may choose to wait for it to be given. Any number of threads may wait on an unavailable semaphore simultaneously. When the semaphore is given, it is taken by the highest priority thread that has waited longest.
Note
The kernel does allow an ISR to take a semaphore, however the ISR must not attempt to wait if the semaphore is unavailable.
Implementation¶
Defining a Semaphore¶
A semaphore is defined using a variable of type
struct k_sem.
It must then be initialized by calling
k_sem_init().
The following code defines a semaphore, then configures it as a binary semaphore by setting its count to 0 and its limit to 1.
struct k_sem my_sem; k_sem_init(&my_sem, 0, 1);
Alternatively, a semaphore can be defined and initialized at compile time
by calling
K_SEM_DEFINE.
The following code has the same effect as the code segment above.
K_SEM_DEFINE(my_sem, 0, 1);
Giving a Semaphore¶
A semaphore is given by calling
k_sem_give().
The following code builds on the example above, and gives the semaphore to indicate that a unit of data is available for processing by a consumer thread.
void input_data_interrupt_handler(void *arg) { /* notify thread that data is available */ k_sem_give(&my_sem); ... }
Taking a Semaphore¶
A semaphore is taken by calling
k_sem_take().
The following code builds on the example above, and waits up to 50 milliseconds for the semaphore to be given. A warning is issued if the semaphore is not obtained in time.
void consumer_thread(void) { ... if (k_sem_take(&my_sem, K_MSEC(50)) != 0) { printk("Input data not available!"); } else { /* fetch available data */ ... } ... }
Suggested Uses¶
Use a semaphore to control access to a set of resources by multiple threads.
Use a semaphore to synchronize processing between a producing and consuming threads or ISRs.
Configuration Options¶
Related configuration options:
- None.
API Reference¶
- group
semaphore_apis
Defines
K_SEM_DEFINE(name, initial_count, count_limit)¶
Statically define and initialize a semaphore.
The semaphore can be accessed outside the module where it is defined using:
extern struct k_sem <name>;
- Parameters
name: Name of the semaphore.
initial_count: Initial semaphore count.
count_limit: Maximum permitted semaphore count.
Functions
- void
k_sem_init(struct k_sem *sem, unsigned int initial_count, unsigned int limit)¶
Initialize a semaphore.
This routine initializes a semaphore object, prior to its first use.
- Return
- N/A
- Parameters
sem: Address of the semaphore.
initial_count: Initial semaphore count.
limit: Maximum permitted semaphore count.
- int
k_sem_take(struct k_sem *sem, s32_t timeout)¶
Take a semaphore.
This routine takes sem.
- Note
- Can be called by ISRs, but timeout must be set to K_NO_WAIT.
- Note
- When porting code from the nanokernel legacy API to the new API, be careful with the return value of this function. The return value is the reverse of the one of nano_sem_take family of APIs: 0 means success, and non-zero means failure, while the nano_sem_take family returns 1 for success and 0 for failure.
- Parameters
sem: Address of the semaphore.
timeout: Waiting period to take the semaphore (in milliseconds), or one of the special values K_NO_WAIT and K_FOREVER.
- Return Value
0: Semaphore taken.
-EBUSY: Returned without waiting.
-EAGAIN: Waiting period timed out.
- void
k_sem_give(struct k_sem *sem)¶
Give a semaphore.
This routine gives sem, unless the semaphore is already at its maximum permitted count.
- Note
- Can be called by ISRs.
- Return
- N/A
- Parameters
sem: Address of the semaphore.
- void
k_sem_reset(struct k_sem *sem)¶
Reset a semaphore’s count to zero.
This routine sets the count of sem to zero.
- Return
- N/A
- Parameters
sem: Address of the semaphore. | https://docs.zephyrproject.org/latest/kernel/synchronization/semaphores.html | 2019-02-15T21:17:49 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.zephyrproject.org |
This document will run through how to get your first features from your dev sandbox into your pipeline and then get those features through your pipeline environments.
Promoting features from a dev sandbox
Promoting work from a dev sandbox into your pipeline is easily done through Gearset. While you can manually compare and deploy from your dev org to your source branch, we helpfully provide this functionality within the Pipelines UI.
We’ve created a custom field within our dev org that looks like this:
In the deployment pipelines view, we can click on the dev sandbox environment. This is the blue environment that we set up in the previous step.
This will open the following window, where you will be able to create your feature branch. Click
+ Create new branch from main and create a new feature branch.
If you are using Jira or Azure Work Items then we recommend including the ticket ID in your branch name. In this example my Jira ticket is
AL-100.
Click
Create branch.
Once you have created your branch click on the
Compare and commit option within the sidebar view:
This will start a comparison between the dev sandbox org and our new feature branch and allow us to choose the new custom field to commit:
After you have selected your custom field, click
next - this will take you through to the Problem Analysis stage. Here you can review the Problem Analysers, Warnings, Code Analysis and Environment Variables.
After you have reviewed these click
Pre-deployment summary.
On this screen you can review your changes, add a commit message, and connect your Jira ticket, Azure Work Item or Asana task.
Once you've made any notes, you can click
Commit changes.
Once the commit to the feature branch is complete, click
View in Pipelines to return to the Pipelines view.
Once you have returned to the Pipelines view, click on your dev sandbox again to see all of the components you've added to your feature branch so far. You can continue to run comparisons and add components to your feature branch, or you can move on to the next step.
We’ve now successfully committed the feature from the dev sandbox environment to the feature branch, and are ready to target our first environment, i.e. the INT environment.
Click
Create pull request in the pipeline environment view to start the feature’s journey through the pipeline:
You can see the target environment is pre-populated with the next integration (INT) environment:
Once this pull request is open, the pipeline view is updated with the indication that a new feature is ready for promotion into the next environment:
Promoting a change through your pipeline
Once a pull request is open and displayed in front of an environment, we can select the environment to see the pull request and feature ready for promotion:
We can select the pull request (or multiple pull requests) and click
Promote changes to promote into each of the subsequent environments in the pipeline:
Clicking the
Promote changes button will then start the CI job on the Gearset side, to:
merge the pull request to take the feature into the next git branch (i.e. the
intbranch)
start the CI job to deploy this feature into the next org (i.e. the INT org), indicated by the spinning circle
open a pull request of this feature against the next environment (i.e. the UAT environment) in the pipeline.
When the CI job to promote the feature into the INT org is complete, this is what we’ll see:
You can now continue to move the changes down the pipeline and towards production.
Promoting features from a static environment
If your first feature is starting not in a dev sandbox, but in a static environment, you can promote as well. Imagine the scenario where you have a hotfix feature in this pipeline; you can promote the feature from
hotfixenvironment to
main, then
back propagate the feature from
mainto
uat
In your git provider, start by creating a pull request targeting the static environment branch (from
hotfix_feature_1 to
hotfix branch).
Gearset will close the PR and create a promotion PR (from
gs-pipeline/hotfix_feature_1_-_hotfix to
hotfix).
In the pipeline UI, this will be shown as a feature before the
hotfix static environment, which you can promote into the next environment.
Once it is promoted you should see a icon in front of the
main environment to merge into the
main branch.
Once you promote the feature into
main you will see a back-propagation icon to move the feature backwards into the
uat environment.
That's how you can promote a feature from a static environment and subsequently back propagate it to other environments in the pipeline.
Using JIRA with pipelines
If we want to associate each promotion of changes with a Jira ticket, Gearset can attach comments to the selected Jira ticket to update your ticket tracking software. | http://docs.gearset.com/en/articles/6166062-promoting-your-first-features | 2022-09-24T21:45:16 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['https://downloads.intercomcdn.com/i/o/572368840/68ad7b91fb67c4cebb9aa8b9/Screenshot+2022-08-31+at+16.34.42.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572371923/315dae13947c013470162e20/Screenshot+2022-08-31+at+16.37.58.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572387605/47a4bd10139fcfbdd2046dde/Screenshot+2022-08-31+at+16.57.45.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572389767/9f62bed995aae51cc6d9b35b/Screenshot+2022-08-31+at+17.00.17.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572391317/1dfe0956a3ffcdf272ef6923/Screenshot+2022-08-31+at+17.02.28.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572396243/2d83226f5c5a36d70d07429b/Screenshot+2022-08-31+at+17.05.03.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572403093/bfda53dddd44bf85d55a9726/Screenshot+2022-08-31+at+17.17.54.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572405561/67ca1408facf7c9db003ebc4/Screenshot+2022-08-31+at+17.21.03.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/572408828/334f2ffec2dfd4f5b7ef28f4/Screenshot+2022-08-31+at+17.23.49.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/559355832/f92f49e958587bc53231cb6a/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/559362435/8884cf49d083b53e30bb0bae/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/559363864/7502b607d515e74527a431b4/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/559368601/730969b8311aed26d0ec1ad4/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/559372143/6fed008f3e3406935b817c1b/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/559374440/f14603f8e6dd22308022c5b9/image.png',
None], dtype=object) ] | docs.gearset.com |
. SAML identity provider to use an attribute
associated with your users, like user name or email, as the source identity when calling
AssumeRoleWithSAML. You do this by adding an attribute to the SAML
assertion.. | https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-ebs/interfaces/assumerolewithsamlresponse.html | 2022-09-24T23:31:02 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.aws.amazon.com |
Node Groups
Example of a node group.
Grouping nodes can simplify a node tree by hiding away complexity and reusing repetitive parts.
Conceptually, node groups allow you to treat a set of nodes as though it were just one node. They’re similar to functions in programming: they can be reused (even in different node trees) and can be customized by changing their “parameters.”
As an example, say you created a “Wood” material that you would like to have in different colors. One way of doing this would be to duplicate the entire material for every color, but if you did that, you’d have to go over all those copies again if you later wanted to change the density of the grain lines. Instead, it would be better to move the nodes that define the wood look into a node group. Each material can then reuse this node group and just supply it with a color. If you then later want to change the grain line density, you only have to do it once inside the node group, rather than for every material.
Node groups can be nested (that is, node groups can contain other node groups).
Note
Recursive node groups are prohibited for all the current node systems to prevent infinite recursion. A node group can never contain itself (or another group that contains it).
Interface. To do this, drag a connection from the hollow socket on the right side of the Group Input node to the desired input socket on the node requiring an input. The process is similar for the Group Output regarding data you want to be made available outside the group.
Edit Group
Reference
- Header
- Shortcut
Tab, Ctrl-Tab
With a node group selected, press Tab to move into it and see its content. Press Tab again (or Ctrl-Tab) to leave the group and go back to its parent, which could be the top-level node tree or another node group. You can refer to the breadcrumbs in the top left corner of the node editor to see where you are in the hierarchy.
Example of an expanded node group.
Make Group
Reference
- Shortcut
Ctrl-G
To create a node group, select the nodes you want to include, then press Ctrl-G or click.
The “Add” menu of each node editor contains an “Output” category with node types such as “Material Output.” These node types should not be confused with the “Group Output” node found in node groups, and should not be used in node groups either (only in the top-level node tree).
Ungroup
Reference
- Shortcut
Ctrl-Alt-G.
Reusing Node Groups
Reference
- Shortcut
Shift-A
Existing node groups can be placed again after they’re initially defined, be it in the same node tree or a different one. It’s also possible to import node groups from a different blend-file using. | https://docs.blender.org/manual/en/latest/interface/controls/nodes/groups.html | 2022-09-24T23:27:12 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['../../../_images/interface_controls_nodes_groups_example.png',
'../../../_images/interface_controls_nodes_groups_example.png'],
dtype=object)
array(['../../../_images/render_cycles_optimizations_reducing-noise_glass-group.png',
'../../../_images/render_cycles_optimizations_reducing-noise_glass-group.png'],
dtype=object) ] | docs.blender.org |
The Inspector¶
This page explains how the Inspector dock works in-depth. You will learn how to edit properties, fold and unfold areas, use the search bar, and more.
Warning
This page is a work-in-progress.
Overview of the interface¶
Let's start by looking at the dock's main parts.
At the top are the file and navigation buttons.
Below it, you can find the selected node's name, its type, and the tools menu on the right side.
If you click the tool menu icon, a drop-down menu offers some view and edit options.
Then comes the search bar. Type anything in it to filter displayed properties. Delete the text to clear the search.
| https://docs.godotengine.org/en/latest/tutorials/editor/inspector_dock.html | 2022-09-24T22:01:09 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['../../_images/inspector_overview.png',
'../../_images/inspector_overview.png'], dtype=object)
array(['../../_images/inspector_top_buttons.png',
'../../_images/inspector_top_buttons.png'], dtype=object)
array(['../../_images/inspector_node_name_and_tools.png',
'../../_images/inspector_node_name_and_tools.png'], dtype=object)
array(['../../_images/inspector_tools_menu.png',
'../../_images/inspector_tools_menu.png'], dtype=object)
array(['../../_images/inspector_search_bar.png',
'../../_images/inspector_search_bar.png'], dtype=object)] | docs.godotengine.org |
Import Failures
If the cluster import process was not successful, follow the steps described below to debug and diagnose the issue. Some common scenarios when import can fail are described below
Typical Failure Scenarios¶
Blueprint Sync Failure¶
By default, the "minimal cluster blueprint" is selected for imported clusters. Users can override the default and specify a different blueprint.
Note that the minimal blueprint is extremely lightweight and should not collide/conflict with any existing resources on the cluster. Users are recommended to import a Kubernetes cluster with the minimal blueprint first before trying a custom blueprint.
Common scenarios for potential collision are
- Metrics Server already exists on imported cluster
- Ingress Controller already exists on imported cluster and using port 443
Insufficient Resources¶
Imported cluster does not have the Insufficient Resources for the k8s mgmt Operator and the specified addons in the blueprint.
Incompatible Kubernetes Version¶
Imported cluster is running an incompatible, older version of Kubernetes
Security Block¶
3rd Party security product already in cluster blocking the creation of required k8s resources such as namespaces etc.
Network Security¶
Imported cluster unable to pull required container images from the container registry due to existing network security policies.
Privileged Namespaces¶
When you run "kubectl apply..", two namespaces for the controller will be created on the imported cluster.
- rafay-system
- rafay-infra
"rafay-system" Namespace¶
The "rafay-system" namespace is a critical, monitored namespace. It should contain several pods as listed below. Users can use the following kubectl command to list the pods in this namespace.
kubectl get po -n rafay-system NAME READY STATUS RESTARTS AGE controller-manager-588577488f-9vs29 1/1 Running 0 8d debug-client-7cd86579bd-bcj8f 1/1 Running 0 8d edge-client-769767854b-m8r7w 1/1 Running 0 8d rafay-connector-5ffddccd99-gn6gl 1/1 Running 6 8d relay-agent-585c799cbd-2bj5m 1/1 Running 0 8d secretstore-admission-webhook-b57c94688-46v62 1/1 Running 0 63d l4err-77b5c5b949-kmbzs 1/1 Running 0 8d nginx-ingress-controller-2jlwb 1/1 Running 0 8d nginx-ingress-controller-qz4j6 1/1 Running 0 8d
"rafay-infra" Namespace¶
The "rafay-infra" namespace contains the Kubernetes resources for infrastructural components managed by the controller.
kubectl get po -n rafay-infra NAME READY STATUS RESTARTS AGE log-aggregator-6847784f79-tbb5z 1/1 Running 0 151d log-router-qtc4f 2/2 Running 0 77d log-router-zmfkf 2/2 Running 0 77d rafay-metrics-server-58689d8d66-njxgm 1/1 Running 0 77d rafay-prometheus-adapter-7cc76d654c-cwrx7 1/1 Running 0 7h37m rafay-prometheus-alertmanager-0 2/2 Running 0 7h37m rafay-prometheus-kube-state-metrics-567cff6b85-rqntx 1/1 Running 0 77d rafay-prometheus-node-exporter-mh9sc 1/1 Running 0 7h37m rafay-prometheus-node-exporter-rgwzk 1/1 Running 0 7h37m rafay-prometheus-server-0 2/2 Running 0 7h37m | https://docs.rafay.co/clusters/import/debug/ | 2022-09-24T23:44:07 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.rafay.co |
Table of Contents
CloudDAV (WebDAV Protocol)
Last updated on April 14th, 2021
Enabling and Configuring CloudDAV
CloudDAV is the File Fabric implementation of WebDAV with a unique twist, it works over any Storage Cloud mapped to a File Fabric account whether that Storage Cloud is WebDAV enabled or not.
This is made possible by the protocol adaptors that the File Fabric provides to enable public and private storage cloud to be accessible over common protocols.
To use CloudDAV once it is enabled for an account:
URL:
or (EU Server)
or
https://<configured webdav url>/ (Enterprise on-premises)
(This page provides more information on configuring a domain for CLoudDAV).
Authentication: Your Enterprise File Fabric username or password
You can use a variety of WebDAV clients or Apps to access Files using CloudDAV.
Example of Apps include:
iOS Apps: Pages, Numbers, Keynote, OmniFocus, OmniGraffle, OmniGraphSketcher, WPS Office + others
Android Apps: WPS Office, WebDAV Navigator, X-plore file manager, Total Commander + others
CloudDAV Options
There are two options available to be set after login to the File Fabric and visiting the “My DashBoard” link on the right sidebar.
The first option is whether to disable file versioning if using CloudDAV. By default this will be set to “on”. When creating or editing files this stops previous version being kept and versioned.
The second option is whether to always update, on access, the File Fabric cache when viewing files over CloudDAV. This means that the File Fabric will always check directly with the Clouds that you are using rather than using the cache. This has the advantage that you will always see any new files that you added direct to these Clouds but has the disadvantage that it could be slower on the initial view of each directory.
Things to Note.
If you added files externally to your Cloud Storage and you cannot see them via CloudDAV then you either need to enable the CloudDAV real-time refresh from the CloudDAV options available from your Cloud Dashboard on the website, or you need to do a manual refresh of your Cloud Files Cache.. To understand how to do a manual refresh please see here.
File Fabric Encrypted files that are uploaded where the end user holds the encryption key (ie. not stored in the File Fabric) are not displayed from a WebDAV client connection as it makes no sense to show them as they cannot be un-encrypted from a standard WebDAV client.
If you are trying to map a network drive using File Fabric CloudDAV on any version of Windows from XP onwards then we would recommend you install these registry fixes that solve many issues with Windows WebDAV, including authentication and correct working with Office 2010+.
There is also a blog post specifically for mapping CloudDAV using the WebDAV feature with Windows.
Also note that in corporate environments WebDAV can be disabled at a policy level. In the following registry keys:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\NetworkProvider\HwOrder\ProviderOrder HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order\ProviderOrder
the value “,webclient” must be at the end of the value in the key. If it is not then WebDAV is restricted at an OS level and will not work. Adding the value back may resolve the issue.
If you are using an Amazon WorkSpaces Windows 10 Server instance then WebDAV is not enabled by default. You need to enable 'WebDAV Redirector' in Features and restart WorkSpaces to be able to work with WebDav.
CloudDAV is not supported for use with Mac OS X Finder's “add server” feature and CloudDAV will reject attempts by Finder to connect. See this blog post as to why this is.
TLS Support
The File Fabric no longer supports TLS v1.0 and now only support TLS v1.1 and TLS v1.2. This change is in keeping up with the compliance requirements of PCI DSS regarding acceptable encryption.
Windows 7 did not ship with support for TLS v1.1 or TLS v1.2, whereas Windows 10 does. Microsoft does have an article on patching this issue:
Note that we do not test or support our software on Windows 7. The information about Windows 7 is provided on an 'as-is' basis.
Pre-Requisite for Enterprise Appliance: CloudDAV is configured and turned on at a package level | https://docs.storagemadeeasy.com/clouddav | 2022-09-24T23:28:37 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.storagemadeeasy.com |
Topic Analyzer¶
Note
There exists an API for topic analyzer too, but the magic of the analyzer dissappears when using API.
Create¶
GUI¶
Navigate to Tools -> Topic Analyzer and click on the CREATE button on top-left. Choose the name for your Analyzer (Description). Define the query and select indices on which the query will be executed. If you leave Query empty, it will take all documents from the selected indices. If you have any searches defined in your project, they will appear in a dropdown menu if you click on the field Query - you can use existing searches as queries.
Choose fields on which the clustering is done. The selected fields should contain textual data.
Note
It is recommended to use lemmatized or tokenized data. Lemmatization is especially useful with morphologically complex languages. You can tokenize and lemmatize the data with MLP.
Embedding - if selected then its phraser is used for tokenization. You can leave it empty as well.
Keywords filter - defines a filter (as a regular expression) for unwanted significant words.
Stopwords - custom stopwords to ignore during clustering.
Clustering algorithm - an algorithm to use for clustering. Minibatchkmeans is a time efficient variant of kmeans with a potential tradeff in quality. Vectorizer - a method for creating document vectors.
Document limit - number of documents that will be clustered. Possible maximum is 10000.
Num cluster - number of final clusters.
Num dims - maximum possible dimension of document vectors.
Num topics - number of dimensions when Use LSI is selected.
Use LSI - if selected then high dimensional document-term vectors are reduced into lower dimensional vectors that base on “topics”.
Note
How to choose the number of clusters?
General advice would is to better have too many clusters than too few. Think about how many documents you are planning to cluster and choose the number so that the average cluster is small enough to inspect it manually with ease. For example, if you are going to cluster 1000 documents to 50 clusters then average cluster would contain 20 documents.
Fig. 62 Creating a Clustering¶
Viewing clusters¶
GUI¶
Click View clusters under Actions. You will see an overwiew about obtained clusters. For each cluster the document count and average cosine similarity between its documents is given as well as a list of significant words.
Note
Interpreting document count
Cluster with significantly larger document count often indicates that the clustering algorithm has failed to separate these documents by the topic. It doesn’t necessarily mean that the clustering process in general has been unsuccessful as often it is impossible to cluster all documents perfectly. However, you still might want to take a closer look to such clusters as there may be other reasons for such results as well. For example, the documents in that cluster may contain similar noise or stopwords that makes them artifically similar to each other. Sometimes increasing the number of clusters might help as well.
Interpreting average similarity
Average similarity is an average cosine similarity between all the documents in the cluster. It ranges between 0 and 1 and higher score indicates that the documents in that cluster are more similar to each other. However, the score has some disadvantages. For example, when a cluster contains 9 documents that are very similar to each other and 10th document is very different from all others, then the score might appear low althought fixing that cluster would be very easy.
Viewing documents inside cluster¶
GUI¶
Click on a cluster that is in your interest, this opens you a detailed view of a cluster content.
Operations with the cluster¶
GUI¶
Tag documents¶
If the cluster contains documents from the same topic it is advisable to tag the documets and delete the cluster. Click on Tag button. This operation adds a texta_fact to each of the document in the cluster, with specified name and a string value. From now on, these documents will be ignored in further clustering processes
Delete documents¶
This functionality is useful if some documents in the cluster are from a different topic and you want to remove them - select the documents that you want to remove and click on trash bin icon.
Add more documents¶
You might want to know whether there exists more documents similar to the ones in the cluster, and if so, add those to the cluster as well, so you could tag them all together. Click on a “More like this” button to query similar documents. In the opened view, select document which you would like to add to the cluster and click on a + button.
Delete the cluster¶
It is advisable to delete the cluster after you have tagged it. Click on Delete button to do it.
Fig. 64 Cluster details view¶ | https://docs.texta.ee/topic_analyzer.html | 2022-09-24T23:37:50 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['_images/create_clustering.png', '_images/create_clustering.png'],
dtype=object)
array(['_images/clusters_view.png', '_images/clusters_view.png'],
dtype=object)
array(['_images/cluster_details_view.png',
'_images/cluster_details_view.png'], dtype=object)] | docs.texta.ee |
Choose your operating system:
Windows
macOS
Linux
In order to understand and use the content on this page, make sure you are familiar with the following topics:
In this Quick Start Guide, you will learn how to set up a C++ project in the Unreal Engine and program your first C++ gameplay class in Visual Studio. By the time you've completed this tutorial, you will know how to do the following:
Create a new C++ Project
Create a new Actor class in C++
Edit that C++ class in your development environment, adding visual representation and functionality
Compile your project
Test your new Actor in the Unreal Editor
1. Required Setup
Launch the Unreal Editor. When the Project Browser comes up, click Games Project category and select a Blank template. Make sure that you have C++ and Starter Content enabled, choose your preferred Save Location and Name for this project, and then click Create Project. In our instance, we're naming our project QuickStart.
Any Blueprint project can be converted to a C++ project. If you have a Blueprint project that you want to add C++ to, create a new C++ class per the next section, and the editor will set up your code environment for you. Also note that using a C++ project does not prevent you from using Blueprint. C++ projects simply set up the base classes for your project in C++ instead of Blueprint.
2. Create a New C++ Class
In the Unreal Editor, click the File drop-down menu, then select the New C++ Class... command.
Click image for full size.
The Choose Parent Class menu will display. You can choose an existing class to extend, adding its functionality to your own. Choose Actor, as it is the most basic type of object that can be placed in the world, then click Next.
Click image for full size.
In the Name Your New Actor menu, name your Actor FloatingActor and click Create Class.
Click image for full size.
The Unreal Engine will automatically compile and reload with our new class selected in the Content Browser, and your programming environment will automatically open with
FloatingActor.cpp.
3. Edit Your C++ Class
4. Compile and Test Your C++ Code
5. End Result
You should now see your cube gently floating up and down over the table while it slowly rotates.
Congratulations! You've created your first Actor class entirely with C++! While this represents a very simple object and only scratches the surface of what you can do with C++ source code, you have at this point touched on all the essentials of creating, editing, and compiling C++ code for your game. You are now ready for more complex gameplay programming challenges, and we suggest a few below.
6. On Your Own!
Now that you know how to build a simple C++ Actor, try making it more configurable. For instance, you can add variables to control its behavior:
In FloatingActor.h:
... public: UPROPERTY(EditAnywhere, BlueprintReadWrite, Category="FloatingActor") float FloatSpeed = 20.0f; UPROPERTY(EditAnywhere, BlueprintReadWrite, Category="FloatingActor") float RotationSpeed = 20.0f; ...
In FloatingActor.cpp:
... NewLocation.Z += DeltaHeight * FloatSpeed; //Scale our height by FloatSpeed float DeltaRotation = DeltaTime * RotationSpeed; //Rotate by a number of degrees equal to RotationSpeed each second ...
By adding these variables in the header and replacing the float values we were using to scale DeltaHeight and DeltaRotation in the .cpp, we can now edit the float and rotation speed in the Details Panel when we select our Actor.
You can experiment by adding other kinds of behavior to the Tick function using Location, Rotation, and Scale.
You can also try attaching other kinds of components in C++ to create a more complex object. Refer to the Creating and Attaching Components guide for examples of different types of components you have available, and try adding a Particle System Component to add a bit of flare to your floating object.
Finally, if you right-click your own Actor class in the Content Browser, you will find the option to extend it, either in C++ or in Blueprint, enabling you to create new variations of it.
You can have a whole library of FloatingActors, each substituting different Meshes or parameters as you so choose.
Sample Code
FloatingActor.h
// Copyright 1998-2019 Epic Games, Inc. All Rights Reserved. #pragma once #include "CoreMinimal.h" #include "GameFramework/Actor.h" #include "FloatingActor.generated.h" UCLASS() class QUICKSTART_API AFloatingActor : public AActor { GENERATED_BODY() public: // Sets default values for this actor's properties AFloatingActor(); UPROPERTY(VisibleAnywhere) UStaticMeshComponent* VisualMesh; protected: // Called when the game starts or when spawned virtual void BeginPlay() override; public: // Called every frame virtual void Tick(float DeltaTime) override; };
FloatingActor.cpp
// Copyright 1998-2019 Epic Games, Inc. All Rights Reserved. #include "FloatingActor.h" // Sets default values AFloatingActor::AFloatingActor() { // Set this actor to call Tick() every frame. You can turn this off to improve performance if you don't need it. PrimaryActorTick.bCanEverTick = true; VisualMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh")); VisualMesh->SetupAttachment(RootComponent); static ConstructorHelpers::FObjectFinder<UStaticMesh> CubeVisualAsset(TEXT("/Game/StarterContent/Shapes/Shape_Cube.Shape_Cube")); if (CubeVisualAsset.Succeeded()) { VisualMesh->SetStaticMesh(CubeVisualAsset.Object); VisualMesh->SetRelativeLocation(FVector(0.0f, 0.0f, 0.0f)); } } // Called when the game starts or when spawned void AFloatingActor::BeginPlay() { Super::BeginPlay(); } // Called every frame void AFloatingActor::Tick(float DeltaTime) { Super::Tick(DeltaTime); FVector NewLocation = GetActorLocation(); FRotator NewRotation = GetActorRotation(); float RunningTime = GetGameTimeSinceCreation(); float DeltaHeight = (FMath::Sin(RunningTime + DeltaTime) - FMath::Sin(RunningTime)); NewLocation.Z += DeltaHeight * 20.0f; //Scale our height by a factor of 20 float DeltaRotation = DeltaTime * 20.0f; //Rotate by 20 degrees per second NewRotation.Yaw += DeltaRotation; SetActorLocationAndRotation(NewLocation, NewRotation); } | https://docs.unrealengine.com/5.0/en-US/unreal-engine-cpp-quick-start/ | 2022-09-24T21:43:30 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['./../../Images/programming-and-scripting/programming-language-implementation/cpp-in-unreal-engine/unreal-engine-cpp-tutorials/cpp-programming-quick-start/ProgrammingQuickStartTopic.jpg',
'ProgrammingQuickStartTopic.png'], dtype=object)
array(['./../../Images/programming-and-scripting/programming-language-implementation/cpp-in-unreal-engine/unreal-engine-cpp-tutorials/cpp-programming-quick-start/PRQuickStart_6-1.jpg',
'PRQuickStart_6-1.png'], dtype=object)
array(['./../../Images/programming-and-scripting/programming-language-implementation/cpp-in-unreal-engine/unreal-engine-cpp-tutorials/cpp-programming-quick-start/PRQuickStart_6-2.jpg',
'PRQuickStart_6-2.png'], dtype=object)
array(['./../../Images/programming-and-scripting/programming-language-implementation/cpp-in-unreal-engine/unreal-engine-cpp-tutorials/cpp-programming-quick-start/PRQuickStart_6-3.jpg',
'PRQuickStart_6-3.png'], dtype=object)
array(['./../../Images/programming-and-scripting/programming-language-implementation/cpp-in-unreal-engine/unreal-engine-cpp-tutorials/cpp-programming-quick-start/PRQuickStart_6-4.jpg',
'PRQuickStart_6-4.png'], dtype=object) ] | docs.unrealengine.com |
How to Add/Drop classes online
Login to your ShowGroundsLive account using the LogIn button on the top of the page:
On the left hand side of the page, locate the “Actions” tab, and select the “Class Add/Drops” option.
You will then be brought to this page and can choose which entry & which show you are adding or dropping the classes in.
Once you have selected the entry you are adding/dropping the classes for, it will bring you to this page.
From here, you will select the “Class Add/Drops” button in the top right corner.
You will then be on this page where you can add/drop classes for the selected entry.
You will need to select the rider you are adding/dropping the classes for, then you need to input the division or classes you are adding.
Note: you can add multiple classes at once by separating each class number with a comma, for example: 1, 2, 3, 6, 9
Once you have added or dropped your classes, hit the appropriate button for the action you wish to do, then select the save button at the bottom of the page.
Once you have added/dropped your classes and saved the action, they will appear at the bottom of the page as a pending request until the show office completes the changes.
Trainer Add/Drops
Adding and dropping classes as a trainer is similar to single rider add/drops.
Locate the Add/Drop button under the actions on the left side of the page, and then select the “Trainer Add/Drop” button.
Once you have selected the Trainer Add/Drop button and are on the show you want to do the add or drop for, a listing of all of the entries you are listed as the trainer for will appear in a numerical listing. It will also show the classes affiliated with the entry by number.
You can then bulk add/drop classes by putting the numbers in the add or drop box, separating the numbers with commas or entering a range (e.g. 160-163). Once you have done your add/drops, hit the save button in the bottom corner and they will be listed as pending until the show office picks up the request!
| http://docs.showgroundsonline.com/knowledgebase/west-palms-events-how-to-add-drop-classes-online/ | 2022-09-24T23:47:48 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/04/295x93ximg_624b118aad8a2.png.pagespeed.ic.m9IsB8h55r.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/03/326x257ximg_621e3b9b22b9b.png.pagespeed.ic.3wSoLj12ML.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/03/1020x559ximg_621e3bc264cc5.png.pagespeed.ic.vOwVNewawB.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/03/1001x713ximg_621e3c04d8fbe.png.pagespeed.ic.OQm6G76xF2.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/03/1043x824ximg_621e3c33a2f34.png.pagespeed.ic.Z5Km3NeBzr.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/03/533x70ximg_621e3ca58d1e5.png.pagespeed.ic.Q537vsw7sG.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/03/958x66ximg_621e3d3d79d13.png.pagespeed.ic.Qi6kVozGcj.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/09/1022x247ximg_631b8e9672d31.png.pagespeed.ic.9NnOuqvfj8.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/09/img_631b8ee47c83e.png',
None], dtype=object)
array(['http://docs.showgroundsonline.com/wp-content/uploads/2022/09/img_631b8f4bf13e7.png',
None], dtype=object) ] | docs.showgroundsonline.com |
Migrate to a different cloud provider or region
Any Aiven service can be relocated to a different cloud vendor or region. This includes PostgreSQL®, where the migration happens without downtime. Cloud provider/region migration means that you can relocate a service at any time, for example to meet specific latency requirements for a particular geography.
To migrate a PostgreSQL service to a new cloud provider/region:

1. Log in to the Aiven web console and select the PostgreSQL instance you want to move.
2. In the Overview tab, click Migrate Cloud.
3. Select the new cloud provider and region where you want to deploy the PostgreSQL instance, then click Create.
The PostgreSQL cluster will enter the REBALANCING state, still serving queries from the old provider/region.
New nodes will be added to the existing PostgreSQL cluster residing in the new provider/region and the data will be replicated to the new nodes. Once the new nodes are in sync, one of them will become the new primary node and all the nodes in the old provider/region will be decommissioned. After this phase the cluster enters the RUNNING state; the PostgreSQL endpoint will not change.
Tip
To have consistent query times across the globe, consider creating several read-only replicas across different cloud providers/regions.
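If you prefer the command line, the same migration can also be triggered with the Aiven CLI; the service name and target cloud below are placeholders:

# List the available clouds/regions
avn cloud list

# Move the service to the chosen cloud
avn service update my-postgres --cloud aws-us-east-1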
'Migrate Cloud button on Aiven web console'], dtype=object)
array(['../../../../_images/migrate-rebalancing.png',
'Image showing a PostgreSQL cluster in ``REBALANCING`` state'],
dtype=object)
array(['../../../../_images/migrate-running.png',
'Image showing a PostgreSQL cluster in ``RUNNING`` state'],
dtype=object) ] | docs.aiven.io |
Response-Only and Inactive Fields
Read about fields that are only sent in responses from the API, or are inactive for a specific request.
Warning
If you have used these fields in existing requests, you need to remove them or they may cause a validation error.
Response-Only Fields
Some fields are only available in the responses sent by the API. The values for these fields are generated by the API and you cannot use these fields in requests to the API.
These are outlined in the following table:
Inactive Fields for Calculate Tax Request
The fields listed in the following table cannot be used with the Calculate Tax request:
If you need further help, contact support.
How Can We Help?
Course Alert
Features Description
This plugin introduces a new notification feature to automatically send custom reminders for courses:
- Login Reminder: user has not begun the course X days after enrolling
- Completion Reminder: user has not completed the course X days after the first access
- Course Closing reminder: user has not completed the course X days before the course end date (if set)
Each reminder can be activated separately for each course.
For each reminder it will be possible to set a standard text or a custom notification text and scheduling.
Cron Job
For the reminders to be sent, you need to set up a cron job on your server that periodically (e.g. every 10 minutes) calls the following URL:
[platform_url]/plugins/CourseAlert/Cron/cron.coursealert.php
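On a typical Linux server, a crontab entry along these lines would call the script every 10 minutes (the domain is a placeholder for your platform URL):

*/10 * * * * curl -s "https://lms.example.com/plugins/CourseAlert/Cron/cron.coursealert.php" > /dev/null 2>&1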
Reminders activation and settings
The course alert function is available for every course. From the course management page, just click on the new envelope icon to access the configuration panel:
This will open a configuration panel where you’ll be able to:
- Select one or more reminder for each course
- Define to send them by e-mail or by SMS
- Define how many days after or before the “trigger” your reminder should be sent.
- Set a custom notification text
NOTE: to use SMS you’ll need to purchase SMS credit and configure the SMS settings under the general settings of your LMS. SMS messages are subject to a text length limit of 144 chars.
Notification text customization
Writing a custom message
The message field for each notification displays a default text: just click on the text area and start writing to enter a custom text.
Conventional HTML tags can be used for text formatting.
DynamicTags
The following dynamic tags are available to customize your Course Alert notification text:
- [user]: will print the name of the user receiving the notification
- [course]: displays the course title
- [days]: displays the number of days as set in the notification settings
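As an illustration only (the wording is not a built-in default), a completion reminder using these tags might read:

Dear [user], you started the course [course] [days] days ago and have not completed it yet. Please log in and finish your training.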
Edit the default message
The default text message can be edited through the standard Forma language management features.
None], dtype=object)
array(['https://docs.formafarm.com/wp-content/uploads/2020/12/coursealert-panel-guided-1024x777.png',
None], dtype=object) ] | docs.formafarm.com |
At the moment we have only one third-party library in the module: the Reserved Username list by ShouldBee on GitHub. With this list, the Username Validation Hook can prevent users from choosing a username that could be misleading (like admin or owner) or non-functional (such as openid or files).
Getting Started
After the Installation and familiarizing yourself with the most important Input Files and Output Files, the best way to get started with SEED is to try to replicate some Test Cases and start understanding the main parameters with the help of the Parameter File Generator.
Installation
SEED code is hosted on GitLab.
In order to build the latest development version of SEED do the following (you may have to modify the Makefile.local in the src folder):
git clone <SEED-GitLab-repository-URL>
cd SEED/src
make seed
SEED makes use of the BOOST C++ Libraries.
The necessary header files are distributed along with SEED.
The binary is compiled into the SEED/bin folder.
After installation run SEED with the following command:
seed_4 seed.inp >& log
For compiling and running the parallel version of SEED, refer to Running SEED on a cluster.
Input Files
The following is a list of the input files required for a SEED run:
- The file seed.inp contains the most frequently modified input values (path and name of structural input files, list of residues forming the binding pocket, switch between polar and apolar docking, …). For detailed information see Input Parameters.
- The parameter file seed.par (filename can be specified in i1) contains less frequently modified input/output options, parameters for docking, energy and clustering. Modification of some parameters in this file is recommended only to advanced users who wish to fine tune the SEED energy model.
- Both seed.inp and seed.par allow comment lines (starting with a #) and the files must terminate with the word end. All the other lines hold information read by the program. It is important that this information is provided in the correct order for parsing by the program. In this documentation and in the pdf user manual, parameters referring to the input and parameter files are indicated by i and p respectively. For a detailed description refer to Parameters. For a full working example of the input and parameter file try out the Parameter File Generator and the test cases in the Tutorials section.
- A standard SYBYL mol2 format file containing all the fragments to dock. This file is simply the concatenation of all the fragments, expressed in mol2 format. Note that different conformations of the same fragment are treated as different fragments. Partial charges are written in the 9th column in the @<TRIPOS>ATOM record.
In order to retrieve the correct van der Waals parameters from the seed.par file (block p29), the CHARMM atom types should be specified in the mol2 file. This is done using the alternative atom type specified by the record @<TRIPOS>ALT_TYPE, which takes the following form:
@<TRIPOS>ALT_TYPE
CGenFF_4.0_ALT_TYPE_SET
CGenFF_4.0 1 CG331 2 CG301 3 CG331 4 CG324 ...
Where CGenFF_4.0_ALT_TYPE_SET sets a user-defined name (for example CGenFF_4.0) for the alternative atom type set. This name is repeated on the next line, followed by the list of "atom number-atom type" pairs for each atom in the molecule. This list should span a single line, but can be broken by using \\. It is recommended to keep the SYBYL atom types on the 6th column of the record @<TRIPOS>ATOM as they are recognized by most cheminformatics and visualization software. The first line of the SYBYL record @<TRIPOS>MOLECULE specifies the fragment name. It is convenient (but not necessary) to have unique names for each fragment. In case fragments with duplicate names are found in the input, they will be renamed in all the output files appending to their name the dollar sign $ and an incremental index. As the fragment mol2 input file is read sequentially, the number of fragments in it does not have to be specified a priori.
- A standard SYBYL mol2 file for the receptor with partial charges on the 9th column in the @<TRIPOS>ATOM record (as for the fragments) and CHARMM atom types specified by the @<TRIPOS>ALT_TYPE record (refer to the fragment file description for details).
Output Files
The main SEED output file, whose filename is specified in p6 (by default seed.out), contains detailed information about the energy values (with both fast and accurate model) and results of clustering.
The first term of p28 is the maximal number
of lines that can be written in the main output file for each docking step of each fragment
type. The second term of p28 gives control on which information may be discarded in
the output file (print level).
A directory outputs in which all the output files are written is automatically created by the program. Note that if a directory named outputs is already present, it will be overwritten by the SEED run.
<FragmentMol2FileName>_clus.mol2 contains the fragment top poses per cluster ranked by accurate energy after the postprocessing step. This file is the concatenation of a mol2 file for each saved pose. The maximum number of poses to be saved per cluster can be set in p5 (first value). The comment line of the SYBYL mol2 record @<TRIPOS>MOLECULE (6th line after the record identifier) contains some useful information about the pose, i.e. increasing pose index, cluster number, total energy and fragment number (Fr_nu). The latter represents the program internal numbering of the pose and it is not interesting per se, but it can be used to match the pose to docking information written in seed.out.
seed_clus.dat is a summary table containing the separate energy terms for each fragment position saved to <FragmentMol2FileName>_clus.mol2. This information can be also retrieved from the main output file. Columns are organized as follows:
- Name: Fragment name.
- Pose: Incremental pose number. This index restarts at 1 for each new fragment.
- Cluster: Cluster number.
- Fr_nu: Fragment number. This is SEED internal pose number.
- Tot: Total binding energy.
- ElinW: Electrostatic interaction in water.
- rec_des: Desolvation of the receptor upon complex formation.
- frg_des: Desolvation of the fragment upon complex formation.
- vdW: Van der Waals interaction energy.
- DElec: Electrostatic difference upon fragment binding. It is given by ElinW-DG_hydr. It roughly represents how good the fragment feels in the protein compared to how good it feels in water.
- DG_hydr: Free energy of hydration of the fragment.
- Tot_eff: Tot/HAC.
- vdW_eff: vdW/HAC.
- Elec_eff: ElinW/HAC.
- HAC: Heavy atom count. It is the total number of non-hydrogen atoms in the fragment.
- MW: Molecular weight of the fragment.
<FragmentMol2FileName>_best.mol2 contains the best fragment positions, according to the total binding energy, irrespective of the cluster they belong to (maximum number of saved poses set by p5, second value). The difference with respect to <FragmentMol2FileName>_clus.mol2 is that the user can set the total number of poses to be saved instead of the number of cluster members.
seed_best.dat is the same as seed_clus.dat but matching <FragmentMol2FileName>_best.mol2.
The writing of the above *_clus.mol2 and *_best.mol2 files is activated or deactivated by p3 (first and second value respectively). The writing of the seed_clus.dat and seed_best.dat summary table is activated or deactivated by p4 (first and second value respectively). Note that the maximum number of poses and poses per cluster to be saved (p5) are upper bounds as the number of generated poses may be smaller than the number of poses requested in output. The four parameters for writing the output files (p3 and p4) can be switched on/off independently. Note that the number of cluster members to be saved (first value of p5) implicitly determines the maximum number of poses for which to evaluate the accurate binding energy. Thus in general it is advisable to set this number to a value higher than one, in order to be sure to consider a meaningful number of poses, and to suppress the corresponding mol2 file output (first value of p3 set to n) as it may quickly become big.
Other output files
Besides the docking output files containing structural information and energy values, SEED generates some additional output files.
The grids for the evaluation of fast van der Waals energy, fast screened interaction energy and receptor desolvation can be saved on disk and reused for a subsequent run (see p7, p8, p9). The grid files are saved by default in the scratch subfolder.
When a new project is started, it can be very useful to first generate and visualize the vectors used for ligand placement, before performing any docking (see Vectors for docking for details). Vectors are saved in the following mol2 files and can be opened in a molecular viewer:
- polar_rec.mol2 contains vectors distributed uniformly on a spherical region around each ideal H-bond direction. The deviation from ideal hydrogen bond geometry and the number of additional vectors to distribute uniformly on the spherical region are set in p12.
- polar_rec_reduc_angle.mol2 contains vectors of polar_rec.mol2 which are selected according to an angle criterion (i4, p14). Vectors pointing outside of the binding site are discarded. The file polar_rec_reduc_angle.mol2 exists only if the angle criterion has been activated by the user (i4).
- polar_rec_reduc.mol2 contains vectors of polar_rec.mol2 (or of polar_rec_reduc_angle.mol2 if the angle criterion has been activated (i4)) which are selected according to favorable van der Waals interaction between all the receptor atoms and a spherical probe on the vector extremity. The aim is to discard receptor vectors that point into regions of space occupied by other atoms of the receptor and select preferentially vectors in the concave regions of the receptor. The van der Waals radius of the probe is specified in p15. The number of selected vectors is controlled with p2.
- apolar_rec.mol2 contains points distributed uniformly on the solvent-accessible surface of the receptor. The density of surface points is set in p22.
- apolar_rec_reduc_angle.mol2 contains vectors of apolar_rec.mol2 which are selected according to an angle criterion (i4, p14). Vectors pointing outside of the binding site are discarded. The file apolar_rec_reduc_angle.mol2 exists only if the angle criterion has been activated by the user (i4).
- apolar_rec_reduc.mol2 contains points of apolar_rec.mol2 (or of apolar_rec_reduc_angle.mol2 if the angle criterion has been activated (i4)) which are selected according to their hydrophobicity. For this purpose a low dielectric sphere is placed on each of these points. The hydrophobicity is defined as the weighted sum of the receptor desolvation energy due to the presence of the probe and the probe/receptor van der Waals interaction. The weighting factors and the probe radius are set in p22. The number of selected apolar points is controlled with p2.
Of the six files listed above one should visualize polar_rec_reduc.mol2 and apolar_rec_reduc.mol2. It is useful to modify the appropriate parameters if the vector distributions do not meet the user's expectation, since fragments are docked using the vectors present in these files. As soon as you are happy with the generated vectors, you can just read the maps (first value of p7, p8, p9 set to r) instead of generating and writing them again (first value set to w).
The file sas_apolar.pdb contains points defining the solvent accessible surface of the binding site, which can be visualized with a molecular viewer.
Troubleshooting
If after starting a SEED run the program exits unexpectedly, the keyword WARNING should be looked for in the main output file (seed.out, p6) to find hints on possible problems (wrong path for filenames, unknown value for some parameters, …).
The docking workflow implemented in SEED involves many filtering steps, hence, if the main output file does not contain any fragment position for a given fragment type, it can be due to several reasons: the center of the spherical cutoff (i6) might be misplaced (outside the binding site), the checking of steric clashes (p10 and p11) too strict, the van der Waals energy cutoff (p19) for apolar fragments too severe, the total energy cutoff (third value of i7), or the energy cutoff for the second clustering (fourth value of i7) too stringent. To find out what the reason could be, the following part of the main output file should be investigated:
Total number of generated fragments of type 1 (BENZ) : 118800
Fragments that passed the sphere checking : 102894
Fragments that passed the bump checking : 49007
Fragments that passed the vdW energy cutoff : 22100
Fragments that passed the total energy cutoff : 17794
Parallel SEED
SEED has an MPI-parallel version. If you want to run a SEED screening campaign on a cluster, refer to Running SEED on a cluster.
MC minimization
Stochastic minimization with a rigid-body Monte Carlo Simulated Annealing scheme can be enabled to run on the top generated poses; these are the poses for which the accurate binding energy is evaluated and their maximum number (per cluster) of can be tuned with the first value of p5.
MC minimization can be used in both Docking running mode and Energy evaluation mode. The latter can be useful when rescoring poses not generated by SEED. In this case, if you had to run rescoring without minimizing the poses, you might get very unfavourable energy values due to the ruggedness of the van der Waals energy landscape.
If you want to know more details, refer to Monte Carlo Simulated Annealing. | https://caflischlab-seed.readthedocs.io/en/latest/getstart.html | 2022-09-24T23:20:13 | CC-MAIN-2022-40 | 1664030333541.98 | [] | caflischlab-seed.readthedocs.io |
While that does provide good specificity for writing custom CSS rules, it is now common for themes to use common libraries such as Bootstrap, Foundation, and Vuetify. These libraries provide pre-defined layout rules you may wish to employ to managed your Store Locator Plus layouts. Rather than write your own custom CSS rules and hope you catch all the corner cases and browser-specific rules while building a responsive design you can now deploy standards already included in your themes.
For example, use the typical 12-point grid “row” class on the Page List Wrapper Class and “col” on the Page List Item class and let the library manage the flex-box rules that give you nice orderly rows that collapse into proper columns on mobile devices.
Extending [slp_pages] Shortcode Attributes
You can specify these values in the [slp_pages] shortcode as well.
[slp_pages style="full" no_map="1" pages_directory_entry_css_class="slp_page location_details text-center col-xs-12"]
This generates HTML similar to the following:
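The original screenshot of the generated markup is not reproduced here; as a rough, version-dependent sketch (only the quoted class string is taken from the shortcode above, the rest is illustrative), each entry carries the configured CSS classes:

<div class="slp_pages_group row">
    <div class="slp_page location_details text-center col-xs-12">
        ... location name, address and details ...
    </div>
</div>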
| https://docs.storelocatorplus.com/pages-appearance-settings/ | 2022-09-24T22:01:59 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['https://i0.wp.com/docs.storelocatorplus.com/wp-content/uploads/2018/04/Power-New-Page-List-Wrapper-Classes-2018-04-16_14-22-29.png?resize=685%2C321&ssl=1',
None], dtype=object)
array(['https://i0.wp.com/docs.storelocatorplus.com/wp-content/uploads/2018/04/HTML-Snippet-for-slp_page-css-class-modifiers-2018-04-17_13-50-09.png?resize=688%2C200&ssl=1',
None], dtype=object) ] | docs.storelocatorplus.com |
Features
- Show your visitors that your website is safe by displaying the McAfee Secure trustmark.
- Get your site scanned for malware, viruses, and other malicious activities.
- Increase sales and conversions by reminding your visitors that your website is part of the secure web.
- Manage your online reputation with ratings and reviews brought to you by TrustedSite.
Setup & Usage
Once you've activated the McAfee Secure Integration app in your website admin panel, head to the "Settings > McAfee Secure" menu, and click the "Activate Now" button to get started creating your McAfee Secure account.
On the popup that appears after clicking the activate button, enter your site url and email address, and follow the rest of the setup steps (if you already have a McAfee Secure account click to log in).
After entering your details and signing up the integration should complete automatically.
Once your website has been certified, the free version of McAfee Secure will show the trustmark on your website to the first 500 visitors each month. After the limit has been reached the trustmark will no longer be displayed for the remainder of the month. You can remove this limit and enable several more features to upgrading to their Pro service.
App Management
You can manage File Fabric Apps and third party apps from:
for the SaaS hosted service
and similarly for enterprise hosted File Fabric instances:
https://<ENDPOINT DOMAIN>/?p=authtoken
Removing the token will remove access to the File Fabric for the Application.
There is also the option of generating a token for applications that require it.
Also See: Managing User Devices | https://docs.storagemadeeasy.com/appmanagement | 2022-09-24T22:56:16 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.storagemadeeasy.com |
Hardware Design Files
The Inertial Sense hardware design files are available on our IS-hdw repository to facilitate product hardware development and integration.
- PCB Libraries - Schematic and layout files for printed circuit board designs.
- Products - 3D models and resources for the IMX, Rugged, EVB, and products useful for CAD and circuit board designs. | http://docs.inertialsense.com/user-manual/hardware/hardware_design/ | 2022-09-24T22:33:20 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.inertialsense.com |
ShowGrounds's Documentation
- Eagle Eye Chronos – Timing Integration Set Up
- SGL – Timing API – Version 1 – Setup Instructions
- Setting up Google Analytics & Facebook Pixel on Your Plugin Site
- USEF Updates
- Stabling Reservations – Check in/out times
- Classes – List View Reports
- Reports – Overview
- Credit Card Credentials Error Message
- Payments Listview – Reports
- Master Fees – IMPORTANT CHANGES 1-10-2022
- RVS – Streaming
- Remote Desktop Connection
- Payment Batches
- Shows – Overview
- Automatic Trainer Accounts
- Formatting the Date on Your Computer
- Feed Delivery
- System Preferences
- Horses List View – Reports
- Bad Email Handling in ShowGrounds
- Users Overview
- Users List View
- Users Detail
- Reports List View
- Fees Overview
- Payments Overview
- Horses Overview
- Classes Overview
- Divisions Overview
- Circuits Overview
- Reports Detail
- Entries Overview
- Reports List View – Quick Actions
- RTO Overview
- Error Messages
- Locked Records
- Getting Started with ShowGroundsLive
- Installing ShowGrounds V17.4 Client – Windows
- Shows List View
- Entries List View
- Company Administration
- Logging In
- The Pallette
- Setting Up a Show
- Entries List View – Quick Actions
- Groups Overview
- Groups List View
- Groups Detail
- Series Overview
- Series List View
- Payments List View
- Series Detail
- Divisions List View
- Circuits List View
- RTO List View
- Classes List View
- Fees List View
- Shows List View – Quick Action
- Horses List View
- Horses List View – Quick Actions
- Horses Detail
- Payments List View – Quick Actions
- Payments Detail
- Divisions List View – Quick Actions
- Divisions Detail
- Circuits List View – Quick Actions
- Circuits Detail
- RTO List View – Quick Actions
- Classes List View – Quick Actions
- Classes Detail View
- RTO Detail
- Fees List View – Quick Actions
- Fees Detail
- Shows Listview – Reports
- Shows Detail
- Entries List View – Reports
- Entries Detail
- Setting Up a Class
- FEI Integration Tools
- Ryegate Scoring – Configuring for communications with SRN
- SMS Set Up Guide
- Modules
- Devices
- To Do’s
- Reports
- Logins
- CRON Jobs
- Groups
- Users
- Shows
- Shows – List View
- Simple Scheduler – Managing and Applying “Breaks” to a schedule
- Shows – Creating Your First Show
- 1 – Shows Overview
- Shows – Modifying and updating
- Circuits
- Circuits Overview
- Facilities
- Facilities Overview
- Tax Rates | http://docs.showgroundsonline.com/doc/showgrounds-manual/ | 2022-09-24T21:49:58 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.showgroundsonline.com |
Use M3DB as remote storage for Prometheus
M3DB is an excellent candidate for a highly scalable remote storage solution for your Prometheus monitoring system. Many organizations are already using Prometheus and come to M3DB when they have outgrown their existing storage setup. With the ability to scale as needed and store large quantities of time series data, serving as backend storage for Prometheus is one of the main use cases for M3DB.
The steps here are designed to use with an existing Prometheus setup; for a quick example to try things out, try the Prometheus getting started guide which uses Prometheus to monitor itself as a starting point.
Configure Prometheus to store data in M3DB
Configure Prometheus to write data to remote storage. Copy the Service URI value from the Prometheus (Write) tab. In your Prometheus configuration file (mine is called prometheus.yml), add the following section, replacing <PROM_WRITE_URL> with the URL you copied.
remote_write:
  - url: "<PROM_WRITE_URL>"
Prometheus is now configured to send data to your Aiven for M3 service.
So that you can still access the data using your existing Prometheus service, configure Prometheus with the remote_read configuration to read data from the remote storage. Copy the Service URI value from the Prometheus (Read) tab. In your Prometheus configuration file, add the following section, replacing <PROM_READ_URL> with the URL you copied.
remote_read:
  - url: "<PROM_READ_URL>"
    read_recent: true
The read_recent parameter makes Prometheus read all data from the remote storage; this is useful so that you can test that the setup is working. Without this setting, Prometheus will return the most recent data from local storage if it still has it there.
Run Prometheus, and check that it starts successfully. After giving it some time to scrape a few metrics, you should see data in the Prometheus web interface. Verify that there is data in M3, either by querying the database directly or by using Grafana to visualize the data stored there. | https://docs.aiven.io/docs/products/m3db/howto/prometheus-storage.html | 2022-09-24T22:39:04 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.aiven.io |
Start the ADONIS Web Client
Before using the ADONIS web client, the ADONIS application server and the Apache Tomcat web server have to be started.
Start Services
In order to start the ADONIS application server and the Apache Tomcat web server:
Open the Services management. Press Windows+R to open the Run box, enter services.msc, and then click OK.
Start the ADONIS application server (service name e.g. "ADONISServer12.0Service") and the Apache Tomcat web server (service name e.g. "Tomcat9").
Once both the application server and the web server are running, the web client can be started.
Use Web Client
In order to open the web client login page:
Open a web browser and navigate to "http://localhost:8000/ADONIS12_0/" (assuming the default Tomcat port described below).
On other machines the web client can be accessed using the address "http://<SERVER_NAME>:<TOMCAT_PORT>/ADONIS12_0/".
<SERVER_NAME> is the name of the server machine, <TOMCAT_PORT> is the HTTP/1.1 Connector Port defined during setup at which Apache Tomcat (and therefore the web client) is accessible. The default value is “8000”. | https://docs.boc-group.com/adonis/en/docs/12.0/installation_manual/ins-3700000/ | 2022-09-24T23:26:03 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.boc-group.com |
Arduino as an ESP-IDF component
This method is recommended for advanced users. To use this method, you will need to have the ESP-IDF toolchain installed.
For a simplified method, see Installing using Boards Manager.
ESP32 Arduino lib-builder
If you don’t need any modifications in the default Arduino ESP32 core, we recommend you to install using the Boards Manager.
Arduino Lib Builder is the tool that integrates ESP-IDF into Arduino. It allows you to customize the default settings used by Espressif and try them in Arduino IDE.
For more information see Arduino lib builder
Installation
Note
Latest Arduino Core ESP32 version is now compatible with ESP-IDF v4.4. Please consider this compatibility when using Arduino as a component in ESP-IDF.
Download and install ESP-IDF.
For more information see Get Started.
Create a blank ESP-IDF project (use sample_project from /examples/get-started) or choose one of the examples.
In the project folder, create a new folder called components and clone this repository inside the newly created folder.
mkdir -p components && \
cd components && \
git clone https://github.com/espressif/arduino-esp32.git arduino && \
cd arduino && \
git submodule update --init --recursive && \
cd ../.. && \
idf.py menuconfig
Note
If you use Arduino with ESP-IDF often, you can place the arduino folder into the global components folder.
Configuration
Depending on one of the two following options, in the menuconfig set the appropriate settings.
Go to the section Arduino Configuration --->

- For usage of the app_main() function - Turn off Autostart Arduino setup and loop on boot
- For usage of the setup() and loop() functions - Turn on Autostart Arduino setup and loop on boot
Experienced users can explore other options in the Arduino section.
After the setup you can save and exit:
Save [S]
Confirm default filename [Enter]
Close confirmation window [Enter] or [Space] or [Esc]
Quit [Q]
Option 1. Using Arduino setup() and loop()
In main folder rename file main.c to main.cpp.
In main folder open file CMakeLists.txt and change main.c to main.cpp as described below.
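For reference, the main component's CMakeLists.txt would then register the renamed source; in a default ESP-IDF project it typically looks like this:

idf_component_register(SRCS "main.cpp"
                       INCLUDE_DIRS ".")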
Your main.cpp should be formatted like any other sketch.
//file: main.cpp
#include "Arduino.h"

void setup(){
  Serial.begin(115200);
  while(!Serial){
    ; // wait for serial port to connect
  }
}

void loop(){
  Serial.println("loop");
  delay(1000);
}
Option 2. Using ESP-IDF app_main()
In main.c or main.cpp you need to implement app_main() and call initArduino(); in it. Keep in mind that setup() and loop() will not be called in this case. Furthermore, app_main() is executed once, like a normal function, so if you need an infinite loop as in Arduino, place it there.
//file: main.c or main.cpp
#include "Arduino.h"

extern "C" void app_main()
{
  initArduino();

  // Arduino-like setup()
  Serial.begin(115200);
  while(!Serial){
    ; // wait for serial port to connect
  }

  // Arduino-like loop()
  while(true){
    Serial.println("loop");
  }

  // WARNING: if program reaches end of function app_main() the MCU will restart.
}
Build, flash and monitor
For both options use command
idf.py -p <your-board-serial-port> flash monitor
The project will build, upload and open the serial monitor to your board
Some boards require button combo press on the board: press-and-hold Boot button + press-and-release RST button, release Boot button
After a successful flash, you may need to press the RST button again
To terminate the serial monitor press Ctrl+]
Logging To Serial
If you are writing code that does not require Arduino to compile and you want your ESP_LOGx macros to work in Arduino IDE, you can enable the compatibility by adding the following lines:
#ifdef ARDUINO_ARCH_ESP32
#include "esp32-hal-log.h"
#endif
FreeRTOS Tick Rate (Hz)
The Arduino component requires the FreeRTOS tick rate CONFIG_FREERTOS_HZ set to 1000Hz in make menuconfig -> Component config -> FreeRTOS -> Tick rate.
Compilation Errors
As commits are made to esp-idf and submodules, the codebases can develop incompatibilities that cause compilation errors. If you have problems compiling, follow the instructions in Issue #1142 to roll esp-idf back to a different version. | https://docs.espressif.com/projects/arduino-esp32/en/latest/esp-idf_component.html | 2022-09-24T23:34:36 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.espressif.com |
Algorithms
An algorithm is step-by-step set of instructions for completing a task.
Searching
In computer science, a search algorithm is any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, either with discrete or continuous values.
For now, we are going to go over two different types of searches:
- Linear Search
- Binary Search
For the following examples, we are going to be using a row of lockers with numbers inside (an array) and we will look through them to find something, while returning a boolean (true or false) as a result.
A linear search is where we move in a line (usually start to end or end to start). The idea of the algorithm is to iterate across the array from left to right, searching for a specified element.
Worst-case scenario: We have to look through the entire array of n elements, either because the target element is the last element of the array or doesn't exist in the array at all.
Best-case scenario: The target element is the first element of the array, and so we can stop looking immediately after we start.
Now let's look through the lockers to find one with the number 50 inside. Some pseudocode for linear search could be written as:
For i from 0 to n–1        // from start (0) to end (n-1)
    If i'th element is 50
        Return true        // if the i'th element is 50 - return true
Return false               // if not 50, return false
A binary search instead starts in the middle of a sorted array and divides the problem in half with each step.

Worst-case scenario: We have to divide a list of n elements in half repeatedly to find the target element, either because the target element will be found at the end of the last division or doesn't exist in the array at all.
Best-case scenario: The target element is at the midpoint of the full array, and so we can stop looking immediately after we start.
Some pseudocode for binary search could be written as:
If no items
    Return false
If middle item is 50
    Return true
Else if 50 < middle item
    Search left half
Else if 50 > middle item
    Search right half
Big O
Computer scientists have created a way to describe algorithms (how well it is designed), and it's generally called big O.
The more formal way to describe this is with big O notation, which we can think of as “on the order of”. For example, if our algorithm is linear search, it will take approximately O(n) steps, “on the order of n”. In fact, even an algorithm that looks at two items at a time and takes n/2 steps has O(n). This is because, as n gets bigger and bigger, only the largest term, n, matters.
There are some common running times (how many seconds does it take, how many steps does it take, etc.):
(lower is better)
- O(n2)
- O(n log n)
- O(n) (linear search)
- O(log n) (binary search)
- O(1)
Computer scientists might also use big Ω, big Omega notation, which is the lower bound of number of steps for our algorithm. (Big O is the upper bound of number of steps, or the worst case, and typically what we care about more.) With linear search, for example, the worst case is n steps, but the best case is 1 step since our item might happen to be the first item we check. The best case for binary search, too, is 1 since our item might be in the middle of the array.
And we have a similar set of the most common big Ω running times:
(lower is better)
- Ω(n2)
- Ω(n log n)
- Ω(n) (counting the number of items)
- Ω(log n)
- Ω(1) (linear search, binary search)
Linear Search
Now let's create a program to better visualize a linear search. Here we initialize an array with some values, and we check the items in the array one at a time, in order.
And in each case, depending on whether the value was found or not, we can return an exit code of either 0 (for success) or 1 (for failure).
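The program itself appeared as a screenshot in the original notes; a minimal reconstruction in C (the values are illustrative) could look like this:

#include <stdio.h>

int main(void)
{
    int numbers[] = {20, 500, 10, 5, 100, 1, 50};

    // Check each of the 7 items in the array, in order
    for (int i = 0; i < 7; i++)
    {
        if (numbers[i] == 50)
        {
            printf("Found\n");
            return 0; // exit code 0 for success
        }
    }
    printf("Not found\n");
    return 1; // exit code 1 for failure
}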
We can do the same for names:
We can't compare strings directly, since they're not a simple data type but rather an array of many characters, and we need to compare them differently. Luckily, the string library has a strcmp function which compares strings for us and returns 0 if they're the same, so we can use that.
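Again reconstructing the original screenshot (the names are illustrative), the version for strings could look like this:

#include <cs50.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    string names[] = {"Bill", "Charlie", "Fred", "George", "Ginny", "Percy", "Ron"};

    for (int i = 0; i < 7; i++)
    {
        // strcmp returns 0 if the two strings are the same
        if (strcmp(names[i], "Ron") == 0)
        {
            printf("Found\n");
            return 0;
        }
    }
    printf("Not found\n");
    return 1;
}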
Now let's implement a phone book with the same ideas. If the name at a certain index in the names array matches who we're looking for, we'll return the phone number in the numbers array, at the same index. But that means we need to be particularly careful to make sure that each number corresponds to the name at each index, especially if we add or remove names and numbers.
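A sketch of that phone book (the values are illustrative; cs50.h provides the string type used in these notes):

#include <cs50.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    string names[] = {"Carter", "David"};
    string numbers[] = {"+1-617-495-1000", "+1-949-468-2750"};

    for (int i = 0; i < 2; i++)
    {
        if (strcmp(names[i], "David") == 0)
        {
            // The number at the same index belongs to this name
            printf("Found %s\n", numbers[i]);
            return 0;
        }
    }
    printf("Not found\n");
    return 1;
}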
Let's improve the above code using our own custom data type!
Structs
We can make our own custom data types called structs. We can think of structs as containers, inside of which are multiple other data types. Here, we create our own type with a struct called person, which will have a string called name and a string called number. Then, we can create an array of these struct types and initialize the values inside each of them, using a new syntax, the dot operator (.), to access the properties of each person. In our loop, we can now be more certain that the number corresponds to the name, since they are from the same person element.
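Reconstructing the original screenshot as a sketch:

#include <cs50.h>
#include <stdio.h>
#include <string.h>

// Our own custom type: a person has a name and a number
typedef struct
{
    string name;
    string number;
}
person;

int main(void)
{
    person people[2];

    people[0].name = "Carter";
    people[0].number = "+1-617-495-1000";

    people[1].name = "David";
    people[1].number = "+1-949-468-2750";

    for (int i = 0; i < 2; i++)
    {
        if (strcmp(people[i].name, "David") == 0)
        {
            printf("Found %s\n", people[i].number);
            return 0;
        }
    }
    printf("Not found\n");
    return 1;
}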
Sorting
Sorting is the process of rearranging elements into a particular order, ready for further processing by the program logic. There are multiple sorting algorithms available, and we can implement any of them in C.
Bubble Sort
In bubble sort, the idea of the algorithm is to move higher valued elements generally towards the right and lower valued elements generally towards the left.
Let's take 8 random numbers (6, 3, 8, 5, 2, 7, 4, 1) and try to sort them in C.
First, we can look at the first two numbers and swap them so they are in order:
The next pair, 6 and 8, are in order, so we don't need to swap them.
The next pair, 8 and 5, need to be swapped:
We continue until we reach the end of the list:
Our list isn't sorted yet, but we're slightly closer to the solution because the biggest value, 8, has been shifted all the way to the right.
We repeat this with another pass through the list, over and over, until it is sorted correctly.
The pseudocode for this might look like:
Repeat n–1 times
    For i from 0 to n–2
        If i'th and i+1'th elements out of order
            Swap them
- Since we are comparing the i'th and i+1'th elements, we only need to go up to n – 2 for i. Then, we swap the two elements if they're out of order.
- And we can stop after we’ve made n – 1 passes, since we know the largest n–1 elements will have bubbled to the right.
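A direct C translation of this pseudocode (an unoptimized sketch) might be:

#include <stdio.h>

void bubble_sort(int array[], int n)
{
    // Make n - 1 passes over the array
    for (int pass = 0; pass < n - 1; pass++)
    {
        // Compare each adjacent pair and swap if out of order
        for (int i = 0; i < n - 1; i++)
        {
            if (array[i] > array[i + 1])
            {
                int tmp = array[i];
                array[i] = array[i + 1];
                array[i + 1] = tmp;
            }
        }
    }
}

int main(void)
{
    int numbers[] = {6, 3, 8, 5, 2, 7, 4, 1};
    bubble_sort(numbers, 8);
    for (int i = 0; i < 8; i++)
    {
        printf("%i ", numbers[i]);
    }
    printf("\n"); // prints: 1 2 3 4 5 6 7 8
}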
We have n – 2 steps for the inner loop, and n – 1 loops, so we get n2 – 3n + 2 steps total. But the largest factor, or dominant term, is n2, as n gets larger and larger, so we can say that bubble sort is O(n2).
Worst-case scenario: The array is in reverse order; we have to "bubble" each of the n elements all the way across the array, and since we can only fully bubble one element into position per pass, we must do this n times.
Best-case scenario: The array is already perfectly sorted, and we make no swaps on the first pass.
We’ve seen running times like the following, and so even though binary search is much faster than linear search, it might not be worth the one–time cost of sorting the list first, unless we do lots of searches over time:
- O(n2) (bubble sort)
- O(n log n)
- O(n) (linear search)
- O(log n) (binary search)
- O(1)
And Ω for bubble sort is still n2, since we still check each pair of elements for n – 1 passes.
Selection Sort
In selection sort, the idea of the algorithm is to find the smallest unsorted element and add it to the end of the sorted list. This basically builds a sorted list, one element at a time.
We can take another approach with the same set of numbers:
6 3 8 5 2 7 4 1
First, we’ll look at each number, and remember the smallest one we’ve seen. Then, we can swap it with the first number in our list, since we know it’s the smallest:
Now we know at least the first element of our list is in the right place, so we can look for the smallest element among the rest, and swap it with the next unsorted element (now the second element):
We can repeat this over and over, until we have a sorted list.
The pseudocode for this might look like:
For i from 0 to n–1
    Find smallest item between i'th item and last item
    Swap smallest item with i'th item
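In C, a sketch of the same idea:

#include <stdio.h>

void selection_sort(int array[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        // Find the smallest item between the i'th item and the last item
        int smallest = i;
        for (int j = i + 1; j < n; j++)
        {
            if (array[j] < array[smallest])
            {
                smallest = j;
            }
        }

        // Swap the smallest item with the i'th item
        int tmp = array[i];
        array[i] = array[smallest];
        array[smallest] = tmp;
    }
}

int main(void)
{
    int numbers[] = {6, 3, 8, 5, 2, 7, 4, 1};
    selection_sort(numbers, 8);
    for (int i = 0; i < 8; i++)
    {
        printf("%i ", numbers[i]);
    }
    printf("\n");
}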
With big O notation, we still have running time of O(n2), since we were looking at roughly all n elements to find the smallest, and making n passes to sort all the elements.
Worst-case scenario: We have to iterate over each of the n elements of the array (to find the smallest unsorted element) and we must repeat this process n times, since only one element gets sorted on each pass.
Best-case scenario: Exactly the same! There's no way to guarantee the array is sorted until we go through this process for all the elements.
So it turns out that selection sort is fundamentally about the same as bubble sort in running time:
- O(n2) (bubble sort, selection sort)
- O(n log n)
- O(n) (linear search)
- O(log n) (binary search)
- O(1)
The best case, Ω, is also n2.
We can go back to bubble sort and change its algorithm to be something like this, which will allow us to stop early if all the elements are sorted:
Repeat until no swaps
    For i from 0 to n–2
        If i'th and i+1'th elements out of order
            Swap them
Now, we only need to look at each element once, so the best case is now Ω(n):
- Ω(n2) (selection sort)
- Ω(n log n)
- Ω(n) (bubble sort)
- Ω(log n)
- Ω(1) (linear search, binary search)
Insertion Sort
In insertion sort, the idea of the algorithm is to build your sorted array in place, shifting elements out of the way if necessary to make room as you go. This is different to bubble sort and selection sort, where we slide actually slide elements out of the way while sorting.
In pseudo code:
Call the first element of the array "sorted".
Repeat until all elements are sorted:
    Look at the next unsorted element.
    Insert into the "sorted" portion by shifting the requisite number of elements.
Worst-case scenario: The array is in reverse order, and we have to shift each of the n elements n positions each time we make an insertion.

Best-case scenario: The array is already perfectly sorted, and we simply keep moving the line between "unsorted" and "sorted" as we examine each element.
Insertion sort can be seen as: O(n2) and Ω(n).
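A C sketch of insertion sort, following the pseudocode above:

#include <stdio.h>

void insertion_sort(int array[], int n)
{
    // The first element counts as "sorted"; grow the sorted portion one element at a time
    for (int i = 1; i < n; i++)
    {
        int value = array[i]; // the next unsorted element
        int j = i - 1;

        // Shift larger sorted elements one slot to the right to make room
        while (j >= 0 && array[j] > value)
        {
            array[j + 1] = array[j];
            j--;
        }
        array[j + 1] = value; // insert it into its place
    }
}

int main(void)
{
    int numbers[] = {6, 3, 8, 5, 2, 7, 4, 1};
    insertion_sort(numbers, 8);
    for (int i = 0; i < 8; i++)
    {
        printf("%i ", numbers[i]);
    }
    printf("\n");
}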
We can use a visualization tool, found here, with animations for how the elements move within arrays for both bubble sort and insertion sort.
Recursion
Recall that in week 0, we had pseudocode for finding a name in a phone book, where we had lines telling us to “go back” and repeat some steps:
1  Pick up phone book
2  Open to middle of phone book
3  Look at page
4  If Smith is on page
5      Call Mike
6  Else if Smith is earlier in book
7      Open to middle of left half of book
8      **Go back to line 3**
9  Else if Smith is later in book
10     Open to middle of right half of book
11     **Go back to line 3**
12 Else
13     Quit
1  Pick up phone book
2  Open to middle of phone book
3  Look at page
4  If Smith is on page
5      Call Mike
6  Else if Smith is earlier in book
7      **Search left half of book**
8  Else if Smith is later in book
9      **Search right half of book**
10 Else
11     Quit
This seems like a cyclical process that will never end, but we’re actually dividing the problem in half each time, and stopping once there’s no more book left.
Recursion occurs when a function or algorithm refers to itself (references its own name in the code), as in the new pseudocode above.
Let's try to visualize this with simple code.
The factorial function (n!) is defined over all positive integers. n! equals all of the positive integers less than or equal to n, multiplied together. Thinking in terms programming, we'll define the mathematical function n! as
fact(n).
fact(1) = 1
fact(2) = 2 * 1
fact(3) = 3 * 2 * 1
fact(4) = 4 * 3 * 2 * 1
...
fact(1) = 1
fact(2) = 2 * fact(1)
fact(3) = 3 * fact(2)
fact(4) = 4 * fact(3)
...
fact(n) = n * fact(n-1).
This forms the basis for a recusive definition of the factorial function.
Every recursive function has two cases that could apply, given any input:
- The base case, which when triggered will terminate the recursive process.
- The recursive case, which is where the recursion will actually occur.
We can see this in the following code:
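The code was shown as an image in the original notes; a minimal reconstruction:

#include <stdio.h>

int fact(int n)
{
    // Base case: terminates the recursive process
    if (n == 1)
    {
        return 1;
    }
    // Recursive case: n! = n * (n - 1)!
    return n * fact(n - 1);
}

int main(void)
{
    printf("%i\n", fact(5)); // prints 120
}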
In general, but not always, recursive functions replace loops in non-recursive functions:
Below is the iterative version of the same code above (notice how much simpler the recursive version is).
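A reconstruction of that iterative version:

#include <stdio.h>

int fact(int n)
{
    int product = 1;
    while (n > 0)
    {
        product *= n;
        n--;
    }
    return product;
}

int main(void)
{
    printf("%i\n", fact(5)); // prints 120
}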
In week 1, we implemented a “pyramid” of blocks in the following shape:
#
##
###
####
This was the code we created for that problem set:
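The code was shown as an image; a reconstruction (with a fixed height for simplicity):

#include <stdio.h>

int main(void)
{
    int height = 4;

    // For each row from 1 to height...
    for (int i = 1; i <= height; i++)
    {
        // ...print i blocks, then end the row
        for (int j = 1; j <= i; j++)
        {
            printf("#");
        }
        printf("\n");
    }
}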
- Here, we use for loops to print each block in each row.
But notice that a pyramid of height 4 is actually a pyramid of height 3, with an extra row of 4 blocks added on. And a pyramid of height 3 is a pyramid of height 2, with an extra row of 3 blocks. A pyramid of height 2 is a pyramid of height 1, with an extra row of 2 blocks. And finally, a pyramid of height 1 is just a pyramid of height 0, or nothing, with another row of a single block added on.
With this idea in mind, we can write:
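A reconstruction of that recursive version:

#include <stdio.h>

void draw(int h)
{
    // Base case: a pyramid of height 0 is nothing at all
    if (h == 0)
    {
        return;
    }

    // Recursive case: first draw a pyramid of height h - 1...
    draw(h - 1);

    // ...then add one more row of width h
    for (int i = 0; i < h; i++)
    {
        printf("#");
    }
    printf("\n");
}

int main(void)
{
    draw(4);
}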
Now, our draw function first calls itself recursively, drawing a pyramid of height h - 1. But even before that, we need to stop if h is 0, since there won't be anything left to draw. After, we draw the next row, or a row of width h.
Merge Sort
In merge sort, the idea of the algorithm is to sort smaller arrays and then combine those arrays together (merge them) in sorted order.
We can take the idea of recursion to sorting, with another algorithm called merge sort. The pseudocode might look like:
If only one item
    Return
Else
    Sort left half of items (assuming n > 1)
    Sort right half of items (assuming n > 1)
    Merge sorted halves
We will use an unsorted list to demonstrate merge sorting:
7 4 5 2 6 3 8 1
First, we'll sort the left half (the first four elements):
Well, to sort that, we need to sort the left half of the left half first:
Now, we have just one item, 7, in the left half, and one item, 4, in the right half. So we'll merge that together, by taking the smallest item from each list first:
And now we go back to the right half of the left half, and sort it:
Now, both halves of the left half are sorted, so we can merge the two of them together. We look at the start of each list, and take 2 since it's smaller than 4. Then, we take 4, since it's now the smallest item at the front of both lists. Then, we take 5, and finally, 7, to get:
Next, we do the same thing for the right half of numbers and end up with:
And finally, we can merge both halves of the whole list, following the same steps as before. Notice that we don’t need to check all the elements of each half to find the smallest, since we know that each half is already sorted. Instead, we just take the smallest element of the two at the start of each half.
It took a lot of steps, but it actually took fewer steps than the other algorithms we’ve seen so far. We broke our list in half each time, until we were “sorting” eight lists with one element each.
Since our algorithm divided the problem in half each time, its running time is logarithmic with O(log n). And after we sorted each half (or half of a half), we needed to merge together all the elements, with n steps since we had to look at each element once.
Worst-case scenario: We have to split n elements up and then recombine them, effectively doubling the sorted subarrays as we build them. (Combining sorted 1-element arrays into 2-element arrays, combining soorted 2-element arrays into 4-element arrays...) - O(n log n).
Best-case scenario: The array is already perfectly sorted. But we still have to split and recombine it back together with this algorithm. - Ω(n log n).
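Translating the pseudocode into C (a sketch; the helper names are illustrative):

#include <stdio.h>

// Merge the two sorted halves array[start..mid] and array[mid+1..end]
void merge(int array[], int start, int mid, int end)
{
    int temp[end - start + 1];
    int left = start, right = mid + 1, k = 0;

    // Repeatedly take the smaller front item of the two halves
    while (left <= mid && right <= end)
    {
        if (array[left] <= array[right])
        {
            temp[k++] = array[left++];
        }
        else
        {
            temp[k++] = array[right++];
        }
    }

    // Copy over whatever remains in either half
    while (left <= mid)
    {
        temp[k++] = array[left++];
    }
    while (right <= end)
    {
        temp[k++] = array[right++];
    }

    // Copy the merged result back into the array
    for (int i = 0; i < k; i++)
    {
        array[start + i] = temp[i];
    }
}

void merge_sort(int array[], int start, int end)
{
    // If only one item (or none), it is already sorted
    if (start >= end)
    {
        return;
    }

    int mid = (start + end) / 2;
    merge_sort(array, start, mid);   // sort left half
    merge_sort(array, mid + 1, end); // sort right half
    merge(array, start, mid, end);   // merge sorted halves
}

int main(void)
{
    int numbers[] = {7, 4, 5, 2, 6, 3, 8, 1};
    merge_sort(numbers, 0, 7);
    for (int i = 0; i < 8; i++)
    {
        printf("%i ", numbers[i]);
    }
    printf("\n");
}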
So our total running time is O(n log n):
- O(n2) (bubble sort, selection sort)
- O(n log n) (merge sort)
- O(n) (linear search)
- O(log n) (binary search)
- O(1)
To see this in real time, watch this video to see multiple sorting algorithms running at the same time.
Algorithms Summary
Algorithm Problems
To see the problem sets for the covered algorithms, please click here. | https://docs.nicklyss.com/c-algorithms/ | 2022-09-24T21:43:47 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.nicklyss.com |
Data Binding RadChart
Many business applications rely on database data, for example MS SQL, Oracle, MySQL, Access ODBC as well as XML data and business objects in multiple-tier scenarios. RadChart allows automatic binding to all of these using either standard Microsoft supplied data source controls SqlDataSource, AccessDataSource, XmlDataSource and ObjectDataSource or any DataSourceControl implementation. The end result is shown in the chart immediately at design-time.
You can also bind to any object that supports one of the following interfaces:

IEnumerable: Exposes an enumerator, which supports a simple iteration over a collection.
IListSource: Provides functionality to an object to return a list that can be bound to a data source.
Some of the implementations of these interfaces include:
Array: See Data Binding RadChart to an Array for an example.
CollectionBase Objects.
DataView, DataTable, DataSet: See Data Bindng to a Database Object, Data binding RadChart to an ObjectDatasource, DataBinding to an XML file examples.
Generic List class: See Data Binding RadChart to a Generic List for an example of binding to a generic list.
You can also bind to XML data through an XMLDataSource or directly to an XML file.
Once you have the data source defined you can use the RadChart Wizard, Property Editor or code for selecting which data columns will be used to populate the chart. The key properties for binding to a data source are the DataRelatedProperties of the Series, the Axis and the Chart.
Energy UK comments on Ofgem's capacity adequacy assessment
It's good to review energy capacity
Reviewing the 'store cupboard capacity’ needed by the UK to keep the lights on is timely and necessary, Energy UK chief executive Angela Knight said today. Mrs Knight was responding to Ofgem’s capacity adequacy assessment which revealed how much spare energy capacity there needs to be to cope with fluctuations in demand.
Angela Knight said:
"Energy companies want to make sure that their customers can rely on having the power they want, when they want it. There was much concern earlier this year about whether the UK had sufficient electricity generating capacity. The industry was keen to see the authorities do further analysis of the issues and decide how much standby power might be needed, just in case. The Government’s confirmation today that it will introduce a market to ensure we have sufficient capacity in the long-term is good news but we need to get going on this and for Ofgem, National Grid and DECC to work together to make arrangements to avoid any energy gap opening up over the next few years."
We will work with the authorities on this and the development of other tools - such as a capacity auction - to tackle energy demand and to ensure a fair price is paid to have energy at the flick of a switch.
Contacts
- Energy UK press office - 020 7747 2941
- Out of hours - 07500 707 327 | https://www.docs.energy-uk.org.uk/media-and-campaigns/press-releases/21-2013/215-energy-uk-comments-on-ofgem-s-capacity-adequacy-assessment.html | 2020-07-02T19:03:35 | CC-MAIN-2020-29 | 1593655879738.16 | [] | www.docs.energy-uk.org.uk |
Energy UK marks a decade of the Climate Change Act
Ten years to the day since the Climate Change Act was passed, Energy UK is marking the occasion by publishing a ten-point action plan setting out how the UK can continue as a world leader in tackling climate change.
The power sector has made a massive contribution in helping the UK’s carbon emissions fall to a level last seen in 1890 - thanks to the huge increase in the amount of electricity generated by low carbon sources over the last decade. The power sector has more than halved its own emissions since the Act was passed and low carbon sources now supply 51% of the electricity generated in the UK.
During this time the cost of renewables, such as wind and solar, has plummeted not only making clean energy increasingly cheaper but also boosting the UK economy with an estimated 400,000 people employed in low carbon jobs across the country.
Energy UK’s ten-point action plan therefore sets out the steps needed to build on this progress. In an accompanying collection of essays, politicians - such as current energy Minister Claire Perry and former environment Secretary of State David Miliband - scientists, academics and regulators as well as figures from the energy sector and environmental groups, offer their different perspectives on how the Climate Change Act came into force, its influence in effecting the power sector’s transformation and the challenges that lie ahead both for the UK and the rest of the world.
To accompany this publication, Energy UK has also produced a short film where a group of 10 years olds from Micklem Primary School in Hemel Hempstead talk about why climate change matters to them.
The publication will be officially launched tonight at a Parliamentary event marking the anniversary held in partnership with the Committee on Climate Change (CCC), the All-Party Parliamentary Climate Change Group (APPCCG) and Policy Connect. Speaking at the event will be Dr Caroline Lucas MP (APPCCG Chair), The Rt Hon Claire Perry MP (Minister of State for Energy and Clean Growth), The Rt Hon. Ed Miliband MP (former Secretary of State for the Department of Energy and Climate Change), Lawrence Slade (Energy UK’s chief executive), Dr Emily Shuckburgh (British Antarctic Survey) and The Rt Hon. the Lord Deben (Chair of the CCC)
Energy UK's chief executive, Lawrence Slade said:
“Ten years on from the Act, it’s an appropriate time to reflect on the astonishing transformation that it has helped bring about in the power sector in particular, to a degree few thought possible at the time. It was a landmark and courageous piece of legislation which committed us to binding targets for reducing emissions and has made the UK a world leader in tackling climate change.
“As well as the environmental benefits resulting from this success story, decarbonisation has boosted our economy, with the investment and innovation seen over the last ten years giving clean energy an ever growing share of the power we use - at an ever falling cost.
“But as well as celebrating the achievements, we cannot lose sight of the fact that much more work and even greater challenges lie ahead. That’s why our ten-point action plan sets out the path that the UK needs to follow if we are to keep leading the way in tackling climate change.”
- Watch the latest Energy in the UK film here | https://www.docs.energy-uk.org.uk/media-and-campaigns/press-releases/412-2018/6928-energy-uk-marks-a-decade-of-the-climate-change-act.html | 2020-07-02T18:57:46 | CC-MAIN-2020-29 | 1593655879738.16 | [] | www.docs.energy-uk.org.uk |
Orientation
The following topics describe the major components of this product.
- Application monitoring
- Infrastructure monitoring
- Capacity planning
TrueSight Presentation Server integrates with other BMC products to enable tools that help IT Operations monitor and manage the infrastructure or aid in troubleshooting and remediation of outages. For further information about this product, see Key concepts.
Product roles
The primary product roles align with the default Authorization Profiles that define the groups, roles, and permissions that grant access to the TrueSight Presentation Server interfaces.
- Executives or tenant – Checks the health of the applications and services they are responsible for delivering.
- IT Operations – Monitors the performance and availability of applications and the IT infrastructure. Investigates the cause and initiates remediation.
- Technology Specialist – Sets up and monitors the performance and availability of his respective technology to ensure that it meets the prescribed business requirements and SLAs.
- Application Specialist – Models business applications and manages the software and IT infrastructure that make up an application.
- Capacity Planner – Reserves and schedules IT infrastructure resources, forecasts and models changes to adjust IT resources, report the cost of IT resources to business and IT and application stakeholders.
- Solutions Administrator – Plans, deploys, installs, configures, secures, and upgrades TrueSight Operations Management and TrueSight Capacity Optimization and their data providers.
For details about these roles, see Managing users and access control.
Product features
The TrueSight console provides the features that enable the product's core use cases and enable users to perform tasks with the product.
TrueSight console features
In the TrueSight console, users monitor applications and infrastructure, plan IT resources, configure the TrueSight system components, administer security, set up monitoring, and set up data collection for TrueSight Infrastructure Management, TrueSight App Visibility Manager, BMC Synthetic Transaction Execution Adapter, and TrueSight Capacity Optimization.
To get started, see the following topics:
- Getting started with dashboards
- Getting started with infrastructure monitoring
- Getting started with application monitoring
- Getting started with synthetic transaction monitoring
- Getting started with the TrueSight Capacity Optimization console
- Getting started with solution administration in the TrueSight console
Product documentation
The TrueSight Presentation Server documentation helps new and experienced users implement or use this product. Based on your role, the following sections of the documentation are recommended.
When working in the product, click the Help icons to directly link to the topic pertaining to your task. For working offline, you can download an Adobe Acrobat PDF of this documentation from the PDFs page.
Presentation Server documentation sections
Each section identifies the type of information available and links to instructions.
TrueSight Operations Management deployment
- Planning
- Installing
- Upgrading
TrueSight Capacity Optimization deployment
- Planning
- Installing
- Upgrading | https://docs.bmc.com/docs/tsps113/orientation-765456085.html | 2020-07-02T19:10:16 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
Manage Policies
The Policies page displays your existing policies.
When you select a policy from the list of existing policies (placing a check in the box beside its name), the More button appears.
Upload a Policy
The Upload button allows you to upload a previously exported policy. Using the native file browser box, select the policy from your local system and select Open.
Download a Policy
Select Download to open the browser’s download dialog box. From here, you can open the policy in an external program (e.g., text editor) or save the policy to the directory of your choice. Depending on the browser, the policy may be downloaded automatically.
Note: Passwords and .audit files contained in a policy will not be exported.
Copy a Policy
To copy a policy, select the policy, select More, and select Copy.
Delete a Policy
To delete a policy, select the policy, select X or More, and select Delete. | https://docs.tenable.com/nessus/6_6/Content/HowToManagePolicies.htm | 2017-04-23T11:52:53 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.tenable.com |
You can disable Airmail from being shown in the menu bar or Dock. Go to Preferences > General and you will see the option shown in the picture below.
In the screenshot above, the blue box marks where you can choose either the Dock or the menu bar, or have Airmail shown in both.
http://docs.airmailapp.com/airmail-for-mac/airmail-icon-dockmenu-bar-airmail-for-macos | 2017-04-23T11:44:42 | CC-MAIN-2017-17 | 1492917118552.28 | [array(['https://uploads.intercomcdn.com/i/o/8484622/3256e49ed3c71abc78e6cc38/Screen%2520Shot%25202016-07-01%2520at%252018.07.39.png', None], dtype=object)] | docs.airmailapp.com
Set Up Usage Monitoring
What you see in usage monitoring is the result of traffic monitoring. You first position the monitoring point so that it has access to the traffic passing through your network. When you start usage monitoring, the monitoring point starts generating flow records in real time. AppNeta Performance Manager (APM) collates those records into groups and classes so that you can easily understand how your network resources are being consumed.
Prerequisites
- Set up a monitoring point.
- Make sure that you set the monitoring point location.
- Turn on email notifications for usage monitoring events.
Declare local subnets
If you want APM to be able to distinguish between inbound and outbound traffic, you need to tell it which subnets are local to the monitoring point. Configuring traffic direction has the added benefit of hostname resolution for local subnets.
- Navigate to Usage > Monitoring Points.
- Click ‘configure’ next to any monitoring point.
- Select ‘traffic direction’ from the drop-down menu.
- Enter a network address and subnet mask.
- Click ‘+’ for each additional subnet.
- Click ‘apply’ before navigating to any other configuration option.
Start monitoring
On the monitoring point list page, click ‘start usage monitoring’. It takes at least 5 minutes for the first results to come in.
There is no indicator for whether usage monitoring is turned on and collecting data. This is because your APM service instance is stateless. Instead, look at ‘most recent traffic rate’ on the monitoring points page.
This column shows the average rate of traffic for the most recent 5-minute period. If monitoring is on, this column will update. Since usage data rolls in every 5 minutes, it may take up to 10 minutes for you to see any change.
Next steps
Let monitoring run for a while, then consider these additional features:
- Usage alerts
- Describe when APM should trigger alerts in terms of flow characteristics—e.g., rate, volume, ip address—and corresponding thresholds.
- Custom apps
- If you don’t see a particular app you’re looking for—for example, if it’s being lumped together with tcp—define a custom app.
- User resolution
- If you’re running Active Directory on Windows Server 2008 R2 in your network, you can enable user resolution.
- Second interface
- Monitoring points r40 or higher support a second capture interface so you can monitor multiple points in your network. You can also give each interface a meaningful name so that there’s no confusion when starting and stopping monitoring. | https://docs.appneta.com/Usage/usage-deploy.html | 2017-04-23T11:59:15 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.appneta.com
Hostname Resolution
By default hostnames are resolved every time a usage monitoring page is loaded, whether or not the scrubbed time range is in the past. Configuring traffic direction ensures that ip addresses in local subnets are resolved at the time flow records are generated, and that that information is preserved for the duration of usage monitoring history; addresses in external subnets will continue to be resolved in real time. | https://docs.appneta.com/usage-hostname-resolution.html | 2017-04-23T11:55:18 | CC-MAIN-2017-17 | 1492917118552.28 | [] | docs.appneta.com
Debugging the Swift Compiler
Abstract
This document contains some useful information for debugging the swift compiler and swift compiler output.
Printing the Intermediate Representations
The most important thing when debugging the compiler is to examine the IR. Here is how to dump the IR after the main phases of the swift compiler (assuming you are compiling with optimizations enabled):
Parser. To print the AST after parsing:
swiftc -dump-ast -O file.swift
SILGen. To print the SIL immediately after SILGen:
swiftc -emit-silgen -O file.swift
Mandatory SIL passes. To print the SIL after the mandatory passes:
swiftc -emit-sil -Onone file.swift
Well, this is not quite true, because the compiler is running some passes for -Onone after the mandatory passes, too. But for most purposes you will get what you want to see.
Performance SIL passes. To print the SIL after the complete SIL optimization pipeline:
swiftc -emit-sil -O file.swift
IRGen. To print the LLVM IR after IR generation:
swiftc -emit-ir -Xfrontend -disable-llvm-optzns -O file.swift
LLVM passes. To print the LLVM IR after LLVM passes:
swiftc -emit-ir -O file.swift
Code generation. To print the final generated code:
swiftc -S -O file.swift
Compilation stops at the phase where you print the output. So if you want to print the SIL and the LLVM IR, you have to run the compiler twice. The output of all these dump options (except -dump-ast) can be redirected with an additional -o <file> option.
Debugging on SIL Level
Options for Dumping the SIL
Often it is not sufficient to dump the SIL at the begin or end of the optimization pipeline. The SILPassManager supports useful options to dump the SIL also between pass runs.
The option -Xllvm -sil-print-all dumps the whole SIL module after all passes. Although it prints only functions which were changed by a pass, the output can get very large. It is useful if you identified a problem in the final SIL and you want to check which pass introduced the wrong SIL.
There are several other options available, e.g. to filter the output by function names (-Xllvm -sil-print-only-function/s) or by pass names (-Xllvm -sil-print-before/after/around). For details see PassManager.cpp.
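For example, to dump only a single function after each pass that changes it, the flags can be combined like this (a sketch; the mangled function name is the one used in the breakpoint examples later in this document):

swiftc -emit-sil -O -Xllvm -sil-print-all -Xllvm -sil-print-only-function=_TFC3nix1Xd file.swift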
Dumping the SIL and other Data in LLDB
When debugging the swift compiler with LLDB (or Xcode, of course), there is an even more powerful way to examine the data in the compiler, e.g. the SIL. Following LLVM's dump() convention, many SIL classes (as well as AST classes) provide a dump() function. You can call the dump function with LLDB's expression -- or p command.
For example, to examine a SIL instruction:
(lldb) p Inst->dump() %12 = struct_extract %10 : $UnsafeMutablePointer<X>, #UnsafeMutablePointer._rawValue // user: %13
To dump a whole function at the beginning of a function pass:
(lldb) p getFunction()->dump()
SIL modules and even functions can get very large. Often it is more convenient to dump their contents into a file and open the file in a separate editor. This can be done with:
(lldb) p getFunction()->dump("myfunction.sil")
You can also dump the CFG (control flow graph) of a function:
(lldb) p Func->viewCFG()
This opens a preview window containing the CFG of the function. To continue debugging press <CTRL>-C on the LLDB prompt. Note that this only works in Xcode if the PATH variable in the scheme’s environment setting contains the path to the dot tool.
Other Utilities
To view the CFG of a function (or code region) in a SIL file, you can use the script swift/utils/viewcfg. It also works for LLVM IR files. The script reads the SIL (or LLVM IR) code from stdin and displays the dot graph file. Note: .dot files should be associated with the Graphviz app.
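For example, you can pipe SIL straight into the script (a sketch; the script path is relative to a Swift source checkout, and for large modules you may want to extract a single function first):

swiftc -emit-sil -O file.swift | swift/utils/viewcfg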
Using Breakpoints
LLDB has very powerful breakpoints, which can be utilized in many ways to debug the compiler and swift executables. The examples in this section show the LLDB command lines. In Xcode you can set the breakpoint properties by clicking ‘Edit breakpoint’.
Let’s start with a simple example: sometimes you see a function in the SIL output and you want to know where the function was created in the compiler. In this case you can set a conditional breakpoint in SILFunction constructor and check for the function name in the breakpoint condition:
(lldb) br set -c 'hasName("_TFC3nix1Xd")' -f SILFunction.cpp -l 91
Sometimes you want to know which optimization inserts, removes or moves a certain instruction. To find out, set a breakpoint in ilist_traits<SILInstruction>::addNodeToList or ilist_traits<SILInstruction>::removeNodeFromList, which are defined in SILInstruction.cpp. The following command sets a breakpoint which stops if a strong_retain instruction is removed:
(lldb) br set -c 'I->getKind() == ValueKind::StrongRetainInst' -f SILInstruction.cpp -l 63
The condition can be made more precise e.g. by also testing in which function this happens:
(lldb) br set -c 'I->getKind() == ValueKind::StrongRetainInst && I->getFunction()->hasName("_TFC3nix1Xd")' -f SILInstruction.cpp -l 63
Let's assume the breakpoint hits somewhere in the middle of compiling a large file. This is the point where the problem appears. But often you want to break a little bit earlier, e.g. at the entrance of the optimization's run function. To achieve this, set another breakpoint and add breakpoint commands:
(lldb) br set -n GlobalARCOpts::run
Breakpoint 2
(lldb) br com add 2
> p int $n = $n + 1
> c
> DONE
Run the program (this can take quite a bit longer than before). When the first breakpoint hits, see what value $n has:
(lldb) p $n (int) $n = 5
Now remove the breakpoint commands from the second breakpoint (or create a new one) and set the ignore count to $n minus one:
(lldb) br delete 2 (lldb) br set -i 4 -n GlobalARCOpts::run
Run your program again and the breakpoint hits just before the first breakpoint.
Another method for accomplishing the same task is to set the ignore count of the breakpoint to a large number, i.e.:
(lldb) br set -i 9999999 -n GlobalARCOpts::run
Then whenever the debugger stops next time (due to hitting another breakpoint/crash/assert) you can list the current breakpoints:
(lldb) br list
1: name = 'GlobalARCOpts::run', locations = 1, resolved = 1, hit count = 85
   Options: ignore: 1 enabled
which will then show you the number of times that each breakpoint was hit. In this case, we know that GlobalARCOpts::run was hit 85 times. So, now we know to ignore GlobalARCOpts::run 84 times, i.e.:
(lldb) br set -i 84 -n GlobalARCOpts::run
LLDB Scripts
LLDB has powerful capabilities of scripting in python among other languages. An often overlooked, but very useful technique is the -s command to lldb. This essentially acts as a pseudo-stdin of commands that lldb will read commands from. Each time lldb hits a stopping point (i.e. a breakpoint or a crash/assert), it will run the earliest command that has not been run yet. As an example of this consider the following script (which without any loss of generality will be called test.lldb):
env DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib
break set -n swift_getGenericMetadata
break mod 1 -i 83
process launch -- --stdlib-unittest-in-process --stdlib-unittest-filter "DefaultedForwardMutableCollection<OpaqueValue<Int>>.Type.subscript(_: Range)/Set/semantics"
break set -l 224
c
expr pattern->CreateFunction
break set -a $0
c
dis -f
TODO: Change this example to apply to the swift compiler instead of to the stdlib unittests.
Then by running lldb test -s test.lldb, lldb will:
- Enable guard malloc.
- Set a break point on swift_getGenericMetadata and set it to be ignored for 83 hits.
- Launch the application and stop at swift_getGenericMetadata after 83 hits have been ignored.
- In the same file as swift_getGenericMetadata introduce a new breakpoint at line 224 and continue.
- When we break at line 224 in that file, evaluate the expression pattern->CreateFunction, which yields a function pointer.
- Set a breakpoint at the address of the expression pointer and continue.
- When we hit the breakpoint set at the function pointer’s address, disassemble the function that the function pointer was passed to.
Using LLDB scripts can enable one to use complex debugger workflows without needing to retype the various commands perfectly every time.
Debugging Swift Executables
One can use the previous tips for debugging the swift compiler with swift executables as well. Here are some additional useful techniques that one can use in Swift executables.
Determining the mangled name of a function in LLDB
One problem that often comes up when debugging swift code in LLDB is that LLDB shows the demangled name instead of the mangled name. This can lead to mistakes where due to the length of the mangled names one will look at the wrong function. Using the following command, one can find the mangled name of the function in the current frame:
(lldb) image lookup -va $pc
Address: CollectionType3[0x0000000100004db0] (CollectionType3.__TEXT.__text + 16000)
Summary: CollectionType3`ext.CollectionType3.CollectionType3.MutableCollectionType2<A where A: CollectionType3.MutableCollectionType2>.(subscript.materializeForSet : (Swift.Range<A.Index>) -> Swift.MutableSlice<A>).(closure #1)
Module: file = "/Volumes/Files/work/solon/build/build-swift/validation-test-macosx-x86_64/stdlib/Output/CollectionType.swift.gyb.tmp/CollectionType3", arch = "x86_64"
Symbol: id = {0x0000008c}, range = [0x0000000100004db0-0x00000001000056f0), name="ext.CollectionType3.CollectionType3.MutableCollectionType2<A where A: CollectionType3.MutableCollectionType2>.(subscript.materializeForSet : (Swift.Range<A.Index>) -> Swift.MutableSlice<A>).(closure #1)", mangled="_TFFeRq_15CollectionType322MutableCollectionType2_S_S0_m9subscriptFGVs5Rangeqq_s16MutableIndexable5Index_GVs12MutableSliceq__U_FTBpRBBRQPS0_MS4__T_" | http://apple-swift.readthedocs.io/en/latest/DebuggingTheCompiler.html | 2017-06-22T20:27:29 | CC-MAIN-2017-26 | 1498128319902.52 | [] | apple-swift.readthedocs.io
Get-SCOMTieredManagementGroup
Syntax
Get-SCOMTieredManagementGroup [-ComputerName <String[]>] [-Credential <PSCredential>] -Id <Guid[]> [-SCSession <Connection[]>] [-Confirm] [-WhatIf] [<CommonParameters>]
Get-SCOMTieredManagementGroup [-Name] <String[]> [-ComputerName <String[]>] [-Credential <PSCredential>] [-SCSession <Connection[]>] [-Confirm] [-WhatIf] [<CommonParameters>]
Get-SCOMTieredManagementGroup [-ComputerName <String[]>] [-Credential <PSCredential>] [-OnlyForConnector] [-SCSession <Connection[]>] [-Confirm] [-WhatIf] [<CommonParameters>]
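Examples

A minimal usage sketch (the management group name below is hypothetical; parameter names are taken from the syntax above):

# Get a tiered management group by name (hypothetical group name)
Get-SCOMTieredManagementGroup -Name "ContosoTier01"

# Get only the tiered management groups that are available to connectors
Get-SCOMTieredManagementGroup -OnlyForConnector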
Get-SCOMTieredManagementGroup [-ComputerName <String[]>] [-Credential <PSCredential>] [-OnlyForConnector] [-SCSession <Connection[]>] [-Confirm] [-WhatIf] [<CommonParameters>].. IDs for tiered management groups.
Specifies an array of names for tiered management groups.
Indicates that the cmdlet returns only tiered management groups that are available to connectors..
Prompts you for confirmation before running the cmdlet.
Shows what would happen if the cmdlet runs.
The cmdlet is not run. | https://docs.microsoft.com/en-us/powershell/systemcenter/systemcenter2016/OperationsManager/vlatest/Get-SCOMTieredManagementGroup | 2017-06-22T21:36:46 | CC-MAIN-2017-26 | 1498128319902.52 | [] | docs.microsoft.com |