Lists training jobs.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-training-jobs is a paginated operation. When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expression: TrainingJobSummaries.
--creation-time-after (timestamp)
A filter that returns only training jobs created after the specified time (timestamp).
--creation-time-before (timestamp)
A filter that returns only training jobs created before the specified time (timestamp).
--last-modified-time-after (timestamp)
A filter that returns only training jobs modified after the specified time (timestamp).
--last-modified-time-before (timestamp)
A filter that returns only training jobs modified before the specified time (timestamp).
--name-contains (string)
A string in the training job name. This filter returns only training jobs whose name contains the specified string.
--status-equals (string)
A filter that retrieves only training jobs with a specific status.
Possible values:
- InProgress
- Completed
- Failed
- Stopping
- Stopped
--sort-by (string)
The field to sort results by. The default is CreationTime .
Possible values:
- Name
- CreationTime
- Status
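For example, the filters above can be combined in a single invocation; the values shown here are purely illustrative:
aws sagemaker list-training-jobs --status-equals Completed --name-contains xgboost --sort-by CreationTime
The matching jobs are returned in the TrainingJobSummaries list described under the output below.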
TrainingJobSummaries -> (list)
An array of TrainingJobSummary objects, each listing a training job.
(structure)
Provides summary information about a training job.
TrainingJobName -> (string)The name of the training job that you want a summary for.
TrainingJobArn -> (string)The Amazon Resource Name (ARN) of the training job.
CreationTime -> (timestamp)A timestamp that shows when the training job was created.
TrainingEndTime -> (timestamp)A timestamp that shows when the training job ended. This field is set only if the training job has one of the terminal statuses (Completed , Failed , or Stopped ).
LastModifiedTime -> (timestamp)Timestamp when the training job was last modified.
TrainingJobStatus -> (string)The status of the training job.
NextToken -> (string)
If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of training jobs, use it in the subsequent request.
NavContainerHelper - Overriding scripts in NAV containers
Reasons to override
- Copy Add-Ins from a network location
SetupLicense.ps1
The responsibility of the SetupLicense script is to ensure that a license is available for the NAV Service Tier.
Default Behavior..
Reasons to override
- Changes to ClientUserSettings.config
- Copy additional files. If you need to copy additional files, invoke the default behavior and perform copy-item cmdlets like:
Example:
Copy-Item "$roleTailoredClientFolder\New
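The example above is truncated in this copy of the post. As a rough sketch of the overall pattern only — the override folder, the default script name, and the network share path are assumptions for illustration, not taken from the original post — an override script could look something like this:
# Assumed override location: c:\run\my\AdditionalSetup.ps1 (file name illustrative)
# Invoke the default behavior first...
. "c:\run\AdditionalSetup.ps1"
# ...then copy Add-Ins from a network location (share path illustrative)
Copy-Item -Path "\\fileserver\nav\Add-Ins\*" -Destination "$roleTailoredClientFolder\Add-Ins" -Recurse -Force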
Interview with a Network Security Expert, Wiki Ninja, and Turkish Avenger - Mustafa Kaya
Hi welcome back TechNet Wiki Readers, our valuable Wiki Ninjas!
For another Monday blog post, we'll be doing another Interview with a Wiki Ninja!
This week's guest is Mustafa Kaya.
And here's his User Page: Mustafa KAYA(tr-TR)
Here are his Wiki Statistics:
- Total number of articles: 19
- Article Edits: 64
- 40 Comments
His top Wiki Articles:
SCCM 2012 Enerji Maliyeti Hesaplama ve Çevresel Etkisi – GREEN IT
Windows Storage Server 2012 R2 – BranchCache Konfigurasyon (tr-TR)
Windows Storage Server 2012 R2 – DFS Genel Bakis ve Kurulum (tr-TR)
Find this interview in Turkish here:
Pazartesi - Mustafa Kaya ile Roportaj
Let's get to the interview!
Who are you, where are you, what are you doing? What is your special technology to you?
Hi, my name is Mustafa Kaya; I was born in Antalya, and I continue to work in Antalya. I've been working as a professional in the software industry for about 10 years. I ventured into this career in high school, when I enrolled to study computer hardware. During high school I bought a server with Windows NT and threw myself into the world of learning. I have been working as a system engineer at a private technology firm.
My main operations and interests are in High Availability and Disaster Recovery, virtualization solutions, Microsoft Exchange, and the Microsoft System Center family of products. My specialty is in the area of network security and Network Solutions.
What's your biggest project right now?
I've worked in different solution areas throughout my career and have taken part in many large-scale projects. Each one is valuable to me. Whether large or small, I do the best I can, and I try to show greater care and effort and to learn more every year. My current project is a fiber optic network project for 450 companies in an important industrial district.
Where do you contribute to other than TNWiki?
Apart from my TechNet Wiki family, I contribute to cozumpark.com, bilisimtoplulugu.com and other places on the Internet. I'm trying to contribute to various technical groups. Also I am carrying on my personal blog,
What is TechNet Wiki is for what? And for whom?
TechNet Wiki is about Microsoft technologies and is for providing resources in the field of information technologies. I contribute to the valuable platform as a volunteer. If you're engaged in research and development on Microsoft products, you can rely on TechNet Wiki to help with problems you encounter. If you're in search of resources, then TechNet Wiki will be very useful for you!
What interests you about TechNet Wiki?
I follow all the new technologies and the related Wiki articles with great interest. Our friends sacrifice time from their work and private lives to write these great articles. I would like to thank all our friends for their labour and effort!
What is your favorite Wiki article you've contributed?
SCCM 2012 energy cost calculation and environmental impact – GREEN IT: an article about energy consumption and greenhouse gas emissions!
What would you like to be if you weren't an IT-Pro?
I always wanted to be a teacher. The great Turkish nation deserves every effort I can to serve it!
Do you have any comments for the TechNet Wiki product group?
TechNet Wiki helps Microsoft provide support all over the world. It's a really successful common platform. You can get ideas on a topic you're looking for and you can quickly access a resource. The expert Team also participates in many Turkish Technet Avengers efforts. It is truly a valuable platform. This platform is for me very enjoyable.
======
Find this interview in Turkish here:
Pazartesi - Mustafa Kaya ile Roportaj
Thank you, Mustafa! Thanks for all the kind words about TechNet Wiki! We know it's not perfect, and we've got some improvements coming in 2015, but we really appreciate your kind words!
Please join me in thanking Mustafa for his contributions!
- Ninja Ed
AttachmentRead Event
Occurs when an attachment in an e-mail item has been opened for reading.
Sub object_AttachmentRead(ByVal Attachment As Attachment)
object: An object that evaluates to one of the objects in the Applies To list. In Microsoft Visual Basic Scripting Edition (VBScript), use the word Item.
Attachment Required. The Attachment that was opened.
Example
This Visual Basic for Applications (VBA) example displays a message when the user tries to read an attachment. The sample code must be placed in a class module such as ThisOutlookSession, and the
TestAttachRead() procedure should be called before the event procedure can be called by Microsoft Outlook. For this example to run, there has to be at least one item in the Inbox with subject as 'Test' and containing at least one attachment.
Public WithEvents myItem As Outlook.MailItem
Public olApp As New Outlook.Application

Private Sub myItem_AttachmentRead(ByVal myAttachment As Outlook.Attachment)
    If myAttachment.Type = olByValue Then
        MsgBox "If you change this file, also save your changes to the original file."
    End If
End Sub

Public Sub TestAttachRead()
    Dim atts As Outlook.Attachments
    Dim myAttachment As Outlook.Attachment
    Set olApp = CreateObject("Outlook.Application")
    Set myItem = olApp.ActiveExplorer.CurrentFolder.Items("Test")
    Set atts = myItem.Attachments
    myItem.Display
End Sub
This VBScript example reminds the user to also save changes to the original file.
Sub Item_AttachmentRead(ByVal ReadAttachment)
    If ReadAttachment.Type = 1 Then
        MsgBox "If you change this file, also save your changes to the original file."
    End If
End Sub
See also: AttachmentAdd Event | BeforeAttachmentSave Event | Using events with Automation
A base class for events related to the cluster.
This class exists primarily to simplify writing event types. Plugins should not listen for this low-level base class; they should listen for specific subclasses.
Cluster node events are not part of the ApplicationEvent hierarchy. Most cluster node events happen in response to system-level actions, like new nodes joining or existing nodes departing, rather than happening in response to user actions, so they have their own hierarchy.
This repository contains the ANYbotics software development guideline.
Keywords: example, package, template
Author(s): Peter Fankhauser, Remo Diethelm, Gabriel Hottiger, Yvain de Viragh
Maintainer(s): Remo Diethelm
Affiliation: ANYbotics AG
License: The source code is released under a proprietary license.
Please report bugs and request features using the Issue Tracker.
Content of the repository:
Content of the example catkin packages:
Missing content:
Rebuke Time Stealers!
It occurred to me today, that in the course of our work lives, we are continually faced with static and noise, that has little to no value to our end goals at all. It could be that 15th email in a conversation thread that has long lost its way, or that phone call from someone asking you for an answer to a question they have no idea why they’re asking, and don’t really care what the answer is, just so that they can say they had the conversation, or that task that makes you scratch your head and ask, who is going to derive any value from the energy, resource and time I’m about to expend on this.
It’s kind of depressing actually, when you look back on your day at work, and realise that the total magnitude achieved through the amount of effort expended is closer to zero than 1. It makes it harder to come into work the next day, knowing that the sands of time are just going to draw you closer to the middle of the hourglass then spit you out into the inverse, just so that it can all start again a few weeks from now.
Wow, bummer.
That is unless you decide that the only thing that matters during the work day, is that you only expend energy on tasks that have a measurable benefit to someone else. It doesn’t have to be direct, it could be indirect; the only thing that is important is that you are able to arrive at an answer to the question, how is what I’m about to do going to help someone else?
And it doesn’t have to be altruistic, or in the running for a Nobel Prize, it just needs to be truly productive.
How? Well, that’s easy. Next time someone sends you that email with no rational point or action, just delete it. Next time someone phones you with that lame request for blah blah blah, excuse yourself and hang up, in fact, anytime anyone tries to steal your time for unproductive tasks, rebuke them, and rebuke them hard!
Why? Because it sets a precedent, and it creates a cumulative effect. The precedent is, don’t approach me unless your intentions are true. Don’t approach me if you’re just going through the motions, or just wanting to fill time, or just wanting to look busy. Just don’t! What’s the cumulative effect? Well, it means you have more time to attack those meaty problems. The ones that everyone is too busy to get solved. And what’s more, the person who you rebuked now has some time on their hands to do something else. If they try and steal time from someone else, and that person rebukes them, finally they’re just going to find something productive to do. If everyone starts adopting that approach, keeping their eyes on the end goal, and evaluates every action using the simple poser, “Am I about to get something done, or am I about to waste someone’s time?”, then I’m convinced, the net effect will be dramatic.
And at least you won’t feel so confounded at the end of each day on your way home wondering, “How did I do so much today, and get so little done!?”
A text label is a NUI control that displays a short text string. It is implemented through the Tizen.NUI.BaseComponents.TextLabel class.
Text labels are lightweight, non-editable, and do not respond to user input. They can support multiple languages and scripts, including right-to-left scripts, such as Arabic.
Figure: Text label example, positioned to top left
For an example of displaying text with a text label, see NUI Quick Start.
To create a text label:
Create an instance of the Tizen.NUI.BaseComponents.TextLabel class and define the label text as a parameter:
TextLabel label = new TextLabel("Hello World");
You can also create the Tizen.NUI.BaseComponents.TextLabel class instance separately and define the label text by setting its Text property:
TextLabel label = new TextLabel(); label.Text = "Hello World";
Note
To display properly, the Text property must be a UTF-8 string. Any CR+LF new line characters are replaced by LF.
Define the label position on-screen with the ParentOrigin property of the Tizen.NUI.BaseComponents.TextLabel class:
label.ParentOrigin = ParentOrigin.TopLeft;
Add the text label control to a window:
Window window = Window.Instance;
window.Add(label);
Note
A text label control can only be added to a window, or to a view that is on a window.
You can request a specific font using the FontFamily, the FontStyle, and the PointSize properties of the Tizen.NUI.BaseComponents.TextLabel class:
- FontFamily is a string with the font family name, for example, FreeSerif.
- FontStyle is a JSON-formatted string with the font style. The following list describes some possible keys and common values for them:
- The width key defines the spacing between glyphs. Some commonly-used values include condensed, semiCondensed, normal, semiExpanded, and expanded.
- The weight key defines the thickness or darkness of the glyphs. Some commonly-used values include thin, light, normal, regular, medium, and bold.
- The slant key defines whether to use italics. Some commonly-used values include normal or roman, italic, and oblique. Usually italic is a separate font, while oblique is generated by applying a slant to the normal font.
- PointSize is a float with the font size in points. To calculate the point size from the height in pixels, use the following formula, where vertical_dpi is the device's vertical resolution in dots per inch:
point_size = 72 * pixels / vertical_dpi
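For example, on a device with a vertical resolution of 160 dpi, a character height of 32 pixels corresponds to 72 * 32 / 160 = 14.4 points.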
The following example code specifies font properties:
label.FontFamily = "FreeSerif";
PropertyMap fontStyle = new PropertyMap();
fontStyle.Add("weight", new PropertyValue("bold"));
fontStyle.Add("slant", new PropertyValue("italic"));
label.FontStyle = fontStyle;
label.PointSize = 12.0f;
If no font is specified, styling defaults are used, and a suitable font for displaying the text label is automatically selected from the platform. However, it is possible that the automatically-selected font cannot render all the characters contained within the text label. For example, Latin fonts often do not provide Arabic glyphs.
Setting a font size programmatically is not ideal for applications which support multiple screen resolutions, and for platforms which support multiple logical font sizes. Also, any changes made to the platform font settings override sizes that have been programmatically set.
A more flexible approach is to prepare various JSON stylesheets and request a different style for each platform. The Tizen.NUI.NUIApplication class has constructors which take a stylesheet argument:
class Example : NUIApplication
Example example = new Example("example-path/example.json");
To change the font for standard text controls, the following JSON syntax can be used:
{
  "styles": {
    "textlabel": {
      "fontFamily": "FreeSerif",
      "fontStyle": {
        "weight": "bold",
        "slant": "italic"
      },
      "pointSize": 8
    }
  }
}
However, the same pointSize is unlikely to be suitable for all text controls in an application. To define custom styles for existing controls, simply set a style name for each case, and provide a style override in a JSON stylesheet.
You can provide further flexibility for the various screens by mapping the logical size to a physical size in the stylesheet.
To align the text in a text label:
To enable text wrapping, use the MultiLine property of the Tizen.NUI.BaseComponents.TextLabel class:
label.MultiLine = true;
To align the text horizontally to the beginning, center, or end of the available area, set the HorizontalAlignment property of the Tizen.NUI.BaseComponents.TextLabel class with the corresponding value of the Tizen.NUI.HorizontalAlignment enumeration:
label.HorizontalAlignment = HorizontalAlignmentType.Begin;
label.HorizontalAlignment = HorizontalAlignmentType.Center;
label.HorizontalAlignment = HorizontalAlignmentType.End;
The following table illustrates the available values of the Tizen.NUI.HorizontalAlignment enumeration for both left-to-right (Latin) and right-to-left (Arabic) script.
The above examples assume that the label size is greater than the minimum required.
For text decorations, the Tizen.NUI.BaseComponents.TextLabel class provides several properties. All properties are writable and none are animatable.
Table: Text label properties
To use the decorations, simply set the applicable property:
To change the color of the text, use the TextColor property:
label.Text = "Red Text"; label.TextColor = Color.Red;
Figure: Colored text
To add a drop shadow to the text, set the Shadow property:
window.BackgroundColor = Color.Blue;

label1.Text = "Plain Text";

label2.Text = "Text with Shadow";
PropertyMap shadow = new PropertyMap();
shadow.Add("offset", new PropertyValue("1 1"));
shadow.Add("color", new PropertyValue("black"));
label2.Shadow = shadow;

label3.Text = "Text with Bigger Shadow";
PropertyMap biggerShadow = new PropertyMap();
biggerShadow.Add("offset", new PropertyValue("2 2"));
biggerShadow.Add("color", new PropertyValue("black"));
label3.Shadow = biggerShadow;

label4.Text = "Text with Color Shadow";
PropertyMap colorShadow = new PropertyMap();
colorShadow.Add("offset", new PropertyValue("1 1"));
colorShadow.Add("color", new PropertyValue("red"));
label4.Shadow = colorShadow;
Figure: Text with drop shadow (top), bigger shadow (middle), and color shadow (bottom)
Shadow parameters can also be set using a JSON string.
To underline the text label, set the Underline property:
label1.Text = "Text with Underline";
PropertyMap textStyle = new PropertyMap();
textStyle.Add("enable", new PropertyValue("true"));
label1.Underline = textStyle;
Figure: Text with underline
You can set the underline color and height using a property map:
label2.Text = "Text with Color Underline";
PropertyMap textStyle = new PropertyMap();
textStyle.Add("enable", new PropertyValue("true"));
textStyle.Add("color", new PropertyValue(Color.Green));
label2.Underline = textStyle;
Figure: Text with color underline
By default, the underline height is based on the font metrics. For example, the underline text figures above have a 1 pixel height. You can also specify the height you want:
PropertyMap textStyle = new PropertyMap();
textStyle.Add("enable", new PropertyValue("true"));
textStyle.Add("height", new PropertyValue(2.0f)); /// 2-pixel height
label1.Underline = textStyle;
To enable text scrolling, set the EnableAutoScroll property to true:
label.EnableAutoScroll = true;
Once enabled, scrolling continues until the loop count is reached, or EnableAutoScroll is set to false. When EnableAutoScroll is set to false, the text completes its current scrolling loop before stopping.
Figure: Auto-scrolling text
Auto-scrolling enables text to scroll within the text label. You can use it to show the full content, if the text exceeds the boundary of the control. You can also scroll text that is smaller than the control. To ensure that the same part of the text is not visible in more than one place at the same time, you can configure the gap between repetitions. The left-to-right text always scrolls left and the right-to-left text scrolls right.
The scroll speed, gap, and loop count can be set in the stylesheet, or through the following properties:
AutoScrollSpeed property defines the scrolling speed in pixels/second.
AutoScrollLoopCount property specifies how many times the text completes a full scroll cycle. For example, if this property is 3, the text scrolls across the control 3 times and then stops. If this property is 0, scrolling continues until EnableAutoScroll is set to false. Setting EnableAutoScroll to false stops scrolling, whilst maintaining the original loop count value for the next start.
AutoScrollGap property specifies the amount of whitespace, in pixels, to display before the scrolling text is shown again. This gap is automatically increased if the given value is not large enough to prevent the same part of the text from being visible twice at the same time.
Auto-scrolling does not work with multi-line text; it is shown with the Begin alignment instead.
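Putting these properties together, a label might be configured along these lines (the numeric values are arbitrary examples, not recommendations):
label.EnableAutoScroll = true;
label.AutoScrollSpeed = 100; /// Pixels per second
label.AutoScrollLoopCount = 3; /// Scroll across the control 3 times, then stop
label.AutoScrollGap = 50; /// Whitespace in pixels before the text repeats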
You can use markup elements to change the style of the text. Since the text controls do not process markup elements by default, you must first set the EnableMarkup property of the Tizen.NUI.BaseComponents.TextLabel class to true:
TextLabel label = new TextLabel("Hello World");
label.EnableMarkup = true;
Note
The markup processor does not check for markup validity, and styles are rendered in a priority order. Incorrect or incompatible elements can cause the text to be badly rendered.
The following markup elements are currently supported:
<color>
Sets the color for the characters inside the element, using the value attribute to define the color. The supported attribute values are red, green, blue, yellow, magenta, cyan, white, black, and transparent. Web colors and colors defined in 32-bit hexadecimal 0xAARRGGBB format are also supported.
The following examples both create text in red:
label.Text = "<color value='red'>Red Text</color>"; /// Color coded with a text constant
label.Text = "<color value='0xFFFF0000'>Red Text</color>"; /// Color packed inside an ARGB hexadecimal value
<font>
Sets the font values for the characters inside the element.
The following attributes are supported:
family: Font name
size: Font size in points
weight: Font weight
width: Font width
slant: Font slant
For more information on the attribute values, see Selecting Fonts.
The following example sets the font family and weight:
label.Text = "<font family='SamsungSans' weight='bold'>Hello world</font>";
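The elements can also be combined in a single string. The following line is an illustrative assumption rather than an example from the official guide, so verify the result against the markup processor's actual behavior:
label.Text = "<color value='red'>Error:</color> <font weight='bold'>check the input fields</font>";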
Installing WP1099 Plugin
Installing the WP1099 plugin is just as easy as installing any other WordPress plugin.
Download the WP1099 Plugin
If you haven't already, click here (link opens in new tab) and click the purchase button.
Once you complete the purchase process, you will receive an email containing a link to download the plugin as well as your license key. You will need the license key to activate the plugin once it is installed.
After downloading the plugin, login to the site where you want to activate WP1099.
Install WP1099 Plugin
When you log into your site, go to the Plugins > Add New menu. Then click the Upload Plugin button. Use the file uploader to find the WP1099 plugin that you saved on your computer. Then click the Install Now button.
Once the plugin is installed, click the Activate Plugin button.
Activate WP1099 License
Once the plugin is activated, click on WP1099 on the left side menu in your WordPress administrative area.
This is where you will enter your license key for the plugin.
- Copy your license key from the purchase confirmation email, or from your account area.
- Paste it into the box at the top of the WP1099 Settings screen and click the Save License button.
Click the Activate License button.
Once your license is activated, you should see a green "Your license is valid." notice.
Set the Tax Year
This setting allows you to see reports from prior tax years (if available). You will need to select the current tax year here in order to see this year's payouts in the Payouts tab.
Simply click the drop-down menu and select the current tax year, or whichever year you want to view the payouts for (i.e. 2017, 2018, etc.). Click Save Changes once you have selected the correct year. You can change this setting at any time.
Your Business Information
Below the Tax Year, there are a number of fields that ask for information about your business. This information will be used to populate the CSV export file that is generated on the Payouts tab.
Enter the information as appropriate, and click the Save Changes button. You shouldn't have to update this information again unless any of your business information changes. If it does, simply update these fields and save your changes. | https://docs.amplifyplugins.com/article/4-installing-wp1099-plugin | 2020-02-17T00:59:05 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594bf4a704286305c68d4a6f/images/594c09ff04286305c68d4b9b/file-BmlFw3zV4Y.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594bf4a704286305c68d4a6f/images/594c090304286305c68d4b88/file-ayWaGx1Jk3.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594bf4a704286305c68d4a6f/images/594c0a102c7d3a0747ce1c34/file-q7XPEPNboA.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594bf4a704286305c68d4a6f/images/594c0dd42c7d3a0747ce1c64/file-WCqoYPIxjp.png',
None], dtype=object) ] | docs.amplifyplugins.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Stop-COMPSentimentDetectionJob -JobId <String> -Select <String> -PassThru <SwitchParameter> -Force <SwitchParameter>
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state. If the job is in the COMPLETED or FAILED state when you call the StopDominantLanguageDetectionJob operation, the operation returns a 400 Internal Request Exception. When a job is stopped, any documents already processed are written to the output location.
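For example, a minimal invocation passes only the identifier of the job to stop (the job ID below is a placeholder):
Stop-COMPSentimentDetectionJob -JobId "sentiment-detection-job-example-id"
Adding -Force suppresses the confirmation prompt.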
AWS Tools for PowerShell: 2.x.y.z
Archive and download unresolved references
Searchers and global administrators can now download unresolved references for the given period of time. To do it:
- Open company administration menu
- Click on the "Archives" tab
Create a new archive by providing the name of the archive and two dates limiting when the accession number was added to IPsurvey, and click "OK":
The dates are defined like this:
- Added after: DD-MM-YYYY (Current year - 2 years)
- Added before: DD-MM-YYYY (Current year -1 year)
Example: Archive October 2013, for the period April 2012 - April 2013:
- Added after: 01-04-2010
- Added before: 31-03-2012
To download the archive:
- Click on the archive name in the list view
- Choose "Open" when Windows asks whether to open or save the file; this opens the file in Excel
- Copy the cell which is active in Excel (Ctrl+C)
- Switch to "Internal" and "WPI" as the active database
- Type "(or" and insert the accession numbers (Ctrl+V), then type ")/AN" after the last accession number
- List the following fields for all the found families "..li 1 an ap pr pn icai ti ab"
- Copy the listed fields to a Word document and sort the families one per page; include the license heading on all pages.
Example file, ready to print out before it is sent to the client: Zealand_unresolved_2012-2013.pdf
To delete an existing archive, mark the archive(s) in the list and choose "Delete archive" from the tools menu.
Comment from Konstantin:
The archive feature is made where you can say to IPsurvey not to resolve families even if they are younger than 1 year, just by defining the date range.
There are couple of scenarios where it can be useful:
- You can choose shorter cycles for sending the unresolved families. In this case IPsurvey will never resolve the archived families.
- When company has stopped with IPsurvey then all their families can be archived and send as a hardcopy. | https://docs.dkpto.dk/display/IPS/Archiving+and+exporting+unresolved+references+-+intern | 2020-02-17T01:56:53 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['/download/attachments/4325392/2013-07-24%2010_53_49-IPSurvey%20-%20Edit%20Test%20Company%20A_S.png?version=1&modificationDate=1411024388717&api=v2&effects=border-simple,blur-border',
None], dtype=object)
array(['/download/attachments/4325392/2013-07-24%2010_57_13-IPSurvey%20-%20Edit%20Test%20Company%20A_S.png?version=1&modificationDate=1411024388780&api=v2&effects=border-simple,blur-border',
None], dtype=object) ] | docs.dkpto.dk |
Circumference
Constructs a circumference.
Syntax
circumference()
circumference(Real)
circumference(Point, Real)
circumference(Point, Point, Point)
circumference(Point, Point)
circumference(Polynomial)
circumference(Arc)
Description
circumference() constructs the unit circumference: center at the origin and radius one.
circumference(Real) given a real number, constructs a circumference centered at the origin with that radius.
circumference(Point, Real) given a point and a real number, constructs a circumference with that center and radius.
circumference(Point, Point, Point) given three points, constructs a circumference that intersects all of them.
circumference(Point, Point) given two points, constructs a circumference with center at the first point going through the second.
circumference(Polynomial) given the equation of a circumference, constructs the object.
circumference(Arc) given an arc, constructs a circumference containing it.
Related functions: Center, Radius
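As a concrete illustration of the polynomial form: a circumference with center (a, b) and radius r satisfies the equation (x - a)^2 + (y - b)^2 = r^2, so the unit circumference returned by circumference() corresponds to x^2 + y^2 = 1.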
Lagoon - Docker Build and Deploy System for OpenShift & Kubernetes
Lagoon solves what developers are dreaming about: A system that allows developers to locally develop their code and their services with Docker and run the exact same system in production. The same Docker images, the same service configurations and the same code.
Who are you?
In order to get you started at the right spot, follow one of the links below:
- If you want to use Lagoon to host your Website or Application, visit Using Lagoon
- If you want to develop Lagoon (i.e. add features), Developing Lagoon
TL;DR: How Lagoon Works
- Developers define and configure their needed services (like Nginx, PHP, MySQL) within YAML files (like docker-compose.yml) and then test them with docker-compose itself.
- When they are happy, they push the code to Git.
- Lagoon parses the YAML files, builds the needed Docker images, creates the needed resources in OpenShift, pushes them to a Docker registry and monitors the deployment of the containers.
- When all is done, Lagoon informs the developers via different ways (Slack, e-mail, website, etc.).
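To make step 1 concrete, a project's docker-compose.yml might declare its services roughly like this. The service layout is a generic sketch and the lagoon.type labels are assumptions about how Lagoon identifies service types; check the Using Lagoon documentation for the exact keys and supported types:
services:
  nginx:
    build: ./docker/nginx
    labels:
      lagoon.type: nginx        # assumed label telling Lagoon how to build and deploy this service
  php:
    build: ./docker/php
    labels:
      lagoon.type: php-fpm      # assumed
  mariadb:
    image: mariadb:10
    labels:
      lagoon.type: mariadb      # assumed
Because the same file also drives docker-compose locally, developers can test the whole stack with docker-compose up before pushing to Git.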
Questions? Ideas? Meet the maintainers and contributors:
#lagoon in amazee.io RocketChat
A couple of things about Lagoon
- Lagoon is based on microservices. A whole deployment and build workflow is very complex; not only do we have multiple sources (like Github, Bitbucket, Gitlab, etc.), multiple OpenShift servers and multiple notification systems (Slack, Rocketchat, etc.), but each deployment is unique and can take from seconds to hours. So it's built with flexibility and robustness in mind. Having microservices that all communicate through a messaging system (RabbitMQ) allows us to scale individual services up and down, survive down times of individual services and also to try out new parts of Lagoon in production without affecting others.
- Lagoon uses multiple programming languages. Each programming language has specific strengths and we try to decide which language makes the most sense for each service. Currently, a lot is built in Node.js, partly because we started with it but also because Node.js allows asynchronous processing of webhooks, tasks and more. We are likely going to change the programming language of some services, but this is what is great about micro services. We can replace a single service with another language without worrying about other parts of the platform.
- Lagoon is not Drupal specific. Everything has been built so that technically it can run any Docker image. There are existing Docker images specifically for Drupal and support for specific Drupal tools like Drush. But that's it.
- Lagoon is DevOps. It allows developers to define the services they need and customize them like they need. You might think this is not the right way to do it and gives too much power to developers. We believe though that as system engineers we need to empower developers and if we allow them not only to define the services locally but also to run and test them locally, they will find bugs and mistakes themselves.
- Lagoon runs on Docker and OpenShift. (That one should be obvious, right?)
- Lagoon can be completely developed and tested locally.
- Lagoon is completely integration tested which means we can test the whole process from receiving Git webhooks until a Docker container with the same Git hash is deployed in OpenShift.
- Lagoon is built and deployed via Lagoon. (Mind blown? ;) )
- Most important: It's a work in progress. It's not fully done yet. At amazee.io we believe that as a hosting community, we need to work together and share code where we can.
In order to understand the Lagoon infrastructure and how the services work together, here is a schema:
History of Lagoon
As described, Lagoon is a dream come true. At amazee.io we've been hosting Drupal for more than 8 years and this is the fourth major iteration of our hosting platform. The third iteration was built around Puppet and Ansible, where every single piece of the platform was done with configuration management. This allowed very fast setup of new servers, but at the same time was also lacking customizability for developers. We implemented some customizability (some already with Docker in production), but we've never been completely happy with it. With the rise of decoupled Drupal and the need to run Node.js on the server side, plus the requests for Elasticsearch or different Solr versions, we realized that our existing platform wasn't enough.
At the same time, we've been using Docker for multiple years for local development and it was always an idea to use Docker for everything in production. The only problem was the connection between local development and production environments. There are other systems that allow you to run Drupal in Docker in production but nothing allowed you to test the exact same images and services locally and in production.
Lagoon was born and has been developed since 2017 into a system that runs Docker in production and will replace our third generation hosting platform with a cutting edge all Docker based system.
At amazee.io we also believe in open source, and it was always troubling for us why open source code like Drupal is hosted with proprietary hosting platforms. We believe the strength and success of a hosting company are not the deployment systems or service configurations, but rather the people and their knowledge that run the platform, their processes and skills to react quickly to unforeseen situations and last but not least, the support they provide their clients.
'The Lagoon logo is a blue hexagon split in two pieces with an L-shaped cut'],
dtype=object) ] | lagoon.readthedocs.io |
Live Forms v8.1 is no longer supported. Please visit Live Forms Latest for our current Cloud Release. Earlier documentation is available too.
To access a control’s style properties, click on the control in your form, then click the Style tab in the Forms Designer’s Property area.
Some controls have more style properties than others, so the specific style properties you see depend on which control you click in your form. As you define style properties for individual controls, remember that these will override any form-level properties you have defined. Each control style property is described below.
Most controls have a width property. For input controls, the property specifies the width of the area in which users enter data; for example, you might narrow a control used for entering zip codes or widen a control for a full first, middle, and last name.
All control widths are specified in columns. The width is selected by clicking on a grid in the style tab. When you drag and drop most controls from the palette on to the canvas, the control will be 12 columns wide. The trigger, link and panel controls are the exception - the default widths of the trigger and link controls is 3 columns while the panel is 6. To change the width, simply click the number of divisions on the grid to specify how wide you want the control to be. For example, a text control dropped on the canvas from the palette has a width of 12 columns as shown in the image:
To make this control 6 columns wide, click on the 6th division of the grid. The results are shown in the image:
When you make a control N columns wide (e.g. 6), the entire control takes up 6 columns. As a result, if there is space, controls will float up next to other controls. You can prevent this using the New Line property.
Here are some important facts to know when working with control widths:
Width is a crucial property when designing multi-column forms. Live Forms makes this very easy to do. Refer to this documentation for some tips about the drag and drop feature when designing multi-column forms.
Control widths are ignored on the iPhone and the iPad even if the designer has explicitly specified them.
IE8 does not support the CSS3 calc() function. The width of a Section control is set to 98%. If you have a Section inside a Panel that has fewer columns in your form, the right edge does not align.
Radio and Checkbox controls have an extra property called Item Width. You can use this property to change the layout of the options from vertical (one radio/checkbox button below the next) to horizontal. This is useful to save vertical space on long forms. And also useful to improve ease of use for forms with questions that each have the same set of options.
See this image as an example. The Item Width can be entered in % or px values.
Check this property to show/hide the Item labels for Radio and Checkbox controls. If checked, the item labels are not visible but they still take up space on the screen. Item Labels in a table do not take up screen space.
This lets you specify the color that will appear behind the control. Type any valid CSS color name or its hexadecimal RGB equivalent. For example, if you want a red background, you can type the word RED or #aa2211.
These properties are controlled by the Styles that you apply to your form. However, you can change the font size and color for any specific control on the form. Specify the color by typing any valid CSS color name or its hexadecimal equivalent in the Label Color field.
These properties work well when you want your entire label to have the same size and color, but for more sophisticated labels you can type XHTML in the control’s label property field. For instance, use XHTML if you want to apply two different font colors inside the same label. Typing XHTML also gives you more font precision, since the label size property lets you pick generic font sizes only--small, medium, and so on. There may be controls for which you want a font size somewhere between the small and medium options in the dropdown.
Changing the label color does not affect the decorator. It is always the same color as defined by the style.
Check this checkbox to make the control's label bold.
Check this checkbox to italicize the control’s label.
The center property only applies to the Message control. Checking this will center the message text. It works best with None and Bordered message types.
This property applies only to the Section control. When you drag a section control from the palette and drop it onto the designer canvas, the expand/collapse icon will be visible and the Expand/Collapse checkbox will be checked. Uncheck this to hide the icon.
To display the date picker, check the checkbox on the Style Property panel. If checked, you will see the date picker inside the the date control and the date portion of the date/time input control.
The Date Picker's large font makes it easy to see.
Clicking on the today link in the date picker brings you to the current date. If you are looking at another month, July for example, and you click on the today link, the date picker will automatically bring you to the current month with today’s date highlighted(bold). Select the date to populate the field in your form/flow. To close the Date Picker, click the close link.
Month and year menus facilitate quick navigation to much earlier/later dates. For example, selecting a past date when entering a Date of Birth is much easier when utilizing the dropdown menus. When selecting a date in the past, select the year first.
The Date Picker described above only applies to the desktop view. Date and Date-Time controls on mobile devices display device specific date pickers.
The New Line Property will be present for all controls that have the 12 column width selector. Check New Line and the control always appears on a new line. For example: Let's say you have First Name and Last Name controls side by side in your form.
If you want the Last Name control to be positioned under the First Name, check the New Line property.
Trigger and Link controls have this property checked by default.
The following properties are no longer supported in Live Forms: border style, width, color, margins, padding and option width for Radio and Checkbox controls.
The ViewModel Pattern in Silverlight – An Example
During the last few weeks I was working with Silverlight again quite a bit. This meant I had to write some code for several showcase projects, too. Of course no real production code (beware) but nevertheless in the end the applications were doing what they were supposed to do. However as it usually happens in those cases you have to decide between a quick and dirty fire and forget kind of stumble into the programming of the app or to put some basic effort in planning and designing to have at least the basic rudiment of an application architecture. And although I’m in the role of architect evangelist I’m always tempted to start coding without big thinking right away. This time however we took some time and designed our applications so that we would have a nice separation between UI and application logic. In particular we chose a ViewModel approach which is quite common in the world of WPF and Silverlight. ViewModel stands for Model-View-ViewModel (MVV) and is a variation of the widely known Model-View-Controller (MVC) pattern. I won’t dig deep into the explanation of this patterns as they have been described in depth at many places already including John Gossman’s or David Hill’s Blog for MVV or the Portland Pattern Repository for MVC. This said I want to focus on a short example driven walkthrough on how to create an Silverlight Application implementing the MVV pattern.
The example application allows to view songs and song lyrics of the current top artists listed on LastFM. In order to aggregate the data this small sample application already is a mash up of two different web services.
- LastFM REST web service provides the list of top artists
- The LyricsWiki REST web service provides the lyrics for a particular song
So in order to give you a high level impression of the application here is a simple architecture sketch.
The Visual Studio 2008 project is organized accordingly. To maintain the highest level of simplicity a service access layer has been omitted although this would be something you would probably want to consider in a real application development project.
The folders contain the following:
- Icons contains the icons for the Silverlight 3 out-of-browser feature as I enabled OOB for this sample app however it is obviously not relevant for the ViewModel part
- Model contains, well, the ViewModel class and any other class necessary to build the object model for this application. In this case those are classes like Artist and Track which mainly consist of private fields and the related public properties.
- Views contains the UI, which in this sample is a single XAML page with its code behind
- Servicedata contains some constant REST URIs for the services I call
As this baseline structure could be already called something like a best practice for ViewModel projects one could use such a structure as a base template for such applications.
Now let’s start with the meat of the application. The best way to start off with would probably be to create an empty ViewModel class stub which basically is a standard C# class stub. Next step could be to create something like a ViewModel Base class which enables change notification for properties of the ViewModel. This is absolutely helpful in order to have your views automatically updated when the properties change to which any UI Element is bound to.
This would look something like this:
public abstract class ViewModelBase : INotifyPropertyChanged
{
    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
So the ViewModel then extends this ViewModelBase class and instantly gains those notification capabilities when the OnPropertyChanged method is called in the setter of a property.
The same requirement also exists for collections which are data bound however this is almost even easier as Silverlight (as well as WPF) comes with a special collection class which implements the Observer pattern which of course is tightly related with any MVC pattern. This class is a generic collection class and is called ObservableCollection<T>. So with this knowledge we can start filling our ViewModel with life. For this sample this would look like the following up until now:
public class RadioGaGaViewModel : RadioGaGa.Model.ViewModelBase
{
    #region Fields
    private ObservableCollection<Artist> topArtists;
    private ObservableCollection<Track> topTracks;
    private string currentLyrics = string.Empty;
    #endregion

    #region C'tor
    public RadioGaGaViewModel()
    {
        this.topTracks = new ObservableCollection<Track>();
        this.topArtists = new ObservableCollection<Artist>();
    }
    #endregion

    #region Properties
    public string CurrentLyrics
    {
        get { return this.currentLyrics; }
        set
        {
            this.currentLyrics = value;
            OnPropertyChanged("CurrentLyrics");
        }
    }

    public ObservableCollection<Track> TopTracks
    {
        get { return this.topTracks; }
    }

    public ObservableCollection<Artist> TopArtists
    {
        get { return this.topArtists; }
    }
    #endregion
}
Now what we have to do next is to make the ViewModel available to the UI for data binding. This can be easily done by following this little sequence of tasks:
- Register an event handler for the Loaded-Event of the UserControl in the XAML markup of the respective view.
<UserControl ... Loaded="OnMainPage_Loaded">
- Create a handler method stub in the code behind file of the XAML page
private void OnMainPage_Loaded(object sender, RoutedEventArgs e) { }
- Instantiate a new ViewModel object
private void OnMainPage_Loaded(object sender, RoutedEventArgs e) { RadioGaGaViewModel model = new RadioGaGaViewModel(); }
- Create a public property which sets the DataContext for the UserControl
public RadioGaGaViewModel ViewModel { get { return DataContext as RadioGaGaViewModel; } set { DataContext = value; } }
- Set the property and assign the ViewModel instance you just created
private void OnMainPage_Loaded(object sender, RoutedEventArgs e) { RadioGaGaViewModel model = new RadioGaGaViewModel(); ViewModel = model; }
After doing all this you can bind the properties of your UI controls to the Properties of your ViewModel which are surfaced to the controls via the DataContext set on the top level FrameworkElement. For example you could create a DataTemplate for the items of a ListBox and could bind the respective properties like shown in the sample below.
<ListBox.ItemTemplate>
    <DataTemplate>
        <Border CornerRadius="5" BorderThickness="1" BorderBrush="Black" Margin="1,0,0,1" Padding="2" MinWidth="320">
            <Border.Background>
                <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FFB2B2B2"/>
                    <GradientStop Color="#FFFFFFFF" Offset="1"/>
                </LinearGradientBrush>
            </Border.Background>
            <Grid Height="55" HorizontalAlignment="Left">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition MaxWidth="40" />
                    <ColumnDefinition />
                </Grid.ColumnDefinitions>
                <StackPanel Orientation="Vertical" Grid.Column="0">
                    <TextBlock Text="{Binding Path=Rank}" FontSize="12" FontWeight="Bold" Foreground="Black" TextAlignment="Left" VerticalAlignment="Center" Padding="5,0,0,0" />
                    <Image Source="{Binding Path=Images[0]}" Height="30" Width="30" HorizontalAlignment="Left"/>
                </StackPanel>
                <StackPanel Orientation="Vertical" Grid.Column="1">
                    <TextBlock Text="{Binding Path=Title}" FontSize="12" FontWeight="Bold" Foreground="Black" TextAlignment="Left" VerticalAlignment="Center" Padding="5,0,0,0" />
                    <StackPanel Orientation="Horizontal">
                        <TextBlock Text="Artist:" FontSize="12" Foreground="Black" TextAlignment="Left" VerticalAlignment="Center" Padding="5,0,0,0" />
                        <TextBlock Text="{Binding Path=Artist.ArtistName}" FontSize="12" Foreground="Black" TextAlignment="Left" VerticalAlignment="Center" Padding="5,0,0,0" />
                    </StackPanel>
                    <TextBlock Text="{Binding Path=Playcount}" FontSize="12" Foreground="Black" TextAlignment="Left" VerticalAlignment="Center" Padding="5,0,0,0" />
                </StackPanel>
            </Grid>
        </Border>
    </DataTemplate>
</ListBox.ItemTemplate>
And that's basically all for creating a ViewModel pattern based architecture. Next step would be to implement the business logic that fills the ViewModel properties. In my case these are the calls to the different REST based web services, which of course are called asynchronously (also as Silverlight doesn't support anything else) in order to still have a responsive UI while the services are accessed.
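To sketch what such an asynchronous service call can look like inside the ViewModel, here is a minimal example; the URI constant and the ParseArtists helper are placeholders for brevity, not the actual code from the sample solution:
private void LoadTopArtists()
{
    WebClient client = new WebClient();
    client.DownloadStringCompleted += (s, e) =>
    {
        if (e.Error != null)
        {
            return; // Real code should surface the error to the UI
        }
        // The completed event is raised back on the UI thread, so the
        // ObservableCollection can be filled directly and bound controls update automatically.
        foreach (Artist artist in ParseArtists(e.Result)) // ParseArtists is a placeholder
        {
            this.topArtists.Add(artist);
        }
    };
    client.DownloadStringAsync(new Uri(TopArtistsRestUri)); // REST URI constant assumed
}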
For our sample the final result can be tested here:
As I already equipped my development machines with Silverlight 3 Beta this only works with the SL3 Beta runtime. So if you are still on 2 you have to be content with this screenshot (pretty, eh? ;))
The Visual Studio 2008/SL 3 Beta Tools solution can be downloaded from my SkyDrive.
I hope this all is helpful and easy to understand. As always feel free to send comments or corrections, etc.
Two more Sidenotes:
- LyricsWiki also offers a SOAP web service; however, I was not able to use it with Silverlight when letting Visual Studio generate the service proxy (with svcutil for Silverlight, probably), so I switched to the REST-based version.
- This project can quite easily be migrated to WPF and vice versa. I’ve done this two time due to the fact mentioned in <1> because I wasn’t aware of the REST interface in the beginning. This shows that WPF and Silverlight get closer and closer with their features and APIs so that a develop once run everywhere scenario will become more and more feasible. | https://docs.microsoft.com/en-us/archive/blogs/astrauss/the-viewmodel-pattern-in-silverlight-an-example | 2020-02-17T02:43:59 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
BizTalk Server 2006 Launch Webcast Series
Oops, so I am a little late with this entry….. It was way down in my inbox and I overlooked blogging it!! So sorry. Anyhow, you have to check out these BizTalk 2006 Launch webcasts…. if you miss them, I am sure they will be recorded for later viewing… Register at:
Automate and manage your business processes with BizTalk Server
Learn how the Microsoft® Business Process Integration offering helps you create, manage, monitor, and change dynamic business processes. This webcast series shows you how you can develop rich enterprise solutions through:
- Application integration
- Trading partner management
- Service-oriented architecture
- Business Activity Monitoring
This webcast series is being presented in conjunction with the launch of Microsoft BizTalk® Server 2006. Join us and get a jump on benefits, including simpler setup, comprehensive management and deployment capabilities, and seamless upgrading from BizTalk Server 2004. | https://docs.microsoft.com/en-us/archive/blogs/cvidotto/biztalk-server-2006-launch-webcast-series | 2020-09-18T17:42:45 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.microsoft.com |
Communication in a microservice architecture
In a monolithic application running on a single process, components invoke one another using language-level method or function calls. These can be strongly coupled if you're creating objects with code (for example,
new ClassName()), or can be invoked in a decoupled way if you're using Dependency Injection by referencing abstractions rather than concrete object instances. Either way, the objects are running within the same process. The biggest challenge when changing from a monolithic application to a microservices-based application lies in changing the communication mechanism. A direct conversion from in-process method calls into RPC calls to services will cause chatty, inefficient communication that won't perform well in distributed environments. The challenges of designing distributed systems properly are well known enough that there's even a canon known as the Fallacies of distributed computing that lists assumptions that developers often make when moving from monolithic to distributed designs.
There isn't one solution, but several. One solution involves isolating the business microservices as much as possible. You then use asynchronous communication between the internal microservices and replace fine-grained communication that's typical in intra-process communication between objects with coarser-grained communication. You can do this by grouping calls, and by returning data that aggregates the results of multiple internal calls, to the client.
A microservices-based application is a distributed system running on multiple processes or services, usually even across multiple servers or hosts. Each service instance is typically a process. Therefore, services must interact using an inter-process communication protocol such as HTTP, AMQP, or a binary protocol like TCP, depending on the nature of each service.
The microservice community promotes the philosophy of "smart endpoints and dumb pipes." This slogan encourages a design that's as decoupled as possible between microservices, and as cohesive as possible within a single microservice. As explained earlier, each microservice owns its own data and its own domain logic. But the microservices composing an end-to-end application are usually simply choreographed by using REST communications rather than complex protocols such as WS-*, and by using flexible event-driven communications instead of centralized business-process orchestrators.
The two commonly used protocols are HTTP request/response with resource APIs (when querying most of all), and lightweight asynchronous messaging when communicating updates across multiple microservices. These are explained in more detail in the following sections.
Communication types
Client and services can communicate through many different types of communication, each one targeting a different scenario and goals. Initially, those types of communications can be classified in two axes.
The first axis defines if the protocol is synchronous or asynchronous:
Synchronous protocol. HTTP is a synchronous protocol. The client sends a request and waits for a response from the service. That's independent of the client code execution that could be synchronous (thread is blocked) or asynchronous (thread isn't blocked, and the response will reach a callback eventually). The important point here is that the protocol (HTTP/HTTPS) is synchronous and the client code can only continue its task when it receives the HTTP server response.
Asynchronous protocol. Other protocols like AMQP (a protocol supported by many operating systems and cloud environments) use asynchronous messages. The client code or message sender usually doesn't wait for a response. It just sends the message as when sending a message to a RabbitMQ queue or any other message broker.
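To make the fire-and-forget nature of asynchronous messaging concrete, here is a minimal, stack-agnostic sketch in Python using the pika RabbitMQ client; the broker host and the queue name `order-events` are illustrative assumptions rather than part of the original guidance.

```python
import json
import pika

# Connect to a RabbitMQ broker (assumed to be reachable on localhost).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a durable queue for the event; the name is just an example.
channel.queue_declare(queue="order-events", durable=True)

# Publish the message and move on: the sender does not wait for a reply.
event = {"orderId": 42, "status": "created"}
channel.basic_publish(
    exchange="",
    routing_key="order-events",
    body=json.dumps(event),
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)
connection.close()
```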
The second axis defines if the communication has a single receiver or multiple receivers:
Single receiver. Each request must be processed by exactly one receiver or service. An example of this communication is the Command pattern.
Multiple receivers. Each request can be processed by zero to multiple receivers. This type of communication must be asynchronous. An example is the publish/subscribe mechanism used in patterns like Event-driven architecture. This is based on an event-bus interface or message broker when propagating data updates between multiple microservices through events; it's usually implemented through a service bus or similar artifact like Azure Service Bus by using topics and subscriptions.
A microservice-based application will often use a combination of these communication styles. The most common type is single-receiver communication with a synchronous protocol like HTTP/HTTPS when invoking a regular Web API HTTP service. Microservices also typically use messaging protocols for asynchronous communication between microservices.
These axes are good to know so you have clarity on the possible communication mechanisms, but they're not the important concerns when building microservices. Neither the asynchronous nature of client thread execution nor the asynchronous nature of the selected protocol are the important points when integrating microservices. What is important is being able to integrate your microservices asynchronously while maintaining the independence of microservices, as explained in the following section.
Asynchronous microservice integration enforces microservice's autonomy
As mentioned, the important point when building a microservices-based application is the way you integrate your microservices. Ideally, you should try to minimize the communication between the internal microservices. The fewer communications between microservices, the better. But in many cases, you'll have to somehow integrate the microservices. When you need to do that, the critical rule here is that the communication between the microservices should be asynchronous. That doesn't mean that you have to use a specific protocol (for example, asynchronous messaging versus synchronous HTTP). It just means that the communication between microservices should be done only by propagating data asynchronously, but try not to depend on other internal microservices as part of the initial service's HTTP request/response operation.
If possible, never depend on synchronous communication (request/response) between multiple microservices, not even for queries. The goal of each microservice is to be autonomous and available to the client consumer, even if the other services that are part of the end-to-end application are down or unhealthy. If you think you need to make a call from one microservice to other microservices (like performing an HTTP request for a data query) to be able to provide a response to a client application, you have an architecture that won't be resilient when some microservices fail.
Moreover, having HTTP dependencies between microservices, like when creating long request/response cycles with HTTP request chains, as shown in the first part of the Figure 4-15, not only makes your microservices not autonomous but also their performance is impacted as soon as one of the services in that chain isn't performing well.
The more you add synchronous dependencies between microservices, such as query requests, the worse the overall response time gets for the client apps.
Figure 4-15. Anti-patterns and patterns in communication between microservices
As shown in the above diagram, in synchronous communication a "chain" of requests is created between microservices while serving the client request. This is an anti-pattern. In asynchronous communication, microservices use asynchronous messages or HTTP polling to communicate with other microservices, but the client request is served right away.
If your microservice needs to trigger an additional action in another microservice, avoid performing that action synchronously as part of the original request-and-reply operation. Instead, perform it asynchronously (using asynchronous messaging, integration events, queues, and so on), so that the original operation does not have to wait for it.
And finally (and this is where most of the issues arise when building microservices), if your initial microservice needs data that's originally owned by other microservices, do not rely on making synchronous requests for that data. Instead, replicate or propagate that data (only the attributes you need) into the initial service's database by using eventual consistency (typically by using integration events, as explained in upcoming sections).
As noted earlier in the Identifying domain-model boundaries for each microservice section, duplicating some data across several microservices isn't an incorrect design—on the contrary, when doing that you can translate the data into the specific language or terms of that additional domain or Bounded Context. For instance, in the eShopOnContainers application you have a microservice named
identity-api that's in charge of most of the user's data with an entity named
User. However, when you need to store data about the user within the
Ordering microservice, you store it as a different entity named
Buyer. The
Buyer entity shares the same identity with the original
User entity, but it might have only the few attributes needed by the
Ordering domain, and not the whole user profile.
You might use any protocol to communicate and propagate data asynchronously across microservices in order to have eventual consistency. As mentioned, you could use integration events using an event bus or message broker or you could even use HTTP by polling the other services instead. It doesn't matter. The important rule is to not create synchronous dependencies between your microservices.
The following sections explain the multiple communication styles you can consider using in a microservice-based application.
Communication styles
There are many protocols and choices you can use for communication, depending on the communication type you want to use. If you're using a synchronous request/response-based communication mechanism, protocols such as HTTP and REST approaches are the most common, especially if you're publishing your services outside the Docker host or microservice cluster. If you're communicating between services internally (within your Docker host or microservices cluster), you might also want to use binary format communication mechanisms (like WCF using TCP and binary format). Alternatively, you can use asynchronous, message-based communication mechanisms such as AMQP.
There are also multiple message formats like JSON or XML, or even binary formats, which can be more efficient. If your chosen binary format isn't a standard, it's probably not a good idea to publicly publish your services using that format. You could use a non-standard format for internal communication between your microservices. You might do this when communicating between microservices within your Docker host or microservice cluster (for example, Docker orchestrators), or for proprietary client applications that talk to the microservices.
Request/response communication with HTTP and REST
When a client uses request/response communication, it sends a request to a service, then the service processes the request and sends back a response. Request/response communication is especially well suited for querying data for a real-time UI (a live user interface) from client apps. Therefore, in a microservice architecture you'll probably use this communication mechanism for most queries, as shown in Figure 4-16.
Figure 4-16. Using HTTP request/response communication (synchronous or asynchronous)
When a client uses request/response communication, it assumes that the response will arrive in a short time, typically less than a second, or a few seconds at most. For delayed responses, you need to implement asynchronous communication based on messaging patterns and messaging technologies, which is a different approach that we explain in the next section.
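On the client side, such a query is simply an HTTP call that blocks (or awaits) until the response arrives. Here is a minimal illustration in Python with the requests library; the endpoint URL is a hypothetical example, not part of the original text:

```python
import requests

# Query a catalog microservice for a single item (hypothetical endpoint).
response = requests.get("http://localhost:5101/api/v1/catalog/items/1", timeout=2)
response.raise_for_status()

item = response.json()
print(item)
```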
A popular architectural style for request/response communication is REST. This approach is based on, and tightly coupled to, the HTTP protocol, embracing HTTP verbs like GET, POST, and PUT. REST is the most commonly used architectural communication approach when creating services. You can implement REST services when you develop ASP.NET Core Web API services.
There's additional value when using HTTP REST services as your interface definition language. For instance, if you use Swagger metadata to describe your service API, you can use tools that generate client stubs that can directly discover and consume your services.
Additional resources
Martin Fowler. Richardson Maturity Model: A description of the REST model.
Swagger: The official site.
Push and real-time communication based on HTTP
Another possibility (usually for different purposes than REST) is a real-time and one-to-many communication with higher-level frameworks such as ASP.NET SignalR and protocols such as WebSockets.
As Figure 4-17 shows, real-time HTTP communication means that you can have server code pushing content to connected clients as the data becomes available, rather than having the server wait for a client to request new data.
Figure 4-17. One-to-one real-time asynchronous message communication
SignalR is a good way to achieve real-time communication for pushing content to the clients from a back-end server. Since communication is in real time, client apps show the changes almost instantly. This is usually handled by a protocol such as WebSockets, using many WebSockets connections (one per client). A typical example is when a service communicates a change in the score of a sports game to many client web apps simultaneously. | https://docs.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/communication-in-microservice-architecture | 2020-09-18T18:35:58 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['media/communication-in-microservice-architecture/sync-vs-async-patterns-across-microservices.png',
'Diagram showing three types of communications across microservices.'],
dtype=object)
array(['media/communication-in-microservice-architecture/request-response-comms-live-queries-updates.png',
'Diagram showing request/response comms for live queries and updates.'],
dtype=object)
array(['media/communication-in-microservice-architecture/one-to-many-communication.png',
'Diagram showing push and real-time comms based on SignalR.'],
dtype=object) ] | docs.microsoft.com |
I want to use English for my operations. For example, I want English displays or a chatbot that can understand English after training. What should I do?
Go to Account Settings.
Change Display Language to English.
Save.
Log in to <LINE Developers> and create a new bot, or select the bot you want to switch to English.
Click Settings > NLU Engine on the left sidebar.
Change Language to English.
The language settings for the “NLU Engine” and the “Display Language” are separate. In other words, you can train a Chinese bot under English interface, and train an English bot under Chinese interface. | https://docs-en.yoctol.ai/common-issues/how-to-switch-languages-chinese-english | 2020-09-18T17:02:32 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs-en.yoctol.ai |
Docs to OpenAPI v1.2.1 Release Notes
Docs to OpenAPI v1.2.1 was released on June 27, 2019. This version adds improvements and bug fixes, the most notable of which is the ability to preview previously entered expressions in the Status dialog:
Improvements
- DTOA-350 - Change the font to fixed width for expression
- DTOA-351 - Change the revert icon to undo icon
- DTOA-352 - Show the preview of the expression from an output in Console Dialog
Bugs
- DTOA-325 - Fix the aspect ratio of the extract logo from metadata for marketplace
- DTOA-347 - Tooltip is not showing properly when selecting an input element
- DTOA-349 - Config import not working when set to ignore URL + exclude OpenAPI data | https://docs.torocloud.com/docs-to-openapi/releases/notes/1.2.1/ | 2020-09-18T17:06:37 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['../../../placeholders/img/releases/1.2.1/compressed/previewing-previous-expression-gif.png',
'Previewing previously entered expressions'], dtype=object) ] | docs.torocloud.com |
# Python Notebook Format
Otter ships with an assignment development and distribution tool called Otter Assign, an Otter-compliant fork of [jassign]() that was designed for OkPy. Otter Assign allows instructors to create assignments by writing questions, prompts, solutions, and public and private tests all in a single notebook, which is then parsed and broken down into student and autograder versions.
Otter's notebook format groups prompts, solutions, and tests together into questions. Autograder tests are specified as cells in the notebook and their output is used as the expected output of the autograder when generating tests. Each question has metadata, expressed in a code block in YAML format when the question is declared. Tests generated by Otter Assign follow the [Otter-compliant OK format](../test_files/index.md).
**Note:** Otter Assign is also backwards-compatible with jassign-formatted notebooks. To use jassign format with Otter Assign, specify the `--jassign` flag in your call to `otter assign`. While the formats are very similar, jassign's format has some key differences to the Otter Assign format, and many of the behaviors described below, e.g. intercell seeding, are not compatible with jassign format. For more information about formatting notebooks for jassign, see [its documentation]().
## Assignment Metadata
In addition to various command line arguments discussed below, Otter Assign also allows you to specify various assignment generation arguments in an assignment metadata cell. These are very similar to the question metadata cells described in the next section. Assignment metadata, included by convention as the first cell of the notebook, places YAML-formatted configurations inside a code block that begins with `BEGIN ASSIGNMENT`:
````
```
BEGIN ASSIGNMENT
init_cell: false
export_cell: true
...
```
````
This cell is removed from both output notebooks. These configurations, listed in the YAML snippet below, can be **overwritten** by their command line counterparts (e.g. `init_cell: true` is overwritten by the `--no-init-cell` flag). The options, their defaults, and descriptions are listed below. Any unspecified keys will keep their default values. For more information about many of these arguments, see [Usage and Output](usage.md). Any keys that map to sub-dictionaries (e.g. `export_cell`, `generate`) can have their behaviors turned off by changing their value to `false`. The only one that defaults to true (with the specified sub-key defaults) is `export_cell`.
```yaml
run_tests: true # whether to run tests on the resulting autograder directory
requirements: requirements.txt # path to a requirements file for Gradescope; appended by default
overwrite_requirements: false # whether to overwrite Otter's default requirements rather than appending
init_cell: true # include an Otter initialization cell at the top of the notebook
check_all_cell: true # include a check-all cell at the end of the notebook
export_cell: # include an export cell at the end of the notebook; set to false for no cell
pdf: true # include a PDF in the export zip file
filtering: true # whether the PDF in the export should be filtered
instructions: '' # additional instructions for submission included above export cell
grade_from_log: false # whether to grade students' submissions from serialized environments in the log
seed: null # a seed for intercell seeding during grading
public_multiplier: null # a percentage of test points to award for passing public tests
pdfs: # configurations for generating PDFs for manually-graded questions. defaults to false
course_id: '' # Gradescope course ID for uploading PDFs for manually-graded questions
assignment_id: '' # Gradescope assignment ID for uploading PDFs for manually-graded questions
filtering: true # whether the PDFs should be filtered
service: # configurations for Otter Service
notebook: '' # path to the notebook to submit if different from the master notebook name
endpoint: '' # the endpoint for your Otter Service deployment; required
auth: google # auth type for your Otter Service deployment
assignment_id: '' # the assignment ID from the Otter Service database
class_id: '' # the class ID from the Otter Service database
save_environment: false # whether to save students' environments in the log for grading
variables: {} # a mapping of variable names -> types for serialization
ignore_modules: [] # a list of module names whose functions to ignore during serialization
files: [] # a list of file paths to include in the distribution directories
```
All paths specified in the configuration should be **relative to the directory containing the master notebook**. If, for example, I was running Otter Assign on the `lab00.ipynb` notebook in the structure below:
```
| dev
| - requirements.txt
| lab
| lab00
| - lab00.ipynb
| - utils.py
| data
| - data.csv
```
and I wanted my requirements from `dev/requirements.txt` to be include, my configuration would look something like this:
```yaml
requirements: ../../requirements.txt
files:
- data/data.csv
- utils.py
...
```
A note about Otter Generate: the `generate` key of the assignment metadata has two forms. If you just want to generate and require no additional arguments, set `generate: true` in the YAML and Otter Assign will simply run `otter generate` from the autograder directory (this will also include any files passed to `files`, whose paths should be **relative to the directory containing the notebook**, not to the directory of execution). If you require additional arguments, e.g. `points` or `show_stdout`, then set `generate` to a nested dictionary of these parameters and their values:
```yaml
generate:
seed: 42
show_stdout: true
show_hidden: true
```
You can also set the autograder up to automatically upload PDFs to student submissions to another Gradescope assignment by setting the necessary keys in the `pdfs` subkey of `generate`:
```yaml
generate:
pdfs:
token: YOUR_GS_TOKEN # required
class_id: 1234 # required
assignment_id: 5678 # required
filtering: true # true is the default
```
If you have an Otter Service deployment to which you would like students to submit, the necessary configurations for this submission can be specified in the `service` key of the assignment metadata. This has the required keys `endpoint` (the URL of the VM), `assignment_id` (the ID of the assignment in the Otter Service database), and `class_id` (the class ID in the database). You can optionally also set an auth provider with the `auth` key (which defaults to `google`).
```yaml
service:
endpoint: # required
assignment_id: hw00 # required
class_id: some_class # required
auth: google # the default
```
If you are grading from the log or would like to store students' environments in the log, use the `save_environment` key. If this key is set to `true`, Otter will serialize the student's environment whenever a check is run, as described in [Logging](../logging.md). To restrict the serialization of variables to specific names and types, use the `variables` key, which maps variable names to fully-qualified type strings. The `ignore_modules` key is used to ignore functions from specific modules. To turn on grading from the log on Gradescope, set `generate[grade_from_log]` to `true`. The configuration below turns on the serialization of environments, storing only variables of the name `df` that are pandas dataframes.
```yaml
save_environment: true
variables:
df: pandas.core.frame.DataFrame
```
As an example, the following assignment metadata includes an export cell but no filtering, no init cell, and calls Otter Generate with the flags `--points 3 --seed 0`.
````
```
BEGIN ASSIGNMENT
filtering: false
init_cell: false
generate:
points: 3
seed: 0
```
````
## Autograded Questions
Here is an example question in an Otter Assign-formatted notebook. The question metadata, declared in a code block at the start of the question, includes the question's `name`, whether it is `manual`ly graded, and its `points` (default `1`; how many points the question is worth).
As an example, the question metadata below indicates an autograded question `q1` worth 1 point.
````
```
BEGIN QUESTION
name: q1
manual: false
```
````
### Solution Removal
Solution cells contain code formatted in such a way that the assign parser replaces lines or portions of lines with prespecified prompts. Otter uses the same solution replacement rules as jassign. From the [jAssign docs]():
* A line ending in `# SOLUTION` will be replaced by `...`, properly indented. If
that line is an assignment statement, then only the expression(s) after the
`=` symbol will be replaced.
* A line ending in `# SOLUTION NO PROMPT` or `# SEED` will be removed.
* A line `# BEGIN SOLUTION` or `# BEGIN SOLUTION NO PROMPT` must be paired with
a later line `# END SOLUTION`. All lines in between are replaced with `...` or
removed completely in the case of `NO PROMPT`.
* A line `""" # BEGIN PROMPT` must be paired with a later line `""" # END
PROMPT`. The contents of this multiline string (excluding the `# BEGIN
PROMPT`) appears in the student cell. Single or double quotes are allowed.
Optionally, a semicolon can be used to suppress output: `"""; # END PROMPT`
```python
def square(x):
y = x * x # SOLUTION NO PROMPT
return y # SOLUTION
nine = square(3) # SOLUTION
```
would be presented to students as
```python
def square(x):
...
nine = ...
```
And
```python
pi = 3.14
if True:
# BEGIN SOLUTION
radius = 3
area = radius * pi * pi
# END SOLUTION
print('A circle with radius', radius, 'has area', area)
def circumference(r):
# BEGIN SOLUTION NO PROMPT
return 2 * pi * r
# END SOLUTION
""" # BEGIN PROMPT
# Next, define a circumference function.
pass
"""; # END PROMPT
```
would be presented to students as
```python
pi = 3.14
if True:
...
print('A circle with radius', radius, 'has area', area)
def circumference(r):
# Next, define a circumference function.
pass
```
### Test Cells
Test cells are code cells that follow the solution cell within a question; the output of each test cell is used as the expected output of the corresponding autograder test.
**Note:** Currently, the conversion to OK format does not handle multi-line tests if any line but the last one generates output. So, if you want to print
twice, make two separate test cells instead of a single cell with:
```python
print(1)
print(2)
```
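For reference, a minimal public test cell for the `square` function shown earlier could be a cell whose displayed value becomes the expected output of the generated test case (here, `9`):

```python
# a public test cell: the value this cell displays is recorded as the expected output
square(3)
```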
**If a question has no solution cell provided**, the question will either be removed from the output notebook entirely if it has only hidden tests or will be replaced with an unprompted `Notebook.check` cell that runs those tests. In either case, the test files are written, but this provides a way of defining additional test cases that do not have public versions. Note, however, that the lack of a `Notebook.check` cell for questions with only hidden tests means that the tests are run _at the end of execution_, and therefore are not robust to variable name collisions.
### Intercell Seeding
Otter Assign maintains support for [intercell seeding](../seeding.md) by allowing seeds to be set in solution cells. To add a seed, write a line that ends with `# SEED`; when Otter runs, this line will be removed from the student version of the notebook. This allows instructors to write code with deterministic output, with which hidden tests can be generated.
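For example, a solution cell that uses a seed might look like the following hypothetical snippet; the `# SEED` line is removed from the student notebook, while the `# SOLUTION` line is replaced with a prompt:

```python
import numpy as np

np.random.seed(42) # SEED
noise = np.random.normal(0, 1, 100) # SOLUTION
```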
Note that seed cells are removed in student outputs, so any results in that notebook may be different from the provided tests. However, when grading, seeds are executed between each cell, so if you are using seeds, make sure to use **the same seed** every time to ensure that seeding before every cell won't affect your tests. You will also be required to set this seed as a configuration of the `generate` key of the assignment metadata if using Otter Generate with Otter Assign.
## Manually Graded Questions
Otter Assign also supports manually-graded questions using a similar specification to the one described above. To indicate a manually-graded question, set `manual: true` in the question metadata. A manually-graded question is defined by three parts:
* A question cell with metadata
* (Optionally) a prompt cell
* A solution cell
Manually-graded solution cells have two formats:
* If a code cell, they can be delimited by solution removal syntax as above.
- If a Markdown cell, the start of at least one line must match the regex `(<strong>|\*{2})solution:?(<\/strong>|\*{2})`.
The latter means that as long as one of the lines in the cell starts with `SOLUTION` (case insensitive, with or without a colon `:`) in boldface, the cell is considered a solution cell. If there is a prompt cell for manually-graded questions (i.e. a cell between the question cell and solution cell), then this prompt is included in the output. If none is present, Otter Assign automatically adds a Markdown cell with the contents `_Type your answer here, replacing this text._`.
Manually graded questions are automatically enclosed in `<!-- BEGIN QUESTION -->` and `<!-- END QUESTION -->` tags by Otter Assign so that only these questions are exported to the PDF when filtering is turned on (the default). In the autograder notebook, this includes the question cell, prompt cell, and solution cell. In the student notebook, this includes only the question and prompt cells. The `<!-- END QUESTION -->` tag is automatically inserted at the top of the next cell if it is a Markdown cell, or in a new Markdown cell before the next cell if it is not.
An example of a manually-graded code question:
An example of a manually-graded written question (with no prompt):
An example of a manually-graded written question with a custom prompt:
| https://otter-grader.readthedocs.io/en/latest/_sources/otter_assign/python_notebook_format.md.txt | 2020-09-18T17:28:25 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['images/assign_sample_question.png', None], dtype=object)
array(['images/assign_sample_code_manual.png', None], dtype=object)
array(['images/assign_sample_written_manual.png', None], dtype=object)
array(['images/assign_sample_written_manual_with_prompt.png', None],
dtype=object) ] | otter-grader.readthedocs.io |
Xanthomonas arboricola pv. juglandisdrip. rainsplash. | http://docs.metos.at/Xanthomonas+arboricola+pv.+juglandis | 2020-09-18T18:25:18 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.metos.at |
The following are definition files that will be used by the MemSQL Operator to create your cluster. Create new definition files and copy and paste the contents of each code block into those files.
The
deployment.yaml and
memsql-cluster.yaml files have placeholders that must be updated before they can be used.
deployment.yaml
Create a deployment definition file using the template below.
apiVersion: apps/v1 kind: Deployment metadata: name: memsql-operator spec: replicas: 1 selector: matchLabels: name: memsql-operator template: metadata: labels: name: memsql-operator spec: serviceAccountName: memsql-operator containers: - name: memsql-operator image: "OPERATOR_IMAGE" imagePullPolicy: Always args: [ # Cause the operator to merge rather than replace annotations on services "--merge-service-annotations", # Allow the process inside the container to have read/write access to the `/var/lib/memsql` volume. "--fs-group-id", "5555" ] env: - name: WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: OPERATOR_NAME value: "memsql-operator"
You must edit this file and replace
OPERATOR_IMAGE with either the local
memsql/operator Docker image you pulled down (such as
"memsql-operator"), or add in an
imagePullSecrets section under the
spec section and reference a Kubernetes Secret that you can create via
kubectl apply.
Refer to the Kubernetes documentation for more information on
imagePullPolicy and creating Secrets.
rbac.yaml
Copy the following to create a ServiceAccount definition file that will be used by the MemSQL Operator.
apiVersion: v1 kind: ServiceAccount metadata: name: memsql-operator --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: memsql-operator rules: - apiGroups: - "" resources: - pods - services - endpoints - persistentvolumeclaims - events - configmaps - secrets verbs: - '*' - apiGroups: - policy resources: - poddisruptionbudgets verbs: - '*' - apiGroups: - batch resources: - cronjobs verbs: - '*' - apiGroups: - "" resources: - namespaces verbs: - get - apiGroups: - apps - extensions resources: - deployments - daemonsets - replicasets - statefulsets verbs: - '*' - apiGroups: - memsql.com resources: - '*' verbs: - '*' --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: memsql-operator subjects: - kind: ServiceAccount name: memsql-operator roleRef: kind: Role name: memsql-operator apiGroup: rbac.authorization.k8s.io
memsql-cluster-crd.yaml
Create a CustomResourceDefinition file to define the MemSQLCluster resource type.
memsql-cluster.yaml
Create a MemSQLCluster definition file to specify the configuration settings for your MemSQL cluster.
apiVersion: memsql.com/v1alpha1 kind: MemsqlCluster metadata: name: memsql-cluster spec: license: LICENSE_KEY adminHashedPassword: "HASHED_PASSWORD" nodeImage: repository: memsql/node tag: 6.8.9-24b9cbd386 redundancyLevel: 1 serviceSpec: objectMetaOverrides: labels: custom: label annotations: custom: annotations aggregatorSpec: count: 3 height: 0.5 storageGB: 256 storageClass: standard objectMetaOverrides: annotations: optional: annotation labels: optional: label leafSpec: count: 1 height: 1 storageGB: 1024 storageClass: standard objectMetaOverrides: annotations: optional: annotation labels: optional: label
You must edit the following placeholders in this file to properly set up your cluster:
Change the
namevalue to the cluster name of your choice.
Specify your license key and a hashed version of a secure password for the
admindatabase user on the cluster. The
adminuser is the default user you can use when logging into your cluster. The account is created by the Operator during cluster deployment and has an explicit set of grants (defined at the end of this step) to reduce the scope of this user.
Note that, as of MemSQL 7.1.4, license checks are now
cgroup-aware and respect container resource boundaries for containerized deployments. While this does not change how license checks are performed, nor does it change how capacity is allocated, it does change how the resources allocated to the container are checked.
To include the license key and hashed password, you have the following two options:
Replace
LICENSE_KEYwith your license key from the MemSQL Customer Portal and change
HASHED_PASSWORDto a hashed version of a secure password for the
admindatabase user on the cluster. The following python script shows how to create a hashed password:
from hashlib import sha1
print("*" + sha1(sha1('secretpass'.encode()).digest()).hexdigest().upper())
Use Kubernetes secrets to pass in the license key and hashed passwords. You will need one or two secrets (they can be separate or the same secret) with the keys set to the correct values. Note: The password still needs to be hashed inside the secret. It cannot be a bare password.
licenseSecret:
  name: "my-secret"
  key: "license"
adminHashedPasswordSecret:
  name: "my-secret"
  key: "password"
Under
nodeImage,
tagspecifies the version of
memsql/nodethat will be deployed in your cluster. This value aligns with the version number of the MemSQL database engine that is running in the container (e.g. 6.8.9-24b9cbd386 contains the 6.8.9 version of MemSQL).
You can use different versions of the engine by going to the Docker Hub page for
memsql/nodeand selecting a different tag. Because of recent updates to the
memsql/nodecontainer, you should select a tag that is
6.8.9-24b9cbd386or newer. Also, running different versions of the MemSQL engine in one cluster is not supported, so you must ensure that each node in your Kubernetes cluster has the same
tagvalue.
Change
redundancyLevelto
2if you want to enable high availability. It is highly recommended you set this value to
2for production deployments. Note: You must have an even number of leaf nodes to enable high availability. Refer to Managing High Availability for more information.
The
objectMetaOverridessections are optional. By including these sections, you can override the metadata annotations and labels at either the node or service layer (
objectMetaOverrides).
Change
countto alter the number of aggregator or leaf nodes in your cluster.
The
heightvalue specifies the vCPU and RAM size of an aggregator or leaf node where a
heightof
1equals 8 vCPU cores and 32 GB of RAM. The smallest value you can set is
0.5(4 vCPU cores, 16 GB of RAM).
The
storageGBvalue corresponds to the amount of storage each aggregator or leaf should request for their persistent data volume.
The
storageClassvalue specifies which storage class to use for the PersistedVolume in the Kubernetes cluster. You should change this value to align with the default (or custom) storage class available to your cluster.
For advanced users, you can also declare a
globalVariablessection. This is an optional section that allows you to specify values for MemSQL engine variables.
- Prior to Operator 1.2.0, the supported engine variables are:
default_partitions_per_leaf
columnstore_segment_rows
columnstore_flush_bytes
columnstore_window_size
transaction_buffer
snapshot_trigger_size
minimal_disk_space
pipelines_max_concurrent
auditlog_level
- As of Operator 1.2.0, nearly all engine variables are supported except for:
redundancy_level
sync_permissions
local_file_system_access_restricted
- As of Operator 1.2.1, nearly all engine variables are supported except for:
redundancy_level
sync_permissions
- Refer to List of Engine Variables for more information.
globalVariables:
  transaction_buffer: "8m"
  default_partitions_per_leaf: "4"
Info: If it is not overridden in the globalVariables section, the Operator will set default_partitions_per_leaf to a value equal to the height multiplied by the vCPU cores per unit.
Users may declare an
envVariablessection. This is an optional section that allows environment variables to be specified. The currently supported environment variables include:
MALLOC_ARENA_MAX
envVariables:
  MALLOC_ARENA_MAX: 4
Once you have finished creating your definition files, you can deploy your cluster.
To learn more about a configuration option below while remaining on this page, right-click on the configuration option and open it in a separate tab.
The
admin user has the following database permissions:
USAGE
INSERT
UPDATE
DELETE
CREATE
DROP
PROCESS
INDEX
ALTER
SHOW METADATA
CREATE TEMPORARY TABLES
LOCK TABLES
CREATE VIEW
ALTER VIEW
DROP VIEW
SHOW VIEW
DROP DATABASE
CREATE DATABASE
CREATE ROUTINE
ALTER ROUTINE
EXECUTE
CREATE PIPELINE
DROP PIPELINE
START PIPELINE
ALTER PIPELINE
SHOW PIPELINE | https://docs.memsql.com/v7.1/guides/deploy-memsql/self-managed/kubernetes/step-3/ | 2020-09-18T17:43:41 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.memsql.com |
Docker is a software container platform, which bundles only the libraries and settings, required to make a software run isolated on a shared operating system.
Running Jethro Manager using Docker guarantees that it will always run in the same manner, regardless of where it is deployed.
This article will explain in detail, step by step, how to configure a Jethro Manager Docker container on a Linux machine:
Setup Jethro Manager Docker
1. Install and start Docker
yum install docker service docker start
2. Download and Load the Image
In order to run a docker container, you should first have the image loaded into your local docker repository.
Download the image as .tar:
wget
Run 'docker load' with the full name of the tar file downloaded. For example:
docker load --input jethromanager_docker-1.4.0-55d.tar
3. Prepare folders to mount with the docker image file system
Since a docker container is a stateless independent file system, separated from the host's file system, it is recommended to create folders on the host's file system, and to mount them to the container's file system.
That way it would keep the information collected by Jethro persistent, even if the container will be lost.
The following code block will suggest a set of folder names to be used for the needs of Jethro's persistancy, but you can also use other paths if you prefer so.
# create a main folder for all the sub folders described below # mkdir /jethro_docker_volume # create a folder for the instances logs # mkdir /jethro_docker_volume/instances_logs # give all users the permission to read write and execute the files within those folders # chmod -R 777 /jethro_docker_volume/
4. Plan the preffered configurations for running the image
Docker allows multiple configuration parameters (called 'OPTIONS') to be set when running the image. In addition, each specific Docker image can offer/require its own environment variables.
For Jethro Manager Docker image, the following parameters needs to be defined, when the image will start to run:
- Container Name - Decide on a name for the image container. Specifying a name gives the ability to use it when referencing the container within a Docker network, instead of using a long generated ID.
Recommended name: 'jethroManagerDocker'.
- Ports Mapping - Jethro exposes its services to external connections through ports. The ports which are exposed within the Jethro Docker image, needs to be mapped to ports that can be exposed on the host.
- Normally, Jethro uses the following ports:
- 9100 - For Jethro Manager.
- 9111-9200 - For the query engines of each instance.
- SSH connections normally use port 22 (not related to Jethro specifically; this port is commonly used on most Linux environments for establishing a secure login to the machine).
- Since the SSH port used by the host (22) is the same port used by the Docker image, it is recommended to map the Docker image's SSH port to a port address that does not conflict with the host's (for example, 9322).
5. Collect the image information
To run the Docker container, we will need to collect two parameters:
- 'IMAGE REPOSITORY'
- 'TAG'
Those can be found by running the following command:
docker images
The result should look like:
REPOSITORY TAG IMAGE ID CREATED SIZE jethrodata/jethromanager 1.4.0-64d b207f0062d32 2 months ago 785MB
6. Create and start a Container
Now that we have prepared the folders for mounting, the ports mapping, the values for the volumes mount, and the image information, we are ready to hit the 'run' command. The basic 'docker run' command takes this form:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
For example:
docker run -d --privileged --name jethroManagerDocker -p 9100-9200:9100-9200 -p 9322:22 -v /jethro_docker_volume/instances_logs:/var/log/jethro jethrodata/jethromanager:1.4.0-55d
7. Browse to Jethro Manager
1) To open Jethro Manger from a browser, point it to http://<IP>:<PORT>.
The IP is the host's IP, and the PORT is usually the default port for Jethro Manager - 9100.
However, if the mapping of the container's 9100 port in the host machine is different, use the mapped port instead.
2) Once Jethro Manager UI opens in the browser, you will be navigated to the server screen to establish an SSH connection.
To do so, enter the IP of the host machine, and provide its SSH key for user 'jethro' (located in the Jethro server machine under /home/jethro/.JethroKeys/id_rsa).
Connecting to Jethro containers
If you need to connnect to the container, or to interact with it, there are two methods available:
1) SSH - use the IP of the machine, port 9322 (unless if you decided to change it), and the credentials: user jethro, password jethro.
2) Bash - You can use the local machine to connect to the Docker machine, and run shell or bash commands on it. To do so:
- Run 'docker ps' and get the container-name, or container-id
Run 'docker exec -it <container-name-or-id> bash' or 'docker exec -it jethroManagerDocker sh'
For example:
docker exec -it jethroManagerDocker bash docker exec -it 4e51f73265a7 sh
Maintenance
docker stop <CONTAINER> - Stop a Container
docker start <CONTAINER> - Start a Container
docker rm <CONTAINER> - Remove a Container
docker rmi <IMAGE> - Remove an Image
To collect information about the list of images loaded on the host, Run:
docker images
It will show all top level images, their repository and tags, when they were created, and their size.
The tag column will include the Jethro Manager version.
To collect information about the list of containers running on the host, Run:
docker ps
It will show only running containers by default. To see all containers: docker ps -a
Troubleshooting
If you can't connect to the server or to any of the instances, make sure that:
1) The mapped ports of these instances are open.
2) The server is open for SSH communication on the mapped port for SSH. You can try to use it's internal IP in case the public one didn't work well.
3) Check the Maintaining Jethro Manager page for more information.
About the Image Content
See Also
Setting up Jethro Server using Docker - On a Local File System
Setting up Jethro Server using Docker - On NFS
Setting up Jethro Server using Docker - On Hadoop | http://docs.jethro.io/display/JethroManager1x5x/Setting+up+Jethro+Manager+using+Docker | 2020-09-18T17:55:41 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.jethro.io |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
New-WAFRGeoMatchSet
-Name <String>
-ChangeToken <String>
-Select <String>
-PassThru <SwitchParameter>
-Force <SwitchParameter>
GeoMatchSet that contains those countries and then configure AWS WAF to block the requests. To create and configure a
GeoMatchSet, perform the following steps:
Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateGeoMatchSet request.
Submit a CreateGeoMatchSet request.
Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateGeoMatchSet request.
Submit an UpdateGeoMatchSet request to specify the countries that you want AWS WAF to watch for.
You can't change the Name after you create the GeoMatchSet.
AWS Tools for PowerShell: 2.x.y.z | https://docs.aws.amazon.com/powershell/latest/reference/items/New-WAFRGeoMatchSet.html | 2020-09-18T17:52:52 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.aws.amazon.com |
Planning your installation
Before you start installing the BMC Cloud Lifecycle Management solution, you must gather information about the required parameters that the installer prompts for each product. You can then review the installation timing of each product to plan for the installation.
Note
If any of your hosts accidentally crashes after you install a product successfully, you must reinstall the product on the same host or a different host. However, the installer does not allow you to perform such an installation because the registry file contains a successfully installed status for the product. To reinstall the product on the same host or on a different host, contact BMC Customer Support.
The following sections explain how you can plan for the BMC Cloud Lifecycle Management solution installation:
Gathering information for the installation
Use the planning spreadsheet to help prepare input values for the installer. To avoid installation errors, refer to the spreadsheet when you run the installation.
Note
This planning spreadsheet replaces the installation worksheets found in the separate product installation guides.
To plan for your installation using the spreadsheet:
- Depending on your environment, download and open the planning spreadsheet for Linux or planning spreadsheet for Microsoft Windows document.
- To prepare for the installer prompts, enter your selections and parameter values in the Value column with the help of your DBA or system administrator.
- Launch the BMC Cloud Lifecycle Management installer.
- Start installing a product, based on the installation order.
- Copy parameter values from the spreadsheet and paste them into the product fields in the installer.
Installation timing
The following table lists the estimated installation timing of all products within the BMC Cloud Lifecycle Management solution. You can use this information for planning your installation of the solution.
Note
The installation timing varies, based on the hardware configuration and system performance of the installer host and the product target hosts.
Enabling logs if you run into problems
Note
- These steps are optional and you should only perform them on an "as needed" basis, because they can slow down the AR System server response time during installations or upgrades.
- The AR System must be installed before you can perform this procedure.
- Log on to the AR System server.
http://<hostname>:<port>/arsys
- Open the AR System Administration Console (select Applications >AR System Administration >AR System Administration Console).
- Open the Server Information window (select System >General >Server Information).
- Click the Log Files tab.
- Enable the following logs:
- API Log
- Escalation Log
- Filter Log
- SQL Log
- Plug-in Log
- Click Apply and Save.
Related topics
System requirements
Port mappings | https://docs.bmc.com/docs/cloudlifecyclemanagement/41/installing/preparing-for-installation/planning-your-installation | 2020-09-18T18:16:42 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.bmc.com |
Knowi provides two modes of working with data: 'Pull', which enables our extractors to work with existing data from datastores, and 'Push', which can be used to send real-time data using our API.
Push API:
Send some data:
Open a terminal and paste the following:
curl -i -X POST -d '{"entity": "Push Simple", "data":[{"Some Data":20,"Some Other Data":40}]}'
Modify the data section as you see fit. This submits data into Cloud9 Charts using curl, but could just as well be implemented directly into code.
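The same request can be issued from code; for example, here is a minimal Python sketch with the requests library (the push endpoint URL is not shown in the curl command above, so substitute your own):

```python
import requests

# Substitute the push URL for your datasource here.
PUSH_URL = "https://<your-push-endpoint>"

payload = {
    "entity": "Push Simple",
    "data": [{"Some Data": 20, "Some Other Data": 40}],
}

response = requests.post(PUSH_URL, json=payload)
print(response.status_code, response.text)
```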
See the live results here
This chart will be updated automatically as you send more data to it.
Push API
Another example, which uses the Push API for real-time tracking of a messaging campaign can be found here:
| https://docs.knowi.com/hc/en-us/articles/115008057908-Release-Notes-Apr-30-2014 | 2020-09-18T16:43:20 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['/hc/article_attachments/115013727167/PushAPI.png', 'PushAPI.png'],
dtype=object)
array(['/hc/article_attachments/115013727247/mceclip0.png', None],
dtype=object) ] | docs.knowi.com |
This closure proxy stores an expectation and checks it before each call to the target closure. It is used by the Grails mocking framework.
Constructor.
Creates a new
MockClosureProxy wrapping the given
closure.
c- The closure to wrap.
Empty implementation.
args- The arguments to the target closure.
Checks whether the target "method" is expected or not, on the
basis that this closure is mocking a method with the name
methodName.
args- The arguments to the "method" (actually the argumetns to the target closure invocation). | http://docs.grails.org/4.0.2/api/grails/test/MockClosureProxy.html | 2020-09-18T16:51:12 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.grails.org |
Powdery mildew of Melon.
Source::
In FieldClimate.com the risk of Powdery mildew is detected by the sensors: leaf wetness and temperature. Condtions). | http://docs.metos.at/Powdery+mildew+of+Melon | 2020-09-18T17:34:54 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['http://vegetablemdonline.ppath.cornell.edu/Images/Cucurbits/PowderyMildew/Cuc_PM.jpg',
'Powdery Mildew Photo Collage'], dtype=object) ] | docs.metos.at |
This article provides instructions to configure and register macOS based devices with Appspace App.
What’s in this article:
Prerequisites
- The device must meet the manufacturer’s minimum hardware and technical specifications. Please refer to Supported Devices & Operating Systems.
- Download the Appspace App macOS (.dmg) client to your device.
- Double-click the Appspace App .dmg file to extract Appspace App installation files.
- Drag the Appspace App icon to the Applications folder to install.
- Once installation is complete, click the Appspace App application in your Applications folder to launch it.
Important: You may encounter the following error message on macOS Catalina when launching Appspace App, “Appspace App can’t be opened because Apple cannot check it for malicious software”. We are currently trying to resolve this issue.
A quick workaround to launch the Appspace App, is to right-click the Appspace App application in the Applications folder, and click Open anyway;
Or
Navigate to System Preferences > Security & Privacy > General, and select the “App Store and identified developers” option for the Allow apps downloaded from section.
- Proceed to register your device.
Uninstall Appspace App
To uninstall Appspace App from a device, follow the instructions in the following How to delete apps on your Mac article: | https://docs.appspace.com/latest/device/install-appspace-app-on-macos/ | 2020-09-18T16:54:05 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.appspace.com |
What just happened?
You may now wonder why we chose the first table in this step. Here’s an explanation: Recall that our database contains two pre-loaded tables: yellow_tripdata_sample_2019_01 and yellow_tripdata_staging.
yellow_tripdata_sample_2019_01 contains the 2019 taxi data. Since we want to build an Expectation Suite based on what we know about our taxi data from the January 2019 data set, we want to use it for profiling.
yellow_tripdata_staging contains the February 2019 data, loaded to a staging table that we want to validate before promoting it, so it is not the table we want to run the automated Profiler on. Remember how we want to add some tests on the
passenger_countcolumn to ensure that its values range between 1 and 6? Let’s uncomment just this one line:
included_columns = [
    # 'vendor_id',
    # 'pickup_datetime',
    # 'dropoff_datetime',
    'passenger_count',
    ...
]
The next cell passes the Profiler config to the
BasicSuiteBuilderProfiler, which will then profile the data and create the relevant Expectations to add to your
taxi.demo suite.
The last cell does several things again: It saves the Expectation Suite to disk, runs the validation against the loaded data batch, and then builds and opens Data Docs, so you can look at the validation results.
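Roughly, that last cell boils down to a few context calls like the following; this is a simplified sketch based on the v2-era API (the `context`, `suite`, `expectation_suite_name`, and `batch` variables are assumed to come from earlier cells, and the exact arguments in your notebook may differ):

```python
# Save the suite, validate the loaded batch, then build and open Data Docs.
context.save_expectation_suite(suite, expectation_suite_name)
results = context.run_validation_operator(
    "action_list_operator", assets_to_validate=[batch]
)
context.build_data_docs()
context.open_data_docs()
```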
Let’s execute all the cells and wait for Great Expectations to open a browser window with Data Docs. Pause here to read on first and find out what just happened!
What just happened?¶
You can create and edit Expectations using several different workflows. The CLI just used one of the quickest and simplest: scaffolding Expectations using an automated Profiler.
This Profiler connected to your data (using the Datasource you configured in the previous step), took a quick look at the contents, and produced an initial set of Expectations. These Expectations are not intended to be very smart. Instead, they give you a quick starting point that you can refine in later steps.
A first look at real Expectations¶
The newly profiled Expectations are stored in an Expectation Suite.
By default, Expectation Suites are stored in a JSON file in a subdirectory of your
great_expectations/ folder. You can also configure Great Expectations to store Expectations to other locations, such as S3, Postgres, etc. We’ll come back to these options in the last step of the tutorial.
If you open up the file at
great_expectations/expectations/taxi/demo.json in a text editor, you’ll see the following:
{ "data_asset_type": "Dataset", "expectation_suite_name": "taxi.demo", "expectations": [ ... { "expectation_type": "expect_column_values_to_not_be_null", "kwargs": { "column": "passenger_count" }, "meta": { "BasicSuiteBuilderProfiler": { "confidence": "very low" } } }, { "expectation_type": "expect_column_distinct_values_to_be_in_set", "kwargs": { "column": "passenger_count", "value_set": [ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 ] }, "meta": { "BasicSuiteBuilderProfiler": { "confidence": "very low" } } }, ...
There’s a lot of information in the JSON file. We will focus on just the snippet above:
Every Expectation in the file expresses a test that can be validated against data. You can see that the Profiler generated several Expectations based on our data, including
expect_column_distinct_values_to_be_in_set, with the
value_set containing the numbers 1 through 6. This is exactly what we wanted: An assertion that the
passenger_count column contains only those values!
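If you later want to add or tweak an Expectation like this by hand, you can do it in a few lines of Python against a loaded batch. The sketch below is only illustrative; the batch_kwargs are assumed to come from the notebook, and method signatures can differ slightly between Great Expectations versions:

    import great_expectations as ge

    context = ge.data_context.DataContext()
    suite = context.get_expectation_suite("taxi.demo")

    # batch_kwargs is assumed to be the same dictionary the notebook built
    # for the staging table; it is not defined in this snippet.
    batch = context.get_batch(batch_kwargs, suite)

    batch.expect_column_values_to_not_be_null("passenger_count")
    batch.expect_column_distinct_values_to_be_in_set(
        "passenger_count", [1, 2, 3, 4, 5, 6]
    )

    # Persist the edited suite back under great_expectations/expectations/
    batch.save_expectation_suite(discard_failed_expectations=False)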
Now we only have two problems left to solve:
These dense JSON objects are very hard to read. How can we have a nicer representation of our Expectations?
How do we use this Expectation Suite to validate that new batch of data we have in our staging table?
Let’s execute all the cells and wait for Great Expectations to open a browser window with Data Docs. Go to the next step in the tutorial for an explanation of what you see in Data Docs! | https://docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/create_your_first_expectations.html | 2020-09-18T17:07:14 | CC-MAIN-2020-40 | 1600400188049.8 | [array(['../../../_images/jupyter_scaffold.gif',
'../../../_images/jupyter_scaffold.gif'], dtype=object)] | docs.greatexpectations.io |
Using the Filter Field
The Crawler knows where to get API data via CSS selectors. But sometimes, acquired raw data needs further processing. One of the ways you can transform acquired data is via the Filter field. The Filter field allows the user to transform data through text replacement.
A filter requires a regular expression pattern and a replacement text. When a field is configured to use a filter, all obtained values for that field will be checked for regular expression matches; matching substrings will be replaced with the replacement text. The Filter field can be configured to hold multiple filters.
Example
For this example, we're going to look at how we can modify the obtained path value using the Filter field so that it follows the OpenAPI specification; assuming the structure of the documentation page we need to crawl looks like below.
First, let's specify the location of the operation path using the Path field:
Instead of curly braces
{}, path parameters are enclosed in brackets
[] in our example HTML document. To fix this,
we'll be adding a filter for the Path field. To do this, click on the Path field to show its Filter field, and add the
entry below:
This tells the Crawler to swap brackets with curly braces. For our example HTML snippet, the Crawler would then rewrite every bracketed path parameter into the curly-brace form expected by the OpenAPI specification (for example, a path written as /pets/[petId] would become /pets/{petId}).
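If it helps to picture what the filter does outside the tool, the behavior is essentially a regular-expression substitution. The pattern, replacement, and sample path below are illustrative assumptions (Python is used only for demonstration), not the Crawler's exact configuration:

    import re

    pattern = r"\[([^\]]+)\]"   # a path parameter wrapped in brackets
    replacement = r"{\1}"       # the same parameter wrapped in curly braces

    print(re.sub(pattern, replacement, "/pets/[petId]/photos/[photoId]"))
    # -> /pets/{petId}/photos/{photoId}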
Artificial Intelligence/Machine Learning Talks from Leaders Paris 2017
I was very honored to deliver two talks at Leaders Paris 2017: a main stage talk "Innovations in AI and Machine Learning" and a workshop "AI Now!" The talks were posted to YouTube and are available now.
In the "Innovations in AI and Machine Learning" talk, I discussed some of Microsoft’s recent innovations in artificial intelligence and machine learning, including my lie detection work using EEG and machine learning, Microsoft’s work in speech detection that has surpassed human transcribers, how Xbox used the Cognitive Services Text Analytics to monitor their users’ sentiment and save Christmas, using object detection to help conservationists save giraffes, how a machine learning algorithm’s prediction of when to plant peanuts improved the peanut yield by 30%, how bots can save time and money in a support/call center scenario, and how machine learning models can describe the world to the blind.
I also presented a workshop "AI Now!", in which Walter De Brouwer talks about some innovations in the field, and I give some resources to get started with machine learning: the Bot Framework, training options, pre-trained models like the Microsoft Cognitive Services, tools and technology to build models with your own data, and competition sites which give you cool problems to solve and the data to solve them with, as a way to practice and hone your machine learning skills.
In addition, I was interviewed for Today Software Magazine during the event. Here is the video and article transcription. | https://docs.microsoft.com/en-us/archive/blogs/jennifer/artificial-intelligencemachine-learning-talks-from-leaders-paris-2017 | 2020-09-18T18:38:28 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.microsoft.com |
Delete an application that is no longer needed. See Application sharing.
Table of Contents
Product Index
The Giger Wastes; a mysterious landscape few dare to travel! This set is a single large highly detailed terrain model with separate instanced 'things' props for even more detail. Also included are three Iray sky domes with different lighting times of day to get you going right out of the box.
The best way to discover BotDistrikt is to try it out yourself. Whether you are thinking of making a new bot from scratch or already have one in use, get started with the BotDistrikt Platform with a 14-day free trial with the following guide. No credit card required!
If you already have a Bot on the platform, you can brush up on the platform navigation and some of our features here: | https://docs.botdistrikt.com/ | 2020-08-03T18:33:27 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.botdistrikt.com |
Create a SOAP web service activity Use this template to create a custom SOAP activity. Before you beginRole required: web_service_admin, activity_admin, activity_creator About this taskFor instructions on using the activity template process flow, see create custom activities. Procedure Create a custom activity. This action creates a custom activity using a template. After setting up general properties and creating input variables, configure the SOAP web service Execution Command: OptionDescription Map the input variables Use the variables you created to configure the command that Orchestration executes on the SOAP web service. Web service message Specify the SOAP web service message to use for this activity. If you need information on SOAP web services messages, see SOAP message. Web service message function Specify the SOAP message function available in conjunction with the SOAP web service. Endpoint If you enter an endpoint in this field, it overrides the endpoint URL configured in the SOAP message web service. Click the lock icon to open the input field and add the endpoint. SOAP message parameters Name-value pairs to pass to the SOAP endpoint. You can create these parameters manually, or drag input variables into the parameter fields and then assign a value. Parameters defined in the SOAP message that use ${} can be assigned data from this activity template. Use the Additional attribute column to configure the system to not escape the text. By default, text sent to the SOAP message is escaped. The Name column is auto-populated if the users have provided variables using variable substitution in the SOAP message. Use MID Server Check box that determines if a MID Server must be used to invoke the SOAP web service. If the SOAP web service message function defines a MID Server, that MID Server is used instead of the one selected here. Required MID Server capabilities MID Server with the appropriate capabilities for connecting to the SOAP endpoint. By default, the system selects a MID Server with SOAP capabilities. This field is available when the Use MID Server check box is selected. Timeout Allowed duration of the SOAP web service request before it times out, in seconds. The default is 10. Authentication Determines what type of authentication is required for the endpoint. The options are: Use existing credentials in SOAP message: Uses credential definitions from the SOAP message definition. Override with Basic Authentication credentials: Uses |basic authentication credentials. Overrides the credentials in the SOAP message definition. Basic authentication credentials must be provisioned before they are available for selection. Override with Certificate Authentication credentials: Overrides the credentials in the SOAP message definition with certificate authentication credentials. Override with Both Basic and Certificate Authentication credentials: Overrides the credentials in the SOAP message definition with both basic authentication or certificate authentication credentials. Override with WS-Security Username profile: Overrides the credentials in the SOAP message definition with credentials defined a WS Security Profile. Credentials Required REST endpoint basic authentication credentials. This field is available when Override with Basic Authentication credentials is selected in the Authentication field. Only basic authentication credentials appear in the selection list, which includes credentials stored on the instance and credential IDs from an external storage system. 
If you are using credentials stored in a CyberArk safe, you can override the default safe defined in the MID Server configuration file by adding the name of a different safe as a prefix to the credential ID, separated by a colon. For example, newsafe:orch-test-f5. See Configure the MID Server for CyberArk for details. Protocol Profile Protocol profile to use for authentication. This field is available when the authentication type is either Override with Certificate Authentication credentials or Override with Both Basic and Certificate Authentication credentials. Note: You can map parameter values in a test payload to variables in the Outputs tab automatically. See automap output variables. What to do next Use auto-mapping to generate outputs and parsing rules (recommended for JDBC) If you do not use auto-mapping, you can manually create output variables and create parsing rules Auto-map SOAP activity output variablesThe ServiceNow activity designer allows you to map parameter values in a SOAP test payload to variables in the Outputs stage automatically.Configure the SOAP execution commandUse the input variables you created to configure the command that Orchestration executes on the SOAP endpoint.SOAP template execution parametersYou use execution parameters to create the input process script in the Pre processing form of the activity designer. Create a JavaScript array in a SOAP templateThese are instructions for creating JavaScript arrays using SOAP execution parameters.SOAP template post-processing parametersUse these parameters to create a post-processing script. | https://docs.servicenow.com/bundle/newyork-servicenow-platform/page/administer/orchestration-activity-designer/task/t_CreateASOAPWebServiceActivity.html | 2020-08-03T17:40:27 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.servicenow.com |
Azure Security Center now in Public Preview
Hey! | https://docs.microsoft.com/en-us/archive/blogs/azuresecurity/azure-security-center-now-in-public-preview | 2020-08-03T18:12:16 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.microsoft.com |
Say you, say me, say task complete
Changing subjects for a bit from ADO.NET Data Services to an easy way to impress your friends...
If you've never heard about this, you're in for a treat. The Microsoft Speech API (SAPI) is very, very easy to automate, and can come in handy when you want to call attention to something.
For example, I often find that I run command-line scripts or programs that take a long time to complete. I have a script on my path, speak.js, which simply says 'task complete'. I can go ahead and type the command into the console (so it goes into the buffer and runs when whatever is going on finishes), and then go distract myself with something else.
And, as you can see, the script is very, very simple.
// This script uses the Speech API to speak to the user.
//
// Useful for batch scripts or typing on the command-line to compensate for
// short attention spans (or multitasking).
//
// Usage:
// speak.js -- says 'Task complete'
// speak.js /say:"Hello, world!" -- says 'Hello, world!'
// Uses the Speech API to speak to the user.
function Say(text)
{
var voice = new ActiveXObject("SAPI.SpVoice");
try {
voice.Speak(text);
}
catch (e) {
// See for error codes.
// SPERR_DEVICE_BUSY 0x80045006 -2147201018
if (e.number == -2147201018) {
WScript.Echo("The wave device is busy.");
WScript.Echo(" Happens sometimes over Terminal Services.");
}
}
}
var text = WScript.Arguments.Named.Item("say");
if (text == null) {
text = "Task complete";
}
Say(text);
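(If you would rather script the same trick in Python on Windows, a rough equivalent is sketched below. It assumes the pywin32 package is installed and is not part of the original script.)

    # speak.py - rough Python sketch of the same idea (assumes pywin32 on Windows).
    import sys
    import win32com.client

    text = sys.argv[1] if len(sys.argv) > 1 else "Task complete"
    voice = win32com.client.Dispatch("SAPI.SpVoice")  # same SAPI COM object as above
    voice.Speak(text)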
One more tool for your toolkit - enjoy!
This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples are subject to the terms specified at. | https://docs.microsoft.com/en-us/archive/blogs/marcelolr/say-you-say-me-say-task-complete | 2020-08-03T18:24:42 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.microsoft.com |
Create or modify disability insurance benefits You can add or modify a disability insurance benefit for an employee using the Disability Insurance Benefits module. Before you beginRole required: sn_hr_core.basic, or sn_hr_core.manager About this task Employees can ask questions about disability insurance benefits or be enrolled in or modify their disability insurance program. A disability insurance benefits case is opened and can be viewed and managed. Procedure Navigate to HR Profile > Disability Insurance Benefits. Click New to open a disability benefit record. Complete the form. (The fields you see depend on how the form is configured and what fields are selected to display.) Field Description Plan The name of the disability insurance plan. Click the Lookup using list icon and select the plan for the employee. Plan type The type of health benefit plan. Fills in when the plan is selected. Plan ID The identification number of the health plan. Start date Date when the benefit is active for the employee and beneficiaries. End date Date when the benefit is no longer active for the employee and beneficiaries. Employee The user who requested enrollment. Click the Lookup using list icon and select the user. Employee Contribution (per paycheck) Dollar amount employee contributes to plan per paycheck. Employee Contribution (per year) Dollar amount employee contributes to plan per year. Employer Contribution (per paycheck) Dollar amount employer contributes to plan per paycheck. Employer Contribution (per year) Dollar amount employer contributes to plan per year. Click Submit. The health benefit is listed in the HR Disability Benefits list. To modify insurance benefit data, find the existing insurance benefit in the HR Disability Benefits list. You can use the list search menu by typing the employee name and pressing Enter. Click the amount under the Employee Contribution column to open the form. The HR Disability Benefit form opens displaying benefit name and other populated fields. Modify the form. Click Update. | https://docs.servicenow.com/bundle/kingston-hr-service-delivery/page/product/human-resources/task/t_CreateOrModifyDisabilityBenefit.html | 2020-08-03T18:19:59 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.servicenow.com |
This section covers the various channel adapters and messaging gateways provided by Spring Integration to support message-based communication with external systems.
Each system, from AMQP to Zookeeper, has its own integration requirements, and this section covers them.
Endpoint Quick Reference Table
As discussed in the earlier sections, Spring Integration provides a number of endpoints used to interface with external systems, file systems, and others.
For transparent dependency management Spring Integration provides a bill-of-materials POM to be imported into the Maven configuration:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-bom</artifactId>
            <version>5.3.2.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
To recap:
Inbound channel adapters are used for one-way integration to bring data into the messaging application.

Outbound channel adapters are used for one-way integration to send data out of the messaging application.

Inbound gateways are used for a bidirectional integration flow, where some other system invokes the messaging application and receives a reply.

Outbound gateways are used for a bidirectional integration flow, where the messaging application invokes some external service or entity and expects a result.
The following table summarizes the various endpoints with quick links to the appropriate chapter.
In addition, as discussed in Core Messaging, Spring Integration provides endpoints for interfacing with Plain Old Java Objects (POJOs).
As discussed in Channel Adapter, the
<int:inbound-channel-adapter> element lets you poll a Java method for data.
The
<int:outbound-channel-adapter> element lets you send data to a
void method.
As discussed in Messaging Gateways, the
<int:gateway> element lets any Java program invoke a messaging flow.
Each of these works without requiring any source-level dependencies on Spring Integration.
The equivalent of an outbound gateway in this context is using a service activator (see Service Activator) to invoke a method that returns an
Object of some kind.
Starting with version
5.2.2, all the inbound gateways can be configured with an
errorOnTimeout boolean flag to throw a
MessageTimeoutException when the downstream flow doesn’t return a reply during the reply timeout.
The timer is not started until the thread returns control to the gateway, so usually it is only useful when the downstream flow is asynchronous or it stops because of a
null return from some handler, e.g. filter.
Such an exception can be handled on the
errorChannel flow, e.g. producing a compensation reply for requesting client. | https://docs.spring.io/spring-integration/reference/html/endpoint-summary.html | 2020-08-03T18:41:32 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.spring.io |
TOPICS×
About profiles for processing metadata, images, and videos
For an example, see organize assets using folders.
Reprocessing assets in a folder
Applies to Dynamic Media - Scene7 mode only in AEM 6.4. You can reprocess the assets in a folder at any time.
You can optionally adjust the batch size of the reprocess workflow from a default of 50 assets up to 1000 assets. When you run the Scene7: Reprocess Assets workflow on a folder, assets are grouped together in batches, then sent to the Dynamic Media server for processing. Following processing, the metadata of each asset in the entire batch set is updated on AEM. If the batch size is very large, you may experience a delay in processing. Or, if the batch size is too small, it can cause too many round trips to the Dynamic Media server.
If you are performing a bulk migration of assets from Dynamic Media Classic to AEM, be aware of the following before running the reprocess workflow:
- The workflow considers all files in the selected folder, recursively.
- If there are one or more sub-folders with assets in the main selected folder, the workflow will reprocess every asset in the folder hierarchy.
- As a best practice, you should avoid running this workflow on a folder hierarchy that has more than 1000 assets.
- Near the upper-left corner of the page, from the drop-down list, click Timeline .
- Near the lower-left corner of the page, to the right of the Comment field, click the carat icon ( ^ ) .
- Click Start Workflow .
- From the Start Workflow drop-down list, choose Scene7: Reprocess Assets .
- (Optional) In the Enter title of workflow text field, enter a name for the workflow. You can use the name to reference the workflow instance, if necessary.
- Click Start , then click Confirm .To monitor the workflow or check its progress, from the AEM main console page, click Tools > Workflow . On the Workflow Instances page, select a workflow. On the menu bar, click Open History . You can also terminate, suspend, or rename a selected workflow from the same Workflow Instances page.
Adjusting the batch size of the reprocess workflow
(Optional) The default batch size in the reprocessing workflow is 50 assets per job. This optimal batch size is governed by the average asset size and the mime types of assets on which the reprocess is run. A higher value means you will have many files in a single reprocessing job. Accordingly, the processing banner stays on AEM assets for a longer time. However, if the average file size is small–1 MB or less–Adobe recommends that you increase the value to several hundred, but never more than a 1000. If the average file size is large–hundreds of megabytes–Adobe recommends that you lower the batch size up to 10.
To optionally adjust the batch size of the reprocess workflow
- In Experience Manager, tap Adobe Experience Manager to access the global navigation console, then tap the Tools (hammer) icon > Workflow > Models .
- On the Workflow Models page, in Card View or List View, select Scene7: Reprocess Assets .
- On the tool bar, click Edit . A new browser tab opens the Scene7: Reprocess Assets workflow model page.
- On the Scene7: Reprocess Assets workflow page, near the upper-right corner, tap Edit to "unlock" the workflow.
- In the workflow, select the Scene7 Batch Upload component to open the toolbar, then tap Configure on the toolbar.
- On the Batch Upload to Scene7—Step Properties dialog box, set the following:
- In the Title and Description text fields, enter a new title and description for the job, if desired.
- Select Handler Advance if your handler will advance to the next step.
- In the Timeout field, enter the external process timeout (seconds).
- In the Period field, enter a polling interval (seconds) to test for the completion of the external process.
- In the Batch field , enter the maximum number of assets (50-1000) to process in a Dynamic Media server batch processing upload job.
- Select Advance on timeout if you want to advance when the timeout is reached. Deselect if you want to proceed to the inbox when the timeout is reached.
- In the upper-right corner of the Batch Upload to Scene7 – Step Properties dialog box, tap Done .
- In the upper-right corner of the Scene7: Reprocess Assets workflow model page, tap Sync . When you see Synced , the workflow runtime model is successfully synchronized and ready to reprocess asset in a folder.
- Close the browser tab that shows the Scene7: Reprocess Assets workflow model. | https://docs.adobe.com/content/help/en/experience-manager-64/assets/administer/processing-profiles.html | 2020-08-03T19:25:21 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.adobe.com |
TOPICS×
Release notes - Target Node.js SDK
Release notes related to Adobe Target's Node.js SDK .
The Target Node.js SDK lets you deploy Target server-side.
This Node.js SDK helps you easily integrate Target with other Adobe Experience Cloud solutions, such as the Adobe Experience Cloud Identity Service, Adobe Analytics, and Adobe Audience Manager.
The Node.js SDK introduces best practices and removes complexities when integrating with Target via our delivery API so that your engineering teams can focus on business logic.
Learn more about the Target Node.js SDK on the Adobe Tech Blog - Open Sourcing the New Adobe Target Node.js SDK .
Version 1.0.0 (October 9, 2019)
The following sections provide more information about version 1.0.0 of the Target Node.js SDK:
Added
- Target View Delivery v1 API support, including Page Load and View prefetch.
- Full support for delivering all types of offers authored in the Visual Experience Composer (VEC).
- Support for prefetching and notifications for performance optimization by caching prefetched content.
- Support for optimizing performance in hybrid Target integrations via serverState when Target is deployed both on the server-side and on the client-side. We are introducing a setting called serverState that contains experiences retrieved via server-side, so that at.js v2.2+ will not make an additional server call to retrieve the experiences. This approach optimizes page load performance.
- Open sourced on GitHub as Target Node.js SDK .
- New sendNotifications() API method for sending displayed/clicked notifications to Target for content prefetched via getOffers() .
- Simplified View Delivery API request building, with internal field auto-completion with defaults (for example, request.id , request.context , etc.).
- Validation of SDK API method arguments.
- Updated README, samples, and unit tests.
- New CI flow set up using GitHub Actions.
- Added CoC, Contribution guidelines, PR, and issue templates
Changed
- Project renamed to target-nodejs-sdk .
- Major refactoring, replacing Target BatchMbox v2 API with Target View Delivery v1 API.
- create() API method arguments have been modified, removing redundant nesting (see old method declaration here ).
- getOffers() API method arguments have been modified (see old method declaration here ).
- The getTargetCookieName() API method has been replaced with TargetCookieName accessor. See TargetClient utility accessors .
- The getTargetLocationHintCookieName() API method has been replaced with TargetLocationHintCookieName accessor. See TargetClient utility accessors .
Removed
- Target BatchMbox v2 API support.
- The getOffer() API method has been removed, use the getOffers() API method instead. | https://docs.adobe.com/content/help/en/target/using/implement-target/server-side/releases-nodejs.html | 2020-08-03T18:39:22 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.adobe.com |
Installation, Configuration & Usage of rclone
USB support is unable to provide support for rclone, this is due to the large volume of variables and different configurations possible with rclone. The guides found here on the knowledge-base should be able to guide you through using rclone, and any further questions can easily be answered with a quick Google search. You may also be able to find community support for rclone through our community Discord server or the Rclone forums.
In this guide we will be going over the installation of rclone. We'll also cover basic usage such as setting up a remote, and how to copy files between your seedbox and a remote file host.
Installation
To begin, make sure you know how to SSH into your slot. All rclone commands are performed via SSH. You can find a guide on SSH here.
To install rclone run the command below, this command will automatically install rclone to your slot for you.
rclone stable
curl | bash
rclone beta
curl | bash
Configuration
Now we need to configure a remote to use with rclone. For this guide we will be configuring Google Drive. This is the most common remote people tend to use as it offers large storage capacities for a reasonable price. If you wish to use a different cloud host feel free to modify the steps you take.
- Run the command
rclone config
kbguides@lw914:~$ rclone config
2019/06/15 18:16:33 NOTICE: Config file "/home27/kbguides/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q>
- Type n and then Enter. Type the name you wish to use for your remote and then press Enter once more.
- Scroll through the list of supported remotes and pick the one you wish. For this example we will be using Google Drive, so we will type 12 then press Enter.
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
...
11 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
12 / Google Drive
   \ "drive"
13 / Hubic
   \ "hubic"
...
- You will be prompted to enter your client_id. If you have not generated your keys yet or do not know what it is, I recommend using this guide to help you generate them: Configuring Oauth for Google Drive.
- Once you have followed the steps in the guide, copy your client ID and paste it into the terminal, next press Enter. Now copy your client secret, paste it in, and again press Enter
Google Application Client Id
Setting your own is recommended.
See for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
Enter a string value. Press Enter for the default ("").
client_id> example12345
Enter a string value. Press Enter for the default ("").
client_secret> example12345
- Choose the scope you wish to give rclone. Full access is safe and likely the most useful one to you, so in this case we will type 1 then press Enter.
- Unless you know what you are doing, leave the root folder blank and press Enter. Leave Service Account Credentials JSON file path blank also, again press Enter. Then type n to choose to not edit advanced config and press Enter.
- Type n to choose to not use auto config and press Enter. You will be provided with a URL, copy this URL and paste it into your web browser. Choose the Google Drive account you wish to use and click Allow to give rclone permission to use it. You will be given a code, copy this and place it into your terminal, then press Enter. Finally type n to choose to not configure as a team drive and press Enter.
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> n
If your browser doesn't open automatically go to the following link: <URL WILL BE HERE>
Log in and authorize rclone for access
Enter verification code> random string
Configure this as a team drive?
y) Yes
n) No
y/n> n
- You will be shown a confirmation screen. If all is okay type y and then press Enter to save your configuration. If you notice any issues, you can edit them from here by typing e, or delete them using d. Finally, press q and then Enter to quit the rclone config wizard.
--------------------
[test]
type = drive
client_id = blank
client_secret = blank
scope = drive
token = {"access_token":"blank"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
Usage
rclone is interacted with purely through SSH. Please ensure you are familiar with the Linux terminal and using SSH prior to trying to use rclone.
General Commands
These commands are useful to remember. They allow you to interact with rclone and move files around between your local and remote storage or even between two remote destinations.
config - Execute this command to add, modify or remove remote file hosts.
    Usage: rclone config

copy - Used to copy files between two locations: remote -> remote, remote -> local, local -> remote.
    Usage: rclone copy [-P] {origin} {destination}

move - Same as copy, however it does not leave the files at the source.
    Usage: rclone move [-P] {origin} {destination}

sync - Will make the destination directory identical to the origin. If files exist on the destination that do not exist on the origin, they will be deleted. Be careful with the sync command as it can cause data loss.
    Usage: rclone sync [-P] {origin} {destination}

When dealing with remote filesystems use: {remote}:{path}

For example, if you wished to copy a file named movie.mkv from your current working directory to a path named Movies in a remote named Drive, you'd use this command:

rclone copy movie.mkv Drive:Movies
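As a further example, to upload an entire folder of completed downloads to the Google Drive remote configured earlier (the local path below is only an assumption, adjust it to your slot's layout):

rclone copy -P ~/downloads/completed Drive:Seedbox/completed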
Probably the most useful endpoint in the API,
/api/query enables extracting data from the storage system in various formats determined by the serializer selected. Queries can be submitted via the 1.0 query string format or body content.
The
/query endpoint is documented below. As of 2.2 data matching a query can be deleted by using the
DELETE verb. The configuration parameter
tsd.http.query.allow_delete must be enabled to allow deletions. Data that is deleted will be returned in the query results. Executing the query a second time should return empty results.
Warning
Deleting data is permanent. Also beware that when deleting, some data outside the boundaries of the start and end times may be deleted as data is stored on an hourly basis.
Request parameters include:
An OpenTSDB query requires at least one sub query, a means of selecting which time series should be included in the result set. There are two types:
A query can include more than one sub query and any mixture of the two types. When submitting a query via content body, if a list of TSUIDs is supplied, the metric and tags for that particular sub query will be ignored.
Each sub query can retrieve individual or groups of timeseries data, performing aggregation or grouping calculations on each set. Fields for each sub query include:
Rate Options
When passing rate options in a query string, the options must be enclosed in curly braces. For example:
m=sum:rate{counter,,1000}:if.octets.in. If you wish to use the default
counterMax but do want to supply a
resetValue, you must add two commas as in the previous example. Additional fields in the
rateOptions object include the following:
Downsampling
Downsample specifications consist of an interval, a unit of time, an aggregator, and (as of 2.2) an optional fill policy. The format of a downsample spec is:
<interval><units>-<aggregator>[-<fill policy>]
For example:
1h-sum 30m-avg-nan 24h-max-zero
See Aggregators for a list of supported fill policies.
Filters
New for 2.2, OpenTSDB includes expanded and pluggable filters across tag key and value combinations. For a list of filters loaded in the TSD, see /api/config/filters. For descriptions of the built-in filters see Filters. Filters can be used in both query string and POST formatted queries. Fields for POST queries pertaining to filters include:
For URI queries, the type precedes the filter expression in parentheses. The format is
<tagk>=<type>(<filter_expression>). Whether or not results are grouped depends on which curly bracket the filter is in. Two curly braces are now supported per metric query. The first set is the group by filter and the second is a non group by filter, e.g.
{host=wildcard(web*)}{colo=regexp(sjc.*)}. This specifies any metrics where the colo matches the regex expression "sjc.*" and the host tag value starts with "web" and the results are grouped by host. If you only want to filter without grouping then the first curly set must be empty, e.g.
{}{host=wildcard(web*),colo=regexp(sjc.*)}. This specifies any metrics where colo matches the regex expression "sjc.*" and the host tag value starts with "web" and the results are not grouped.
Note
Regular expression, wildcard filters with a pre/post/in-fix or literal ors with many values can cause queries to return slower as each row of data must be resolved to their string values then processed.
Note
When submitting a JSON query to OpenTSDB 2.2 or later, use either
tags OR
filters. Only one will take effect and the order is indeterminate as the JSON parser may deserialize one before the other. We recommend using filters for all future queries.
Filter Conversions
Values in the POST query
tags map and the group by curly brace of URI queries are automatically converted to filters to provide backwards compatibility with existing systems. The auto conversions include:
The full specification for a metric query string sub query is as follows:
m=<aggregator>:[rate[{counter[,<counter_max>[,<reset_value>]]]}:][<down_sampler>:][explicit_tags:]<metric_name>[{<tag_name1>=<grouping filter>[,...<tag_nameN>=<grouping_filter>]}][{<tag_name1>=<non grouping filter>[,...<tag_nameN>=<non_grouping_filter>]}]
It can be a little daunting at first but you can break it down into components. If you're ever confused, try using the built-in GUI to plot a graph the way you want it, then look at the URL to see how the query is formatted. Changes to any of the form fields will update the URL (which you can actually copy and paste to share with other users). For examples, please see Query Examples.
TSUID queries are simpler than Metric queries. Simply pass a list of one or more hexadecimal encoded TSUIDs separated by commas:
tsuid=<aggregator>:<tsuid1>[,...<tsuidN>]
Please see the serializer documentation for request information: HTTP Serializers. The following examples pertain to the default JSON serializer.
{ "start": 1356998400, "end": 1356998460, "queries": [ { "aggregator": "sum", "metric": "sys.cpu.0", "rate": "true", "tags": { "host": "*", "dc": "lga" } }, { "aggregator": "sum", "tsuids": [ "000001000002000042", "000001000002000043" ] } } ] }
2.2 query with filters
{ "start": 1356998400, "end": 1356998460, "queries": [ { "aggregator": "sum", "metric": "sys.cpu.0", "rate": "true", "filters": [ { "type":"wildcard", "tagk":"host", "filter":"*", "groupBy":true }, { "type":"literal_or", "tagk":"dc", "filter":"lga|lga1|lga2", "groupBy":false }, ] }, { "aggregator": "sum", "tsuids": [ "000001000002000042", "000001000002000043" ] } } ] }
The output generated for a query depends heavily on the chosen serializer HTTP Serializers. A request may result in multiple sets of data returned, particularly if the request included multiple queries or grouping was requested. Some common fields included with each data set in the response will be:
Unless there was an error with the query, you will generally receive a
200 status with content. However if your query couldn't find any data, it will return an empty result set. In the case of the JSON serializer, the result will be an empty array:
[]
For the JSON serializer, the timestamp will always be a Unix epoch style integer followed by the value as an integer or a floating point. For example, the default output is
"dps"{"<timestamp>":<value>}. By default the timestamps will be in seconds. If the
msResolution flag is set, then the timestamps will be in milliseconds.
[ { "metric": "tsd.hbase.puts", "tags": {}, "aggregatedTags": [ "host" ], "annotations": [ { "tsuid": "00001C0000FB0000FB", "description": "Testing Annotations", "notes": "These would be details about the event, the description is just a summary", "custom": { "owner": "jdoe", "dept": "ops" }, "endTime": 0, "startTime": 1365966062 } ], "globalAnnotations": [ { "description": "Notice", "notes": "DAL was down during this period", "custom": null, "endTime": 1365966164, "startTime": 1365966064 } ], "tsuids": [ "0023E3000002000008000006000001" ], "dps": { "1365966001": 25595461080, "1365966061": 25595542522, "1365966062": 25595543979, ... "1365973801": 25717417859 } } ]
[ { "metric": "tsd.hbase.puts", "tags": {}, "aggregatedTags": [ "host" ], "dps": [ [ 1365966001, 25595461080 ], [ 1365966061, 25595542522 ], ... [ 1365974221, 25722266376 ] ] } ]
For the following example, two TSDs were running and the query grouped tsd.hbase.puts by host ({host=*}). This returns two explicit time series.
[ { "metric": "tsd.hbase.puts", "tags": { "host": "tsdb-1.mysite.com" }, "aggregatedTags": [], "dps": { "1365966001": 3758788892, "1365966061": 3758804070, ... "1365974281": 3778141673 } }, { "metric": "tsd.hbase.puts", "tags": { "host": "tsdb-2.mysite.com" }, "aggregatedTags": [], "dps": { "1365966001": 3902179270, "1365966062": 3902197769, ... "1365974281": 3922266478 } } ]
[ { "metric": "tsd.hbase.puts", "tags": {}, "aggregatedTags": [ "host" ], "query": { "aggregator": "sum", "metric": "tsd.hbase.puts", "tsuids": null, "downsample": null, "rate": true, "explicitTags": false, "filters": [ { "tagk": "host", "filter": "*", "group_by": true, "type": "wildcard" } ], "rateOptions": null, "tags": { } }, "dps": { "1365966001": 25595461080, "1365966061": 25595542522, "1365966062": 25595543979, ... "1365973801": 25717417859 } }, { "statsSummary": { "datapoints": 0, "rawDatapoints": 56, "aggregationTime": 0, "serializationTime": 20, "storageTime": 6, "timeTotal": 26 } } ]
© 2010–2016 The OpenTSDB Authors
Licensed under the GNU LGPLv2.1+ and GPLv3+ licenses. | https://docs.w3cub.com/opentsdb/api_http/query/ | 2020-08-03T17:57:58 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.w3cub.com |
qt5_add_resources(<VAR> file1.qrc [file2.qrc ...] [OPTIONS ...])
Creates source code from Qt resource files using the Resource Compiler (rcc). Paths to the generated source files are added to
<VAR>.
Note: This is a low-level macro. See the CMake AUTORCC Documentation for a more convenient way to let Qt resource files be processed with
rcc. For embedding bigger resources, see qt5_add_big_resources.
You can set additional
OPTIONS that should be added to the
rcc calls. You can find possible options in the rcc documentation.
set(SOURCES main.cpp)
qt5_add_resources(SOURCES example.qrc)
add_executable(myapp ${SOURCES})
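The OPTIONS mentioned above are passed straight through to rcc. For example (the flag shown is just an illustration; check the rcc documentation for the options your Qt version supports):

set(SOURCES main.cpp)
qt5_add_resources(SOURCES example.qrc OPTIONS -no-compress)
add_executable(myapp ${SOURCES})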
© The Qt Company Ltd
Licensed under the GNU Free Documentation License, Version 1.3. | https://docs.w3cub.com/qt~5.13/qtcore-cmake-qt5-add-resources/ | 2020-08-03T18:25:49 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.w3cub.com |
TOPICS×
Release notes for Dynamic Tag Management
Release notes and known issues for Dynamic Tag Management.
DTM sunset announced
Adobe has released plans to sunset DTM by the end of 2020. For more information and scheduling, see DTM Plans for a Sunset in the Adobe community forums.
Current release notes
The June 17, 2016 Dynamic Tag Management release includes the following changes:
New Features
Other updates
In addition to the notes for each release, the following resources provide additional information: | https://docs.adobe.com/content/help/en/dtm/using/whatsnew.html | 2020-08-03T18:35:29 | CC-MAIN-2020-34 | 1596439735823.29 | [] | docs.adobe.com |
Installing Bazel on Ubuntu
Supported Ubuntu Linux platforms:
- 16.04 (LTS)
- 18.04 (LTS)

You can install Bazel on Ubuntu either from Bazel's custom APT repository or with a binary installer, and later upgrade Bazel through the same channel.
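For reference, the APT-based route usually looks like the sketch below; treat the exact repository line and key URL as assumptions to verify against the official instructions for your Bazel release:

# Prerequisites
sudo apt-get install openjdk-8-jdk curl gnupg

# Add Bazel's APT repository and its signing key (verify against the official docs)
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | \
    sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -

# Install Bazel, and upgrade it later with the same package manager
sudo apt-get update && sudo apt-get install bazel
sudo apt-get upgrade bazel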
SAP Dual Stack Environment Restriction
The redesigned ERS resource type introduced in SPS-L 9.4.0 (which operates in a hierarchy separate from the corresponding central services instance) does not support an SAP dual stack (ABAP+Java) environment where there are two pairs of central services and enqueue replication server instances (e.g., ASCS00/ERS10 and SCS01/ERS11) installed under the same SID. Customers with an SAP dual stack (ABAP+Java) environment installed under the same SID should continue to use the pre-9.4.0 ERS resource design (which is located at the top of the SAP hierarchy with a dependency on the corresponding ASCS/SCS resource).
Failed Initial In-Service of ERS Resource on Switchover
Note: This only applies to the pre-9.4.0 ERS resource design (which is located at the top of the SAP hierarchy with a dependency on the corresponding ASCS/SCS resource). For more details see ERS Resource Types in LifeKeeper.
Creating an ERS resource without any additional SAP resource dependents will cause initial in-service to fail on switchover.
Solution: Create ERS as parent of CI/Core instance (SCS or ASCS), then retry in-service.
SAP instance processes in an inconsistent state
Issuing concurrent administrative commands while a migration of an SAP resource is in-progress may leave the SAP instance processes in an inconsistent state, which may require manual intervention to resolve.
Create a Discovery legacy identifier If your system is configured to use legacy identifiers, you can create new identifiers to examine the attributes from specific tables that extend tables in the default Discovery rules. About this task Staring with the Geneva release, identifiers for new and existing discoveries are completely replaced for new instances with CMDB identifiers from the CMDB Identification and Reconciliation framework. Instances without Service Mapping that are upgraded to Geneva can still use the legacy identifiers for existing and new Discoveries. Both identifier versions are available in these instances, but only the legacy identifiers are used. The default identifiers provided with Discovery should be adequate for most discoveries. However, if you need to discover data from a child table, such as Linux Server [cmdb_ci_linux_server], you can create a custom legacy identifier and select the exact criteria you want to add to the CMDB. Procedure Navigate to Discovery Definition > CI Identification > Identifiers. Click New. Fill out the unique fields provided by the identifier form. Table 1. Identifier form fields Field Input Fields Applies to Select the ServiceNow table of the device class for this identifier. For example, if the class is Printer, then the table is cmdb_ci_printer. Order Configure the order in which the identification criteria are evaluated. An example might be serial number - 910, network name - 920, and computer name - 930. Script Create the conditions that determine what Discovery should do when the results are returned from a search of the CMDB for this identifier. For example, you might want to stop Discovery if two or more CIs in the CMDB match this identifier. Or you might want to evaluate additional identifiers even after a match has been established with this identifier. Use the scripting methods described below to tell Discovery how to respond. The completed Discovery identifier form looks like this: | https://docs.servicenow.com/bundle/geneva-it-operations-management/page/product/discovery/task/t_CreateDiscoLegacyIdentifier.html | 2018-08-14T15:11:00 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.servicenow.com |
Beginners in Yate
Yate (acronym for Yet Another Telephony Engine) is a next-generation telephony engine, is a free and open source communications software with support for video, voice and instant messaging.
Based on Voice over Internet Protocol (VoIP) and PSTN, it can easily be extended. It supports SIP, H.323, IAX, MGCP, Jingle, Jabber, E1, T1, analogic, ISDN PRI, BRI, and SS7.
It is written in C++, having in mind a modular design, allowing the use of scripting languages like Perl, Python or PHP to create external functionalities.
Note The instructions below are suitable for linux platforms. For other platforms go to Download page and follow the instructions from there. After you install Yate you can skip chapter How to get Yate source from SVN and go to chapter Configuration Files.
How to get Yate source from SVN
Since you are going the full process of fetching and building Yate you will need the following:
- Basic software development tools:
- A subversion (svn) client
As root go to /usr/src or where ever you'd like to store source code. Once you have the svn client installed, getting the sources is a simple command:
svn checkout yate-SVN
cd yate-SVN
First command will fetch a copy of the SVN TRUNK (where the code is committed) in a new directory called yate-SVN. The second command will change your current directory to the Yate sources directory.
For more information go to page Installation.
How to compile
To generate the configure script and then configure the source code, run:

./autogen.sh
./configure
compile it:
make
How to run
- to run in debug mode:
./run -vvvvvv
- to run in the daemon mode:
./run -d
For more details about what parameters that can be set when Yate starts you can give command
./run --help
As advice, for debugging purposes run Yate with these parameters:

-v           Verbose debugging (you can use more than once)
-d           Daemonify, suppress output unless logged
-l filename  Log to file
-Dt          Timestamp debugging messages relative to program start
Configuration Files
The files that you can configure are in /usr/src/yate-SVN/conf.d.
Note Each file has a .sample extension. You have to create a new file with the same name but with a .conf extension.
Each parameter in these files belongs to a section, written in square brackets. Comments in these files are done by using ;.
Adding Users
Go to /usr/src/yate-SVN/conf.d where Yate was installed and rename regfile.conf.sample into regfile.conf.
Then edit regfile.conf to add users.
We are going to add 2 users: user 100 with password 001 and user 200 with password 002, like this:
[100]
password=001

[200]
password=002
There is another way of adding users if you wish to use a database, the file to use is register.conf.
SIP Configuration
The file used is ysipchan.conf. No configuration is needed in this file because by default Yate will bind with all the network interfaces on your server on port 5060. If other programs use this port then you have to use another free port and put it in [general] section.
Routing
There is no need to define any routing for registered SIP users on the machine. Yate will know to route calls between the users defined in regfile.conf.
To add an authentication requirement for all inbound calls, add this in the regexroute.conf file:

[default]
${username}^$=-;error=noauth
To define routing to other registered users, PSTN lines or gateways, you need to edit regexroute.conf.
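For illustration only (the prefix and the gateway address below are made up), a rule that sends numbers dialled with a leading 9 to a SIP gateway could look like this in regexroute.conf:

[default]
; strip the leading 9 and send the call to a SIP gateway
^9\(.*\)$=sip/sip:\1@192.0.2.10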
To register users in a database you can use register.conf. And then make your route rules in regexroute.conf.
Configure phones
You have to configure the users from regfile.conf on two SIP phones (you could use a SIP softphone as well), pointing them at the Yate server.
Test the setup
If Yate is running when you change the configuration file, you need to reload Yate for the changes to become effective.
From the phone, dial one of the following:
99991001 - you should hear dial tone
99991002 - you should hear busy tone
99991003 - you should hear ring tone
These numbers are defined for testing purposes and you can find all of them in the "regexroute.conf" file. When you call one of these numbers, you get a standard telephony tone.
Then make a call from one phone to the other and check the audio: if you cannot hear the other end then you may have a NAT problem.
Also to see the flow of the messages, use a telnet client. Connect like this:
telnet localhost 5038
And type the following commands to enable debugging and color the output:

debug on
color on
Then make the call and watch SIP messages flow.
See also | http://docs.yate.ro/wiki/index.php?title=Beginners_in_Yate&oldid=7811 | 2018-08-14T15:40:56 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.yate.ro |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Initiates the asynchronous execution of the CreatePresignedNotebookInstanceUrl operation.
This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginCreatePresignedNotebookInstanceUrl and EndCreatePresignedNotebookInstanceUrl.
Namespace: Amazon.SageMaker
Assembly: AWSSDK.SageMaker.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the CreatePresignedNotebookInstanceUrl operation.
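The same service operation is also exposed by the other AWS SDKs. For comparison, a rough sketch with the AWS SDK for Python (boto3), using a placeholder notebook instance name:

    import boto3

    sm = boto3.client("sagemaker")
    resp = sm.create_presigned_notebook_instance_url(
        NotebookInstanceName="my-notebook-instance",   # placeholder name
        SessionExpirationDurationInSeconds=1800,
    )
    print(resp["AuthorizedUrl"])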
ServicesOutsourced Internal Audit
Increasingly, organizations have recognized that outsourcing is required to realize the full potential of the internal audit process.
The internal audit of your operational management control system should:
- Either confirm the efficacy of risk management controls, or expose unmitigated risks;
- Identify aspects of operational controls that do not comply with the defined criteria;
- Identify barriers to successes within process flows;
- Ensure the internal supply chain interfaces are operating efficiently.
Because personnel who are already occupied are usually tasked with conducting the internal audit, it is challenging for internal resources to effectively execute audits that achieve these objectives. Additionally, it is virtually impossible for internal resources to be as objective as needed for an optimal audit outcome. In a flat organization, this stems from the fact that the auditor, by definition, likely has a vested interest in one or more of the processes audited. In a hierarchical organization, audit objectivity may be impaired by the possibility of reprisal for producing negative audit results. Furthermore, internal personnel with the organizational insight that is necessary to properly assess the control systems, from a strategic perspective, are typically not available.
Internal audits conducted by Array follow the intent of ISO 19011 and are ensured to meet any requirements imposed by registration bodies.
In addition, Array’s audits provide valuable constructive criticism of your process controls, interfaces and metrics. | https://array-strategies-training-docs.herokuapp.com/services/outsourced-internal-audit.html | 2018-08-14T16:46:55 | CC-MAIN-2018-34 | 1534221209165.16 | [] | array-strategies-training-docs.herokuapp.com |
This section details the supported versions and requirements for the RabbitMQ plugin.
The RabbitMQ plugin supports RabbitMQ 2.x and later running on Linux and Windows platforms, and requires v4.6 or later of the vRealize Hyperic agent and the vRealize Hyperic server.
This version of the plugin requires that RabbitMQ has the rabbitmq-management plugin installed and running. | https://docs.vmware.com/en/vRealize-Hyperic/5.8.4/com.vmware.hyperic.resource.configuration.metrics.doc/GUID-07DC41F5-F6CA-41C5-A653-1DA36DC6BD7B.html | 2018-08-14T15:56:08 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.vmware.com |
The second annual eCrime Researchers Sync-Up, organised in conjunction with University College Dublin's Centre for Cybercrime Investigation, on March
7th and 8th, 2012 is a two-day exchange of presentations and discussions related to eCrime research in progress and, as importantly, for networking of researchers within
the disciplines that are defining the cybercrime research field today.
The eCrime Researchers Sync-Up has been established as an extension of the annual
APWG eCrime Research Summit held every year in the fall in the United States, an event that
itself, government and professionals from law enforcement to discuss their research, find axes of common interest and establish
fruitful, long-term General Foy Shiver for details via email at foy [at] apwg.org.
APWG members, research partners and correspondents are invited and encouraged to submit their proposals for presentations of their own works-in-progress
and panel discussions. Proposals should be in the form of a title and extended abstract. All proposals should be submitted to the APWG Program Committee
at [email protected]. Abstracts will be accepted till Midnight Eastern Time on January 31, 2012.
Topics of interest include (but are not limited to):
We have tried to keep registration for this event as low as possible to help cover costs. Registration includes lunch both days and dinner Wednesday night. The rates listed below offer a discounted "early bird" registration rate prior to February 21st.
General Registration rates (early bird before February 21st / regular):
- APWG Members: $75.00 / $95.00
- Students & University Faculty
- Law Enforcement & Government Employees
- All Others: $100.00 / $125.00
Registration is available now, click here . . .
Venue:
Rooms are available at the Radisson Blu St. Helen's Hotel for the special discounted rate of 117.16 Euro per night based on single occupancy.
These rates are inclusive of breakfast each day and are available the nights of March 6th, 7th & 8th.
To access the special rate use the Hotel's web site booking option with the Promotional Code "APWG".
(note:
If you do not see the option for Promotional Code above the "View Rates" button click "More search options" to display the additional fields. )
Contact admin at apwg.org if you have problems.
Radisson Blu St. Helen's Hotel
Stillorgan Road
Dublin, 4, IE
Telephone: +353 1 218 6000
Hotel Map Details Here
Dublin Transport
A convenient location on the N11 Stillorgan dual carriageway provides guests of this hotel in Dublin easy access to the City Centre,
southern suburbs and southeast Ireland. There is also easy access to Fosters Avenue and the M50 motorway and ample on-site complimentary car parking.
Travel to UCD (Conference Venue):
If coming direct from the airport, the Aircoach stops outside the gates of UCD. For those travelling from the hotel, the University College Dublin campus is less than 1 km across Stillorgan road... | http://docs.apwg.org/ecrimeresearch/2012syncup/cfp.html | 2018-08-14T16:03:24 | CC-MAIN-2018-34 | 1534221209165.16 | [] | docs.apwg.org |
If you need to review if time entries or punches were deleted, you can use the Deleted Time Report.
1. You will want to start by clicking Reports in the top navigation followed by Deleted Time Report as shown below:
2. Once on the Deleted Time Report page you will be able to select specific or all employees and select the date range. Click Submit when done.
3. Populated results will show you the following:
- Time when the time entry or punch was deleted
- Author Username (Author refers to the person who deleted the punch)
- Author First and Last Name
- Author IP Address
- Punched Username (Punched refers to the employee who made the punch)
- Punched First and Last Name
- Details - The details will include the shift ID, the date of the time entry or punch, and the Location/Department/Position (If applicable).
If it's a time entry that was deleted, details will simply show the duration of the time entry. If it was a punch that was deleted you will be able to view the punch in and out time as well as the duration.
Export To Options:
- CSV
- Excel | https://docs.buddypunch.com/en/articles/3658522-deleted-time-report | 2020-03-28T21:39:55 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['https://downloads.intercomcdn.com/i/o/179306398/e19c18d63a9d83b8e0b3c360/Screen+Shot+2020-01-23+at+3.45.18+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/179306710/b8f6230b9ab6f29fc081c273/Screen+Shot+2020-01-23+at+3.46.46+PM.png',
None], dtype=object) ] | docs.buddypunch.com |
When there is a featured planned or being worked on, the status will most likely be changed in the Feature Bucket.
If you monitor the Feature Bucket closely you will see the status changing time to time on features that are planned or being worked.
We’re highly recommending our users to stick with us, planning the future of ListingPro. Participating on our Feature Bucket Page, sending comments, voting upon existing requests, and stay tuned for notifications and upcoming releases.
The roadmap is the most crucial part of the ListingPro development. At this stage, we’re collecting many hypotheses and scenarios, getting ready to fetching all the possible feature, the behavior, conditions and finding a better place to conclude the final solution.
The roadmap is really important, the whole team gets united concentrating between long hours of meeting in order to identify any missing obstacle and sharing those important scenarios with our engineers, back-end, front-end developers with the final structure that the team has identified as a better solution to implement such features.
We invite you to be part and take this experience with us by sending your ideas on our Feature Bucket Page.
If you want to know more about how to request a feature and share your ideas, you can click on the button below.
How to request a feature? | https://docs.listingprowp.com/knowledgebase/where-is-the-roadmap/ | 2020-03-28T20:55:11 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.listingprowp.com |
Crate mp4ameta
See all mp4ameta's items
A library to read ITunes style MPEG-4 audio metadata.
A structure that represents a MPEG-4 audio metadata atom.
A structure able to represent any error that may occur while performing metadata operations.
A MPEG-4 audio tag containing metadata atoms
A structure representing the different types of content an Atom might have.
A struct that holds the different types of data an Atom can contain following
Table 3-5 Well-known data types.
Atom
Kinds of errors that may occur while performing metadata operations.
Type alias for the result of tag operations. | https://docs.rs/mp4ameta/0.1.0/x86_64-pc-windows-msvc/mp4ameta/ | 2020-03-28T21:22:50 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.rs |
DK11 for Delphi | FMX.GisViewerWnd.TGIS_ViewerWnd.Mode | Constructors | Methods | Properties | Events | Events
Mode of reaction to mouse events. Window can be treated as a selected area (gisSelect) or map can be dragged within windows (TGIS_ViewerMode.Drag). Normally TGIS_ViewerMode.Select.
Available also on: Delphi VCL | .NET WinForms | .NET WPF | Java.
// Delphi published property Mode : TGIS_ViewerMode read write default TGIS_ViewerMode.Select;
// C++ Builder published: __property TGIS_ViewerMode* Mode = {read, write, default=TGIS_ViewerMode.Select}; | https://docs.tatukgis.com/DK11/api:dk11:delphi:fmx.gisviewerwnd.tgis_viewerwnd.mode | 2020-03-28T21:15:41 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.tatukgis.com |
CONTENT¶
This object is designed to generate content by making it possible to finely select records and rendering them.
The register-key SYS_LASTCHANGED is updated with the tstamp-field of the records selected which has a higher value than the current.
The cObject RECORDS in contrast is for displaying lists of records from a variety of tables without fine graining.
Property
table
Description
The table, the content should come from. Any table can be used; a check for a table prefix is not done.
In standard configuration this will be “tt_content”.
Property
renderObj
Description
The cObject used for rendering the records resulting from the query in .select.
If .renderObj is not set explicitly, then < [table name] is used. So in this case the configuration of the according table is being copied. See the notes on the example below.
Default
< [table name].
.collect: (integer /stdWrap) If set, all content elements found on the current and parent pages will be collected. Otherwise, the sliding would stop after the first hit. Set this value to the amount of levels to collect on, or use “-1” to collect up to the siteroot.
.collectFuzzy: (boolean /stdWrap) Only useful in collect mode. If no content elements have been found for the specified depth in collect mode, traverse further until at least one match has occurred.
.collectReverse: (boolean /stdWrap) Reverse order of elements in collect mode. If set, elements of the current page will be at the bottom.
Note: The sliding will stop when reaching a folder. See $cObj->checkPid_badDoktypeList.
[tsref:(cObject).CONTENT]
Example:¶.
Example:¶ { ..... } | https://docs.typo3.org/m/typo3/reference-typoscript/7.6/en-us/ContentObjects/Content/Index.html | 2020-03-28T21:55:17 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.typo3.org |
Research Storage
Latest revision as of 16:23, 13 November 2014
Research Storage is a scalable storage fabric designed to grow with your research operations. Data set sizes are growing dramatically. Keeping enough storage on hand to manage Big Data is a challenge for everyone. The Research Storage service provides nearly unlimited capacity to hold the data important to your research. The service is built using flexible technologies that can support any research data requirement.
[edit] Availability
A 1TB default allocation is available to individual users of the Cheaha HPC system to to help users adhere to cluster storage policies for their scratch and home directories and facilitate analyzing data on cluster. This storage space is available as an internal cluster file system via the standard SSH interface. Data can be transferred to and from the path `/rstore/user/$USER/default`.
IMPORTANT: Data on the cluster is not backed up by default. It is the responsibility of the user to ensure their research data sets are protected against loss. Please see other service offerings below if you are interested in backup services for your data.
Cluster scratch space is designated as a temporary storage location for actively analyzed data sets. Analysis results should be removed from scratch after jobs are completed. Cluster home directory space is designated for use as a workflow support space, holding custom codes and other tools needed to analyze data but not the data itself.
Because many data sets are large, transferring them to and from the cluster can require a significant amount of time. The research storage space now available on the cluster is designated as a place where users can keep data sets over longer periods of time for subsequent analysis and avoid crowding the scratch storage or abusing the home directory space.
Additional research storage space can be allocated for individuals or groups on request according to the rates for Research Storage.
[edit] Description
[edit] Request Storage
[edit]. | https://docs.uabgrid.uab.edu/w/index.php?title=Research_Storage&diff=prev&oldid=4912 | 2020-03-28T21:53:11 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.uabgrid.uab.edu |
Configure the Electronic reporting framework
This topic explains how to set up the basic functionality for Electronic reporting (ER). It also describes the steps that you must complete before you can set up ER.
Prerequisites for ER setup
Before you can set up ER, you must set up the required document types in Document management:
- A document type for Microsoft Office documents that are used as templates for ER reports.
- A document type that is used to store the output of ER reports in the jobs archive.
- A document type that is used to store the output of ER reports so that they can be viewed in other programs.
- A document type that is used to handle files in the ER framework for all other purposes.
For each document type, the following attribute values can be selected.
Set up ER
Use the following procedure to set up the basic functionality of ER for all legal entities.
- On the Electronic reporting parameters page, on the Attachments tab, define the types of documents that should be used for file storage in the ER framework.
- On the LCS tab, define the number of parallel threads that should be used to load an ER configuration from repositories in Microsoft Dynamics Lifecycle Services (LCS), so that the configurations are loaded in the most efficient manner. The value can vary from 1 to 15, depending on the available resources of the current program. Note that the real number of threads will be defined automatically, based on this setting, and on the number of other tasks and their priorities.
- On the Configuration provider table page, create ER provider records. Each provider can be marked as Active. The active provider’s name and Internet address are stored in an ER configuration as attributes of the owner of the configuration.
Optional setup for ER
In addition to the basic functionality, ER has other functionality that you can set up.
- On the Electronic reporting destination page, define the ER output destinations for each file output of each ER format configuration. Use the document types of the Document management framework that you set up earlier. You can also use this page to set up the optional functionality of ER for each legal entity. For more information, see the topic about ER destinations that is linked in the "See also" section of this topic.
- Whenever you add new Application Object Tree (AOT) artifacts or update existing AOT artifacts that are used as data sources (tables, views, or data entities) in ER, use the Rebuild table references menu item (Organization administration > Electronic reporting > Rebuild table references) to bring your AOT changes into the ER metadata.
Frequently asked questions
Question: What is the optimal number of parallel threads to use to load an ER configuration from LCS?
Answer: To calculate the optimal number of parallel threads, use the following empirical formula: Cores ÷ 2 + 1(2). For example, if the program runs on a virtual machine (VM) that has two CPUs, and each CPU contains four cores, the optimal number is five or six parallel threads.
Question: I have added a custom table to the AOT. I created a new ER model mapping configuration for my ER data model. During the design of the model mapping, I tried to add a new data source type, Table records, that refers to my table. I could manually add my table name to the Table lookup, and the ER model mapping accepted it without errors or warnings. However, my table’s name isn't included in the list of available choices that the Table lookup of this data source offers. How do I include the name of my table?
Answer: To include the name of your custom table in the Table lookup, use the Rebuild table references menu item as described in the Optional setup for ER" section earlier in this topic.
Question: Why can’t I mark the Microsoft provider as Active in the Electronic reporting workspace in my production environment?
Answer: The Microsoft provider is used to mark ER configurations that have been designed and maintained by Microsoft. We expect that Microsoft will release new versions of the configurations in the future. We recommend that you not mark the Microsoft provider as Active. Otherwise, you can update the configurations. (For example, you can change the content and register new versions.) These updates will cause issues in the future, when Microsoft provides new versions of the configurations, and those new versions must be imported and adopted. Instead, register a new ER provider for your company, and use it for your ER configurations maintenance. To reuse a Microsoft configuration, select it as the base for your derived copy. To incorporate changes that are provided by Microsoft, rebase your configuration to a new version of the Microsoft configuration when it becomes available. | https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/analytics/electronic-reporting-er-configure-parameters | 2018-01-16T17:41:37 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.microsoft.com |
The size of a virtual disk is limited to 8 TB. However, your hardware version, bus type, and controller type also impact the size of your virtual disks.
To discover your SCSI controller type, open the virtual machine .vmx file. The value of the setting
scsi0.virtualDev determines your SCSI controller type. | https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/14.0/com.vmware.player.linux.using.doc/GUID-0E980DDB-5050-4109-A9BC-596572ED722D.html | 2018-01-16T17:37:16 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.vmware.com |
Study of Registration Practices of the
COLLEGE OF DIETITIANS OF ONTARIO, 2007
ISBN 978-1-4249-6454 Dietitians of Ontario (CDO) Dietitians Dietitians of Ontario (CDO) operates in accordance with the Dietetics Act, 1991 and the Regulated Health Professions Act, 1991.
Only dietitians registered with the CDO can use the title ”Dietitian” or ”Registered Dietitian” (RD), or some variation, abbreviation or equivalent of these titles in another language. Only dietitians registered with the CDO can hold themselves out to the public as a person who is qualified to practise in Ontario as a dietitian.
In Ontario, registered dietitians are educated in the sciences related to foods and human nutrition, and they are trained to apply their knowledge in a variety of settings. Registered dietitians are required to continue their professional development in order to expand the competencies needed for delivering safe, ethical and high-quality dietetics services.
The practice of registered dietitians is varied and includes the following:
clinical assessment
treatment and care planning
health promotion and disease prevention
food and nutrition product development and promotion
food service systems
public policy
education and research.
Registered dietitians practise in many different settings, such as hospitals, public health units, home care agencies, primary care centres, long-term care facilities, private practice, fitness centres, industry and government.
Other practitioners, such as nutritionists, may provide nutrition advice, but under Ontario law only registered dietitians are held accountable to the public and to the CDO for their conduct, the quality of their care and the nutrition services that they provide.
There is a definite shortage of dietitians throughout Ontario, but especially in rural areas and northern Ontario. Ontario is a net importer of dietitians from other provinces, registering almost as many dietitians from out-of-province as are graduated from dietetic programs in Ontario.
The labour market for dietitians has been expanding in recent years because of the expansion of government-funded positions and increasing demand for private practice dietitians.
The CDO’s staff consists of nine employees (8.5 full-time equivalents), three of whom are involved in the registration process.
There are nine categories of registration requirements for the General Certificate of Registration.
Academic Preparation: All applicants must have graduated from an accredited program in foods and nutrition from a Canadian university, or an equivalent that meets the academic subject-area requirements for accredited programs in dietetics.
Practical Training: All applicants must complete the practical training requirement: an internship program, practicum, or program of practical experience that is accredited (or approved as equivalent to accredited) by the Council of CDO.
Language Proficiency: All applicants must be able to speak and write either English or French with reasonable fluency. The CDO will require an applicant to take language tests if his or her language of dietetic education is not English or French.
Record of Ethical and Competent Practice: The applicant must satisfy the CDO that he or she has not been found guilty of, and is not the subject of any current proceedings for, professional misconduct, incompetence or incapacity in Ontario or any other jurisdiction in relation to the practice of dietetics or any other profession.
Good Conduct: The applicant must not have been found guilty of any criminal offence or an offence under the Food and Drug Act (Canada) or the Narcotic Act (Canada).
Upgrading Requirement: If an applicant completed the academic and practical training requirements more than three years before the date of application, he or she must have practised safely as a registered dietitian or must have successfully completed a refresher or upgrading program approved by the CDO's Registration Committee.
Canadian Citizenship/Permanent Residency Requirement: The applicant must be a Canadian citizen or a permanent resident of Canada or must be authorized under the Immigration Act (Canada) to engage in the practice of dietetics.
Canadian Academic and Practical Training (CAPT): If an applicant was trained outside of North America and meets the CDO’s academic and practical training requirements, he or she must complete one advanced course in clinical nutrition, and practical training under the supervision of a registered dietitian in Canada. The purpose of the CAPT requirement is to familiarize applicants with current dietetic practice in Ontario. This helps them prepare for the Canadian Dietetic Registration Examination.
Examination Requirement: All applicants must successfully complete the Canadian Dietetic Registration Examination (CDRE), unless they have continuously registered since October 1, 1998, with the dietetic regulatory body of one of the following provinces: British Columbia, Alberta, Saskatchewan, Manitoba, New Brunswick, Nova Scotia, Prince Edward Island or Newfoundland/Labrador.
An applicant who has satisfied the nine categories of registration requirements is eligible to apply for a General Certificate of Registration.
An applicant who has met the first eight categories of requirements, and who has already applied to write the CDRE, may be issued a Temporary Certificate to practise under the title of “registered dietitian.” Applicants who have previously held a Temporary Certificate and/or have previously failed the examination will not be issued a Temporary Certificate.
The particular application process that an applicant must follow depends on his or her academic and practical experiences. Please see the CDO website for complete details. To satisfy the basic requirements in categories 1 and 2, applicants should submit documents as described below.
Category 1: Academic Qualifications
Graduates from academic programs accredited by Dietitians of Canada must submit the following to the CDO:
A photocopy of academic degree(s)
Official transcripts, sent directly to the CDO from the transcript offices of all the universities that the applicant attended.
Graduates from a foods and nutrition university program that is not accredited by Dietitians of Canada must submit the following:
A photocopy of academic degree(s)
Official transcripts, sent directly to the CDO from the transcript offices of all the universities that the applicant attended
Four official copies of course descriptions of all courses completed (e.g., program handbook, calendar, syllabus)
A completed Educational Summary form for each degree/diploma obtained, including information about the number of hours per week and the number of weeks attended for each course completed.
Category 2: Practical Training Program
Graduates from a practical program accredited by Dietitians of Canada must submit the following to the CDO:
A copy of the applicant’s internship certificate
An original letter from the applicant’s internship coordinator confirming that the applicant completed his or her practical experience.
Graduates from a foods and nutrition university program that is not accredited by Dietitians of Canada must submit one of the following:
Four copies (one original and three copies) of the applicant’s competency package (Entry-Level Competency); or
Four copies of a Master’s Competency package — if the practicum component of the degree is not accredited by Dietitians of Canada.
Additional Requirements
Any applicant who has completed dietetic training more than three years ago must also submit the following:
A copy of his or her resumé or curriculum vitae
A completed Upgrading/Refresher Programs and Continuing Education Activities form, which is available at the CDO
A completed Verification of Private Practice form, if the applicant has been in private practice within the past three years before application. The CDO will advise the applicant about the use of this form.
Documents Under a Different Name
If an applicant submits any documents that are under a different name than the one he or she is using at the time of registration, the applicant must provide proof of the change of name (e.g., a copy of a marriage certificate).
In addition to the documents that must be submitted by domestically educated and trained applicants, applicants with international credentials must submit the following documents:
Proof of language testing scores, if the applicant’s first language of instruction is neither English nor French
Verification from Comparative Education Services (University of Toronto) that the applicant’s level of education is equivalent to a Canadian university degree
Original transcripts, sent directly to the CDO by the educational institution, or original transcripts with a notarized translation if the transcripts are in neither English nor French
A completed Education Summary form with attached course descriptions, issued by the university or universities (course syllabus)
A letter sent directly to the CDO by the institution that provided the applicant’s program of practical training, or the original letter with a notarized translation, if the original letter was not written in English or French
All documents needed to meet the CAPT requirement.
There is no formal process for applicants who are unable to provide documentation. The CDO has only ever had one case where an internationally trained applicant did not have access to documents because they were lost. In that case, the person was not registered.
The CDO helps applicants to locate required documents or alternative documentation.
Before applying to the CDO, each internationally trained applicant must have his or her academic degrees assessed by Comparative Education Services (CES) at the University of Toronto to determine whether the degrees are equivalent to university-level degrees in Canada.
All applicants, domestically or internationally trained, who lack an accredited foods and nutrition education program and/or an accredited program of dietetics practical training must have their academic credentials and/or practical training assessed for equivalency by the CDO’s Registration Committee.
All applicants must have graduated from an academic program in foods and nutrition from a university that is accredited by Dietitians of Canada, or an equivalent that meets the academic subject-area requirements for accredited programs in dietetics.[1] Equivalency is defined as any university program in nutrition that meets the following subject-area requirements:
12 credits in humanities/social sciences
9 credits in natural sciences (3 each in general chemistry, organic chemistry and microbiology)
18 credits in professional subjects (3 each in basic foods, advanced foods, basic principles of management and communications arts and 6 in basic human nutrition)
9 credits in human nutrition (3 each in advanced, clinical and community nutrition)
9 credits in food service systems management (food service systems organizations and management, quantity food production management, food service facilities, cost control and accounting, and personnel)
12 credits in supporting sciences (9 total in biochemistry and physiology and 3 total in advanced social science, statistics and computers).
There are no formal work experience requirements to be registered with the CDO. However, in order to be eligible for registration, all applicants must have some accredited practical training.
The council of the CDO approves as an equivalent to its practical experience requirements any internship, practicum, or program of practical experience that is successfully completed after the applicant's academic training and that:
Is at least 35 weeks in duration
Is supervised by a registered dietitian
Involves ongoing formal evaluation of performance against competencies that are substantially similar to those used by Dietitians of Canada in accrediting an internship program or practicum. The applicant’s program must comprehensively cover those competencies.
The following institutions in Ontario offer internship programs that have been accredited by Dietitians of Canada:
Aramark Canada Ltd., Toronto
Grand River Hospital, Kitchener
Hamilton Health Sciences Corporation, Hamilton
London Health Sciences Centre, London
Mount Sinai Hospital, Toronto
North York General Hospital, Toronto
St. Michael's Hospital, Toronto
Sunnybrook & Women's College Health Science Centre, Toronto
The Hospital for Sick Children, Toronto
The Ottawa Hospital (Ottawa Hospital Internship Program), Ottawa
University Health Network, Toronto.
The following internship programs are also accredited:
Northern Ontario Dietetic Internship (Northern School of Medicine), Sudbury/Thunder Bay
Southeastern Ontario Dietetic Internship Program (Kingston, Frontenac and Lennox & Addington Public Health), Kingston.
In addition, the following master’s degree programs offer accredited practical training:
Masters of Applied Nutrition Program, University of Guelph
Masters of Health Sciences, Program in Community Nutrition, University of Toronto
Masters in Health Sciences in Foods and Nutrition, Brescia University College, University of Western Ontario.
Dietitians of Canada maintains an up-to-date list of accredited internship programs on its website.
If an applicant was trained outside of North America and meets CDO’s academic and practical training requirements, he or she normally must also fulfill the Canadian Academic and Practical Training (CAPT) requirement, by completing:
one advanced course in clinical nutrition
practical training under the supervision of a registered dietitian in Canada.
The duration of the practical training must be at least 10 weeks, but it must be extended if the applicant is assessed as not meeting the practical training outcomes for CAPT.
The purpose of the CAPT requirement is to familiarize applicants with current dietetic practice in Ontario. This helps applicants prepare for the Canadian Dietetic Registration Examination. For more detailed information about the CAPT requirement, visit the CDO website.
All applicants must successfully complete the Canadian Dietetic Registration Examination (CDRE), unless they have been continuously registered since October 1, 1998, with the dietetic regulatory body in one of the following provinces: British Columbia, Alberta, Saskatchewan, Manitoba, New Brunswick, Nova Scotia, Prince Edward Island or Newfoundland/Labrador. Applicants become eligible to write the CDRE after they have satisfied registration requirements 1 through 8 (see section 3.a.i).
The CDRE is a multiple-choice examination that tests knowledge, application and critical thinking in the following areas: assessment, planning, implementation, evaluation, communication and professional practice. The CDRE is designed to confirm competence. It is based on the Competencies for the Entry-Level Dietitian (Dietitians of Canada, 1996) and is being administered in nine provinces in Canada.
Applicants may write the CDRE in English or in French.
An applicant has four years or three opportunities, whichever comes first, to pass the CDRE. The first attempt must be made within one year of becoming eligible to write it. An applicant who fails the first attempt is allowed to write it a second time. After a second failure, the applicant must complete academic and practical upgrading before making a third and final attempt. If an applicant does not pass on the third attempt, within four years of becoming eligible, the applicant is not eligible to continue in the examination process.
For applicants to the CDO, the exam is administered every year in May and November in Toronto and at alternative sites in Ontario on request. Requests must be made in writing at the time of the application for registration.
The Canadian Dietetic Registration Examination (CDRE) Preparation Guide explains the basis of the exam and the examination process and provides sample questions. A copy of the most recent version of the guide is available from the CDO and is posted on the CDO website. It is provided when an applicant has been deemed eligible to write the examination and has paid the exam fee of $400 to the CDO.
In addition, as of 2007, there are plans for providing an online exam-preparation learning module on the CDO website by June 2008.
All applicants must be reasonably fluent in written and oral English or French.
If an applicant's language of instruction for his or her dietetic education and training is not English or French, the applicant must demonstrate language proficiency in either language. Test results must be sent directly to the CDO by the testing organization.
The applicant can demonstrate adequate English language proficiency either by:
Completing Ryerson University’s Internationally Educated Dietitians Pre-registration Program (IDPP), or
Achieving the required scores on the following Educational Testing Service (ETS) tests.
Test of English as a Foreign Language (TOEFL)
Test of Spoken English
A score of 50 is required on a Test of Spoken English (TSE), or a speaking component score of 26 on a TOEFL (iBT), the Internet-based TOEFL test.
An applicant can prove his or her French proficiency through:
A TESTCan score of 4.5 in listening and reading, and
A writing score of Band 4.0, and
An interview test that confirms speaking performance.
The fees listed below are the total fees. There are no additional taxes.
Once an applicant meets all the registration requirements, he or she may be registered within days. The practical training/internship process usually takes approximately one year to complete.
Once an application has the necessary documents, it is assessed by the Registration Committee at their next scheduled meeting. The Registration Committee normally meets eight or nine times a year.
The following Ontario universities offer undergraduate programs in dietetics that have been accredited by Dietitians of Canada, a national association that carries out program accreditation.
University of Western Ontario, Brescia University College
Department of Food and Nutritional Sciences
London
Ryerson University
School of Nutrition
Toronto
University of Guelph
Department of Family Relations and Applied Nutrition
Guelph
The following nutritional sciences program is new and has not yet been accredited by Dietitians of Canada.
Programme de Baccalauréat en Sciences de la nutrition[2]
Faculté des Sciences de la santé
Université d’Ottawa
Ottawa
For a full list of accredited programs in Canada, visit the website of Dietitians of Canada.
Internal reviews are carried out by the Registration Committee of the CDO. The Registration Committee is composed of eight people, four of whom are public appointees and four of whom are elected registered dietitians.
Applications are assessed by a panel of the Registration Committee. Each panel must be composed of at least three members of the Registration Committee and must have a combination of public appointees and registered dietitians.
An applicant may appeal the decision of the Registration Committee or a panel of the Registration Committee to the Health Professions Appeal and Review Board (HPARB).
Complaints about the CDO’s services to applicants and about timeliness are considered by the Registrar and Executive Director.
The Raymond Chang School of Continuing Education at Ryerson University offers a bridging program, called the International Dietitians Pre-registration Program (IDPP), to help internationally educated applicants complete education and training to meet the CDO’s registration requirements. The program delivers university credits in food and nutrition and provides practical training and assessment that meets the Canadian Academic and Practical Training (CAPT) requirements (see 3.e.iii).
For the purposes of registration with the CDO, applicants who complete the IDPP are normally assessed to have the equivalent to an accredited internship program. In addition, the IDPP provides applicants with mentors, exposes them to a network of colleagues across different work settings, facilitates the job search process and helps applicants to prepare for the Canadian Dietetic Registration Examination.
Note: Applicants who complete the bridging program are exempted from having to demonstrate their language proficiency through language tests.
The program is one year in length and has two intakes a year. It costs approximately $3,000 (Canadian).
In compliance with the Agreement of the Alliance of Canadian Dietetic Regulatory Bodies, which came into effect on October 1, 1998, and under the CDO’s registration regulations, an individual registered with another Canadian dietetic regulatory body, other than the Ordre professionnel des diététistes du Québec, is deemed to have met the academic, practical training and examination requirements for a CDO General Certificate of Registration, if the individual fulfills all of the following conditions:
When applying to the CDO, the applicant is still registered with a regulatory body of dietitians in another jurisdiction in Canada.
The applicant was registered with a regulatory body of dietitians in Canada on October 1, 1998, or has passed the Canadian Dietetic Registration Examination (CDRE) before the date of application to the CDO.
The applicant's registration in the other jurisdiction (from October 1, 1998 onward) was not in a restricted category such as temporary, qualifying, honorary, retired, inactive, associate or special.
When applying to the CDO, the applicant's registration in the other jurisdiction (from October 1, 1998 onward) is not subject to any conditions, restrictions or limitations other than those that apply to all members.
Registration with the regulatory body in one jurisdiction is not transferable to the regulatory body in another jurisdiction. Therefore, the applicant must apply to the CDO using standard application procedures. In some instances, the applicant's qualifications for registration may be subject to further assessment by the Registration Committee
The CDO maintains regular contact with its applicants, especially to help applicants to submit the necessary documents to complete an application.
The CDO does not experience backlogs in its registration process. Once an application file is complete, it is reviewed by a panel of the Registration Committee at the next scheduled meeting. The committee typically meets eight or nine times a year. Decisions of the Registration Committee panels are normally written and issued within 10 days of the panel decision.
Complaints concerning some aspect of the registration process will be handled by the CDO Registrar. Ultimately, applicants who disagree with a registration decision can take their case to the Health Professions Appeal and Review Board (HPARB).
The Ministry of Citizenship and Immigration conducted a survey in 2005 to collect information about occupational regulatory bodies in Ontario.
Since the 2005 survey, the bridging program has been implemented. Also, the language policy of the CDO has changed, so that applicants who complete the bridging program are exempted from having to demonstrate language proficiency through language tests.
Definitions used in these tables:
Alternative class of licence: a class of licence that enables its holder to practise with limitations; additional registration requirements must be met in order to be fully licensed. Alternative classes of licence granted by the College of Dietitians of Ontario are specified under the tables below.
Applicant: a person who has applied to start the process for entry to the profession.
Applicant actively pursuing licensing: an applicant who had some contact with the College of Dietitians of Ontario within the year specified. The CDO does not track this information.
Inactive applicant: an applicant whose CDO file has been closed because he or she did not submit required information by the CDO’s deadlines.
Member: a person who is currently able to use the protected title or professional designation “dietitian” or ”registered dietitian” (RD), or some variation, abbreviation or equivalent of these titles in another language.
1 Both general and temporary members.
2 “Not employed” members.).
College of Dietitians of Ontario website.. Last accessed: March 28, 2008.
Dietitians of Canada website.. Last accessed: March 28, 2008.
Representatives of the College of Dieticians of Ontario met with staff of the Office of the Fairness Commissioner on December 7, 2007, to provide further information for this study. | http://docs.fairnesscommissioner.ca/docs/dietitians.htm | 2018-01-16T16:54:48 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.fairnesscommissioner.ca |
Product Catalog The product catalog is a set of information about individual models. Models are specific versions or various configurations of an asset. Asset managers use the product catalog as a centralized repository for model information. A detailed and well-maintained product catalog can coordinate with service catalog, asset, procurement, request, contract, and vendor information. Models published to the product catalog are automatically published to the Service Catalog. The service catalog includes information about goods (models) and services. If the model is available from multiple vendors, a model can be listed more than once . Models are included with the Asset Management application. Keep the following in mind when working with the product catalog. A product catalog item can be linked to multiple vendor catalog items or to a single model. A model can only have one product catalog item. A vendor catalog item can only have a single product catalog item. Components installed with Product CatalogThe following components are installed with the Product Catalog plugin. ModelsModels are specific versions or various configurations of an asset. Models are used for managing and tracking assets through various ServiceNow platform asset applications, including Product Catalog, Asset Management, and Procurement.Vendor catalog itemsThe vendor catalog is a list of goods available from different vendors.Product catalog itemsProduct catalog items are hardware and software that you can track and offer in the service catalog.Model categoriesModel categories associate CI classes with asset classes. Model categories are part of the Product Catalog application.Related ConceptsAsset and CI management | https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/product-catalog/concept/c_ProductCatalog.html | 2018-01-16T17:44:06 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.servicenow.com |
Visual task board structure There are different types of task boards for different kinds of task management. All types of boards share the same overall structure. Table 1. Board types Board type Description Freeform boards Display any kind of task record, including personal tasks. Members of freeform boards can add and remove task cards and lanes. Flexible boards Display tasks that match the configured filter against a particular table. Members of flexible boards can add task cards, which are removed automatically when the tasks no longer match the filter conditions. Members can define custom lanes, similar to a freeform board. Guided boards Display tasks that match the configured filter against a particular table, like flexible boards. Members of guided boards can add task cards, which are removed automatically when the tasks no longer match the filter conditions. Guided board lanes correspond to field values and cannot be edited in most cases. The icon beside the board name on the Task Boards page identifies the type of board. Freeform boards appear with a grid of four squares (); flexible boards appear with a vertical line beside two squares (); guided boards appear with two vertical lines (). Figure 1. Task Boards screen All boards have these elements: Table 2. Board elements Element Description Quick panel Displays labels and users associated with the board. Board members can use the quick panel to filter cards or to quickly label or assign tasks. Members can also configure what appears in the quick panel. Lanes Organize cards on a board into vertical groups. These groups often represent the status of the task, such as To Do, Doing, and Done. Each board is composed of one or more lanes. When using a guided board, each lane represents a possible field value. For example, a board on the Incident table can display one lane for each State value such as New, Active, or Resolved. Users can move cards from one lane to another to update the task that card represents. Cards Represent individual tasks. Users can add comments, attachments, and labels to cards and assign users to cards. Each card is tied to a task record; updating one immediately updates the other. For freeform boards, each card represents a personal task. For flexible and guided boards, each card represents a record from the list that board was created from. Task board tools Displays board information, a board members, the board activity stream, and board labels. Figure 2. Board layout | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/use/visual-task-boards/reference/r_BoardStructure.html | 2018-01-16T17:44:24 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.servicenow.com |
»
73 Op. Att'y Gen. 37 (1984)
Up
Up
73 Op. Att'y Gen. 37, 37 (1984)
Hospitals; Law Enforcement; Licenses And Permits; Public Records; Regulation And Licensing, Department Of;
The public records law permits the Department of Regulation and Licensing to refuse to disclose records relating to complaints against health care professionals while the matters are merely "under investigation"; good faith disclosure of the same will not expose the custodian to liability for damages; and prospective continuing requests for records are not contemplated by the public records law. OAG 10-84
February 17, 1984
73 Op. Att'y Gen. 37, 38 (1984)
BARBARA NICHOLS,
Secretary
Department of Regulation and Licensing
73 Op. Att'y Gen. 37, 38 (1984)
You have asked for my opinion regarding access under the open records law to certain investigative files in your custody.
73 Op. Att'y Gen. 37, 38 (1984)
Your Department and the various licensing and regulatory boards created in your Department are responsible for the regulation and licensing of a variety of professions. You are presently concerned with the boards that oversee health care providers, namely the Dentistry Examining Board, Medical Examining Board, Board of Nursing and Pharmacy Examining Board. You state that a major newspaper has requested a current list and monthly update of all pending investigations before those boards.
73 Op. Att'y Gen. 37, 38 (1984)
Pursuant to section 440.20, "[a]ny person may file a complaint before any examining board and request any examining board to commence disciplinary proceedings against any permittee, registrant or license or certificate holder."
73 Op. Att'y Gen. 37, 38 (1984)
Section RL 2.03(7) of the Wisconsin Administrative Code defines "informal complaint" as follows:
73 Op. Att'y Gen. 37, 38 (1984)
"Informal complaint" means any written information submitted to the division [of enforcement] or any board by any person which requests that a disciplinary proceeding be commenced against a licensee or which alleges facts, which if true, warrant discipline. "Informal complaint" includes requests for disciplinary proceedings as specified in s. 440.20, Stats.
73 Op. Att'y Gen. 37, 38 (1984)
You state that most informal complaints come from sources "lacking sufficient expertise to evaluate the appropriateness of the professional practice alleged or the legality of the conduct." All informal complaints are subject to an initial screening pursuant to section RL 2.035 of the Wisconsin Administrative Code, which reads as follows:
73 Op. Att'y Gen. 37, 38-39 (1984)
All informal complaints received shall be referred to the division for filing, screening and, if necessary, investigation. Screening shall be done by the board, or, if the board directs, by a board member or the division. In this section, screening is a preliminary review of complaints to determine whether an investigation is necessary. Considerations in screening include, but are not limited to:
73 Op. Att'y Gen. 37, 39 (1984)
(1) Whether the person complained against is licensed;
73 Op. Att'y Gen. 37, 39 (1984)
(2) Whether the violation is a fee dispute;
73 Op. Att'y Gen. 37, 39 (1984)
(3) Whether the matter alleged, if taken as a whole, is trivial; and
73 Op. Att'y Gen. 37, 39 (1984)
(4) Whether the matter alleged is a violation of any statute, rule or standard of practice.
73 Op. Att'y Gen. 37, 39 (1984)
You state that as a practical matter this provision is used only as a broad jurisdictional screen and matters are routinely placed "under investigation" without any preliminary evaluation of the merits. Therefore, a very high percentage of informal complaints are identified in department records as being "under investigation."
73 Op. Att'y Gen. 37, 39 (1984)
The Division of Enforcement conducts investigations of all persons and entities identified as "under investigation." If the investigation discloses a violation of law a formal complaint may be drafted and a disciplinary proceeding commenced by the filing of a Notice of Hearing with the respective board office and the designated hearing examiner. The threshold burden for issuance of a formal complaint varies from board to board. The Medical Examining Board must make a finding of probable cause after the investigation is substantially completed and before a formal complaint can issue and a disciplinary proceeding can be commenced. Other boards do not have this specific probable cause requirement for issuance of a formal complaint. Instead, the decision to issue a formal complaint is controlled by the professional and ethical constraints of the prosecuting attorney and the respective board. Formal complaints are not issued until the investigation has been substantially completed and a violation of law identified. If after a hearing on the allegations of the formal complaint the board determines that a violation of law has occurred, it may reprimand, suspend, revoke or limit the license of the licensee.
73 Op. Att'y Gen. 37, 39-40 (1984)
The investigations conducted by the Division of Enforcement result in a substantial number of the informal complaints "under investigation" being closed without commencement of any formal disciplinary proceeding. The majority of these cases are closed because the investigation did not result in the collection of evidence sufficient to form a basis for prosecution. More specifically, 98% of the Dentistry Board, 82% of the Medical Board, 92% of the Pharmacy Board and 59% of the Board of Nursing investigations completed between January 1, 1983, and July 31, 1983, were closed without commencement of formal disciplinary proceedings.
73 Op. Att'y Gen. 37, 40 (1984)
You also state that matters "under investigation" are treated differently so that the apparently less serious allegations or weaker cases may remain "under investigation" for longer periods of time, thus possibly creating a false impression as to the severity or extent of an alleged violation if the information were publicized.
73 Op. Att'y Gen. 37, 40 (1984)
You state the following with respect to the rights and interests of persons under investigation:
73 Op. Att'y Gen. 37, 40 (1984)
The licensee has no meaningful legal recourse to challenge his status as "under investigation" during the pendency of the investigation. The Department's action at this phase of the administrative process is probably not reviewable in any legal forum.
73 Op. Att'y Gen. 37, 40 (1984)
Physicians, dentists, pharmacists and nurses have significant reputational interests to protect. Their professional and economic success and well being are directly related to the image they maintain in both the public and private sectors. A professional will not make a referral to another professional who he or she suspects may be incompetent. Similarly, a member of the public will not seek health care from an individual who he or she perceives as possessing questionable skill and knowledge.
73 Op. Att'y Gen. 37, 40 (1984)
You ask:
73 Op. Att'y Gen. 37, 40 (1984)
1. Under the facts and circumstances herein stated, does public records law prohibit the Department from disclosing, 40-41 (1984)
2. Under the facts and circumstances herein stated, does public records law permit the Department to not disclose, 41 (1984)
3. What liability, if any, does the records custodian incur if he or she makes a good faith but incorrect decision to disclose a record in response to a public records request? To what extent, if any, do Wis. Stats. secs. 895.50(2)(c) and 895.50(3) provide immunity from liability for a records custodian who makes a good faith but incorrect decision to disclose a record in response to a public records request?
73 Op. Att'y Gen. 37, 41 (1984)
4. What obligation, if any, does the Department have under the public records law to honor prospective requests for monthly updates of records not in the possession of the agency at the time the request is made?
73 Op. Att'y Gen. 37, 41 (1984)
As to questions 1 and 2, it is my opinion that the public records law does not prohibit disclosure but does permit nondisclosure under the facts and circumstances described.
73 Op. Att'y Gen. 37, 41 (1984)
In order to find a prohibition against disclosure there must be a specific statutory provision which establishes the prohibition. Sec. 19.36(1), Stats. I am aware of none pertaining to the records in question.
73 Op. Att'y Gen. 37, 41 (1984)
However, section 19.35(1) provides:
73 Op. Att'y Gen. 37, 41 (1984).
73 Op. Att'y Gen. 37, 41 (1984)
Section 19.85(1)(b) and (f) authorize closed meetings for the following purposes:
73 Op. Att'y Gen. 37, 41-42 (1984)
(b)
Considering dismissal, demotion,
licensing or discipline of any
public employe or
person licensed by a board
or commission
or the investigation of charges against such person
, or considering the grant or denial of tenure for a university faculty member, and the taking of formal action on any such matter; provided that the faculty member or other public employe employe or person licensed requests that an open session be held.
73 Op. Att'y Gen. 37, 42 (1984)
....
73 Op. Att'y Gen. 37, 42 (1984)
.
73 Op. Att'y Gen. 37, 42 (1984)
It is my opinion that the tentativeness of matters "under investigation" in your Department would justify nondisclosure until it is decided whether to commence disciplinary proceedings. In particular, I am struck by the fact that matters are placed "under investigation" with minimal if any preliminary evaluation of the merits and a very small percentage of the informal complaints ultimately result in formal proceedings. I am also persuaded that the reputational interests at stake are predictably substantial and that improvident public disclosure that investigations are pending would have an undue adverse effect on professional reputations in the vast majority of cases where formal disciplinary proceedings are ultimately deemed unwarranted.
73 Op. Att'y Gen. 37, 42 (1984)
I agree with your opinion that these circumstances are different from those involved in
Newspapers, Inc. v. Breier
, 89 Wis. 2d 417, 279 N.W.2d 179 (1979). The supreme court held that the daily arrest list, or "blotter," kept by a police department is open to inspection. The court said:
73 Op. Att'y Gen. 37, 42-43 (1984).
73 Op. Att'y Gen. 37, 43 (1984)
Breier
, 89 Wis. 2d at 438.
73 Op. Att'y Gen. 37, 43 (1984)
Although dicta, the court distinguishes between pending investigations and cases where probable cause has been found and an arrest made based thereon. In the former situation the court anticipates that while an investigation is in flux private reputational interests as well as law enforcement interests will outweigh the general public interest in access to public records.
73 Op. Att'y Gen. 37, 43 (1984)
Also, the court in
Breier
expressly declined to decide whether there is public access to "rap sheets" which contain arrest records of individuals. But the court did say that the public policy reasons for the disclosure or nondisclosure of "rap sheet" records may differ markedly from the reasons which led the court to rule the police blotter accessible.
Breier
, 89 Wis. 2d at 424.
73 Op. Att'y Gen. 37, 43 (1984)
These statements by the court reveal a sensitivity to reputational interests of persons under investigation and indicate it may very well be proper to keep investigative files confidential until the investigation is substantially completed. This sensitivity is also reflected in Supreme Court Rule 22.24 relating to investigations of attorneys and section 757.93 relating to investigations of judges.
73 Op. Att'y Gen. 37, 43 (1984)
The court's statements in
Breier
do not expressly sanction the nondisclosure of investigative files, but they serve to suggest that such a position is not patently indefensible and may be entirely appropriate. This indicates to you that it would not be unreasonable to decide to keep your pending investigation files confidential.
73 Op. Att'y Gen. 37, 43-44 (1984)
Attendant to a decision to keep pending investigation files confidential is an obligation to ensure that investigations are conducted expeditiously and that the decision to close the investigation or pursue disciplinary action is made without unnecessary delay. It would not be justified to broadly characterize inactive files as pending investigations so as to foreclose public access.
73 Op. Att'y Gen. 37, 44 (1984)
I also want to make very clear that this opinion relates only to pending investigation files. Once an investigation is completed and the decision whether to pursue disciplinary action is made, there may no longer be sufficient reasons for keeping the file confidential. Specifically, the concern for a premature and probable adverse effect on the reputation of a licensee being "under investigation" is allayed when the file also discloses that the investigation found no basis for pursuing disciplinary action. It may still be justified in some cases to decline to disclose some or all of an investigative file even after the investigation is resolved. However, these determinations should be rare and must be made on a case-by-case basis.
73 Op. Att'y Gen. 37, 44 (1984)
As to your third question, it is my opinion that if a custodian makes a good faith decision to disclose a public record, the custodian would be generally immune from liability by virtue of the principle of public officer immunity as described in
Lister v. Board of Regents
, 72 Wis. 2d 282, 300, 240 N.W.2d 610 (1976), and specifically immune from liability for invasion of privacy by virtue of section 895.50(2)(c). If information relating to a matter "under investigation" is disclosed, you should nevertheless stress that it is inappropriate to draw any adverse conclusions from the mere pendency of the investigation.
73 Op. Att'y Gen. 37, 44 (1984)
As to your fourth question regarding prospective continuing requests for monthly updates, it is my opinion that the open records law does not contemplate that such a request be honored. The right of access applies only to extant records, and the law contemplates custodial decisions being made with respect to a specific request at the time the request is made. Secs. 19.32(2) and 19.35(1)(a), (h) and (4), Stats.
73 Op. Att'y Gen. 37, 44 (1984)
BCL:RWL
___________________________
Down
Down
/misc/oag/archival/_129
false
oag
/misc/oag/archival/_129
oag/vol73-37
oag/vol73-37
section
true
»
Miscellaneous Documents
»
Opinions of the Attorney General
»
Opinions of the Attorney General - prior to 2000
»
73 Op. Att'y Gen. 37 | http://docs.legis.wisconsin.gov/misc/oag/archival/_129 | 2018-01-16T17:26:07 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.legis.wisconsin.gov |
As a Cluster Administrator or Cluster Operator, you can enable each service in your stack to re-start automatically. Enabling auto-start for a service causes the ambari-agent to attempt re-starting service components in a stopped state without manual effort by a user. Auto-Start Services is enabled by default, but only the Ambari Metrics Collector component is set to auto-start by default.
As a first step, you should enable auto-start for the worker nodes in the core Hadoop
services, the DataNode and NameNode components in YARN and HDFS, for example. You should also
enable auto-start for all components in the SmartSense service. After enabling auto-start,
monitor the operating status of your services on the Ambari Web dashboard. Auto-start attempts
do not display as background operations. To diagnose issues with service components that fail
to start, check the ambari agent logs, located at:
/var/log/ambari-agent.log on the component host.
To manage the auto-start status for components in a service:
Steps
In Auto-Start Services, click a service name.
Click the grey area in the Auto-Start Services control of at least one component, to change its status to
Enabled.
The green icon to the right of the service name indicates the percentage of components with auto-start enabled for the service.
To enable auto-start for all components in the service, click
Enable All.
The green icon fills to indicate all components have auto-start enabled for the service.
To disable auto-start for all components in the service, click
Disable All.
The green icon clears to indicate that all components have auto-start disabled for the service.
To clear all pending status changes before saving them, click Discard.
When you finish changing your auto-start status settings, click Save.
- -
To disable Auto-Start Services:
Steps
In Ambari Web, click Admin > Service Auto-Start.
In Service Auto Start Configuration, click the grey area in the Auto-Start Services control to change its status from
Enabledto
Disabled.
Click Save.
More Information
Monitoring Background Operations | https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.0/bk_ambari-operations/content/enable_service_auto_start.html | 2018-01-16T17:23:35 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.hortonworks.com |
Workflow scope Workflow application scope determines the access that an application has to the information in a workflow, specifically to the data contained in the activities in that workflow. When a workflow is created, it inherits the application scope from the gear menu for the logged in user. This scope cannot be changed in the workflow editor. When the workflow executes, it runs in this scope and can only be called from a different application if the workflow’s accessibility setting permits access to all scopes (public). Otherwise, the workflow’s application scope is private to the application. Note: Any script that is created in the workflow editor, such as an advanced script in an If activity, runs in the scope of the workflow. All core activities provided in the base system or for Orchestration run in the scope of the workflow. Custom activities run in their own scope, even if it is different from that of the workflow. The scope of a custom activity can be private or public.. Workflow scope restrictions There are some restrictions to public and private application scopes. During runtime, publicly scoped workflows can access other application resources, as long as these resources are set to be accessible to all application scopes. Privately scoped workflows in a private application scope can only access resources private to its scope. Due to scope access boundaries, any privately scoped workflows that make calls out to other scoped resources fail with either an exception or a hung activity while waiting for returned results. This occurs when making calls to these common global resources: ECC queues Tasks Approvals Events SLA timers Timers Script includes Business rules Workflow APIs As you design workflows, validate the visibility and accessibility of all resources prior to deployment. See Application scope. For information on how to configure the scope for a workflow, see Workflow properties. | https://docs.servicenow.com/bundle/jakarta-servicenow-platform/page/administer/using-workflows/concept/c_WorkflowScope.html | 2018-01-16T17:48:11 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.servicenow.com |
use
use
This article is In Progress.
W3C Recommendation
Needs Summary: This article does not have a summary. Summaries give a brief overview of the topic and are automatically included on some listing pages that link to this article.
Overview Table
Compatibility
Desktop
Mobile
Examples
View live exampleHere, the use element is used to create three instances of a rectangle defined in a defs tag.
<!DOCTYPE html> <html> <head></head> <body> <svg> <defs> <rect id="myRect" width="50" height="50" /> </defs> <use x="50" y="50" fill="red" xlink: <use x="75" y="75" fill="green" xlink: <use x="100" y="100" fill="blue" xlink: </svg> </body> </html>
Notes
Remarks
Note: In addition to the attributes, properties, events, methods, and styles listed above, SVG elements also inherent core HTML attributes, properties, events, methods, and styles.
For more information, see the SVG specification.
Standards information
- Scalable Vector Graphics: Document Structure, Section 5.11.8
The SVGUseElement object has these events:
The SVGUseElementUseElement object has these properties:
- animatedInstanceRoot: Gets the animated root of the instance tree of a use element.
-).
- height: Gets or sets the height of an element.
- href: Gets an xlink:href attribute of an element.
- instanceRoot: Gets the root of the instance tree of a use element.
- mask: Sets or retrieves a value that indicates a SVG mask.
- nearestViewportElement: Gets a value that indicates which element established the current viewport.
- ownerSVGElement: Gets the nearest ancestor svg element.
- pointerEvents: Sets or retrieves a value that specifies under what circumstances a given graphics element can be the target element for a pointer event in SVG.
-.
-] | https://docs.webplatform.org/wiki/svg/elements/use | 2015-05-22T09:56:40 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.webplatform.org |
Difference between revisions of "Development FAQ"
From Joomla! Documentation
Revision as of 07:18, 19 March 2008
SVN.
See for an explanation of the concepts behind branching and merging.
Breaking a Lock on a File. | https://docs.joomla.org/index.php?title=Development_FAQ&diff=3397&oldid=3246 | 2015-05-22T11:15:51 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Difference between revisions of "Why should you immediately change the name of the default admin user?"
From Joomla! Documentation
Latest revision as of 05:29, 3 May 2013
Joomla 1.5
!
Bold text
Joomla 2.5
Joomla 2.5 let's you choose the name of your Super Administrator account while installing, so you don't need to rename it later. | https://docs.joomla.org/index.php?title=Why_should_you_immediately_change_the_name_of_the_default_admin_user%3F&diff=cur&oldid=73681 | 2015-05-22T11:12:34 | CC-MAIN-2015-22 | 1432207924919.42 | [array(['/images/c/c8/Compat_icon_1_5.png', 'Joomla 1.5'], dtype=object)
array(['/images/5/53/Compat_icon_2_5.png', 'Joomla 2.5'], dtype=object)] | docs.joomla.org |
Difference between revisions of "JToolBar::appendButton"::appendButton
Description
Set a value.
Description:JToolBar::appendButton [Edit Descripton]
public function appendButton ( )
- Returns string The set value.
- Defined on line 98 of libraries/joomla/html/toolbar.php
See also
JToolBar::appendButton source code on BitBucket
Class JToolBar
Subpackage Html
- Other versions of JToolBar::appendButton
SeeAlso:JToolBar::appendButton [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JToolBar::appendButton&diff=prev&oldid=57915 | 2015-05-22T11:13:41 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Difference between revisions of "Extensions Module Manager Who Online"
From Joomla! Documentation
Revision as of 12:00, 3 May 2011
Who's Online
This Module displays information about Users currently browsing the web site. An example is shown below:
The Module Type name for this Module is "mod_whosonline". It is not related to any other component.
Module Parameters
- Caching. Always set to "Never".
-. | https://docs.joomla.org/index.php?title=Help16:Extensions_Module_Manager_Who_Online&diff=58109&oldid=58108 | 2015-05-22T10:25:47 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Revision history of "JTableMenuType::check::check/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JTableMenuType::check== ===Description=== {{Description:JTableMenuType::check}} <span class="editsection" style="font-size:76%;"> <nowi..." (and the only contributor was "Doxiki2")) | https://docs.joomla.org/index.php?title=JTableMenuType::check/1.6&action=history | 2015-05-22T11:11:35 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Difference between revisions of "Landing Pages"
From Joomla! Documentation
Revision as of 15:57, 9 May 2008.
- ACL
- Admin (redirect to Administrator, (disambiguation))
- Administrator (Application), Administrator (User) maybe others?
- API
- Backup
- Banner
- Beez
- Calendar
- com_migrator
- Configuration
- configuration.php
- CSS
- Database (disambiguation page, used for Framework packages et al)
- Demo
- Design
- Events
- Explorer
- Features
- Front page
- Global configuration
- Install
- Installation (disambiguation) for Extension installation and Joomla installation (already created).
- Installer (disambiguation page, used for Framework packages et al)
- Joomla powered sites
- Languages
- LDAP
- License
- Logo
- Module positions
- mosGetParam
- mosLoadModules
- mosMainBody
- News
- Requirements
- Release
- Restricted access
- Search
- Security
- session save path
- Setup
- Sites
- Template contest
- Tutorial
- User guide
- Video
- Video tutorials | https://docs.joomla.org/index.php?title=Landing_Pages&diff=6587&oldid=2085 | 2015-05-22T10:05:21 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Difference between revisions of "Template Development"
From Joomla! Documentation
Revision as of 20:19, 27 August 2012
<translate></translate>
Subcategories
This category has the following 3 subcategories, out of 3 total.
C
J
- [×] JavaScript (17 P)
- [×] JavaScript/en (empty)
Pages in category ‘Template Development’
The following 106 pages are in this category, out of 106 total. | https://docs.joomla.org/index.php?title=Category:Template_Development&diff=71962&oldid=34164 | 2015-05-22T10:25:12 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Changes related to "Migrating from 1.5 to 1.6"
← Migrating from 1.5 to 1.6
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130509174806&target=Migrating_from_1.5_to_1.6 | 2015-05-22T10:09:58 | CC-MAIN-2015-22 | 1432207924919.42 | [] | docs.joomla.org |
Housing White Paper
On the Day Briefing
Today saw the publication of the long awaited, much trailed Housing White Paper titled ‘Fixing Our Broken Housing Market‘. Announced by Sajid Javid, Secretary of State as:
“ambitious proposals to help fix the housing market so that more ordinary working people from across the country can have the security of a decent place to live” are the proposals set out today the catalyst needed or is the housing industry the proverbial slow turning oil tanker?
The Government has moved away from its all encompassing focus on home ownership, which was front and centre of the previous administration, recognising the need to address quality and security in the rental market alongside steps to support people into home ownership.
The White Paper is organised into four themes and this briefing note will follow the same format:
- Planning for the Right Homes in the Right Places
- Building Homes Faster
- Diversifying the Market
- Helping People Now
Planning for the Right Homes in the Right Places
The White Paper rightly identifies the need to plan for the right homes in the right places and sees this as critical to the success of the recently released Industrial Strategy. This is a welcome recognition of the valuable underpinning role that housing plays in supporting the Industrial Strategy – several Northern Housing Consortium (NHC) members were concerned regarding the lack of reference to housing or construction in the ten strategic pillars and the White Paper’s positioning of housing as key to the Industrial Strategy is very useful.
The White Paper accuses Local Authorities of “ducking” difficult decisions regarding planning strategies which we don’t feel is a helpful description of the challenging environment many Local Authorities find themselves in. NHC members have many examples of how they are seeking to improve delivery of new homes across the North.
Assessing housing needs
The White Paper identifies planing as “slow, expensive and bureacratic” and feels this system is not supported by the lack of standard methodology for assessing housing needs.
Government proposals in the White Paper build on the work of the Local Plans Experts Group as the Government seeks to make plans easier and provide Local Authorities with the support they need. Government plans include:
- Intervening to ensure plans are put in place where Local Authorities do not deliver.
- Setting out in regulations a requirement for plans to be reviewed at least once every five years – an authority will have to update their plans if their existing housing target can no longer be justified.
- Consulting on changes to the National Policy Planning Framework (NPPF) to ensure authorities are working constructively with neighbouring authorities to delivery housing needs through a new “Statement of Common Ground”.
- Across the North we see the emergence of devolution deals increasingly focusing on housing – the White Paper’s proposals will allow spatial development strategies to allocate strategic sites at scales broader than individual authorities and will remove the requirement for each LA to produce a single plan.
- Consulting on the introduction of a standardised approach to assessing housing requirements and LA’s choosing not to use this approach would have to explain and justify to Planning Inspectorate. This new approach is likely to become live in April 2018.
- Strengthening national policy to ensure local planning authorities have clear policies for addressing housing requirements of groups with particular needs – for example: older or disabled people.
Making Land Ownership more transparent
The White Paper wishes to open up land ownership to greater transparency, perhaps taking its cue from the success of open data projects in other areas of government policy. This openness will stretch not just to land ownership but also greater transparency around land interests – for example, restrictive covenants or purchase options. The Government will also bring forward a Bill to implement the Law Commission’s proposals for reform of restrictive covenants.
The Government appears concerned that through the application of less than transparent land interest options that a degree of ‘land banking’ may be taking place which is inhibiting delivery. We welcome these approaches to greater transparency in land ownership and land interests.
Making Enough Land Available in the Right Places
Government expects Local Authorities to have a clear strategy to maximise use of suitable land. In practice this will focus on bringing brownfield land back into use. The Government will amend the NPPF to indicate “great weight should be attached to the value of using suitable brownfield land within settlements for homes” and that greater density levels should be considered. An underlying presumption is that brownfield land within settlements is suitable for housing unless there are specific and clear reasons why it can’t be used, such as flood risk. In addition to expecting higher density levels where they can be delivered the White Paper also signals a move away from a “one size fits all” approach on space standards. The Government pledges to avoid a “race to the bottom” on this but we will need to see what safeguards the Government can bring forward.
Whilst the NHC supports the principle of brownfield first, there are some localities in the North where brownfield development will have to incur remediation costs and we would welcome Government recognition that these additional costs should not be a barrier in terms of financial viability of site development.
The White Paper continues to support accelerated delivery on public sector land and will provide a £45million Land release Fund. Consult on extending LA flexibility to dispose of land at less than best consideration is a welcome move, although NHC members have rightly pointed out that other parts of the public estate should also be subject to similar flexibilities, particularly when land or assets are sold to other public sector partners.
In a further welcome (albeit limited) announcement, the White Paper is proposing to encourage local planning authorities to consider the social and economic benefits of estate regeneration. NHC Local Authority members continue to work innovatively and collaboratively to progress housing delivery, working with developers, both volume and SME, to ensure that schemes are coming to fruition. The approach undertaken by Wakefield Council to unlock so called stalled sties was cited in the recent Commission for Housing in the North report and has been well received across the North as a model which could be replicated at scale. However, the capacity resources required by LAs to undertake such work should not be underestimated. We welcome the recent announcements regarding a Capacity Fund to support accelerated delivery – indeed the Commission called for such an instrument – and look forward to working with our members, government and the Homes and Communities Agency (HCA) to ensure the Fund works well for the North of England.
Building Homes Faster
The second chapter of the White Paper focuses on efforts need to build homes faster. The Government identifies a lag between plans being developed, full permissions being granted and those homes being built. They state that as of July 2016 there were almost 700,000 homes with detailed planning permissions but building had started on just under one half of these homes. Nathaniel Lichfield Partners recent January 2017 research took a slightly different position on the relationship between outstanding permissions and build out rates.
Many of the points raised by the Government in this chapter reflect the findings of the Commission for Housing in the North – in particular, a need for strong leadership, development certainty and to better support local authority capacity.
Boosting LA capacity and capability to deliver
The Government is committed to take steps to secure the financial sustainability of planning departments and ensure they have the right skilled professionals. To achieve this the Government will:
- increase planing fees – LAs can increase fees by 20% from July 2017 if the additional fee income is invested in planning departments. The Government is further minded to consider an additional increase of 20% for those authorities who are delivering on their housing needs.
- provide £25million of new funding to help ambitious authorities in areas of high housing need to support planning and infrastructure plans.
- deter unnecessary appeals by consulting on introducing a fee for making a planning appeal.
- target the £3bn Housing Infrastructure Fund (capital grant) at areas of greatest housing need.
The White Paper also identifies a need to maximise opportunities presented by strategic infrastructure investment, to ensure high quality digital infrastructure plans are developed and will work with the housing sector to ensure the 2014 Better Connected report, which sought to improve the process for securing utility provision in housing developments, is working as effectively as it could.
The White Paper sets out concerns that hold back development including dealing with protected species and greater simplification of developer contributions although the detail on Community Infrastructure Levy (CIL) changes are not included in the White Paper but will come forward in the Autumn Budget 2017.
A recurring theme of the White Paper relates to transparency – the first chapter focused on transparency of land holdings and chapter two turns its attention to transparency during planning and build out phases. Proposals set out in the White paper include:
- More information about timing and pace of delivery on site by site basis
- Requiring volume house builds to publish aggregate information on build out rates
- Local Authority tools to be strengthened – able to consider how realistic site build out will be, consultation on whether a developers past performance should be taken into consideration and reducing the “completion notices” from three to two years – although on this last point the Government is keen to understand how this approach may impact on SME builders.
- New guidance to be developed for LA’s to encourage use of compulsory purchase powers to support build out of stalled sites.
Having given local authorities some “sharper tools’ the Government will be keen to ensure LAs are held accountable and will introduce the new “Housing Delivery Test”. From November 2017 if delivery falls below 95% of annual target the LA will be expected to publish a recovery action plan. If delivery falls below 85% of annual target the LA would be expected to plan for a 20% buffer on their five-year land supply. From November 2018 if the delivery falls below 25% the presumption in favour of sustainable development (NPPF) will apply. The following year the baseline will move upwards to 40% and from November 2020 the presumption will come into being at 65%.
NHC Member Comment
“We welcome the Government’s housing White Paper today and its recognition of the extent of the housing crisis which is particularly acute in York with many people struggling to rent or buy suitable homes. It presents a wide ranging set of detailed proposals which we will need to consider carefully to see how they will play out in York and we look forward to responding to the formal consultation. We especially welcome measures to bring forward the supply of more affordable homes of different tenures more quickly reflecting the Government’s shift in focus”.
Diversifying the Market
The lack of diversification in the market has been a significant issue since the financial crash. Whilst registered providers were able to step in to pick up slack created by any step backs from the volume builders, the decline in the SME provider has been of particular concern.
The White Paper sets out the scale of the decline. In 2007, 44,000 homes were delivered by small builders, in 2015 this had fallen to just 18,000. The North is not experiencing a particularly different picture in this regard – throughout the Commission for Housing in the North we heard evidence that pointed to the need to reengage SME builders in housing delivery. We therefore welcome the Government’s focus on this agenda although the White Paper does not bring new initiatives with regard to SME builders in particular but recommits existing programmes such as the Home Building Fund, Accelerated Construction and publicising the Help to Buy equity loan programme. We will continue to work with our members to better understand how SME builders across the North are accessing these opportunities and what more could be done.
Similarly the White Paper flags up expansion opportunities from institutional investment in more homes for private rent. Again, we explored this in detail in the Commission for Housing in the North and have previously cited Countryside Homes and Place First as excellent exemplars of this approach. Indeed on the day of publication of the White Paper, You and Yours on Radio 4 focused on housing and included contributions from the NHC’s Deputy Chief Executive Tracy Harrison and was broadcast from Norris Green – a private sector development led by Countryside
The White Paper recognises the need to support security of tenure in the private rented sector and is committed to exploring “family friendly tenancies of three years minimum duration. We welcome the opportunity to work with the Government on making this a reality and ensuring safeguards are put in place to ensure that family friendly tenancies are not circumnavigated by unfair rent hikes.
Additional capacity for housing delivery will come from both the RP and LA sectors according to the White Paper. To support this the Government plans to bring forward a rent policy for the social sector for the period beyond 2020 to enable the sector to support investment in new homes delivery. The Government’s expectations on RP development are further set out with an expectation that tall providers will make best use of the development capacity they have and to continue to deliver on efficiency expectations.
The White Paper hints at the availability of “bespoke deals” for those authorities in housing market areas with high demand and genuine ambition to build, perhaps paving the way for sector specific deals outside of combined authorities.
Finally the Homes and Communities Agency is to be relaunched as Homes England with a unifying purpose of “to make a home within reach for everyone”
The NHC welcomes these proposals and look forward to working with the Government to implement them. We recognise many of our members are looking at how they can boost their own capacity through innovative joint ventures – including the case study of Greater Manchester Providers from the Commission report.
Helping People Now
The preceding chapters of the Housing White Paper sought to put in the place the foundations from which we could accelerate delivery. Chapter Four seeks to provide an overview of the ongoing support from Government.
The Government reconfirms its commitment to the Lifetime ISA. This ISA product will support young people to save by giving a 25% bonus on top of annual savings of £4,000. The savings and bonus can be put towards purchase of a new home or withdrawn at the age of 60. Whilst this product is welcomed, we do hope the limitations of the previous Help to Buy ISA have been ironed out. It had a similar ambition but was “rendered technically useless” as the bonus element could not be used to fund a house deposit.
We welcome the commitment from the government to work with the sector to consider the future of the Help to Buy Equity loan scheme – at its first introduction this product worked well in Northern markets and over the past few years we have engaged with the Government to make the case for the ‘bang for buck’ delivered by HtB (E) in the North and look forward to positively engaging with them on its future beyond 2021.
Whilst we were positive about the impact of Help to Buy, we have raised concerns with the Government about its Starter Homes product. We are pleased that the Government has listened to our (and others) concerns and the White Paper signals a shift away from mandatory 20% starter homes on new developments to a policy expectation of minimum 10% affordable home ownership units and for local areas to best define the appropriate home ownership model. This reflects the calls we previously made to see starter homes as an umbrella term for a range of home ownership models rather than a tightly defined product.
The White Paper also reaffirmed changes in direction of the Affordable Housing Programme that the NHC have previously called for, including greater flexibility in the programme moving away from a focus on shared ownership and specific reference to Rent to Buy. We welcome this change and are committed to working with our members to ensure delivery is maximised.
Conclusions
Whilst the headlines of the White Paper were well trailed, there is much in the detail that will be pleasing to NHC members. The focus on right homes in the right places, the recognition that to accelerate delivery will require boosts to capacity, the value placed on security within the private rented sector, and the flexibility about tenure all reflect calls the NHC has been making with its members over the past months and years.
There are of course areas that have not been addressed by the White Paper but we knew in advance this was to be a policy framework focused on delivery and accelerating supply. That does not mean we will not continue to press for government action on these issues, particularly regeneration, and indeed we do see opportunities in both the Northern PowerHouse Strategy and the Industrial Strategy to continue to make the case for whole market housing solutions that support economic renaissance of the North.
We will be providing a detailed response to the White Paper and working with politicians and civil servants to ensure a strong Northern Voice is heard in Whitehall. These include:
• A forthcoming member dinner (by invitation) with senior CLG civil servants including Director General Helen McNamara.
• Regional roundtables looking at the scope of the White Paper.
• Thematic roundtables exploring the more technical detail of the chapters and themes.
• Existing relevant roundtables – details of which are available on the website.
If you are interested in registering for the roundtables please contact Member Engagement Manager, Callum Smith, at [email protected] and indicate if this is for wider scope or specific areas of interest.
We would be happy to provide in-house presentations to your teams or your boards on the White Paper and the work of the NHC. To progress this, please contact Kate Maughan, Head of Member Engagement at [email protected] | https://docs.northern-consortium.org.uk/policy/1435-housing-white-paper/ | 2019-07-16T00:39:56 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.northern-consortium.org.uk |
limit. This is configurable using the
max-db-conns-thresholddatabase metrics agent configuration file setting; for example,
"max-db-conns-threshold":"90"will trigger an event when 90% of connections are used.. The agent must be running at the time of the change in order to detect it.
Database Server Restart
- The database being monitored by VividCortex has been restarted.
Disk Device Almost Full
- The disk device has 10% free space. This is configurable using the
disk-full-threshold-warnOS metrics agent configuration file setting; for example,
"disk-full-threshold-warn":"15"will trigger an event when the disk is down to 15% free space.
Disk Device Full
- The disk device has 5% free space. This is configurable using the
disk-full-threshold-critOS metrics agent configuration file setting; for example,
"disk-full-threshold-crit":"10"will trigger an event when the disk is down to 10% free space.
High Swap Activity
- Swap activity on this disk device has exceeded 100 pages per second.
Long-Running Autovacuum
- You can configure VividCortex to detect that the PostgreSQL autovacuum process took longer than a defined threshold. To enable, edit your host’s
/etc/vividcortex/vc-pgsql-metrics.confconfiguration file to include the
pg-vacuum-eventssetting. Its value is an event level and a time threshold. Acceptable event levels are
info,
warn, and
crit; the time threshold is
Nmnumber of minutes. For example, if you wanted to generate an
infolevel event for a vacuum taking more than 1 minute and a warning after 5 minutes, you would include
"pg-vacuum-events":"info:1m,warn:5m"in your configuration file.
Long-Running Query
- You can configure VividCortex to detect that a query is taking more than a defined threshold to execute. This feature is available for MySQL, PostgreSQL, and MongoDB. To enable, edit your host’s metrics agent configuration file to include
"long-running-event-threshold":"Ns", where
Nis a number of seconds. For example, to detect queries running longer than 5 seconds on a PostgreSQL instance, edit
/etc/vividcortex/vc-pgsql-metrics.confto include the line
"long-running-event-threshold":"5s". You will need to create the file if it does not exist (for example, the file should contain
{ "long-running-event-threshold":"5s" }).
When configured, we will generate an Event for each query that is detected. You can then configure an Event-based Alert to notify you..
Out of Memory Killer
- The Linux Out of Memory (OOM) Killer has killed a process. The process will be named in the event.
PostgreSQL Replication Started
- The VividCortex agent has detected that replication has started running for a replica database.
PostgreSQL Replication Stopped
- The VividCortex agent has detected that replication has stopped running for a replica database..
SQL Injection
- VividCortex has detected a possible SQL injection attempt. You should investigate the query sample to verify. Note that this feature requires that capturing query sample text be enabled, and that the samples are being sent to VividCortex.). Agents restart periodically, such as during automatic upgrades, and an agent shutdown or startup should be considered normal behavior.
Agent Startup
- This event indicates a VividCortex agent has started running. The event reports the specific agent that started, along with its version. Agents restart periodically, such as during automatic upgrades, and an agent shutdown or startup should be considered normal behavior..
Performance Schema Unavailable
- The agent is not able to collect metrics from the database because the
performance_schemais not enabled.
PostgreSQL User Lacks Privileges
- The user assigned to VividCortex for monitoring your database does not have the necessary privileges in order to capture data. Please refer to our privileges documentation for information about what PostgreSQL privileges are required.
Query Samples Unavailable
- A database host’s configuration does not allow the VividCortex agent to collect samples of queries that ran on the host. This is only applicable for hosts that are using the off-host monitoring configuration.
Restart Loop
- One of the agents is caught in restart loop and cannot finish loading. Your database may or may not be monitored. This can happen if the agent does not have sufficient file system privileges or if it cannot download a necessary update. You can refer to the vc-agent-007 log file for more information, or contact Support.. | https://docs.vividcortex.com/general-reference/event-types/ | 2019-07-16T00:20:25 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.vividcortex.com |
APOLLO DI:
The.
The most cost-effective start to digital impression-taking.
APOLLO DI
APOLLO DI, the specially developed intraoral scanner for cost-efficient digital impressions.
Easy handling thanks to multitouch control
Small and lightweight camera
Export of scan-data in the laboratory
No follow-up costs
OPEN APOLLO DI*:
Export of digital impression data (captured with the APOLLO DI in the practice and received via the Sirona Connect Portal) in an open ST L format for processing in other CAD/CAM systems. | https://tools4docs.com/products/Sirona-Apollo-DI-Intra-Oral-Camera.html | 2019-07-16T00:32:13 | CC-MAIN-2019-30 | 1563195524290.60 | [] | tools4docs.com |
Running
init-terraform fails:
Permission denied (publickey)
Make sure that your GitHub SSH public key has been added to your geodesic
ssh-agent
Question
When running
init-terraform, it fails while trying to fetch a private github repository.
init-terraform Mounted buckets Filesystem Mounted on eg-staging-terraform-state /secrets/tf Initializing modules... - module.identity Getting source "git::[email protected]:cloudposse/terraform-aws-account-metadata.git?ref=tags/0.1.0" Error downloading modules: Error loading modules: error downloading 'ssh://[email protected]/cloudposse/terraform-aws-account-metadata.git?ref=tags%2F0.1.0': /usr/bin/git exited with 128: Cloning into '.terraform/modules/ce64520f6f20f6ef2bd2674d5f00db4d'... Warning: Permanently added the RSA host key for IP address '194.31.252.103' to the list of known hosts. Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists.
Answer
This usually happens for one of two reasons:
1) The SSH key added to your geodesic
ssh-agent is not the same one authorized with your GitHub account
2) No SSH keys have been added to your
ssh-agent
Run
ssh-add -l to verify the keys are in your
ssh-agent. Remember, that in geodesic
/localhost is your
$HOME directory. | https://docs.cloudposse.com/troubleshooting/init-terraform-fails/ | 2019-07-16T00:06:48 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.cloudposse.com |
Forwarder deployment topologies
You can deploy Splunk server. The scenario typically involves universal forwarders forwarding unparsed data from workstations or production non-Splunk servers to a central Splunk server for consolidation and indexing. With their lighter footprint, universal forwarders have minimal impact on the performance of the systems they reside on. In other scenarios, heavy forwarders can send parsed data to a central Splunk indexer.
Here, three universal forwarders are sending data to a single Splunk indexer:
For more information on data consolidation, read "Consolidate data from multiple machines".
Load balancing
Load balancing simplifies the process of distributing data across several Splunk indexers to handle considerations such as high data volume, horizontal scaling for enhanced search performance, and fault tolerance. In load balancing, the forwarder routes data sequentially to different indexers at specified intervals.
Splunk forwarders perform automatic load balancing, in which the forwarder switches receivers at set time intervals. If parsing is turned on (for a heavy forwarder), the switching will occur at event boundaries.
In this diagram, three universal forwarders are each performing load balancing between two indexers:
For more information on load balancing, read "Set up load balancing".
Routing and filtering
In data routing, a forwarder routes events to specific Splunk or third-party servers, Splunk indexers based on event patterns:
For more information on routing and filtering, read "Route and filter data".
Forwarding to non-Splunk systems
You can send raw data to a third-party system such as a syslog aggregator. You can combine this with data routing, sending some data to a non-Splunk system and other data to one or more Splunk servers.
Here, three forwarders are routing data to two Splunk servers and a non-Splunk system:
For more information on forwarding to non-Splunk systems, read "Forward data to third-party systems".
Intermediate forwarding
To handle some advanced use cases, you might want to insert an intermediate forwarder between a group of forwarders and the indexer. In this type of scenario, the end-point forwarders send data to a consolidating forwarder, which then forwards the data on to an indexer, usually after indexing it locally., you need to configure the forwarder as a both a forwarder and a receiver. For information on how to configure a receiver, read "Enable a receiver".
This documentation applies to the following versions of Splunk® Enterprise: 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/4.3/Deploy/Forwarderdeploymenttopologies | 2019-07-16T00:35:57 | CC-MAIN-2019-30 | 1563195524290.60 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Hello World!
Remember when we ran that
dusty bundles list command? There was a bundle called
hello-world.
That seems like something we should probably run first. The
hello-world bundle will run two
small Flask apps. Each Flask app will be available from a browser on your Mac, and they'll
each talk to a shared MongoDB instance.
The first step to running any bundle is to activate it:
> dusty bundles activate hello-world Activated bundles hello-world > dusty bundles list +-------------+--------------------------------------------------------+------------+ | Name | Description | Activated? | +-------------+--------------------------------------------------------+------------+ | fileserver | A simple fileserver to demonstrate dusty cp | | | hello-world | Hello world! Two running copies of a simple Flask app. | | +-------------+--------------------------------------------------------+------------+
Once the bundle is activated, you can use
dusty status to see what apps, services, and
libs will be run with your current configuration:
> dusty status +-----------------+---------+----------------------+ | Name | Type | Has Active Container | +-----------------+---------+----------------------+ | flaskone | app | | | flasktwo | app | | | persistentMongo | service | | +-----------------+---------+----------------------+
So there are the two Flask apps we talked about, plus the Mongo service. Great.
Dusty Up
Time to run everything! Once you've activated the bundles you want, just issue
a single
dusty up command and Dusty will take care of the rest.
> dusty up A bunch of lines go here... ... ... Your local environment is now started!
If we check the status again, we should see everything running. We can also
see the containers directly using
docker ps. Keep in mind that everything
you can normally do with Docker will still work when you're using Dusty!
> dusty status +-----------------+---------+----------------------+ | Name | Type | Has Active Container | +-----------------+---------+----------------------+ | flaskone | app | X | | flasktwo | app | X | | persistentMongo | service | X | +-----------------+---------+----------------------+
And now, for the final test, can we reach these apps in our browser? They make
themselves available at
local.flaskone.com and
local.flasktwo.com. Go ahead
and navigate to one. You should see this:
Try hitting both of the URLs. The combination of the individual and shared counter shows that we are indeed running two apps and they are coordinating with each other over the shared Mongo instance.
We've covered the basics of running bundles with Dusty. In the next section, we'll go over how to actively develop our applications, see our changes, and run tests. | https://dusty.readthedocs.io/en/0.2.1/getting-started/hello-world/ | 2019-07-16T00:45:05 | CC-MAIN-2019-30 | 1563195524290.60 | [array(['../../assets/flask-hello-world.png', 'Flask Hello World'],
dtype=object) ] | dusty.readthedocs.io |
The Widgetized page template
The widgetized page is a great way to create pages with customized layout sections, to use as promotional, landing pages etc. The layout of the widgetized page is configured by widgets that have the “Section:” prefix.
Let’s see how to create a landing page similar to the widgetized homepage template that we use at our demo:
The process consists of 4 simple steps. Let’s see each step:
- 1
- First you need to create a custom widget area at Appearance > Widgets, where you will add the section widgets that will define the layout. So, provided that you have already installed the Stag Custom Sidebars plugin, create a custom widget area and let’s give it a name like: “Widgetized Home”. Click the add button to create the widget area.
- 2
- The Widgetized Home widget area has been created. Now, drag the widgets you wish (widgets with the “Section:” prefix) to this area. For our example we have used a “Static Content Section” widget that will output contents of a static page, and the “Section: Latest Posts” widget that will show our latest posts above a nice background. For more info on the Section: widgets, please refer to the section of this documentation.
- 3
- Now, let’s create the page that will have these widgets as a structure. Go to Pages > Add New and create a new page. From the page attributes panel on the right, choose the “Widgetized” template. When you do so, a Sidebar Settings panel will appear on the top right.
- 4
- Now, at the Sidebar Settings panel, choose the sidebar that you have created. Hit update. The widgetized page is ready for action.
Heads Up! If you use a widgetized page template, the content that you will put at the content editor will always display above the section widgets. For example, at our demo page: the first section at the top (Titled “Welcome to Ink” is actually the content of the page itself, displayed above the image that has been configured at the “Background Settings” panel. | https://docs.codestag.com/article/145-the-widgetized-page-template | 2019-07-15T23:59:28 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.codestag.com |
Node std_fwd
The std_fwd node is used to specify standard forwarding for sites within a subscription. When the user goes to a site on which the standard forwarding is set, Plesk redirects this user from the requested URL to the destination URL. This is done explicitly: the user sees the real ‘destination’ address in the path bar of the browser.
The std_fwd node is structured as follows:
- The dest_url node is required. It specifies the URL to which the user will be redirected explicitly at the attempt to visit the specified site. Data type: forwardingUrl (string, 1 to 255 characters long, spaces not allowed).
- The ip_address node is required. It specifies IP addresses associated with the site. You can provide either one ip_address node for the site’s IPv4 or IPv6 address or two such nodes for both of them. Data type: ip_address (
common.xsd).
The following sample packet specifies standard forwarding for a new site:
<packet> <webspace> <add> <gen_setup> <name>newdomain.com</name> <owner-id>1234</owner-id> <ip_address>123.123.123.123</ip_address> <ip_address>2002:5bcc:18fd:c:123:123:123:123</ip_address> <status>0</status> </gen_setup> <hosting> <std_fwd> <dest_url></dest_url> <ip_address>123.123.123.123</ip_address> <ip_address>2002:5bcc:18fd:c:123:123:123:123</ip_address> </std_fwd> </hosting> </add> </webspace> </packet> | https://docs.plesk.com/en-US/onyx/api-rpc/about-xml-api/reference/managing-subscriptions/subscription-settings/limits-permissions-and-hosting-settings/hosting-settings/node-std_fwd.33876/ | 2019-07-16T00:26:43 | CC-MAIN-2019-30 | 1563195524290.60 | [array(['/en-US/onyx/api-rpc/images/68422.png', 'image-68422.png'],
dtype=object) ] | docs.plesk.com |
Overview
The Apache Graceful Restart Patch is a patch provided by the Apache organization. The purpose of the patch is to resolve an issue that causes Apache to perform slower graceful restarts when there is a high load on the server. To read more about the patch, view the Apache bug report.
If you select a default profile, EasyApache will install the patch.
Note:
To disable the patch, deselect the Apache Graceful Restart Patch option in the Exhaustive Options list in EasyApache. (Home >> Software >> EasyApache 3)
How to apply the Apache Graceful Restart Patch to a custom profile
To apply the Apache Graceful Restart Patch, perform the following steps:
- Navigate to WHM's EasyApache interface. (Home >> Software >> EasyApache 3)
- Click Start customizing based on profile.
- Click Next Step on the Apache Version and PHP Version steps.
- Click Exhaustive Options List on the Short Options List.
- Select Slow Restart Patch.
- Click Save and Build. | https://hou-1.docs.confluence.prod.cpanel.net/display/EA/Apache+Graceful+Restart+Patch | 2019-07-16T01:00:37 | CC-MAIN-2019-30 | 1563195524290.60 | [] | hou-1.docs.confluence.prod.cpanel.net |
About the Mesosphere Universal InstallerAbout the Mesosphere Universal Installer
A number of different installation methods have emerged to manage the life-cycle of DC/OS on a set of nodes in a cluster. These installation methods include AWS CloudFormation templates, Azure ARM templates, Ansible Playbooks, dcos-launch, dcos-gcp, and terraform-dcos. Each of these methods were designed to solve a particular use case, and therefore had some limitations around supporting the full life-cycle of (provision, deploy, install, upgrade, decommission) of DC/OS. For example, both AWS CloudFormation and Azure ARM template solutions do not support the upgrade process in DC/OS after the cluster is deployed the first time.
Terraform is an open source infrastructure automation tool which uses templates to manage infrastructure for multiple public cloud providers, service providers, and on-premises solutions. Terraform creates your infrastructure, configures resources, and manages communication between agents. The purpose of this tool is to automate most of the manual efforts of managing and maintaining distributed systems. The Universal Installer is built on top of Terraform.
The primary goal of using the Mesosphere Universal Installer is as follows:
- Provide a single unified experience for provisioning, deploying, installing, upgrading, and decommissioning DC/OS on a cluster of machines.
- Create a modular and reusable script to easily decouple DC/OS on various OS and cloud providers to easily install, upgrade, and modify in-place.
- Remove the confusion around which DC/OS installation method should be used in any given scenario. This automated tool helps to build modules that codify best practices for each stage in the cluster life-cycle and hook necessary modules into an existing infrastructure.
DC/OS on Amazon Web Services
DC/OS Azure Resource Manager
DC/OS on Google Cloud Platform
PrerequisitesPrerequisites
The following is required in order to use Terraform templates to deploy DC/OS on cloud providers:
- Install Terraform and possess the required infrastructure credentials and permissions to run and provision resources.
- Prepare a local SDK to your chosen cloud provider. Example: Set up
AWS-CLIand include a default region.
- Prepare to enter your ssh credentials into the instances you launch via Terraform using either an ssh-agent or passing public keys directly. This helps you to interact with the cluster easily.
- Be familiar with the characteristics of the environment (e.g. which cloud provider) you want to run DC/OS on, and understand the environment’s features and limitations.
- Understand the API limits that exist on your account for each supported Terraform provider.
- Know the different quotas that exist to limit the number of resources that are available in different regions for each supported Terraform provider.
- Maintain your Terraform state and understand whether that state is saved locally or in the cloud (i.e, AWS S3, GCP cloud storage, Azure storage account).
- When using Terraform state that is shared, it is recommended to select a backend that supports state locking (i.e, AWS S3, GCP cloud storage, Azure storage account or locally) which ensures that no other user will be able to change the state while another operation is being performed.
Mesosphere Supported Installation MethodsMesosphere Supported Installation Methods
These installation methods are used for fast demos and proofs of concept, as well as production clusters. Upgrades are supported with the following installation methods.
Any of the following methods can be used to install DC/OS:
- Amazon Web Services (AWS): Install DC/OS on AWS by using the Mesosphere Universal Installer.
- Azure: Install DC/OS on Microsoft Azure by using the Mesosphere Universal Installer.
- Google Cloud Platform (GCP): Install DC/OS on Google Cloud Platform (GCP) by using the Mesosphere Universal Installer.
Other Installation MethodsOther Installation Methods
These installation methods are provided by the community and are not tested by Mesosphere. Upgrading DC/OS is not a supported feature when using the following installations.
- CloudFormation on AWS (AWS): Install your DC/OS cluster on Amazon Web Services (AWS) by using the DC/OS templates on AWS CloudFormation.
- Azure Resource Manager templates: Install your DC/OS cluster on Azure by using the Azure Resource Manager templates.
- Mesosphere Universal Installer for DigitalOcean: Install your DC/OS cluster on DigitalOcean by using Terraform templates that are configured to run Mesosphere DC/OS on DigitalOcean.
- Mesosphere Universal Installer for Packet (bare metal): A bare metal environment is a computer system or network in which a virtual machine is installed directly on hardware rather than within the host operating system (OS). Install your DC/OS cluster on Packet bare metal using Terraform templates that are configured to run Mesosphere DC/OS on Packet.
DC/OS on AWS using the Universal Installer
Guide for DC/OS on AWS using the Universal Installer…
DC/OS on Azure using the Universal Installer
Guide for DC/OS on Azure using the Mesosphere Universal Installer…
DC/OS on GCP using the Universal Installer
Guide for DC/OS on GCP using the Universal Installer…
Other Installation methods
Use CloudFormation, AzureRM or other Terraform templates to install DC/OS.… | http://docs-staging.mesosphere.com/1.12/installing/evaluation/ | 2019-07-16T00:05:12 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs-staging.mesosphere.com |
Redirect URLs Are Shorter and More Informative
In Lightning Experience, the redirect URLs that appear in your browser address field are now more succinct and clear. We fixed URL redirection in feeds to follow platform standards. The standard syntax is shorter, easier on the eyes, and includes the informative feed item ID.
Where: This change applies to Lightning Experience in Contact Manager, Group, Essentials, Professional, Enterprise, Performance, Unlimited, and Developer editions.
Why: The old syntax redirect URL that you got with links in Chatter notification emails was long and provided no useful information about the feed item. JjZUNoYXR0ZXI6ZL0GVza3RvcENoYXR0ZXIiLCJhdHRyaWJ1dGVzI jp7ImZlZWRFbGVtZW50SWQiOiIwRDVCMDAwMDAwbFQ5d0ci
The new syntax is /lightning/<feedItemId>/view. It’s shorter, and it includes the ever-useful feed item ID. | http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_chatter_redirect_urls.htm | 2019-07-16T00:53:57 | CC-MAIN-2019-30 | 1563195524290.60 | [] | releasenotes.docs.salesforce.com |
Other Salesforce Products and Services
Heroku
Heroku is a cloud-based application platform for building and deploying web apps.
For information on new features, go to the Heroku Changelog.
Success Cloud
The certified experts, consultants, and innovative tools of Salesforce Success Cloud are here to help with professional services, prescriptive advice, and expertise at every stage of your journey. | http://releasenotes.docs.salesforce.com/en-us/spring19/release-notes/rn_other_products.htm | 2019-07-16T00:42:44 | CC-MAIN-2019-30 | 1563195524290.60 | [] | releasenotes.docs.salesforce.com |
Cookie consistency check
The Cookie Consistency check examines the cookies that a user returns to verify that they match the cookies that your web site set for that user. If a modified cookie is found, it is stripped from the request before the request is forwarded to the web server. You can also configure the Cookie Consistency check to transform all of the server cookies that it processes, by encrypting them, proxying them, or adding flags to them. This check applies to both requests and responses.
An attacker would normally modify a cookie to gain access to sensitive private information by posing as a previously authenticated user, or to cause a buffer overflow. The Buffer Overflow check protects against attempts to cause a buffer overflow by using a very long cookie. The Cookie Consistency check focuses on the first scenario.
If you use the wizard or the GUI, in the Modify Cookie Consistency Check dialog box, on the General tab you can enable or disable the following actions:
- Block
- Log
- Learn
- Statistics
- Transform. If enabled, the Transform action modifies all cookies as specified in the following settings:
- Encrypt Server Cookies. Encrypt cookies set by your web server, except for any listed in the Cookie Consistency check relaxation list, before forwarding the response to the client. Encrypted cookies are decrypted when the client sends a subsequent request, and the decrypted cookies are reinserted into the request before it is forwarded to the protected web server. Specify one of the following types of encryption:
- None. Do not encrypt or decrypt cookies. The default.
- Decrypt only. Decrypt encrypted cookies only. Do not encrypt cookies.
- Encrypt session only. Encrypt session cookies only. Do not encrypt persistent cookies. Decrypt any encrypted cookies.
- Encrypt all. Encrypt both session and persistent cookies. Decrypt any encrypted cookies. Note: When encrypting cookies, the App Firewall adds the HttpOnly flag to the cookie. This flag prevents scripts from accessing and parsing the cookie. The flag therefore prevents a script-based virus or trojan from accessing a decrypted cookie and using that information to breach security. This is done regardless of the Flags to Add in Cookies parameter settings, which are handled independently of the Encrypt Server Cookies parameter settings.
- Proxy Server Cookies. Proxy all non-persistent (session) cookies set by your web server, except for any listed in the Cookie Consistency check relaxation list. Cookies are proxied by using the existing App Firewall session cookie. The App Firewall strips session cookies set by the protected web server and saves them locally before forwarding the response to the client. When the client sends a subsequent request, the App Firewall reinserts the session cookies into the request before forwarding it to the protected web server. Specify one of the following settings:
- None. Do not proxy cookies. The default.
- Session only. Proxy session cookies only. Do not proxy persistent cookies. Note: If you disable cookie proxying after having enabled it (set this value to None after it was set to Session only), cookie proxying is maintained for sessions that were established before you disabled it. You can therefore safely disable this feature while the App Firewall is processing user sessions.
- Flags to Add in Cookies. Add flags to cookies during transformation. Specify one of the following settings (a brief illustration of the resulting Set-Cookie header follows this list):
- None. Do not add flags to cookies. The default.
- HTTP only. Add the HttpOnly flag to all cookies. Browsers that support the HttpOnly flag do not allow scripts to access cookies that have this flag set.
- Secure. Add the Secure flag to cookies that are to be sent only over an SSL connection. Browsers that support the Secure flag do not send the flagged cookies over an insecure connection.
- All. Add the HttpOnly flag to all cookies, and the Secure flag to cookies that are to be sent only over an SSL connection.
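For illustration only, here is roughly what the HttpOnly and Secure attributes look like on a transformed Set-Cookie header, using Python's standard http.cookies module (this is not App Firewall code, and the cookie name and value are made up):

import re  # not needed here; see the relaxation example further below
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "abc123"            # hypothetical cookie name and value
cookie["sessionid"]["httponly"] = True    # scripts cannot read the cookie
cookie["sessionid"]["secure"] = True      # cookie is only sent over SSL/TLS
print(cookie.output())
# Prints something like: Set-Cookie: sessionid=abc123; HttpOnly; Secure
# (the attribute order may vary by Python version)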
If you use the command-line interface, you can enter the following commands to configure the Cookie Consistency Check:
set appfw profile <name> -cookieConsistencyAction [block] [learn] [log] [stats] [none]
set appfw profile <name> -cookieTransforms ([ON] | [OFF])
set appfw profile <name> -cookieEncryption ([none] | [decryptOnly] | [encryptSession] | [encryptAll])
set appfw profile <name> -cookieProxying ([none] | [sessionOnly])
set appfw profile <name> -addCookieFlags ([none] | [httpOnly] | [secure] | [all])
To specify relaxations for the Cookie Consistency check, you must use the GUI. On the Checks tab of the Modify Cookie Consistency Check dialog box, click Add to open the Add Cookie Consistency Check Relaxation dialog box, or select an existing relaxation and click Open to open the Modify Cookie Consistency Check Relaxation dialog box. Either dialog box provides the same options for configuring a relaxation.
Following are examples of Cookie Consistency check relaxations:
Logon Fields. The following expression exempts all cookie names beginning with the string logon_ followed by a string of letters or numbers that is at least two characters long and no more than fifteen characters long:
^logon_[0-9A-Za-z]{2,15}$
Logon Fields (special characters). The following expression exempts all cookie names beginning with the string türkçe-logon_ followed by a string of letters or numbers that is at least two characters long and no more than fifteen characters long:
^t\xC3\xBCrk\xC3\xA7e-logon_[0-9A-Za-z]{2,15}$
Arbitrary strings. Allow cookies that contain the string sc-item_, followed by the ID of an item that the user has added to his shopping cart ([0-9A-Za-z]+), a second underscore (_), and finally the number of these items he wants ([1-9][0-9]?), to be user-modifiable:
^sc-item_[0-9A-Za-z]+_[1-9][0-9]?$
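If you want to sanity-check a relaxation expression before adding it, you can exercise it offline with any PCRE-compatible engine. The sketch below uses Python's re module (not a full PCRE implementation, but sufficient for patterns like these); the sample cookie names are made up for illustration.

import re

patterns = [
    r"^logon_[0-9A-Za-z]{2,15}$",
    r"^sc-item_[0-9A-Za-z]+_[1-9][0-9]?$",
]
samples = ["logon_user42", "logon_x", "sc-item_A1b2_3", "sc-item_A1b2_"]

for pattern in patterns:
    for name in samples:
        matched = bool(re.match(pattern, name))
        print("%-40s %-16s %s" % (pattern, name, matched))

Always confirm the final expression on the appliance itself before deploying it.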
Caution: Regular expressions are powerful. Especially if you are not thoroughly familiar with PCRE-format regular expressions, double-check any regular expressions you write. Make sure that they define exactly the URL you want to add as an exception, and nothing else. Careless use of wildcards, and especially of the dot-asterisk ( .*) metacharacter/wildcard combination, can have results you do not want or expect, such as blocking access to web content that you did not intend to block or allowing an attack that the Cookie Consistency check would otherwise have blocked.
Important
In release 10.5.e (in a few interim enhancement builds prior to the 59.13xx.e build) as well as in the 11.0 release (in builds prior to 65.x), …
Note
Sessionless Cookie Consistency: The cookie consistency behavior has changed in release 11.0. In earlier releases, the cookie consistency check invokes sessionization. The cookies are stored in the session and signed. A “wlt_” suffix is appended to transient cookies and a “wlf_” suffix is appended to the persistent cookies before they are forwarded to the client. Even if the client does not return these signed wlf/wlt cookies, the App Firewall uses the cookies stored in the session to perform the cookie consistency check.
In release 11.0, the cookie consistency check is sessionless. The App Firewall now adds a cookie that is a hash of all the cookies tracked by the App Firewall. If this hash cookie or any other tracked cookie is missing or tampered with, the App Firewall strips the cookies before forwarding the request to the back-end server and triggers a cookie-consistency violation. The server treats the request as a new request and sends new Set-Cookie header(s). The Cookie Consistency check in Citrix ADC version 13.0, 12.1, and NetScaler 12.0 and 11.1 does not have a sessionless option.
How do I add my custom logo?

In order to use a custom logo, you just need to install and activate the Jetpack plugin. Once you do that, you will be able to add your custom logo under Appearance > Customize > Site title, tagline and logo.
OpenFaaS API Gateway / Portal
Conceptual design using the OpenFaaS operator faas-provider. Each function is built into an immutable Docker image before being deployed via the faas-cli, UI or REST API.
When deployed, each function creates one to many Pods/containers, depending on the minimum and maximum scaling parameters requested by the user. Functions can also scale to zero and back again through use of the faas-idler or the REST API.
See also: auto-scaling.
Reference documentation

You can find the reference documentation and any additional settings for the API gateway in the README file for the gateway repository.

Swagger
The OpenFaaS API exposes a RESTful API which is documented with Swagger.
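As a quick illustration, the same API that the Swagger file documents can be exercised from any HTTP client. The sketch below assumes a gateway at http://127.0.0.1:8080 with basic authentication enabled; verify the endpoint paths and response fields against the swagger.yml for your version.

import requests

GATEWAY = "http://127.0.0.1:8080"          # assumption: local gateway address
AUTH = ("admin", "your-gateway-password")  # assumption: basic auth credentials

# List the deployed functions.
resp = requests.get(GATEWAY + "/system/functions", auth=AUTH)
resp.raise_for_status()
for fn in resp.json():
    print(fn["name"], fn.get("replicas"))

# Invoke a function by name (here a hypothetical "echo" function).
result = requests.post(GATEWAY + "/function/echo", data=b"hello", auth=AUTH)
print(result.status_code, result.text)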
Explore or update the Swagger API documentation
The swagger.yml file can be viewed and edited in the Swagger UI.
Head over to the Swagger editor
Now click File -> Import URL
Type in the raw URL of the swagger.yml file, then click OK
You can now view and edit the Swagger definition; copy your changes back to your fork before pushing them.
Tutorial 1e: Connecting neurons¶
In the previous parts of this tutorial, the neurons are
still all unconnected. We add in connections here. The
model we use is that when neuron i is connected to
neuron j and neuron i fires a spike, then the membrane
potential of neuron j is instantaneously increased by
a value
psp. We start as before:
from brian import *

tau = 20 * msecond        # membrane time constant
Vt = -50 * mvolt          # spike threshold
Vr = -60 * mvolt          # reset value
El = -49 * mvolt          # resting potential (same as the reset)
Now we include a new parameter, the PSP size:
psp = 0.5 * mvolt # postsynaptic potential size
And continue as before:
G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt', threshold=Vt, reset=Vr)
Connections¶
We now proceed to connect these neurons. Firstly, we declare that there is a connection from neurons in G to neurons in G.
For the moment, this is just something that is necessary to
do, the reason for doing it this way will become clear in the
next tutorial.
C = Connection(G, G)
Now the interesting part: we make these neurons randomly connected with probability 0.1 and weight psp. Each neuron i in G will be connected to each neuron j in G with probability 0.1. The weight of the connection is the amount that is added to the membrane potential of the target neuron when the source neuron fires a spike.
C.connect_random(sparseness=0.1, weight=psp)
These two previous lines could be done in one line:
C = Connection(G,G,sparseness=0.1,weight=psp)
Now we continue as before:
M = SpikeMonitor(G)
G.V = Vr + rand(40) * (Vt - Vr)
run(1 * second)
print M.nspikes
You can see that the number of spikes has jumped from around 800-850 to around 1000-1200. In the next part of the tutorial, we’ll look at a way to plot the output of the network.
Exercise¶
Try varying the parameter psp and see what happens. How large can you make the number of spikes output by the network? Why?
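One way to explore this systematically is to rebuild and rerun the network for several values of psp. The loop below is only a sketch (it is not part of the original tutorial): clear() and reinit_default_clock() are the Brian 1 helpers for starting a fresh simulation, so check the exact names against your version.

for w in [0.1, 0.5, 1.0, 2.0, 5.0]:
    clear(True)                # forget the objects from the previous run
    reinit_default_clock()     # restart the simulation clock at t = 0
    psp = w * mvolt
    G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                    threshold=Vt, reset=Vr)
    C = Connection(G, G, sparseness=0.1, weight=psp)
    M = SpikeMonitor(G)
    G.V = Vr + rand(40) * (Vt - Vr)
    run(1 * second)
    print psp, M.nspikes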
Solution¶
The logically maximum number of firings is 400,000 = 40 * 1000 / 0.1, the number of neurons in the network * the time it runs for / the integration step size (you cannot have more than one spike per step).
In fact, the number of firings is bounded above by 200,000. The reason for this is that the network updates in the following way:
- Integration step
- Find neurons above threshold
- Propagate spikes
- Reset neurons which spiked
You can see then that if neuron i has spiked at time t, then it
will not spike at time t+dt, even if it receives spikes from
another neuron. Those spikes it receives will be added at step
3 at time t, then reset to
Vr at step 4 of time t, then the
thresholding function at time t+dt is applied at step 2, before
it has received any subsequent inputs. So the most a neuron
can spike is every other time step. | https://brian.readthedocs.io/en/1.4.3/tutorial_1e_connecting_neurons.html | 2019-07-15T23:57:57 | CC-MAIN-2019-30 | 1563195524290.60 | [] | brian.readthedocs.io |
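You can also check the every-other-step bound directly from the spike monitor. The snippet below is a sketch: it assumes Brian 1's SpikeMonitor.spiketimes attribute and the global defaultclock, so treat those names as assumptions if your version differs.

from numpy import asarray, diff

dt = float(defaultclock.dt)
for i in range(len(G)):
    times = asarray(M.spiketimes[i])
    if len(times) > 1:
        # the smallest inter-spike interval should span at least two steps
        assert min(diff(times)) >= 2 * dt - 1e-12
print 'no neuron fired on two consecutive time steps'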
This object gives access to Sounds in Blender.
Get this sound's volume. Returns a float.

Set this sound's volume. Takes a float argument f.

Get this sound's attenuation value. Returns a float.

Set this sound's attenuation.

Get this sound's pitch value. Returns a float.

Set this sound's pitch.
Packs the sound into the current blend file.
Note:
An error will be raised if the sound is already packed or the
filename path does not exist.
Unpacks the sound to the sample's filename. Takes a mode argument.
Note:
An error will be raised if the sound is not packed or the filename
path does not exist. | https://docs.blender.org/api/249PythonDoc/Sound.Sound-class.html | 2017-08-16T21:43:11 | CC-MAIN-2017-34 | 1502886102663.36 | [] | docs.blender.org |
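A short illustrative script for Blender 2.49's Python API tying these calls together. The getter/setter and pack() calls follow the descriptions on this page, but Sound.Get, the exact method names, and the sound name 'beep.wav' are assumptions; check them against your build before relying on them.

from Blender import Sound

snd = Sound.Get('beep.wav')      # fetch an existing sound datablock (assumed name)
print snd.getVolume(), snd.getPitch()

snd.setVolume(0.5)               # volume as a float
snd.setAttenuation(1.0)          # attenuation value
snd.setPitch(0.0)                # pitch value

try:
    snd.pack()                   # embed the sample in the current .blend file
except Exception, err:           # raises if the sound is already packed (see note above)
    print 'pack failed:', err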
Game Statistics Visualizer Reference

Document Summary: An overview of the features available in the Game Statistics viewer.

Document Changelog: Created by Josh Markiewicz. Updated by Jeff Wilson.
- Game Statistics Visualizer Reference
- Overview
- The Visualizer Window
- The Database
Overview

This document will describe the features available for analyzing and visualizing data collected during gameplay sessions. It will also detail the steps necessary to extend the statistics capturing system for game-specific data. More detail about the underlying systems can be found in InstrumentingGameStatistics. The idea is that statistics are streamed to disk during the game and then collected at the end for visualization in the editor. What you record is entirely up to you, but the engine supports a wide range of possibilities out of the box. They can be scaled back or extended as necessary. For the moment, you must include -gamestats at the end of the command line to access the new visualizer tab. For example:
UDK.exe editor -gamestats
The Visualizer WindowThe following highlights the various sections in the game statistics window.
The Game Session Window
The Visualizer Tabs
Basic Stats Visualizer

The purpose of the basic stats visualizer is a catch-all for events contained within the data stream. It is very much a dumb terminal, displaying each known stat as a sprite over the playfield. The data is available in both the top-down and perspective client windows.
+DrawingProperties=(EventID=0,StatColor=(R=0,G=0,B=0),Size=8,SpriteName="EditorResources.BSPVertex")
Heatmap Visualizer

The heatmap visualizer shows specific data points in aggregate as a range of colors from purple (almost no activity) to red (highest level of activity).
Player Movement Visualizer

Player movement in this visualizer is represented as an array of colored lines. Each player is given a unique colored line which will play out the movement they made over time.
Adding/Changing The Visualizer

By default, one of each visualizer type is created in the tabbed area. The ability to add and remove tabs is planned for a later date; for now, if you need more than one of the same kind of visualizer, you can change its type via the combo box just below the tab. Inside you will find all known visualizers, and you can freely switch between them.
The Time Slider
The Output Window
(Figures in the original article: Overview, GameSession, Filters, TopDownBasic, PerspBasic, Heatmap, MovementTop, MovementPersp, TimeControl, Status.)
Interested in contributing to Django? We'll be walking you through contributing a patch to Django for the first time. By the end of this tutorial, you should have a basic understanding of both the tools and the processes involved. Specifically, we'll walk through the workflow step by step.
This tutorial assumes you are using Python 3. Get the latest version at Python’s download page or with your operating system’s package manager.
For Windows users
When installing Python on Windows, make sure you check the option “Add python.exe to Path”, so that it is always available on the command line.
As a contributor, you can help us keep the Django community open and inclusive. Please read and follow our Code of Conduct...
Before running the test suite, install its dependencies by first
cd-ing into the Django
tests/ directory and then running:
$ pip install -r requirements/py3.txt
Now we are ready to run the test suite. If you’re using GNU/Linux, Mac OS X or some other flavor of Unix, run:
$ ./runtests.py
Now sit back and relax. Django's entire test suite has over 9,600 tests, so it may take a while to run, depending on the speed of your computer. If you're using Python 3.5+, there will be a couple of failures related to deprecation warnings that you can ignore. These failures have since been fixed.
But this testing thing looks kinda hard...
If you’ve never had to deal with tests before, they can look a little hard to write at first glance. Fortunately, testing is a very big subject in computer programming, so there’s lots of information out there:
The unittest documentation is a good place to start.
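If it helps, here is a tiny self-contained example of the unittest style that Django's own tests build on. It is purely illustrative and is not taken from Django's test suite.

import unittest


class StringMethodTests(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('django'.upper(), 'DJANGO')

    def test_split_rejects_non_string_separator(self):
        with self.assertRaises(TypeError):
            'hello world'.split(2)


if __name__ == '__main__':
    unittest.main()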
For more information on writing documentation, including an explanation of what the versionadded bit is all about, see Writing documentation. That page also includes an explanation of how to build a copy of the documentation locally, so you can preview the HTML that will be generated.
Before you get too far into writing patches for Django, there's a little more information on contributing that you should take a look at.
Debugging

Debugging utilities in gallium.

Debug Variables

Common
This option controls if the debug variables should be printed to stderr. This is probably the most useful variable, since it allows you to find which variables a driver uses.
Controls if the Remote Debugger should be used.
Dump information about the current CPU that the driver is running on.
Gallium has a built-in shader sanity checker. This option controls whether the shader sanity checker prints its warnings and errors to stderr.
Whether the Draw module will attempt to use LLVM for vertex and geometry shaders.
Driver-specific
Debug Flags for the i915 driver.
Stop the i915 driver from submitting commands to the hardware.
Dump all commands going to the hardware.
Debug Flags for the llvmpipe driver.
Number of threads that the llvmpipe driver should use.
Debug Flags for the freedreno driver.
Remote Debugger
Running djangoappengine

manage.py remote <command> allows you to execute a command against the production database (e.g., manage.py remote shell or manage.py remote createsuperuser). Running 'remote' executes your local code, but proxies your datastore access against the remote datastore.

manage.py deploy uploads your project to App Engine.