content | url | timestamp | dump | segment | image_urls | netloc
---|---|---|---|---|---|---
stringlengths 0 to 557k | stringlengths 16 to 1.78k | timestamp[ms] | stringlengths 9 to 15 | stringlengths 13 to 17 | stringlengths 2 to 55.5k | stringlengths 7 to 77
TrainingJobSummary
Provides summary information about a training job.
Contents
- CreationTime
A timestamp that shows when the training job was created.
Type: Timestamp
Required: Yes
- LastModifiedTime
Timestamp when the training job was last modified.
Type: Timestamp
Required: No
- TrainingEndTime
A timestamp that shows when the training job ended. This field is set only if the training job has one of the terminal statuses (Completed, Failed, or Stopped).
Type: Timestamp
Required: No
- TrainingJobArn
The Amazon Resource Name (ARN) of the training job.
Type: String
Length Constraints: Maximum length of 256.
Pattern:
arn:aws[a-z\-]*:sagemaker:[a-z0-9\-]*:[0-9]{12}:training-job/.*
Required: Yes
- TrainingJobName
The name of the training job that you want a summary for.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 63.
Pattern:
^[a-zA-Z0-9](-*[a-zA-Z0-9])*
Required: Yes
- TrainingJobStatus
The status of the training job.
Type: String
Valid Values:
InProgress | Completed | Failed | Stopping | Stopped
Required: Yes
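As an illustration of how these fields surface in client code, the following sketch uses the AWS SDK for Python (boto3). The list_training_jobs call and the TrainingJobSummaries response key are part of the published API; the status filter, the result limit, and the assumption that credentials and region come from the environment are illustrative choices, not requirements of this data type.

import re
import boto3

sagemaker = boto3.client("sagemaker")  # assumes credentials and region are configured externally

# Each element of TrainingJobSummaries carries the fields documented above.
response = sagemaker.list_training_jobs(StatusEquals="Completed", MaxResults=10)

arn_pattern = r"arn:aws[a-z\-]*:sagemaker:[a-z0-9\-]*:[0-9]{12}:training-job/.*"
for summary in response["TrainingJobSummaries"]:
    # TrainingEndTime is present only for terminal statuses (Completed, Failed, Stopped).
    print(summary["TrainingJobName"],
          summary["TrainingJobStatus"],
          summary["CreationTime"],
          summary.get("TrainingEndTime"))
    assert re.fullmatch(arn_pattern, summary["TrainingJobArn"])  # matches the documented ARN pattern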
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
| https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_TrainingJobSummary.html | 2020-07-02T20:16:42 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.aws.amazon.com |
BMC Helix Remedyforce 20.19.02
Consult the following table for a list of notices and information about updates to BMC Helix Remedyforce.
To view information about the self and auto upgrade push dates for the latest release, see Release schedules .
Tip
Ready-made PDFs are available on the PDFs and videos page.
If you are using BMC Remedyforce permission sets to manage permissions for your users, you only have to enable new features. However, if you are using BMC Remedyforce profiles or custom permission sets to manage permissions for your users, you might also have to manually configure the updated profile-level permissions in a release. For information about configuring profile-level permissions and the conditions in which you must manually configure these permissions, see Configuring profile-level permissions for new features.
If you are upgrading from a few releases prior to the latest release, you must enable new features and configure the updated profile-level permissions in each interim release. BMC recommends that you first enable new features and update profile-level permissions in the release that immediately follows your current release. You can then enable features in each subsequent release until the latest release. For example, if you are upgrading from version 20.14.01 to 20.15.01 (Winter 15), you must first enable new features and update profile-level permissions in version 20.14.02, and then enable the new features and update profile-level permissions in version 20.15.01.
To confirm whether you have performed the post-upgrade procedures for previous releases, you can review the following table and verify the set of procedures for each release to which you have previously upgraded. The table links to topics that list the updated profile-level permissions and provides information about enabling new features in each release of BMC Remedyforce.
Note
After the automatic upgrade to the latest version, Salesforce sets the Running User of the BMC Remedyforce dashboards to BMC Remedyforce. Because of this issue, an error message is displayed when users access these dashboards. To enable users to view the dashboards, you must configure the appropriate user as the Running User after the automatic upgrade. For more information, see Configuring the Running User of dashboards in BMC Remedyforce.
Last updated: June 19, 2020
Update summary: Added dates for BMC Helix Remedyforce Winter 20 (20.20.01) Patch 3 sandbox and production self and automatic upgrade. You can preview the Patch 3 content by clicking here and use this information to plan your testing.
Note: Winter 20 Patch 3 was supposed to be available on AppExchange yesterday, June 18. Due to an unforeseen outage, it was not available for some duration. The issue has been addressed and the patch is available now.
We apologize for any inconvenience caused.
Tip
To receive an email message when this page is updated or when users post new comments, set a watch on this page. Before you set a watch, ensure that you have logged on to docs.bmc.com with your BMC Support ID.
Product documentation
To access documentation for the Winter 20 release of BMC Helix Remedyforce, click the following links:
Documentation Home | Release notes and notices | Upgrading | Upgrade FAQs
BMC Helix Remedyforce Hybrid Upgrade/Patch Schedule
- Remedyforce customers should refer to the Salesforce Trust website for any scheduled platform maintenance dates that may conflict with a Self Upgrade plan.
- Remedyforce Self Upgrading will upgrade and patch your ORG to the latest version available on the date that you Self Upgrade. If a patch is released after your Self Upgrade date, your ORG will be patched to the latest version on the Automatic Patch dates unless you again Self Upgrade to the latest patch release.
- Remedyforce Automatic Upgrade dates only apply to ORGs running the latest major patched version of Remedyforce.
BMC Helix Remedyforce Winter '20 Patch 3 - Sandbox and Production Self and Automatic Upgrade
BMC Helix Remedyforce Discovery (12.9 Patch 3) - Sandbox and Production Automatic Upgrade
| https://docs.bmc.com/docs/BMCHelixRemedyforce/201902/en/home-868129241.html | 2020-07-02T20:08:31 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
Querying Geographic Data with Spatial Views
Spatial views operate in a similar way to incremental MapReduce views. Request handling, load balancing, and partitioning work the same way as for regular views, and the same staleness parameters are supported.
For more information on Spatial views, see Querying data with views.
| https://docs.couchbase.com/server/5.0/architecture/querying-geo-data-spatial-views.html | 2020-07-02T19:35:19 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.couchbase.com |
Built In Types: Classname
For the most part, we deal with class types directly via their names. For example:
class Employee { ... }

$emp = new Employee( ... );
However, in some applications, it is useful to be able to abstract a class's name rather than to hard-code it. Consider the following:
class Employee { ... }

function f(classname<Employee> $clsname): void {
  $w = new $clsname(); // allocate memory for an object whose type is passed in
  ...
}
For this to work, we need a type that can represent a class name, and that is the type classname<...>.
Consider the following:
class C1 { ... }

class C2 {
  public static classname<C1> $p1 = C1::class;
  public static function f(?classname<C1> $p): classname<C1> { ... }
  public static vec<classname<C1>> $p2 = vec[C1::class];
}
Here, class
C2 has three public members. The first is a property having a classname type, which is initialized using a special form of
the scope-resolution operator,
:: ending in
class. The second is a function that
takes a nullable classname-typed argument, and returns a value of that classname type. The third is a property whose type is a vec of one
classname<C1>.
The value of an expression of a classname type can be converted implicitly or explicitly to type string.
| https://docs.hhvm.com/hack/built-in-types/classname | 2020-07-02T19:45:03 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.hhvm.com |
Academy Awards: Open Source Sci-Tech
Academy Awards, actors, writers, music, directors, producers and best picture right? But there is a lot behind the scenes with VC++ being the language of open source for the Sci-Tech Awards, at least the Sci-Tech Awards that are open source. And yes I know, these posts have nothing to do with Java, in fact the three that I talk about below use C++. So no JTEAM discussion in this blog, but I wanted to call out to some of the totally cool code solutions.
Note that Field3D and OpenVDB are open source, but require Houdini or similar animation tooling. Bullet Physics appears to be more of a stand-alone tool that works with Autodesk types of apps.
Note that these were last years winners, the 2016 awards have not been offered. However, if your source code is used in the movie industry, you may be able to get a nomination for 2017. Of course the 2016 awards are still being considered.
Bullet Physics
Wow, this library is just posted on GitHub like any other software build, with no mention of receiving the award. Normal issues, and Readme.md. No mention about winning the Academy Awards Sci-Tech nomination. If that happened to me (very unlikely) my wife would faint, and my mother-in-law might actually like me finally. Good going Erwin Coumans! And all of the contributors to the Bullet Physics site! No wonder Erwin looks so happy in this picture!
Field3D
Sony has won an Academy Award for Field3D, which is a way to write the specialized formats called voxel data to file. This is data that is used to create the 3D effects you may or may not like. I see this as a useful tool in designing recording systems for virtual reality or augmented reality like the Hololens. Information can be found here: Link, and the Field3D source code can be found here: link. The library is best used with the Houdini design tooling, and has been tested using CentOS and MacOS. If you are investigating real world uses of Python, the MacOS example is a great way to go.
Using Azure with the CentOS, you have a number of options, and Azure does support CentOS.
OpenVDB
Best to just quote the OpenVDB website:
“Open V.”
The site does include a series of PowerPoints that discuss how to use the OpenVDB: link.
What about the other Sci-Tech awards?
Texas Instruments received an award for the DLP technology that made digital projection possible, again you may like the process or not, but good work Texas Instruments!
For the other awards check out: Sci-Tech Awards.
See you at the movies! | https://docs.microsoft.com/en-us/archive/blogs/devschool/academy-awards-open-source-sci-tech | 2020-07-02T20:11:25 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
Monitor and Troubleshoot Moogsoft Enterprise
The following topics describe the available health and performance indicators included with the Moogsoft Enterprise system. They also provide some guidance on how to monitor your system and how to troubleshoot performance problems.
Note
For the locations of specific installation and log files, see Configure Logging.
The following topics provide troubleshooting advice and guidance:
| https://docs.moogsoft.com/Enterprise.8.0.0/monitor-and-troubleshoot-moogsoft-enterprise.html | 2020-07-02T17:49:29 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.moogsoft.com |
My adventure creating a Media Center add-in, part 2
As you can probably tell by the timestamp from the previous entry in my running journal about learning how to create a Media Center add-in, I haven't had a ton of time in the past few weeks to devote to making forward progress on this project. When I left off last time, I had gotten the Media Center SDK installed and was about to start exploring it. There are a lot of different ways to go about learning new concepts, and the way I've found that I learn things most effectively is by getting hands-on. I learn new coding concepts best by reverse engineering working real-world code, tweaking it to do new things, then using that knowledge as a basis to start writing something new. So that is what I'm going to start out trying to do.
I decided to start by exploring the contents of the SDK (installed by default to C:\Program Files\Microsoft\Microsoft Windows XP Media Center SDK). I quickly notice Microsoft Windows XP Media Center SDK.chm in the root of this folder so I decide to browse through this first. The high-level introduction to the SDK in the CHM file indicates that there are several options for developers who want to extend Media Center functionality. It appears the major branching point for the type of development is to choose to write a Hosted HTML Application or an Add-In. I'm not a huge fan of HTM/HTA/ActiveX development and I want to exercise some of my coding skills that I haven't gotten a chance to use in a few months, so I think the best choice for what I want to try to learn how to do first is going to be an add-in.
I notice that there is a Sample Addins directory in the SDK folder, and there is a topic in the CHM that lists each of the sample add-ins and what concepts they demonstrate. This looks like it will be really cool and useful for me because these are real-world applications and not simple "hello world" apps. I think I'll start by picking apart these samples one after the other. The Sample Addins directory has the source code and it also has Visual Studio .NET 2002 project files. I only have VS 2003 installed, but that should theoretically be fine because it will prompt me to upgrade the project files when I try to open them.
Since I'm really tired and have to get up early tomorrow I think I'm going to have to make this my stopping point for this time around. Next time I'm going to look at the source code for the sample add-ins and see if I can get them building and plugged into my home Media Center machine to make sure I understand the end-to-end deployment scenario.
Eventually I will have to decide what kind of add-in I want to write when I venture out on my own. I've been trying to brainstorm a bit and read about what other folks are doing. I found a couple of cool sounding projects in my reading so far:
- From Charlie Owen's blog - an add-in to schedule TV recording via an RSS feed. Since Charlie sits across the hall and a couple of doors down from me and I finally got a chance to meet him a few days ago, I should probably stop by and chat with him about this :-)
- On MSDN - an add-in to jump to specific times while playing back media. The author of this one looks to have done a good job of describing the entire end-to-end development and deployment/testing process, and there is sample code here. So this looks like a good place to go for more complex examples after I work through the simpler add-ins that are in the SDK.
| https://docs.microsoft.com/en-us/archive/blogs/astebner/my-adventure-creating-a-media-center-add-in-part-2 | 2020-07-02T20:31:45 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
This article provides a reference for writing a StreamBase Client API Listener
configuration file where the HOCON type is
com.tibco.ep.streambase.configuration.sbclientapilistener.
The StreamBase client API listener configuration defines port numbers and secure communication indicators. It is separate from the base engine configuration and can therefore be managed separately without having to recreate an application archive and redeploy the application whenever, for example, a port number changes.

A configuration file of this type begins with the standard HOCON header lines, for example:

name = "myapilistenersettings"
version = "1.0.0"
type = "com.tibco.ep.streambase.configuration.sbclientapilistener"

The properties described below are then nested under a configuration element:

configuration = {
    ClientAPIListener = {
        ...
    }
}
Below shows the configuration's HOCON properties, usage, and syntax example, where applicable.
- ClientAPIListener
Root object for StreamBase client API listener:
associatedWithEngines = [ "javaengine", "otherengine[0-9]" ]
- apiListenerAddress
Listener address configuration for the StreamBase client API. This object is optional.
- portNumber
Specifies the TCP port on which the current EventFlow engine is to listen for client connections. This property is optional and has no default value. The port range is 1025 – 65535, inclusive. A zero value directs the EventFlow engine to find a random, unused port to listen on.
Starting with TIBCO Streaming 10.6.0, the client connection port for EventFlow engines is determined at engine startup to use a random, unused port. This means the default value for portNumber is effectively zero.
TIBCO recommends using the cluster- and node-aware epadmin administration tool, which allows administrators to avoid knowing or using an EventFlow engine's listening port. However, if your project has architectural or legacy reasons to specify a particular port, use this ClientAPIListener > apiListenerAddress > portNumber property in a configuration file in the Studio project's
/src/main/configurations folder at node installation time. You can also set a port number in a Studio run configuration, potentially overriding a port defined in a configuration file.
For a running node, you can upload but cannot activate a configuration that specifies a portNumber. The engine that has affinity with the configuration must be stopped to activate such a configuration.
For example:
portNumber = 10020
Note that the Client API listener port for LiveView fragments is determined with a different algorithm, as described on this page.
Note
In previous releases, the default value of 10000 for portNumber was assumed. Most legacy administration tools such as sbc and sbadmin still presume that default. However, starting with release 10.6.0, there is no longer a default value for the StreamBase client API listener port.
- authenticationRealmName
Authentication realm associated with this listener, indicating that user authentication is to be performed for requests handled by this listener. This property is optional and has no default value.
For example:
authenticationRealmName = "ldaprealm"
- pagePool
Connection page pool configuration. This object is optional, with defaults described below.
Note that the product of pageSize * maxClientPages must be a number smaller than or equal to 2^31 - 1 (2147483647).
- pageSize
Use this property to determine the initial size for output buffers; pageSize is also used to calculate the maximum size a client output queue can grow to before the client is disconnected. Refer to the maxClientPages property for related information. The pageSize property is optional, its default value is 4096 bytes, and its maximum value is 16384.
The pageSize value specifies the memory allocation granularity and should stay small, on the order of 4K to 8K. To provide additional queue space for your connecting clients, increase only the maxClientPages value.
You can use HOCON power-of-ten or power-of-two suffixes like KB, MB, K, or M as an abbreviation.
For example:
pageSize = 4K
- maxPooledBuffers
Determines how many buffers (per output stream) to maintain in a buffer cache. To disable the cache, set the value to -1. This parameter does not determine when and whether slow clients are disconnected.
You can use HOCON power-of-ten or power-of-two suffixes like KB, MB, K, or M as an abbreviation.
This property is optional and its default value is 1024.
For example:
maxPooledBuffers = 2048
- slowDequeueClientWaitMilliseconds
Determines the behavior of slow dequeuing clients. The server will either disconnect slow clients (the default) or BLOCK the server to wait for slow clients to catch up. A value of -1 causes clients to be disconnected. A value greater than -1 causes the server to sleep for the given amount of time in milliseconds when it detects that a client is running behind. The server continues sleeping until there is available dequeuing space for the client.
This property is optional and its default value is -1.
For example:
slowDequeueClientWaitMilliseconds = 100
- maxClientPages
This property controls the maximum number of pages that a dequeuing client connection can allocate. Depending on the value of
slowDequeueClientWaitMilliseconds, the engine will either disconnect the slow client or BLOCK. This property is used to protect the StreamBase engine from slow or hung dequeuing clients. With the default page size of 4096 bytes, the default maxClientPages value of 2048 provides 8 megabytes.
To allow all dequeuing clients to allocate unlimited memory in the StreamBase engine, set the value to 0. Note that the number of pages that a client allocates will change over time. A client that is consuming tuples as quickly as the engine produces them will only use 1 or 2 pages. The max can be reached with a slow or hung client or if there is a large spike of output data. This property is optional and its default value is 4096.
You can use HOCON power-of-ten or power-of-two suffixes like kB, MB, K, or M as an abbreviation.
For example:
maxClientPages = 4K
- clientHeartbeatIntervalMilliseconds
Int. The heartbeat interval is the number of milliseconds between heartbeat packets sent to clients. Clients can be configured to require a heartbeat packet from the server at a minimum interval. This is used primarily for network segmentation detection.
Setting this option to zero disables heartbeats from the server. Clients connected to such a server will not have heartbeat protection, regardless of their locally configured minimum heartbeat interval.
This property is optional and its default value is 10000.
For example:
clientHeartbeatIntervalMilliseconds = 60000
- connectionBacklog
Int. Number of backlogged connections. Servers with many clients may want to increase the maximum number of backlogged connections to the server. For further details look up the manual page for the system call,
listen. This property is optional and its default value is 10.
For example:
connectionBacklog = 20
- maxPersistentConnections
Int. Maximum number of persistent connections. Each persistent connection uses up server resources. To protect the server from errant client connections a user can specify a maximum number of persistent connections. Any attempted client connections over the limit will be disconnected. This property is optional and its default value is -1, meaning no limit.
For example:
maxPersistentConnections = 10
- idleEnqueueClientTimeoutMilliseconds
Int. Settings for disconnecting idle clients. An idle enqueue client is a client that has enqueued at least one tuple and has been idle for idleEnqueueClientTimeoutMilliseconds.
Clients that have enqueued and subscribed are subject to both idleEnqueueClientTimeoutMilliseconds and idleDequeueClientTimeoutMilliseconds. The server checks clients every idleClientCheckIntervalMilliseconds. The actual point that a client is disconnected will be approximate modulo idleClientCheckIntervalMilliseconds.
Values are in milliseconds. Values greater than zero enable this feature.
This property is optional and its default value is –1, which turns off checking.
For example:
idleEnqueueClientTimeoutMilliseconds = 10000
- idleDequeueClientTimeoutMilliseconds
Int. Settings for disconnecting idle clients. An idle dequeue client is a client that has subscribed to at least one stream at any point and has been idle for idleDequeueClientTimeoutMilliseconds.
Clients that have enqueued and subscribed are subject to both settings. The server checks clients every idleClientCheckIntervalMilliseconds. The actual point at which a client is disconnected is approximate, taking into account the idleClientCheckIntervalMilliseconds period.
Values are in milliseconds. Values greater than zero enable this feature. This property is optional and its default value is –1, which turns off checking.
For example:
idleDequeueClientTimeoutMilliseconds = 5000
- idleClientCheckIntervalMilliseconds
How often the server check should for idle clients, in milliseconds. This property is optional and its default value is 60000.
For example:
idleClientCheckIntervalMilliseconds = 120000
- secureCommunicationProfileName
Name of a secure communication server profile to use when configuring secure communication for a listener. This property is optional and has no default value. If not present, the listener will not use secure connections with its clients.
For example:
secureCommunicationProfileName = "tlsprofile"
The following is an example of the
sbclientapilistener
type.
name = "myapilistenersettings" version = "1.0.0" type = "com.tibco.ep.streambase.configuration.sbclientapilistener" configuration = { ClientAPIListener = { associatedWithEngines = [ "javaengine", "otherengine[0-9]" ] apiListenerAddress = { portNumber = 10000 authenticationRealmName = "ldaprealm" } pagePool = { pageSize = 6K maxPooledBuffers = 2048 slowDequeueClientWaitMilliseconds = 100 maxClientPages = 4K } connectionBacklog = 20 maxPersistentConnections = 10 clientHeartbeatIntervalMilliseconds = 60000 idleEnqueueClientTimeoutMilliseconds = 10000 idleDequeueClientTimeoutMilliseconds = 5000 idleClientCheckIntervalMilliseconds = 120000 } } | https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/hocon/hocon-sb-ClientAPIListener.html | 2020-07-02T18:38:33 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.streambase.com |
Creating and managing form templates
To help ensure accuracy and completeness, you can use a form template to create records. Templates help you configure commonly used workflows, ensure consistency in the way information is captured, increase productivity and efficiency, and reduce errors. By using templates, you can pre-populate fields on a form, which reduces the need to enter commonly used data. Templates are useful if your data must use a standard format. Track-It! provides an ability to define and apply templates for the ticket and assignment records. You must have system administrator permission to create and manage form templates.
The following topics are provided:
Video
The following video (15:01) presentation provides information about ticket templates and Assignment status progression in Track-It!
Key considerations
- You can create templates for ticket and assignment records.
- Although you can edit a template, you cannot change the record type for which it was created.
For example, you can edit a template that is created for tickets, but you cannot change the template and use it to create assignments.
- You can create templates that can be used by specific groups.
For example, you have a group whose expertise is resolving HR tickets. You can create a template that can be specifically used by technicians belonging to this group.
- You can assign form templates to the requestors, which they can use to create tickets.
- You can save the customized form as a template. For more information, see Customizing forms for groups.
- When using the template to create a record, only the fields that are added to the template as default values are populated. The users must enter values manually for the other fields on the record form.
Out-of-the-box templates
Track-It! provides the following out-of-the-box templates:
Before you begin
- Based on your requirements, use form customization to add the preferred fields to the default Ticket and Assignment forms. For more information, see Customizing forms for groups.
- The Select and apply template field must be added and enabled on the default form assigned to the Ticket and Assignment modules.
- For linking Predecessors and Successors to an assignment template, you must first create the required assignment templates.
- Ensure that you have configured the assignment status progression. For more information see Configuring assignment status progression.
Creating a template
- On the header bar, expand the hamburger menu, and select Configuration.
- Select Form Definitions > Form Templates.
- On the Form Templates page, click New.
- In the New Template dialog box, in the Template Information section, perform the following actions:
- From the Template For list, select one of the following record types:
- Ticket
- Assignment.
The dialog box is refreshed after you select a record type and the field options are displayed based on your selection.
- In the Template Name field, enter an appropriate title for the template.
- (Optional) In the Template Description field, enter a short description about the template.
- In the Template Contents section, perform the following actions:
- From the Select Field list, select a field that you want to add to the template.
- In the box next to the field, select or enter a field value.
The field next to the Select Field list changes based on the type of the selected field.
For example, for fields such as Category and Status, you can select a value from the list. For text fields such as Email, you have to enter the information in the box.
- Repeat steps 5a and 5b to add the required fields to your template.
- (Optional) To change the value of a selected field, select the field in the selected fields list and change the value and click Update.
- (Optional) To remove any field, select the field and click Remove.
You can remove multiple fields at a time.
- In the Template Availability section, perform the following actions:
- For the Used By option, choose any one of the following options:
- To make the template available to all users, select Everyone.
- To make the template available to specific groups, select Groups.
In the All Groups list, double-click the groups that you want to add to the Selected Groups list.
- To make the template available to Self Service users, select the Display in Self Service check box.
- (Optional) To make the template unavailable, select the Mark as Inactive check box.
Templates that are marked as inactive cannot be used to create records in Track-It! and Self Service.
- Click Save.
Linking assignment templates to a ticket template
You can link multiple assignment templates to a ticket template. When the ticket template is applied to a record, the linked assignments are created in the same order as the order of the linked assignment templates. When you create a record by using a template that has linked assignments, after you save the record, the linked assignments are created automatically. However, if the linked template does not contain any field, then the linked assignments are not created. If you unlink an assignment template, it does not delete or unlink the existing records that were created before you unlinked the assignment template.
- Create a required ticket template. For more information, see Creating a template.
- Navigate to the Form Templates page (hamburger menu, select Configuration > Form Definitions > Form Templates).
- On the Form Templates page, select the required template and click the edit icon.
- In the Details section, click Link Templates.
- In the Select from Templates dialog box, select a template and click OK.
- To add more templates, repeat steps 4 and 5.
- Click Save.
- (Optional) To unlink a template from the parent template, select a template and click Unlink.
Linking predecessors and successors to an assignment template
You can link multiple predecessors and successors to an assignment template. You must first select a parent ticket template and link the assignment templates to the parent ticket template, before linking the assignments as predecessors or successors. Linking predecessors and successors helps to establish a chronological sequence of events for various assignments linked to a ticket. The predecessor-successor relationship is a tree relationship, not a sequential one, and can have different workflows. Depending upon the workflow, there can be two or more assignments that need to be completed simultaneously. For more information about predecessors and successors, see Linking predecessors and successors to an assignment.
- Create a required ticket template. For more information, see Creating a template.
- Navigate to the Form Templates page (hamburger menu, select Configuration > Form Definitions > Form Templates).
- On the Form Templates page, select the required template and click the edit icon.
- In the Details section, in the Select Template to attach a workflow to list, select an appropriate ticket template.
- To add a predecessor, click the Predecessors tab and then click Link Predecessors.
- In the Select from Templates dialog box, select a template and click OK.
- To add a successor, click the Successors tab and then click Link Successors.
- Repeat step 6.
- To add more predecessors or successors, repeat steps 5 through 8.
- Click Save.
- (Optional) To unlink a template, select a template and click Unlink Successors or Unlink Predecessors.
Changing template display order
After you create templates, you can set an order in which templates are displayed in a module. The templates are displayed in the set order in the Select and apply template list in the Ticket or Assignment form in the Track-It! application and when requestors click the Common Requests button while creating a ticket in Track-It! Self Service. By default, the templates are displayed in alphabetical order.
- Navigate to the Form Templates page (hamburger menu, select Configuration > Form Definitions > Form Templates).
- On the Form Templates page, click the More Actions menu, and select Template Display Order.
- In the Template Display Order dialog box, in the Templates For list, select a record type: Ticket or Assignment.
The dialog box is refreshed and the Templates box displays all the templates for the selected record type.
- In the Templates dialog box, select the required templates and use the Move Up and Move Down arrow buttons to set the display order.
- Click Save.
Copying, editing, or deleting a template
When you copy a template, the resulting template has the same contents as the original, but the Details section is not copied. This feature is useful when you need the same content for an issue but want to link different assignments to it.
When you delete a template, the system removes all references to the linked templates. The existing records created with a template are not deleted, but the template reference is not visible on those records. Also, the linked templates are not deleted.
Navigate to the Form Templates page (hamburger menu > Configuration > Form Definitions > Form Templates).
In the Form Templates page, in the templates list, select the required template and perform one of the following actions:
Related topics
Creating and managing tickets
Creating and managing assignments | https://docs.bmc.com/docs/trackit2018/en/creating-and-managing-form-templates-777225620.html | 2020-07-02T18:50:34 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
dgl.function¶
In DGL, message passing is expressed by two APIs:
send(edges, message_func)for computing the messages along the given edges.
recv(nodes, reduce_func)for collecting the incoming messages, perform aggregation and so on.
Although the two-stage abstraction can cover all the models that are defined in the message passing paradigm, it is inefficient because it requires storing explicit messages. See the DGL blog post for more details and performance results.
Our solution, also explained in the blog post, is to fuse the two stages into one kernel so no explicit messages are generated and stored. To achieve this, we recommend using our built-in message and reduce functions so that DGL can analyze and map them to fused dedicated kernels. Here are some examples (in PyTorch syntax).
import dgl
import dgl.function as fn
import torch as th

g = ...  # create a DGLGraph
g.ndata['h'] = th.randn((g.number_of_nodes(), 10))  # each node has feature size 10
g.edata['w'] = th.randn((g.number_of_edges(), 1))   # each edge has feature size 1

# collect features from source nodes and aggregate them in destination nodes
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))

# multiply source node features with edge weights and aggregate them in destination nodes
g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.max('m', 'h_max'))

# compute edge embedding by multiplying source and destination node embeddings
g.apply_edges(fn.u_mul_v('h', 'h', 'w_new'))
fn.copy_u,
fn.u_mul_e,
fn.u_mul_v are built-in message functions, while
fn.sum
and
fn.max are built-in reduce functions. We use
u,
v and
e to represent
source nodes, destination nodes, and edges among them, respectively. Hence,
copy_u copies the source
node data as the messages,
u_mul_e multiplies source node features with edge features, for example.
To define a unary message function (e.g.
copy_u) specify one input feature name and one output
message name. To define a binary message function (e.g.
u_mul_e) specify
two input feature names and one output message name. During the computation,
the message function will read the data under the given names, perform computation, and return
the output using the output name. For example, the above
fn.u_mul_e('h', 'w', 'm') is
the same as the following user-defined function:
def udf_u_mul_e(edges):
    return {'m' : edges.src['h'] * edges.data['w']}
To define a reduce function, one input message name and one output node feature name
need to be specified. For example, the above
fn.max('m', 'h_max') is the same as the
following user-defined function:
def udf_max(nodes):
    return {'h_max' : th.max(nodes.mailbox['m'], 1)[0]}
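Because the built-in pair and the user-defined pair above compute the same thing, either spelling can be passed to update_all. The following check is a sketch that assumes the graph g and the 'h' and 'w' features from the first example have already been set up:

g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.max('m', 'h_max'))  # built-ins (fused kernel)
h_max_builtin = g.ndata['h_max']

g.update_all(udf_u_mul_e, udf_max)                             # user-defined functions
h_max_udf = g.ndata['h_max']

assert th.allclose(h_max_builtin, h_max_udf)  # both paths produce the same node feature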
Broadcasting is supported for binary message function, which means the tensor arguments
can be automatically expanded to be of equal sizes. The supported broadcasting semantic
is standard and matches NumPy
and PyTorch. If you are not familiar
with broadcasting, see the linked topics to learn more. In the
above example,
fn.u_mul_e will perform broadcasted multiplication automatically because
the node feature
'h' and the edge feature
'w' are of different shapes, but they can be broadcast.
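The shapes involved in that broadcast are easy to inspect directly; this is a sketch assuming the same g, 'h' (shape N by 10), and 'w' (shape E by 1) as in the first example:

# Per edge, the gathered source features have shape (E, 10) and the edge data
# has shape (E, 1); broadcasting expands the weights to (E, 10) before multiplying.
g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.sum('m', 'h_weighted'))
print(g.ndata['h'].shape)           # (number_of_nodes, 10)
print(g.edata['w'].shape)           # (number_of_edges, 1)
print(g.ndata['h_weighted'].shape)  # (number_of_nodes, 10)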
All DGL’s built-in functions support both CPU and GPU and backward computation so they
can be used in any autograd system. Also, built-in functions can be used not only in
update_all
or
apply_edges as shown in the example, but wherever message and reduce functions are
required (e.g.
pull,
push,
send_and_recv).
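For example, the same built-in pair can drive a partial update over just a subset of edges through send_and_recv; this is a sketch in which the edge IDs are arbitrary and g, 'h', and 'w' are assumed from the examples above (argument details can vary slightly between DGL versions):

eids = [0, 2, 3]  # hypothetical edge IDs to process
g.send_and_recv(eids, fn.u_mul_e('h', 'w', 'm'), fn.sum('m', 'h_partial'))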
Here is a cheatsheet of all the DGL built-in functions. | https://docs.dgl.ai/api/python/function.html | 2020-07-02T18:34:48 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.dgl.ai |
Check out Chicago’s SQL Server User Group. Some great, fresh content coming up!
There are a LOT of user groups out there. I personally participate in the local ALM User Groups, the Chicago QAI User Group, a few .NET user groups and Chicago Girls in Tech. Whew! So just in case you had some additional weeknights free, there are a couple more good ones that you might not know about. Recently I ran across some information on the next meeting of the SQL Server user group in Chicago. Check it out!
October SQL Pass user group
Topic: SQL Server Denali: Always On
Date: Thursday, October 6th, 2011 @ 5:30 PM
(You must RSVP by 4:00 PM, Tuesday, October 4th to attend)
Speaker: Dave Paulson, Microsoft
Location: Microsoft Corp, 200 E Randolph Dr, Suite 200, Chicago, IL 60601
Abstract:
SQL Server Denali AlwaysOn Overview Topics:
- High Availability
- Maximizing Resource Utilization
- Performance Enhancement
- Beyond Relational | https://docs.microsoft.com/en-us/archive/blogs/angelab/check-out-chicagos-sql-server-user-group-some-great-fresh-content-coming-up | 2020-07-02T20:40:35 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
Here you will find mainly the technical documentation for contributors. If you would like to contribute to the development of openVALIDATION, then you have come to the right place.
If you would like to use openVALIDATION as a user or integrate the compiler into another project, please start with Ready to go...
We use GitHub issues for both bug reports and feature requests. First, use the GitHub search to check if your issue has been reported already. If you find that someone already reported your problem, feel free to share additional information to help us investigate the bug more quickly.
If you could not find a matching issue and want to open a new ticket, please try to follow the issue template and provide all the necessary information. However, an incomplete ticket that reports a critical bug is still better than no ticket at all.
Please refer to the developer readme for build and test instructions, as well as our coding guidelines.
Pull requests are always welcome! If you have never contributed to an open source project on GitHub before, check out this guide for first-timers. When opening a new pull request, try to stick to the template as much as is reasonable, but don't be afraid to leave out or add information if that would make your PR more concise and readable.
Improving documentation
Improving documentation (code comments, wikis, website) is an excellent way of familiarizing yourself with the project and improving the codebase at the same time!
Fixing lint warnings
We want to get openVALIDATION to be checkstyle clean. In order to do this, open the checkstyle configuration at
build-tools/src/main/resources/google_checks.xml and re-add one of the commented-out lints.
After having fixed all the warnings, submit a pull request.
Fixing bugs
Pick a ticket from the bugtracker and leave a comment letting us know that you are working on a fix. If you have any questions, feel free to discuss it in the ticket. Make sure the fix adheres to the coding guidelines.
Implementing new features
Before integrating a new feature, it's best to discuss it with the core developers first. This can be done in the corresponding GitHub issue in the bugtracker.
Make sure you add tests of your functionality.
All tests should pass, the code should be properly formatted (see coding guidelines) and continuous integration must give its ok on the pull request.
Submit a pull request
All changes to this project must comply with the Apache 2.0 License | https://docs.openvalidation.io/contribution/developer-guide | 2020-07-02T19:27:24 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.openvalidation.io |
Pixel Vision OS includes built-in tools to help you create PV8 games. Each tool is registered to edit a specific Pixel Vision 8 file type. For example, if you open up the Color Tool’s info.json file, you’ll see it is set to edit color files.
Pixel Vision OS includes the following built-in tools:
In addition to the built-in tools, you can also build your own. If a PV8 game is located in a
/Workspace/System/Tools/ folder, Pixel Vision OS will attempt to register it with the file type it is associated with in the
info.json file.
That means you can create or modify any of the built-in tools for your own specific needs. You can even have
System folders on disks and they will load on top of the Workspace drive if you want to run your tools from disks and not install them over the built-in ones.
The resource search path is the path along which StreamBase searches for resource files
that are referenced by a module in the current project. The module search path is
managed by Maven, which looks by default in the
src/main/resources folder of the project, and in any subfolder
thereof.
To reference resources in other EventFlow or LiveView fragment projects in the current workspace, you must add the referenced project as a Maven dependency of the referencing project. For example, to reference modules in EventFlow project Secondary from project Primary, you must add a Maven dependency on Secondary from Primary.
To add the dependency, open the referencing project's pom.xml file and use the POM Editor's Dependency tab.
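The Dependency tab ultimately writes an entry of roughly the following shape into the referencing project's pom.xml. The groupId, artifactId, version, and packaging type below are placeholders standing in for whatever the referenced project (Secondary, in the example above) actually declares:

<dependency>
    <groupId>com.example</groupId>        <!-- placeholder: Secondary's groupId -->
    <artifactId>secondary</artifactId>    <!-- placeholder: Secondary's artifactId -->
    <version>1.0.0</version>              <!-- placeholder: Secondary's version -->
    <type>ep-eventflow-fragment</type>    <!-- assumption: the fragment's Maven packaging type -->
</dependency>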
(title then (titlealts) (optional) then (shortdesc) (optional) then (prolog) (optional) then (refbody) (optional) then (related-links) (optional) then (topic or concept or task or reference) (any number) )
OASIS DITA Language Specification v1.0 -- 09 May 2005 | http://docs.oasis-open.org/dita/v1.0/langspec/reference.html | 2009-07-03T22:10:04 | crawl-002 | crawl-002-025 | [] | docs.oasis-open.org |
Puppet Server: Running From Source
Included in Puppet Enterprise 2016.1.
user=> (help)
Flow Architecture Advanced Use Case
By judiciously combining the architectural options and product features available in Mule, you can, with a minimum of development effort, design and create powerful, robust applications that precisely fit your needs.
The application pictured below leverages multiple flows, two types of queues, clustering, and load balancing to create a Mule application that facilitates all of the following:
high throughput
high availability
high reliability (transactionality)
How It Works
The application builds upon a request-response exchange pattern, in which Web clients submit messages (requests), then wait for responses from the application.
In this particular topology, a Java Message Server (JMS) sits between the clients and the application, receiving messages as they are submitted and managing them using Active MQ, a messaging queue that performs the following:
keeps track of every submission
forwards messages to the application in the order they were submitted
makes sure that the application provides a response to every message within a specified timeout period
forwards each response the appropriate sender
Since the JMS sits outside the application, it is relatively slow compared to the application, which runs on multiple threads within a cluster of Mule nodes. Also, it does not have direct visibility into the success or failure of the individual message processing events within our application. Nevertheless, the JMS provides a form of “high-level transactionality” by ensuring that a response is received for each message.
Within the application, an HTTP endpoint set to the request-response exchange pattern serves as both the application’s message source (i.e. inbound endpoint) and its outbound endpoint, dispatching a response to each sender by way of the JMS.
The message processors within the application are segregated into four flows. Each asynchronous flow runs on a separate thread and begins and ends with a VM endpoint. These VM endpoints share memory through a VM queue. If any of the asynchronous flows fails to execute successfully, the VM queue reports this, thus ensuring a type of flow-level transactionality known as high reliability.
The application has been configured through the Mule Management Console to be deployed and run on a four-node cluster. If any of the nodes go down, one of the others picks up the processing load, thus ensuring high availability. As the following diagram illustrates, even if none of the nodes go down, the asynchronous flows can be processed on the next available node. This type of automatic load balancing promotes high throughput.. | https://docs.mulesoft.com/mule-user-guide/v/3.5/flow-architecture-advanced-use-case | 2017-12-11T03:56:50 | CC-MAIN-2017-51 | 1512948512121.15 | [array(['./_images/load_balancing.png', 'load_balancing'], dtype=object)] | docs.mulesoft.com |
MaintenanceWindowIdentity
Information about the Maintenance Window.
Contents
- Cutoff
The number of hours before the end of the Maintenance Window that Systems Manager stops scheduling new tasks for execution.
Type: Integer
Valid Range: Minimum value of 0. Maximum value of 23.
Required: No
- Description
A description of the Maintenance Window.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Required: No
- Duration
The duration of the Maintenance Window in hours.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 24.
Required: No
- Enabled
Whether the Maintenance Window is enabled.
Type: Boolean
Required: No
- Name
The name of the Maintenance Window.
Type: String
Length Constraints: Minimum length of 3. Maximum length of 128.
Pattern:
^[a-zA-Z0-9_\-.]{3,128}$
Required: No
- WindowId
The ID of the Maintenance Window.
Type: String
Length Constraints: Fixed length of 20.
Pattern:
^mw-[0-9a-f]{17}$
Required: No
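As an illustration, these identity fields are what the AWS SDK for Python (boto3) returns from describe_maintenance_windows; the client call and the WindowIdentities response key are part of the published API, while the printing below is just an example of reading the optional fields safely.

import boto3

ssm = boto3.client("ssm")  # assumes credentials and region are configured externally

response = ssm.describe_maintenance_windows()
for window in response["WindowIdentities"]:
    # WindowId follows the documented pattern ^mw-[0-9a-f]{17}$
    print(window.get("WindowId"),
          window.get("Name"),
          window.get("Enabled"),
          window.get("Duration"),
          window.get("Cutoff"))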
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_MaintenanceWindowIdentity.html | 2017-12-11T04:15:44 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.aws.amazon.com |
Mule Server Notifications
Mule ESB provides an internal notification mechanism that you can use to access changes that occur on the Mule Server, such as a flow component being added, a Mule Model being initialized, or Mule being started. You can set up your agents or flow components to react to these notifications.
Configuring Notifications
Message notifications provide a snapshot of all information sent into and out of the Mule Server. These notifications are fired whenever a message is received or sent. These additional notifications have some impact on performance, so they are disabled by default. To enable message notifications, you set the type of messages you want to enable using the
<notifications> element in your Mule configuration file. You also register the notification listeners and associate interfaces with specific notifications.
For example, first you create beans for the notification listeners, specifying the class of the type of notification you want to receive:
Next, you specify the notifications you want to receive using the
<notification> element, and then register the listeners using the
<notification-listener> element:
When you specify the COMPONENT-MESSAGE notification, a notification is sent before and after a component is invoked. When you set ENDPOINT-MESSAGE, a notification is sent whenever a message is sent, dispatched, or received on an endpoint. Because the listeners implement the interface for the type of notification they want to receive (for example, the
ComponentMessageNotificationLogger class would implement
org.mule.api.context.notification.ComponentMessageNotificationListener), the listeners receive the correct notifications.
For a list of notification types, see Notifications Configuration Reference. For a list of notification listener interfaces, see Notification Interfaces below.
Specifying a Different Interface
If you want to change the interface that is associated with a notification, you specify the new interface with the
interface-class and
interface attributes:
Configuring a Custom Notification
If you create a custom notification, you also specify the
event-class attribute:
Disabling Notifications
If you want to block a specific interface from receiving a notification, you specify it with the
<disable-notification> element. You can specify the notification type (event), event class, interface, and/or interface class to block.
Using Subscriptions
When registering a listener, you can specify that it only receive notifications from a specific component using the
subscription attribute. For example, to specify that the listener only receive notifications from a flow component called "MyService1", you would configure the listener as follows:
You can also register listeners and filter the subscriptions from your Java code:
To register interest in notifications from all flow components with "Service" in the name, you would use a wildcard string as follows:
For more information, see Registering Listeners Programmatically below.
Firing Custom Notifications
Custom notifications can be fired by objects in Mule to notify custom listeners. For example, a discovery agent might fire a Client Found notification when a client connects.
You fire a custom notification as follows:
Any objects implementing
CustomNotificationListener will receive this notification. It’s a good idea to extend
CustomNotification and define actions for your custom notification type. For example:
Note that non-system objects in Mule can only fire custom notifications through the manager. Attempting to fire other notifications such as
ModelNotification will cause an
UnsupportedOperationException.
Notification Interfaces
The following table describes the Mule server notifications and the interfaces in the org.mule.api.context.notification package that an object can implement to become a listener for that notification. All listeners extend the ServerNotificationListener interface.
ServerNotificationListener interface.
The listener interfaces all have a single method:
where T is a notification class (listener class without the 'Listener' at the end).
Depending on the listener implemented, only certain notifications will be received. For example, if the object implements
ManagerNotificationListener, only notifications of type
ManagerNotification will be received. Objects can implement more than one listener to receive more types of notifications.
Registering Listeners Programmatically
You can register listeners on the Mule Context as follows:
Registering Listeners Dynamically
By default, you cannot register listeners in the Mule context after Mule has started. Therefore, you would register your listeners in your code before starting Mule. For example:
To change this behavior so that you can add listeners dynamically at run time, you can set the
dynamic attribute on the
<notifications> element. If you just want to enable dynamic notifications for a specific connector, you can set the
dynamicNotification attribute on the connector.
Notification Action Codes
Each notification has an action code that determines the notification type. The action code can be queried to determine its type. For example:
MyObject.java
For a list of the action codes available with each notification type, see the Javadocs for the org.mule.api.context.notification package and click on the class of the notification type you want.
Transactional
Enterprise Edition
Mule applies the concept of transactions to operations in an application for which the result cannot remain indeterminate. In other words, where a series of steps must succeed or fail together as one unit, Mule uses a transaction to demarcate that unit. Transactions are supported on the following transports:
JMS
JDBC
VM
However, there may be situations in which a Mule flow begins with a non-transactional inbound endpoint.
Configure IIS for an HTTP Receive Location
The HTTP receive location uses an application within Internet Information Services (IIS). This topic lists the steps to enable the HTTP receive location within IIS.
Depending on your operating system, the steps to configure the IIS application may vary. Use these steps as a guide, as the user interface may be different on your OS.
32-bit vs 64-bit
An HTTP receive location uses the BTSHTTPReceive.dll. There is a 32-bit and a 64-bit version of the DLL. You choose which version you want to use. 64-bit processes have more available memory, so if you process larger messages, then the 64-bit version may be best.
32-bit install location: Program Files (x86)\Microsoft BizTalk Server\HttpReceive. 64-bit install location: Program Files (x86)\Microsoft BizTalk Server\HttpReceive64
To run the 64-bit version of the HTTP receive adapter in 64-bit native mode, open a command prompt, and execute the following scripts:
Type:
cscript %SystemDrive%\inetpub\AdminScripts\adsutil.vbs set w3svc/AppPools/Enable32bitAppOnWin64 0
Type:
C:\WINDOWS\Microsoft.NET\Framework64\vX.X.XXXXX>aspnet_regiis.exe -i
Note
Any IIS configuration that leads to SOAP and HTTP sharing the same process is not valid. You can have only one isolated receiver per process.
Configure the IIS application
Open Internet Information Services (open Server Manager, select Tools, and select Internet Information Services Manager).
In IIS, select your server name. In the Features View, double-click Handler Mappings. In the Actions pane, select Add Script Map.
Note
When you configure the script mapping at the web server-level, the mapping applies to all web sites. If you want to restrict the mapping to a specific Web site or virtual folder, select that web site or folder, and then add the script map.
In Add Script Map, select Request path, and type
BtsHttpReceive.dll.
In Executable, select the ellipsis (…), and browse to \Program Files\Microsoft BizTalk Server\HttpReceive. Select BtsHttpReceive.dll, and then select Open.
In Name, enter
BizTalk HTTP Receive, and then select Request Restrictions. In this window:
In Verbs, select One of the following verbs, and enter POST.
In Access, select Script, and then select OK.
When prompted to allow the ISAPI extension, select Yes.
Create a new application pool (right-click Application Pools, select Add application pool). Name your application pool (such as
BTSHTTPReceive), select NET Framework v4.0.30319, and select OK.
Note
The .NET version number may vary depending on the version of .NET Framework installed on the computer.
The new application pool is listed.
Select your new application pool, and open the Advanced Settings (Actions pane). In this window:
- Enable 32-Bit Application: Set to True if you chose the 32-bit BtsHttpReceive.dll
- Process Model section, Identity: Select the ellipsis (…), select Custom account, and then Set it to an account that is a member of the BizTalk Isolated Host Users and IIS_WPG groups. Select OK.
Add a new application to the web site (right-click the Default Web Site, select Add Application). In this window:
- Alias : Enter an alias that you associate with the application (such as
BTS HTTP Receive, and then Select.
- Select the new application pool you just created, and then select OK.
- Physical path: Select the ellipsis (…), and browse to \Program Files\Microsoft BizTalk Server\HttpReceive.
Test Settings to verify there are no errors in the Test Connection dialog box. Close, and then select OK.
Tip
If Test Settings returns a warning, the identity of the application pool may be missing permissions to a folder, or access to a group. As a troubleshooting step, select Connect As, enter the User name and Password for a user account that is a member of the Administrators group.
The new application appears is listed under Default Web Sites.
See Also
How to Configure an HTTP Receive Location | https://docs.microsoft.com/en-us/biztalk/core/how-to-configure-iis-for-an-http-receive-location?redirectedfrom=MSDN | 2017-12-11T04:12:02 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.microsoft.com |
Chs. ERC 1-9; Wisconsin Employment Peace Act
Chs. ERC 10-19; Municipal Employment Relations Act
Chs. ERC 20-29; State Employment Labor Relations Act
Chs. ERC 30-39; Municipal Sector Interest Dispute Resolution Processes
Chs. ERC 40-49; WERC Roster of Ad Hoc Arbitrators and Fact Finders
Chs. ERC 50-59; Labor-Management Cooperation Services
Chs. ERC 60-69; UW Faculty and Academic Staff
Chs. ERC 70-89; Certification Elections for Certain Represented Municipal, School District, and State Employees
Chs. ERC 90- ; Civil Service Appeals, | http://docs-preview.legis.wisconsin.gov/code/admin_code/erc | 2018-01-16T13:08:54 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs-preview.legis.wisconsin.gov |
JWST's Fine Guidance Sensor (FGS) provides data for science attitude determination, fine pointing, and attitude stabilization using guide stars in the JWST focal plane. Absolute pointing and image motion performance is predicted on the JWST Pointing Performance page.
Introduction 4 includes updates through fall of 2017. See the JWST Guide Stars article for more information.
The flight software functions and corresponding operational modes of the FGS associated with the identification, acquisition, and tracking of a guide star are briefly described below. 36 subarrays of 2048 × 64 pixels with an effective integration time of 0.320. The ACS then enters its “Fine Guidance Mode” to control the observatory pointing in a closed loop using the FGS position centroids. Once the guide star is within ~0.06" of its desired location, the FGS can transition to "Fine Guide" mode.
In "Track" mode the FGS will adjust the position of the 32 x 32 pixel subarray. In "Fine Guide" mode, the subarray is fixed and cannot be changed without transitioning through "STANDBY"1, which exits fine guide control, and starting again with "Track" mode.
Once in fine guide control, the absolute pointing accuracy of JWST with respect to the celestial coordinate system will be determined by the astrometric accuracy of the Guide Star Catalog and the calibration of the JWST focal plane model.
Acknowledgements
The Canadian Space Agency (CSA) has contributed the FGS to the JWST Observatory. Honeywell (formerly COM DEV Space Systems) of Ottawa, Canada, is CSA’s prime contractor for the FGS. | https://jwst-docs.stsci.edu/display/JTI/Fine+Guidance+Sensor%2C+FGS | 2018-01-16T13:32:41 | CC-MAIN-2018-05 | 1516084886436.25 | [] | jwst-docs.stsci.edu |
PE 2.0 » Installing » Upgrading
A newer version is available; see the version menu above for details.
← Installing: Basic Installation — Index — Installing: Uninstalling →
Upgrading Puppet Enterprise
To upgrade from a previous version of Puppet Enterprise, use the same installer tarball as in a basic installation, but don’t run the
puppet-enterprise-installer script. Instead, run
puppet-enterprise-upgrader.
Depending on the version you upgrade from, you may need to take extra steps after running the upgrader. See below for your specific version..
NOTE: PE 1.2.5 cannot be upgraded to this version of PE. Systems running PE 1.2.5 can only upgrade to PE 2.6.1 or higher.
This is because PE 1.2.5 includes some packages that are newer than those in PE 2.0. This is known to block upgrades on SLES systems, and may cause more subtle failures on other platforms. If you run into an upgrade failure, it can be fixed by running the 2.6.1 upgrader..
Checking For Updates
Check here to find out what the latest maintenance release of Puppet Enterprise is. You can run
puppet --version at the command line to see the version of PE you are currently running.
Downloading PE
See the Preparing to Install chapter of this guide for information on downloading PE.
Starting the Upgrader
The upgrader must be run with root privileges:
# ./puppet-enterprise-upgrader
This will start the upgrader in interactive mode. If the puppet master role and the console (previously dashboard) role are installed on different servers, you must upgrade the puppet master first..
Upgrader Options
Like the installer, the upgrade will accept some command-line options:
-h
- Display a brief help message.
-s <ANSWER FILE>
- Save answers to file and quit without installing.
-a <ANSWER FILE>
- Read answers from file and fail if an answer is missing. See the upgrader answers section of the answer file reference for a list of available answers.
-A <ANSWER FILE>
- Read answers from file and prompt for input if an answer is missing. See the upgrader answers section of the answer file reference for a list of available answers.
-D
- Display debugging information.
-l <LOG FILE>
- Log commands and results to file.
-n
- Run in ‘noop’ mode; show commands that would have been run during installation without running them.
Non-interactive upgrades work identically to non-interactive installs, albeit with different answers available.
Configuring the Upgrade
The upgrader will ask you the following questions:
Cloud Provisioner
PE 2 includes a cloud provisioner tool that can be installed on trusted nodes where administrators have shell access. On every node you upgrade, you’ll be asked whether to install the cloud provisioner role.
Vendor Packages
If PE 2.0 needs any packages from your OS’s repositories, it will ask permission to install them.
Puppet Master Options
Removing
mco’s home directory
The
mco user from PE 1.2 gets deleted during the upgrade, and is replaced with the
peadmin user.
If the
mco user had any preference files or documents you need, you should tell the upgrader to preserve the
mco user’s home directory; otherwise, it will be deleted.
Installing Wrapper Modules
In PE 2.0, the
mcollectivepe,
accounts, and
baselines modules were renamed to
pe_mcollective, pe_accounts, and
pe_compliance, respectively. If you have used any of these modules by their previous names, you should install the wrapper modules so your site will continue to work while you switch over.
Console Options
User Name and Password
The console, which replaces Puppet Dashboard, now requires a user name and a password for web access. The upgrader will ask you to choose this name and password.
Final Steps: From an Earlier PE 2.0 Release
No extra steps are needed when upgrading between maintenance releases of PE 2.0.
Final Steps: From PE 1.2
No extra steps are needed when upgrading from PE 1.2.x to PE 2.0.1 or later.
Note that some features may not be available until puppet agent has run once on every node. In normal installations, this means all features will be available within 30 minutes after upgrading all nodes.
Final Steps: From PE 1.1 or 1.0
Important note: Upgrades from some configurations of PE 1.1 and 1.0 aren’t fully supported. To upgrade from PE 1.1 or 1.0, you must have originally installed the puppet master and Puppet Dashboard roles on the same node. Contact Puppet Labs support for help with other configurations on a case-by-case basis, and see issue #10872 for more information.
After running the upgrader on the puppet master/Dashboard (now console) node, you must:
- Stop the
pe-httpdservice
- Create a new database for the inventory service and grant all permissions on it to the console’s MySQL user.
- Manually edit the puppet master’s
puppet.conf,
auth.conf,
site.pp, and
settings.ymlfiles
- Generate and sign certificates for the console, to enable inventory and filebucket viewing.
- Edit
passenger-extra.conf
- Restart the
pe-httpdservice.
You can upgrade agent nodes after upgrading the puppet master and console. After upgrading an agent node, you must:
- Manually edit
puppet.conf.
Stop
pe-httpd
For the duration of these manual steps, Puppet Enterprise’s web server should be stopped.
$ sudo /etc/init.d/pe-httpd stop
Create a New Inventory Database
To support the inventory service, you must manually create a new database for puppet master to store node facts in. To do this, use the
mysql client on the node running the database server. (This will almost always be the same server running the puppet master and console.)
# mysql -uroot -p Enter password: mysql> CREATE DATABASE console_inventory_service; mysql> GRANT ALL PRIVILEGES ON console_inventory_service.* TO '<USER>'@'localhost';
Replace
<USER> with the MySQL user name you gave Dashboard during your original installation.
Edit Puppet Master’s
/etc/puppetlabs/puppet/puppet.conf
To support the inventory service, you must configure Puppet to save facts to a MySQL database.
[master] # ... facts_terminus = inventory_active_record dbadapter = mysql dbname = console_inventory_service dbuser = <CONSOLE/DASHBOARD'S MYSQL USER> dbpassword = <PASSWORD FOR CONSOLE'S MYSQL”.
If you configured the puppet master to not send reports to the Dashboard, you must configure it to report to the console now:
[master] # ... reports = https, store reporturl = https://<CONSOLE HOSTNAME>:<PORT>/reports/upload
Puppet agent on this node also has some new requirements:
[agent] # support filebucket viewing when using compliance features: archive_files = true # if you didn't originally enable pluginsync, enable it now: pluginsync = true
Edit Puppet Master’s
/etc/puppetlabs/puppet/auth.conf
To support the inventory service, you must add the following two stanzas to your puppet master’s
auth.conf file:
# Allow the console to retrieve inventory facts: path /facts auth yes method find, search allow pe-internal
You must add the following lines to site.pp in order to view file contents in the console:
# specify remote filebucket filebucket { 'main': server => '<puppet master's hostname>', path => false, } File { backup => 'main' }
Edit
/etc/puppetlabs/puppet-dashboard/settings.yml
Change the following three settings to point to one of the puppet master’s valid DNS names:
ca_server: '<PUPPET MASTER HOSTNAME>' inventory_server: '<PUPPET MASTER HOSTNAME>' file_bucket_server: '<PUPPET MASTER HOSTNAME>'
Change the following two settings to true:
enable_inventory_service: true use_file_bucket_diffs: true
Ensure that Console Certificates
First, navigate to the console’s installation directory:
$ cd /opt/puppet/share/puppet-dashboard
Next, start a temporary WEBrick puppet master:
$ sudo /opt/puppet/bin/puppet master
Next, stop the temporary puppet master:
$ sudo kill $(cat $(puppet master --configprint pidfile) )
Finally, chown the certificates directory to
puppet-dashboard:
$.
Start
pe-httpd
You can now start PE’s web server again.
$ sudo /etc/init.d/pe-httpd start
Edit
puppet.conf on Each Agent Node
On each agent node you upgrade to PE 2.0, make the following edits to
/etc/puppetlabs/puppet/puppet.conf:
[agent] # support filebucket viewing when using compliance features: archive_files = true # if you didn't originally enable pluginsync, enable it now: pluginsync = true
← Installing: Basic Installation — Index — Installing: Uninstalling → | https://docs.puppet.com/pe/2.0/install_upgrading.html | 2018-01-16T13:15:36 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.puppet.com |
Milestone SHR
see Stable Hybrid Release
this will be a community driven effort to port the gtk Om2007.2 telephony applications over to the fso middleware and package it in a distribution supporting applications in various UI toolkits
Update:
SHR has its own trac at bearstech now. marking it as completed here to block new tickets in this trac
Note: See TracRoadmap for help on using the roadmap. | http://docs.openmoko.org/trac/milestone/SHR | 2018-01-16T13:00:45 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.openmoko.org |
Create a Customer Service case from a security incident Security Incident Response ships with a default field mapping that maps a security incident to a Customer Service (CS) case. You can create a CS case from any security incident, edit the Priority, and also add Optional notes. Before you beginRole required: sn_si.basic and sn_customerservice_agentNote: The Customer Service plugin must be activated to perform this task. Procedure Navigate to Security Incident. Open the security incident that you want to add a CS case to. Click Create Customer Service Case in the top header. The pop-up window is pre-populated with information from the security incident based on your field mapping. You can select a new Priority and add any Optional notes. Note: The Priority field overwrites the default setting. The Optional notes are appended to the incident. Click Submit. A CS case is created and displayed in the Customer Service Cases related list in the security incident. Note: You can click the CS case link to follow up on the case. Related TopicsCustomer service case management | https://docs.servicenow.com/bundle/jakarta-security-management/page/product/security-incident-response/task/create-cs-case-from-si.html | 2018-01-16T13:44:50 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.servicenow.com |
In order to manage virtual machines across the cluster, Ganeti needs to understand the resources present on the nodes, the hardware and software limitations of the nodes, and how much can be allocated safely on each node. Some of these decisions are delegated to IAllocator plugins, for easier site-level customisation.
Similarly, the HTools suite has an internal model that simulates the hardware resource changes in response to Ganeti operations, in order to provide both an iallocator plugin and for balancing the cluster.
While currently the HTools model is much more advanced than Ganeti’s, neither one is flexible enough and both are heavily geared toward a specific Xen model; they fail to work well with (e.g.) KVM or LXC, or with Xen when tmem is enabled. Furthermore, the set of metrics contained in the models is limited to historic requirements and fails to account for (e.g.) heterogeneity in the I/O performance of the nodes.
At this moment, Ganeti itself doesn’t do any static modelling of the cluster resources. It only does some runtime checks:
Basically this model is a pure SoW one, and it works well when there are other instances/LVs on the nodes, as it allows Ganeti to deal with ‘orphan’ resource usage, but on the other hand it has many issues, described below.
Since HTools does an pure in-memory modelling of the cluster changes as it executes the balancing or allocation steps, it had to introduce a static (SoR) cluster model.
The model is constructed based on the received node properties from Ganeti (hence it basically is constructed on what Ganeti can export).
For disk it consists of just the total (tdsk) and the free disk space (fdsk); we don’t directly track the used disk space. On top of this, we compute and warn if the sum of disk sizes used by instance does not match with tdsk - fdsk, but otherwise we do not track this separately.
For memory, the model is more complex and tracks some variables that Ganeti itself doesn’t compute. We start from the total (tmem), free (fmem) and node memory (nmem) as supplied by Ganeti, and additionally we track:
memory that cannot be unaccounted for via the Ganeti model; this is computed at startup as:
tmem - imem - nmem - fmem
and is presumed to remain constant irrespective of any instance moves
tmem, nmem and xmem are presumed constant during the instance moves, whereas the fmem, imem, rmem and amem values are updated according to the executed moves.
The CPU model is different than the disk/memory models, since it’s the only one where:
We therefore track the total number of VCPUs used on the node and the number of physical CPUs, and we cap the vcpu-to-cpu ratio in order to make this somewhat more similar to the other resources which are limited.
There is also a model that deals with dynamic load values in htools. As far as we know, it is not currently used actually with load values, but it is active by default with unitary values for all instances; it currently tracks these metrics:
Even though we do not assign real values to these load values, the fact that we at least sum them means that the algorithm tries to equalise these loads, and especially the network load, which is otherwise not tracked at all. The practical result (due to a combination of these four metrics) is that the number of secondaries will be balanced.
There are unfortunately many limitations to the current model.
The memory model doesn’t work well in case of KVM. For Xen, the memory for the node (i.e. dom0) can be static or dynamic; we don’t support the latter case, but for the former case, the static value is configured in Xen/kernel command line, and can be queried from Xen itself. Therefore, Ganeti can query the hypervisor for the memory used for the node; the same model was adopted for the chroot/KVM/LXC hypervisors, but in these cases there’s no natural value for the memory used by the base OS/kernel, and we currently try to compute a value for the node memory based on current consumption. This, being variable, breaks the assumptions in both Ganeti and HTools.
This problem also shows for the free memory: if the free memory on the node is not constant (Xen with tmem auto-ballooning enabled), or if the node and instance memory are pooled together (Linux-based hypervisors like KVM and LXC), the current value of the free memory is meaningless and cannot be used for instance checks.
A separate issue related to the free memory tracking is that since we don’t track memory use but rather memory availability, an instance that is temporary down changes Ganeti’s understanding of the memory status of the node. This can lead to problems such as:
The behaviour here is wrong; the migration of instance2 to the node in question will succeed or fail depending on whether instance1 is running or not. And for instance1, it can lead to cases where it if crashes, it cannot restart anymore.
Finally, not a problem but rather a missing important feature is support for memory over-subscription: both Xen and KVM support memory ballooning, even automatic memory ballooning, for a while now. The entire memory model is based on a fixed memory size for instances, and if memory ballooning is enabled, it will “break” the HTools algorithm. Even the fact that KVM instances do not use all memory from the start creates problems (although not as high, since it will grow and stabilise in the end).
Because we only track disk space currently, this means if we have a cluster of N otherwise identical nodes but half of them have 10 drives of size X and the other half 2 drives of size 5X, HTools will consider them exactly the same. However, in the case of mechanical drives at least, the I/O performance will differ significantly based on spindle count, and a “fair” load distribution should take this into account (a similar comment can be made about processor/memory/network speed).
Another problem related to the spindle count is the LVM allocation algorithm. Currently, the algorithm always creates (or tries to create) striped volumes, with the stripe count being hard-coded to the ./configure parameter --with-lvm-stripecount. This creates problems like:
Moreover, the allocation currently allocates based on a ‘most free space’ algorithm. This balances the free space usage on disks, but on the other hand it tends to mix rather badly the data and metadata volumes of different instances. For example, it cannot do the following:
Additionally, while Ganeti supports setting the volume separately for data and metadata volumes at instance creation, there are no defaults for this setting.
Similar to the above stripe count problem (which is about not good enough customisation of Ganeti’s behaviour), we have limited pass-through customisation of the various options of our storage backends; while LVM has a system-wide configuration file that can be used to tweak some of its behaviours, for DRBD we don’t use the drbdadmin tool, and instead we call drbdsetup directly, with a fixed/restricted set of options; so for example one cannot tweak the buffer sizes.
Another current problem is that the support for shared storage in HTools is still limited, but this problem is outside of this design document.
A further problem generated by the “current free” model is that during a long operation which affects resource usage (e.g. disk replaces, instance creations) we have to keep the respective objects locked (sometimes even in exclusive mode), since we don’t want any concurrent modifications to the free values.
A classic example of the locking problem is the following:
In the above example, the second IAllocator run will wait for locks for nodes A and B, even though in the end the second instance will be placed on another set of nodes (C and D). This wait shouldn’t be needed, since right after the first IAllocator run has finished, hail knows the status of the cluster after the allocation, and it could answer the question for the second run too; however, Ganeti doesn’t have such visibility into the cluster state and thus it is forced to wait with the second job.
Similar examples can be made about replace disks (another long-running opcode).
For most of the resources, we have metrics defined by policy: e.g. the over-subscription ratio for CPUs, the amount of space to reserve, etc. Furthermore, although there are no such definitions in Ganeti such as minimum/maximum instance size, a real deployment will need to have them, especially in a fully-automated workflow where end-users can request instances via an automated interface (that talks to the cluster via RAPI, LUXI or command line). However, such an automated interface will need to also take into account cluster capacity, and if the hspace tool is used for the capacity computation, it needs to be told the maximum instance size, however it has a built-in minimum instance size which is not customisable.
It is clear that this situation leads to duplicate definition of resource policies which makes it hard to easily change per-cluster (or globally) the respective policies, and furthermore it creates inconsistencies if such policies are not enforced at the source (i.e. in Ganeti).
The balancing algorithm, as documented in the HTools README file, tries to minimise the cluster score; this score is based on a set of metrics that describe both exceptional conditions and how spread the instances are across the nodes. In order to achieve this goal, it moves the instances around, with a series of moves of various types:
However, the algorithm only looks at the cluster score, and not at the “cost” of the moves. In other words, the following can and will happen on a cluster:
Even though a migration is much, much cheaper than a disk replace (in terms of network and disk traffic on the cluster), if the disk replace results in a score infinitesimally smaller, then it will be chosen. Similarly, between two disk replaces, one moving e.g. 500GiB and one moving 20GiB, the first one will be chosen if it results in a score smaller than the second one. Furthermore, even if the resulting scores are equal, the first computed solution will be kept, whichever it is.
Fixing this algorithmic problem is doable, but currently Ganeti doesn’t export enough information about nodes to make an informed decision; in the above example, if the 500GiB move is between nodes having fast I/O (both disks and network), it makes sense to execute it over a disk replace of 100GiB between nodes with slow I/O, so simply relating to the properties of the move itself is not enough; we need more node information for cost computation.
Note
This design document will not address this limitation, but it is worth mentioning as it directly related to the resource model.
The current allocation/capacity algorithm works as follows (per node-group):
repeat: allocate instance without failing N+1
This simple algorithm, and its use of N+1 criterion, has a built-in limit of 1 machine failure in case of DRBD. This means the algorithm guarantees that, if using DRBD storage, there are enough resources to (re)start all affected instances in case of one machine failure. This relates mostly to memory; there is no account for CPU over-subscription (i.e. in case of failure, make sure we can failover while still not going over CPU limits), or for any other resource.
In case of shared storage, there’s not even the memory guarantee, as the N+1 protection doesn’t work for shared storage.
If a given cluster administrator wants to survive up to two machine failures, or wants to ensure CPU limits too for DRBD, there is no possibility to configure this in HTools (neither in hail nor in hspace). Current workaround employ for example deducting a certain number of instances from the size computed by hspace, but this is a very crude method, and requires that instance creations are limited before Ganeti (otherwise hail would allocate until the cluster is full).
There are two main changes proposed:
The second change is rather straightforward, but will add more complexity in the modelling of the cluster. The first change, however, represents a significant shift from the current model, which Ganeti had from its beginnings.
The resources of a node can be characterised in two broad classes:
In the first category, we have things such as total core count, total memory size, total disk size, number of network interfaces etc. In the second category we have things such as free disk space, free memory, CPU load, etc. Note that nowadays we don’t have (anymore) fully-static resources: features like CPU and memory hot-plug, online disk replace, etc. mean that theoretically all resources can change (there are some practical limitations, of course).
Even though the rate of change of the two resource types is wildly different, right now Ganeti handles both the same. Given that the interval of change of the semi-static ones is much bigger than most Ganeti operations, even more than lengthy sequences of Ganeti jobs, it makes sense to treat them separately.
The proposal is then to move the following resources into the configuration and treat the configuration as the authoritative source for them (a SoR model):
Since these resources can though change at run-time, we will need functionality to update the recorded values.
Remember that the resource model used by HTools models the clusters as obeying the following equations:
diskfree = disktotal - ∑ diskinstances
memfree = memtotal - ∑ meminstances - memnode - memoverhead
As this model worked fine for HTools, we can consider it valid and adopt it in Ganeti. Furthermore, note that all values in the right-hand side come now from the configuration:
This means that we can now compute the free values without having to actually live-query the nodes, which brings a significant advantage.
There are a couple of caveats to this model though. First, as the run-time state of the instance is no longer taken into consideration, it means that we have to introduce a new offline state for an instance (similar to the node one). In this state, the instance’s runtime resources (memory and VCPUs) are no longer reserved for it, and can be reused by other instances. Static resources like disk and MAC addresses are still reserved though. Transitioning into and out of this reserved state will be more involved than simply stopping/starting the instance (e.g. de-offlining can fail due to missing resources). This complexity is compensated by the increased consistency of what guarantees we have in the stopped state (we always guarantee resource reservation), and the potential for management tools to restrict which users can transition into/out of this state separate from which users can stop/start the instance.
Many of the current node locks in Ganeti exist in order to guarantee correct resource state computation, whereas others are designed to guarantee reasonable run-time performance of nodes (e.g. by not overloading the I/O subsystem). This is an unfortunate coupling, since it means for example that the following two operations conflict in practice even though they are orthogonal:
This conflict increases significantly the lock contention on a big/busy cluster and at odds with the goal of increasing the cluster size.
The proposal is therefore to add a new level of locking that is only used to prevent concurrent modification to the resource states (either node properties or instance properties) and not for long-term operations:
The new lock level will sit before the instance level (right after BGL) and could either be single-valued (like the “Big Ganeti Lock”), in which case we won’t be able to modify two nodes at the same time, or per-node, in which case the list of locks at this level needs to be synchronised with the node lock level. To be determined.
Based on the above, the locking contention will be reduced as follows: IAllocator calls will no longer need the LEVEL_NODE: ALL_SET lock, only the resource lock (in exclusive mode). Hence allocating/computing evacuation targets will no longer conflict for longer than the time to compute the allocation solution.
The remaining long-running locks will be the DRBD replace-disks ones (exclusive mode). These can also be removed, or changed into shared locks, but that is a separate design change.
FIXME
Need to rework instance replace disks. I don’t think we need exclusive locks for replacing disks: it is safe to stop/start the instance while it’s doing a replace disks. Only modify would need exclusive, and only for transitioning into/out of offline state.
In order to support ballooning, the instance memory model needs to be changed from a “memory size” one to a “min/max memory size”. This interacts with the new static resource model, however, and thus we need to declare a-priori the expected oversubscription ratio on the cluster.
The new minimum memory size parameter will be similar to the current memory size; the cluster will guarantee that in all circumstances, all instances will have available their minimum memory size. The maximum memory size will permit burst usage of more memory by instances, with the restriction that the sum of maximum memory usage will not be more than the free memory times the oversubscription factor:
∑ memorymin ≤ memoryavailable
∑ memorymax ≤ memoryfree * oversubscription_ratio
The hypervisor will have the possibility of adjusting the instance’s memory size dynamically between these two boundaries.
Note that the minimum memory is related to the available memory on the node, whereas the maximum memory is related to the free memory. On DRBD-enabled clusters, this will have the advantage of using the reserved memory for N+1 failover for burst usage, instead of having it completely idle.
FIXME
Need to document how Ganeti forces minimum size at runtime, overriding the hypervisor, in cases of failover/lack of resources.
Unfortunately the design will add a significant number of new parameters, and change the meaning of some of the current ones.
As described in Policies, we currently lack a clear definition of the support instance sizes (minimum, maximum and standard). As such, we will add the following structure to the cluster parameters:
Ganeti will by default reject non-standard instance sizes (lower than min_ispec or greater than max_ispec), but as usual a --ignore-ipolicy option on the command line or in the RAPI request will override these constraints. The std_spec structure will be used to fill in missing instance specifications on create.
Each of the ispec structures will be a dictionary, since the contents can change over time. Initially, we will define the following variables in these structures:
In a single-group cluster, the above structure is sufficient. However, on a multi-group cluster, it could be that the hardware specifications differ across node groups, and thus the following problem appears: how can Ganeti present unified specifications over RAPI?
Since the set of instance specs is only partially ordered (as opposed to the sets of values of individual variable in the spec, which are totally ordered), it follows that we can’t present unified specs. As such, the proposed approach is to allow the min_ispec and max_ispec to be customised per node-group (and export them as a list of specifications), and a single std_spec at cluster level (exported as a single value).
Beside the limits of min/max instance sizes, there are other parameters related to capacity and allocation limits. These are mostly related to the problems related to over allocation.
Since these are used mostly internally (in htools), they will be exported as-is from Ganeti, without explicit handling of node-groups grouping.
Regarding spindle_ratio, in this context spindles do not necessarily have to mean actual mechanical hard-drivers; it’s rather a measure of I/O performance for internal storage.
The proposed model for the new disk parameters is a simple free-form one based on dictionaries, indexed per disk template and parameter name. Only the disk template parameters are visible to the user, and those are internally translated to logical disk level parameters.
This is a simplification, because each parameter is applied to a whole nested structure and there is no way of fine-tuning each level’s parameters, but it is good enough for the current parameter set. This model could need to be expanded, e.g., if support for three-nodes stacked DRBD setups is added to Ganeti.
At JSON level, since the object key has to be a string, the keys can be encoded via a separator (e.g. slash), or by having two dict levels.
When needed, the unit of measurement is expressed inside square brackets.
Currently Ganeti supports only DRBD 8.0.x, 8.2.x, 8.3.x. It will refuse to work with DRBD 8.4 since the drbdsetup syntax has changed significantly.
The barriers-related parameters have been introduced in different DRBD versions; please make sure that your version supports all the barrier parameters that you pass to Ganeti. Any version later than 8.3.0 implements all of them.
The minimum DRBD version for using the dynamic resync speed controller is 8.3.9, since previous versions implement different parameters.
A more detailed discussion of the dynamic resync speed controller parameters is outside the scope of the present document. Please refer to the drbdsetup man page (8.3 and 8.4). An interesting discussion about them can also be found in a drbd-user mailing list post.
All the above parameters are at cluster and node group level; as in other parts of the code, the intention is that all nodes in a node group should be equal. It will later be decided to which node group give precedence in case of instances split over node groups.
FIXME
Add details about when each parameter change takes effect (device creation vs. activation)
For the new memory model, we’ll add the following parameters, in a dictionary indexed by the hypervisor name (node attribute hv_state). The rationale is that, even though multi-hypervisor clusters are rare, they make sense sometimes, and thus we need to support multipe node states (one per hypervisor).
Since usually only one of the multiple hypervisors is the ‘main’ one (and the others used sparringly), capacity computation will still only use the first hypervisor, and not all of them. Thus we avoid possible inconsistencies.
Of the above parameters, only _total ones are straight-forward. The others have sometimes strange semantics:
Since these two values cannot be auto-computed from the node, we need to be able to declare a default at cluster level (debatable how useful they are at node group level); the proposal is to do this via a cluster-level hv_state dict (per hypervisor).
Beside the per-hypervisor attributes, we also have disk attributes, which are queried directly on the node (without hypervisor involvment). The are stored in a separate attribute (disk_state), which is indexed per storage type and name; currently this will be just LD_LV and the volume name as key.
All the new parameters (node, instance, cluster, not so much disk) will need to be taken into account by HTools, both in balancing and in capacity computation.
Since the Ganeti’s cluster model is much enhanced, Ganeti can also export its own reserved/overhead variables, and as such HTools can make less “guesses” as to the difference in values.
FIXME
Need to detail more the htools changes; the model is clear to me, but need to write it down. | http://docs.ganeti.org/ganeti/2.6/html/design-resource-model.html | 2018-01-16T13:37:02 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.ganeti.org |
Windows.
Media.
Windows. Play To Media.
Windows. Play To Media.
Windows. Play To Media.
Namespace
Play To
Classes
Enums class by calling the GetForCurrentView method. You can then call addEventHandler on the PlayToManager class PlayReady DRM. | https://docs.microsoft.com/en-us/uwp/api/Windows.Media.PlayTo | 2018-01-16T13:39:38 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.microsoft.com |
Customize.Customize a start and end dateYou can configure calendar reports to support the spanning of multi-day events across calendar cells. | https://docs.servicenow.com/bundle/jakarta-performance-analytics-and-reporting/page/use/reporting/concept/c_CustomizeCalendarReports.html | 2018-01-16T13:48:27 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.servicenow.com |
AppThemes provides an API (Application Programming Interface) which enables developers to extend our themes without modifying the core code. This can be achieved by writing plugins or additional functions that “hook” into different areas of the theme.
Each theme includes it’s own custom API in addition to the “AppThemes General API” which are shared hooks used across all themes.
If you are familiar with the WordPress API (also known as filters, actions, and hooks), then you’ll quickly understand how the AppThemes API can be used.
Note: This information applies to AppThemes released after July 10th, 2011. Any theme version prior to this date did not yet have the API enabled yet.
What are Hooks?
Hooks are defined in specific places throughout each theme so that your plugins or custom functions can ‘hook into’ AppThemes without modifying the core code. There are two different ways to invoke hooks which are called, actions and filters. The AppThemes API currently supports only action hooks although you can still of course use the WordPress API for core actions and filters.
View all the AppThemes action hooks which are included with each of our themes.
Actions
Actions are the hooks that the AppThemes API triggers at specific points during execution such as before the loop, after the footer, init, before comments, etc. Your plugin can respond to the event by executing a PHP function, which might do one or more of the following:
- Modify database data
- Send an email message
- Modify what is displayed in the browser screen
The basic steps to making this happen are:
- Create the PHP function that should execute when the event occurs, in your plugin or functions.php file.
- Hook to the action in AppThemes, by calling add_action()
- Load the page that should trigger the action
Create an Action Callback
The first step in creating an action in your plugin or functions.php file is to create a PHP function with the intended functionality, which is named a ‘callback’.
For example, say you want to send yourself an email message whenever a blog post is viewed (silly example we know). You would create a function similar to the one below and place it in your plugin or functions.php file.
Now anytime somebody views one of your blog posts, you will receive an email. This means the
appthemes_after_blog_loop action was invoked.
Hook into AppThemes
You’ll notice after the function was defined above, the line below it uses add_action() to “hook” into the AppThemes
appthemes_after_blog_loop function. Here’s the general syntax you’ll need to use when evoking a hook:
where:
- hook_name – The name of an action hook provided by AppThemes, that tells what event your function should be associated with.
- your_function_name – The name of the function that you want to be executed following the event specified by hook_name. This can be a standard php function, a function present in the AppThemes core, or a function defined by you in the plugin file (such as ’email_friends’ defined above).
- priority – An optional integer argument that can be used to specify the order in which the functions associated with a particular action are executed (default: 10). Lower numbers correspond with earlier execution, and functions with the same priority are executed in the order in which they were added to the action.
- accepted_args – An optional integer argument defining how many arguments your function can accept (default 1), useful because some hooks can pass more than one argument to your function.
Removing Actions
In some cases, you may find that you want your plugin to disable one of the actions or filters built into AppThemes, or added by another plugin. You can do that by calling
remove_action( 'action_hook', 'action_function' ).
For example, this code would prevent your JobRoller website from loading the default footer in the theme.
Now you can build your own custom footer without touching the core code. source code or documentation first just to be sure.
Admin Section Hooks
The AppThemes API isn’t limited to just front-end theme hooks. You can also extend your AppThemes into the admin back-end by creating your own admin pages and options. These are done by using admin hooks and are included with all our themes as well.
List of Action Hooks
See AppThemes Actions for a current list of default action hooks available in our themes. | https://docs.appthemes.com/developers/api/ | 2017-03-23T00:15:30 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.appthemes.com |
Programming with Analysis Management Objects (AMO)
Analysis Management Objects (AMO) is a library of programmatically accessed objects that enables an application to manage an Analysis Services instance.
This section explains AMO concepts, focusing on major objects, how and when to use them, and the way they are interrelated. For more information about specific objects or classes, see:
- Microsoft.AnalysisServices Namespace, for reference documentation.
- Analysis Services Management Objects (AMO), as a Bing.com general search.
Beginning in SQL Server 2016, AMO is refactored into multiple assemblies. Generic classes such as Server, Database, and Roles are in the Microsoft.AnalysisServices.Core Namespace. Multidimensional-specific APIs remain in Microsoft.AnalysisServices Namespace.
If you are programming for tabular models at 1200 or higher compatibility level, use the Tabular Object Model (TOM). TOM is an extension of the Analysis Services Management Object (AMO) client library.
Custom scripts and applications written against earlier versions of AMO will continue to work with no modification. However, if you have script or applications that target SQL Server 2016 or later specifically, or if you need to rebuild a custom solution, be sure to add the new assembly and namespace to your project. | https://docs.microsoft.com/en-us/bi-reference/amo/developing-with-analysis-management-objects-amo | 2018-11-12T22:17:08 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.microsoft.com |
You can apply call number prefixes and suffixes to items from a pre-configured list in the Unified Volume/Copy Creator. See the document, Unified Volume/Copy Creator, for an example.
© 2008-2013 GPLS and others. Evergreen is open source software, freely
licensed under GNU GPLv2 or later. The Evergreen Project is
a member of Software
Freedom Conservancy. | http://docs.evergreen-ils.org/2.2/_apply_call_number_prefixes_and_suffixes.html | 2018-11-12T23:28:36 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.evergreen-ils.org |
HMAC-SHA Signature
Topics
Required Authentication Information
When accessing Amazon SimpleDB using one of the AWS SDKs, the SDK handles the authentication process for you. For a list of available AWS SDKs supporting Amazon SimpleDB, see Available Libraries.
However, when accessing Amazon SimpleDB using a REST request, you must provide the following items so the request can be authenticated.
Authentication
AWSAccessKeyId—Your AWS account is identified by your Access Key ID, which AWS uses to look up your Secret Access Key.
Signature—Each request must contain a valid HMAC-SHA signature, or the request is rejected.
A request signature is calculated using your Secret Access Key, which is a shared secret known only to you and AWS. You must use a HMAC-SHA256 signature.
Date—Each request must contain the time stamp of the request.
Depending on the API you're using, you can provide an expiration date and time for the request instead of or in addition to the time stamp. For details of what is required and allowed for each API, see the authentication topic for the particular API.
Authentication Process
Following is the series of tasks required to authenticate requests to AWS using an HMAC-SHA request signature. It is assumed you have already created an AWS account and received an Access Key ID and Secret Access Key. For more information about those, see Creating an AWS Account.
You perform the first three tasks.
Process for Authentication: Tasks You Perform
AWS performs the next three tasks.
Process for Authentication: Tasks AWS Performs
You can send REST requests over either HTTP or HTTPS. Regardless of which protocol you use, you must include a signature in every REST request. This section describes how to create the signature. The method described in the following procedure is known as signature version 2, and uses the HMAC-SHA256 signing method.
In addition to the requirements listed in Required Authentication Information, signatures for REST requests must also include:
SignatureVersion—The AWS signature version, which is currently the value
2.
SignatureMethod—Explicitly provide the signature method
HmacSHA256.
Important
If you are currently using signature version 1: Version 1 is deprecated, and you should move to signature version 2 immediately.
To create the signature
Create the canonicalized query string that you need later in this procedure:
Sort the UTF-8 query string components by parameter name with natural byte ordering.
The parameters can come from the GET URI or from the POST body (when
Content-Type).
Note
Currently all AWS service parameter names use unreserved characters, so you don't need to encode them. However, you might want to include code to handle parameter names that use reserved characters, for possible future use.
Separate the encoded parameter names from their encoded values with the equals sign ( = ) (ASCII character 61), even if the parameter value is empty.
Separate the name-value pairs with an ampersand ( & ) (ASCII character 38).
Create the string to sign according to the following pseudo-grammar (the
"\n"represents an ASCII newline character)., see.
Convert the resulting value to base64.
Use the resulting value as the value of the
Signaturerequest parameter.
Important
The final signature you send in the request must be URL encoded as specified in RFC 3986 (for more information, see). If your toolkit URL encodes your final request, then it handles the required URL encoding of the signature. If your toolkit doesn't URL encode the final request, then make sure to URL encode the signature before you include it in the request. Most importantly, make sure the signature is URL encoded only once. A common mistake is to URL encode it manually during signature formation, and then again when the toolkit URL encodes the entire request.
Some toolkits implement RFC 1738, which has different rules than RFC 3986 (for more information, go to.
Example PutAttributesVersion=2 &SignatureMethod=HmacSHA256 &AWSAccessKeyId=<Your AWS Access Key ID>
Following is the string to sign.
GET\n sdb.amazonaws.com\n /\n AWSAccessKeyId=<Your AWS Access Key ID> &Action=PutAttributes &Attribute.1.Name=Color &Attribute.1.Value=Blue &Attribute.2.Name=Size &Attribute.2.Value=Med &Attribute.3.Name=Price &Attribute.3.Value=0014.99 &DomainName=MyDomain &ItemName=Item123 &SignatureMethod=HmacSHA256 &SignatureVersion=2 &Timestamp=2010-01-25T15%3A01%3A28-07%3A00 &Version=2009-04-15
Following is the signed=<URLEncode(Base64Encode(Signature))> &SignatureVersion=2 &SignatureMethod=HmacSHA256 &AWSAccessKeyId=<Your AWS Access Key ID>
About the Time Stamp
The time stamp (or expiration time) you use in the request must be a
dateTime
object, with the complete date plus hours, minutes, and seconds (for more information,
go to). For example: 2010-01-31T23:59:59Z.
Although it is not required, we recommend you provide the time stamp in the Coordinated
Universal
Time (Greenwich Mean Time) time zone.
If you specify a time stamp (instead of an expiration time), the request automatically expires 15 minutes after the time stamp (in other words, AWS does not process a request if the request time stamp is more than 15 minutes earlier than the current time on AWS servers). Make sure your server's time is set correctly.
Important
If you are using .NET you must not send overly specific time stamps, due to different
interpretations of how extra time precision should be dropped. To avoid overly
specific time
stamps, manually construct
dateTime objects with no more than millisecond
precision. | https://docs.aws.amazon.com/AmazonSimpleDB/latest/DeveloperGuide/HMACAuth.html | 2018-11-12T22:48:20 | CC-MAIN-2018-47 | 1542039741151.56 | [array(['images/HMACAuthProcess_You.png',
'HMAC-SHA Authentication Process'], dtype=object)
array(['images/HMACAuthProcess_AWS.png',
'HMAC-SHA Authentication Process'], dtype=object)] | docs.aws.amazon.com |
GravityVolume
The GravityVolume entity can be used to create tunnels through which the player is getting pushed by an invisible force. It does so by modifying the global gravity variable so that the player stays afloat while maintaining momentum.
Place a GravityVolume entity in the level and in a similar way to placing out a road or river, draw the gravity volume out. Once you have your shape finished double-click the left mouse to finalize the shape.
Parameters | https://docs.aws.amazon.com/lumberyard/latest/legacyreference/entities-misc-objects-gravityvolume.html | 2018-11-12T23:02:50 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.aws.amazon.com |
How to create an audience using Shopify and MailChimp
In a world of siloed data the company with un-siloed data is king, and if you are reading this - so are you. This tutorial will show you how to combine data from two systems (Shopify and MailChimp) to create a 1+1=3 type of audience ready to be activated in your favorite marketing system.
You will need the the following to finish this tutorial:
- A MailChimp source
- A Shopify source
- An unsiloed data mentality
If you haven't set up your sources yet, checkout this article and then get come back to this tab in your browser!
Ready, set, go 🚀
Create a composite audience
Head on over to the "Audiences" (1) tab and then click on the "New Audience" (2) button to create a new audience.
Scroll past the list of template audiences and click the very last button that says "Build from scratch". We start by giving our audience a name, this is what will be visible in the activation system so make sure to give it a descriptive name (1). In this toy example we'll call it "Composite audience - email campaign retargeting" and you will see why in a minute.
We will start by defining the Shopify profile filter, "Last order date: more than 90 days ago" (2). Now click on "MailChimp" in the left panel (3) to bring up the MailChimp profile filtering.
We check the "Recent campaign opens" and the radio button "Yes" (1). The resulting audience will contain profiles that didn't buy anything in the last 90 days from Shopify and opened one or more of the last 5 campaigns sent through MailChimp. Hit the "Save" button (2) and activate the audience in your preferred marketing system.
If you have any questions regarding the steps in this tutorial or if you experience any issues the Supercrowd team is eager to help you, send us an email at [email protected] or use this link to get in touch with us right away!
Let's break the silos together and create a Supercrowd™ | https://docs.supercrowd.net/article/0uh0vxcfww-how-to-create-an-audience-using-shopify-and-mailchimp | 2018-11-12T23:26:45 | CC-MAIN-2018-47 | 1542039741151.56 | [array(['https://storage.googleapis.com/helpdocs-assets/articles/0uh0vxcfww/1536658917408/untitled-drawing-281-29.png',
None], dtype=object)
array(['https://storage.googleapis.com/helpdocs-assets/articles/0uh0vxcfww/1536658917905/silo.jpg',
None], dtype=object)
array(['https://storage.googleapis.com/helpdocs-assets/articles/0uh0vxcfww/1536658918342/supercrowd.png',
None], dtype=object)
array(['https://storage.googleapis.com/helpdocs-assets/articles/0uh0vxcfww/1536658918575/supercrowd.png',
None], dtype=object)
array(['https://storage.googleapis.com/helpdocs-assets/articles/0uh0vxcfww/1536658919000/supercrowd.png',
None], dtype=object) ] | docs.supercrowd.net |
API Reference
Welcome to the Griffin API! You can use this API to access all of our endpoints, enabling you to do everything from managing accounts and counterparties to making payments.
The API is organized around REST; it has predictable, resource-oriented URLs, and uses HTTP response codes to indicate API errors. We use built-in HTTP features, like HTTP authentication and HTTP verbs, which are understood by off-the-shelf HTTP clients. Requests made in test mode don't interact with any external banking networks and incur no cost.

Authentication is performed via HTTP Basic Auth: provide your API key as the basic auth username value and leave the password blank. If you need to pass the key as a bearer token instead, use
-H "Authorization: Bearer BQokikJOvBiI2HlWgH4olfQ2" instead of
-u BQokikJOvBiI2HlWgH4olfQ2:.
All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.
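For example, an authenticated request over HTTPS with curl looks like the following; the api.griffin.example host and the /accounts path are illustrative placeholders, and the key is the sample key shown above:

# Basic auth: the API key is the username and the password is left blank.
# Host, path and key are placeholders -- substitute your own values.
curl https://api.griffin.example/accounts \
  -u BQokikJOvBiI2HlWgH4olfQ2: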
Client Errors
There are three possible types of client errors on API calls that receive request bodies:
- Sending invalid JSON will result in a 400 Bad Request response.
{"message":"Problems parsing JSON"}
- Sending the wrong type of JSON values will result in a 400 Bad Request response.
{"message":"Body should be a JSON object"}
- Sending invalid fields will result in a 422 Unprocessable Entity response.
{
  "message": "Validation Failed",
  "errors": [
    {
      "resource": "account",
      "code": "invalid",
      "field_path": [ "account_id" ],
      "hint": "account_id must be a UUID v4 string."
    }
  ]
}
All error objects have resource and field_path (an array of keys for identifying nested properties) so that your client can tell what the problem is. Most will include a human-readable hint field, as well as an error code to let you know what is wrong with the field. These are the possible validation error codes:
missing
This means a resource does not exist
missing_field
This means a required field on a resource has not been set.
invalid
This means the formatting of a field is invalid. The documentation for that resource should be able to give you more specific information.
already_exists
This means another resource has the same value as this field. This can happen in resources that must have a unique key.
Resources may also send custom validation errors (where code is custom). Custom errors will always have a message field describing the error, and most errors will also include a documentation_url field pointing to some content that might help you resolve the error.
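For programmatic handling, the errors array carries everything needed to report the failure. A small illustrative sketch using jq, assuming the 422 response body has been saved to error.json:

# Print each validation error as: resource, error code, dotted field path.
jq -r '.errors[] | "\(.resource)  \(.code)  \(.field_path | join("."))"' error.json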
Idempotent Requests
The API supports idempotency for safely retrying requests without accidentally performing the same operation twice. For example, if a request to create a payment fails due to a network connection error, you can retry the request with the same idempotency key to guarantee that only a single payment is created.
GET and
DELETE requests are idempotent by definition, meaning that the same backend work will occur no matter how many times the same request is issued. You shouldn't send an idempotency key with these verbs because it will have no effect.
To perform an idempotent request, provide an additional
Idempotency-Key: <key> header to the request.
How you create unique keys is up to you, but we suggest using V4 UUIDs or another appropriately random string. We'll always send back the same response for requests made with the same key, and keys can't be reused with different request parameters. Keys expire after 24 hours.
Pagination
Requests that return multiple items will be paginated to 10 items by default.
You can specify further pages with the
?page parameter, and can also set a custom page size up to 100 with the
?limit parameter.
The Link header includes pagination information:
Link: <>; rel="next", <>; rel="last"
The above example includes a line break for readability.
This
Link response header contains one or more Hypermedia link relations, some of which may require expansion as URI templates.
The possible
rel values are:
The link relation for the immediate next page of results.
The link relation for the last page of results.
first
The link relation for the first page of results.
prev
The link relation for the immediate previous page of results.
Rate Limiting
The returned HTTP headers of any API request show your current rate limit status:
curl -i HTTP/1.1 200 OK Date: Mon, 01 Jul 2013 17:27:06 GMT Status: 200 OK X-RateLimit-Limit: 60 X-RateLimit-Remaining: 56 X-RateLimit-Reset: 1372700873
X-RateLimit-Limit
The maximum number of requests you're permitted to make per hour.
X-RateLimit-Remaining
The number of requests remaining in the current rate limit window.
X-RateLimit-Reset
The time at which the current rate limit window resets in UTC epoch seconds.
If you exceed the rate limit, an error response returns:
HTTP/1.1 403 Forbidden Date: Tue, 20 Aug 2017 14:50:41 GMT Status: 403 Forbidden X-RateLimit-Limit: 60 X-RateLimit-Remaining: 0 X-RateLimit-Reset: 1377013266 { "message": "API rate limit exceeded for xxx.xxx.xxx.xxx.", "documentation_url": "" }
Abuse rate limits
To protect the quality of service on Griffin, additional rate limits may apply to some actions. For example: rapidly creating content, polling aggressively instead of using webhooks, making API calls with a high concurrency, or repeatedly requesting data that is computationally expensive may result in abuse rate limiting.
Abuse rate limits are not intended to interfere with legitimate use of the API. Your normal rate limits should be the only limit you target.": "" }
Customers
A customer in the API is a reference to a legal person that is a customer of your business. The customers API allows you to keep track of these legal persons and to quickly reference them when creating accounts, and allows us to perform the legally-required Know-Your-Customer (KYC) and Anti-Money-Laundering (AML) checks on them.
Current Accounts
Direct accounts (where your firm is the owner) will be opened instantly, because we'll already have completed the requisite Know-Your-Customer and Anti-Money-Laundering checks.
By contrast, indirect accounts will trigger an automated compliance flow on our end to make sure the counterparty that you are creating a current account for is one that has legal permissions to do so.
This compliance flow typically takes around 5 minutes, and we recommend configuring a webhook in your settings page that we can use to notify you once the compliance check has completed.
Create a current account
Creates a designated, segregated client money account for a specific customer.
Get a single current account
Retrieve details on a single current account, including the current balance.
Transactions
The Transactions API endpoints provide read-only aggregations of other payments and events impacting a given Current Account.
FPS.
Not all financial institutions participate in FPS. Transfers created through the FPS API to institutions that do not support FPS will result in a response with HTTP status 412 (Precondition Failed) as follows:
{ "message": "Precondition Failed", "errors": [ { "resource": "fps", "code": "invalid", "hint": "Target institution does not support FPS." } ] }
List FPS payments
Retrieve current and prior FPS payments. Takes a number of optional query parameters that can be used to filter the returned resources.
CHAPS.
List CHAPS payments
Retrieve current and prior CHAPS payments. Takes a number of optional query parameters that can be used to filter the returned resources.
List SEPA payments
Retrieve current and prior SEPA payments. Takes a number of optional query parameters that can be used to filter the returned resources. | https://docs.griffin.sh/reference | 2018-11-12T23:06:52 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.griffin.sh |
DLM support requirements
Prior to installing Data Lifecycle Manager (DLM), you must consider various aspects of your HDP environment and prepare your clusters prior to DLM installation. The host on which you install DLM is the same host on which you install all DPS Platform.
Support Matrix information
You can find the most current information about interoperability for this release on the Support Matrix. The Support Matrix tool provides information about:
Operating Systems
Databases
Browsers
JDKs
To access the tool, go to:.
DLM Host requirements
The DLM application is installed on the same host as DPS Platform and has no requirements beyond what is required by DPS Platform. See the DPS Platform Support Requirements for details.
Requirements for clusters used with DLM Engine
The clusters on which you install the DLM Engine must meet the requirements identified in the following sections. After the DLM Engine is installed and properly configured on a cluster, the cluster can be registered with DPS and used for DLM replication.
See the Support Matrix for supported operating systems and databases.
Port and network requirements for clusters
Have the following ports available and open on each cluster:
HDP component requirements for DLM
The following additional Apache components might be required on your clusters for DLM support, depending on the security configuration and type of replication being performed: | https://docs.hortonworks.com/HDPDocuments/DLM1/DLM-1.1.2/installation/content/dlm_support_matrix.html | 2018-11-12T23:14:43 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.hortonworks.com |
Check in is used to put the definition of an object that has been created or modified by Visual LANSA into the LANSA for i Master Repository.
You can check in LANSA objects such as processes, functions, files, fields, components or variables. To check in objects, you can use the Repository tab and right click on the object. The check in option will be displayed in the pop-up menu.
As part of the 5.6.1 Check In Options for some objects, you can choose various check in and compile related options as appropriate for the selected objects.
When the object is checked in to the Master Repository, the status of the object locks is controlled by the LANSA for i system's task tracking settings and use of the Keep Locks option on the check in dialog. (Refer to Using Task Tracking.)
The locking check is enforced so that only the user who currently has the object locked is allowed to check in the object. This prevents two normal users, who are not security officers, from simultaneously checking in the same object to the Master. For example, two developers, Bob and John, are using a shared database on their Visual LANSA installations. They are working on the same partition in the database but using different tasks, *uTask1 and *uTask2. Bob creates a field under *uTask1. John can immediately see and open this field as read-only but cannot check it in until Bob releases the lock on the field by checking it in.
However, this rule only applies to normal users. Security officers can check in any objects and as such they should not be used as developer profiles.
An export list including any objects checked in can be automatically generated during the check in processing.
Also See
5.6.2 Check In Joblog Viewer
5.9 Start and Stop the Host Monitor
Submit the Job to Compile a Process Definition in the LANSA for i Guide for descriptions of options not included here. | https://docs.lansa.com/14/en/lansa011/content/lansa/l4wadm03_0030.htm | 2018-11-12T22:37:54 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.lansa.com |
3.5 Variable References and #%top
When the expander encounters an id that is not bound by a module-level or local binding, it converts the expression to (#%top . id) giving #%top the lexical context of the id; typically, that context refers to #%top. See also Expansion Steps.
Within a module form, (#%top . id) expands to just
id—
See also Expansion Steps for information on how the expander introduces #%top identifiers.
Changed in version 6.3 of package base: Changed the introduction of #%top in a top-level context to unbound identifiers only. | https://docs.racket-lang.org/reference/__top.html | 2018-11-12T23:26:48 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.racket-lang.org |
Changelog for package avt_vimba_camera
0.0.10 (2017-08-16)
Merge pull request
#26
from 130s/k/add_ci [CI] Add config for ROS Kinetic (and Lunar as an option).
[CI] Add config for ROS Kinetic (and Lunar as an option).
Merge pull request
#25
from mintar/fix_open_camera Fix opening the camera
Simplify openCamera() logic
Open camera on init Without this patch, the camera is never opened. This bug was introduced in 63f868791.
Give more information to the user
Merge pull request
#22
from plusone-robotics/dyn_reconfig_fix Dynamic reconfigure fix
Fix
#21
: Retry camera opening and handle SIGINT
Changed file mode in order to build
Fixed crashing of dynamic reconfigure
Fixed dynamic reconfigure configuration that did not allow camera parameters to be updated
Fix call QueueFrame() method
Fix CPU overhead issue
Fix
#18
Merge pull request
#17
from josepqp/kinetic Added ARM 32 to CMakeList.txt
Stop camera before destroy it
Added ARM 32
Merge branch 'kinetic' of github.com:srv/avt_vimba_camera into kinetic
Change variable scope
Fix
#15
: do not depend on turbot_configurations
Merge pull request
#14
from josepqp/kinetic
Added ARM 32 bits libraries
Modified CMakeList.txt to compile with ARM 32 bits
Added Iris Parameter
kinetization
Fix sync problems after camera tests
Add a sync timer
Try stereo image sync
Add a check timer
Fix
#12
: allow bigger resolutions
Fix camera info
Fix camera config
Fix camera info when decimation
Make sync node acts as stereo sync checker
Include a check timer on stereo_camera
Perform the stereo_sync in a separate node
Publish camera temperatures
Change the way of reset
Increase the initial wait time before checking sync
Add a sync watcher node
Fix branch mix
Remove unused variables
Left and right callback in a separate thread
Change default sync time
change logging messages
fix binning
add stereo launchfiles
removed prints
set stereo launchfiles
removed unused params
calibration epi. 4
improvements to stereo node
merge with v2.0 SDK
upgrade to VIMBA SDK 2.0
upgrade to 1.4
changed ros prints from info to debug
removed comment
changed stereo camera launchfile
Merge pull request
#11
from lucasb-eyer/indigo Set the frame_id of the image header, too.
Set the frame_id of the image header, too.
Contributors: Isaac I.Y. Saito, Martin Günther, Miquel Massot, Shaun Edwards, SparusII, agoins, josep, lucasb-eyer, plnegre, shaun-edwards
0.0.9 (2014-11-17)
Fix
#8
: Constructor delegation and typo in assignment
added mono camera name
corrected diagnostics
fixed sync diagnostic
improved diagnostics
better timestamp management
added command error check
cleaning stereo prints
removed old cpp
fixed merging conflict
update updater
added time to tick function
added getTimestamp
added reset timestamp command
changed errors to warnings
added open/close msgs to diagnostics
added diagnostics. wip
bugfixes
full operative stereo camera
prepared launchfile for stereo
auto set packet size
stereo sync
preparing for stereo
added launchfile
hide first run
set auto configuration by default
fix with ptp mode
Fix dynamic reconfigure error with PTP
mono camera compiles
Fix interface type
Merge pull request
#5
from lucasb-eyer/auto Fix names/values of auto settings.
Fix names/values of auto settings.
Fix
#2
: Set the highest GeV packet size
Merge pull request
#3
from pkok/single_identifier Allow user to connect by specifying either GUID or IP address.
Allow user to connect by specifying either GUID or IP address.
wip
added testing launchfiles
added parameters for sync
Contributors: Miquel Massot, Patrick de Kok, SPENCER-Freiburg Laptop
0.0.8 (2014-09-05)
readdition of vimba
Contributors: Miquel Massot
0.0.7 (2014-09-04)
removed vimba headers
Contributors: Miquel Massot
0.0.6 (2014-09-03)
change to libvimba package
Contributors: Miquel Massot
0.0.5 (2014-09-03)
add shared library as imported target
Contributors: Miquel Massot
0.0.4 (2014-09-01)
absolute path for libvimbacpp
changed version
bugfix re-release
Contributors: Miquel Massot
0.0.2 (2014-03-24)
test on polled camera
formatting
added packages
added GPIO params
added params and launchfile
added launchfile
added camera calibration and fixed reconfiguration issues
first images in ROS
first tests with Manta G-504C
added tags to gitignore
develop in progress
added gitignore
changed package name and pushed some devel
added config file
prepared and tested Vimba library
first commit
Contributors: Miquel Massot | https://docs.ros.org/en/kinetic/changelogs/avt_vimba_camera/changelog.html | 2021-07-24T02:12:09 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.ros.org |
How can I delete a chart
Step 1
To delete a chart you need to go to the charts library, which could be found here: Visualizer > Chart Library
Step 2
Once you open the library, you will be able to find a chart that you need. When you find it, click on the "Delete" icon.
Step 3
After you click on the "Delete" icon, the chart will be deleted from the database. | https://docs.themeisle.com/article/600-delete-chart | 2021-07-24T01:35:36 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5ad8c6812c7d3a0e93677e58/file-NpCS0vCykI.png',
None], dtype=object) ] | docs.themeisle.com |
Sometimes Ad-blockers will block our
metrical.xyz domain in order to try to give their users a better privacy and navigation. As we said it before, we don't store any personal information about visitors of your site, but we offer a solution about this problem: you can setup a subdomain of your main domain to serve the metrical script and calls.
With this solution, Ad-blockers won't block our script and calls and you will be able to use Metrical as normal.
Open your domain provider (Godaddy, Namecheap, ect) and add a subdomain of your choice with a
CNAME record pointing to
cdn.metrical.xyz. (add also the last dot
.)
Something like this:
CNAME log.customdomain.com
cdn.metrical.xyz.
Then, add the exact subdomain (without https:// or http://) to the settings page of an application on Metrical. This is necessary to obtain a SSL certificate with Let’s Encrypt, so your data (and the visits of your users will be safe and protected by HTTPS.
Change cdn.metrical.xyz to your custom domain inside the tracking script, and also add the
host property inside the
metrical object.
Like this:
<script>window.metrical = {"app": "Your website UUID","host": ""}</script><script asyncsrc=""type="text/javascript"></script>
After that, you are ready to go! | https://docs.metrical.xyz/developers/custom-domain | 2021-07-24T00:55:47 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.metrical.xyz |
Disk shelves with IOM12 modules can be cabled in HA pair and single-controller configurations (for supported platforms) by applying the SAS cabling rules: configuration rules, controller slot numbering rules, shelf-to-shelf connection rules, controller-to-stack connection rules, and if applicable, mini-SAS HD SAS optical cable rules.
The SAS cabling rules regarding configuration rules and mini-SAS HD SAS optical cable rules described in this guide are specific to disk shelves with IOM12 modules.
The SAS cabling rules described in this guide balance SAS cabling between the on-board SAS ports and host bus adapter SAS ports to provide highly available storage controller configurations and meet the following goals:
You should avoid deviating from the rules; deviations might reduce reliability, universality, and commonality. | https://docs.netapp.com/platstor/topic/com.netapp.doc.hw-ds-sas3-icg/GUID-679C6BFC-EED5-45CC-B74A-22A474C0110D.html?lang=en | 2021-07-24T00:38:25 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.netapp.com |
Mobile monitoring for Android monitors your mobile app, giving you a comprehensive view of your app's performance. It works for Android apps written using Java or Kotlin.
Install the Android agent
Before you install the Android agent, make sure your app follows the compatibility requirements. As part of the installation process, mobile monitoring automatically generates an application token. This is a 40-character hexadecimal string for authenticating each mobile app that you monitor.
Follow the Android installation and configuration procedures for your environment as applicable. If you have problems with your Android installation, or if you do not see data in the mobile monitoring UI for your Android app, follow the troubleshooting procedures.
Extend your instrumentation
After you install the agent, extend the agent's instrumentation by using the mobile monitoring UI and following up on information in New Relic Insights.. | https://docs.newrelic.com/docs/mobile-monitoring/new-relic-mobile-android/get-started/introduction-new-relic-mobile-android/ | 2021-07-24T01:38:41 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.newrelic.com |
New features
Added
distributed_tracing.enabledto
truein your
newrelic.jsfile, or set
NEW_RELIC_DISTRIBUTED_TRACING_ENABLEDkey for the segment descriptor passed to the record method.
Reservoirs will now respect setting their size to 0. | https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-470/ | 2021-07-24T00:55:34 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.newrelic.com |
Volumes are the Block Storage devices that you attach to instances to enable persistent storage. Users can attach a volume to a running instance or detach a volume and attach it to another instance at any time. For information about using the dashboard to create and manage volumes as an end user, see the OpenStack End User Guide.
As an administrative user, you can manage volumes and volume types for users in various projects. You can create and delete volume types, and you can view and delete volumes. Note that a volume can be encrypted by using the steps outlined below.
Note
A message indicates whether the action succeeded.
Create a volume type using the steps above for Create a volume type.
Click Create Encryption in the Actions column of the newly created volume type.
Configure the encrypted volume by setting the parameters below from available options (see table):
Specifies the class responsible for configuring the encryption.
Specifies whether the encryption is from the front end (nova) or the back end (cinder).
Specifies the encryption algorithm.
Specifies the encryption key size.
Click Create Volume Type Encryption.
Encryption Options
The table below provides a few alternatives available for creating encrypted volumes.
* Source NIST SP 800-38E
When you delete a volume type, volumes of that type are not deleted.
Note
A message indicates whether the action succeeded.
When you delete an instance, the data of its attached volumes is not destroyed.
Note
A message indicates whether the action succeeded.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/newton/admin-guide/dashboard-manage-volumes.html | 2021-07-24T02:25:07 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.openstack.org |
micro_speech — Micro Speech Audio Module Example¶
The
micro_speech module runs Google’s TensorFlow Lite for Microcontrollers Micro Speech framework
for voice recognition.
Please see this guide for training a new model.
Constructors¶
- class
micro_speech.
MicroSpeech¶
Creates a MicroSpeech voice recognition class.
audio_callback(buf_in)¶
Pass this method to
audio.start_streaming()to fill the
MicroSpeechclass with audio samples.
MicroSpeechwill compute the FFT of the audio samples and keep a sliding window internally of the FFT the last 100ms or so of audio samples received as features for voice recognition.
listen(tf_model[, threshold=0.9[, timeout=1000[, filter=None]]])¶
Executes the tensor flow lite model
tf_model, which should be a path to a tensor flow lite model on disk, on the audio stream.
This method will continue to execute the model until it classifies a result that has a confidence ratio above
thresholdand that’s within the range specified by
filter.
For example, if the model is designed to classify sounds into the four labels [‘Silence’, ‘Unknown’, ‘Yes’, ‘No’], then a
thresholdof 0.7 mean that listen() only returns when the confidence score for one of those classes goes above 0.7.
filtercan then be
[2, 3]to specify that we only care about ‘Yes’ or ‘No’ going above 0.7.
timeoutis the amount of time to run the model on audio data. If zero then listen will run forever until a result passes the threshold and filter criteria.
Returns the index of the the label with the highest confidence score. E.g. for the example above 0, 1, 2, or 3 for [‘Silence’, ‘Unknown’, ‘Yes’, ‘No’] respectively. | https://docs.openmv.io/library/omv.micro_speech.html | 2021-07-24T01:13:20 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.openmv.io |
Goals and Analytics
Set Goals
The performance of your landing page can be measured by setting up relevant goals and measuring them. You can enable 2 actions - Form Submissions and Links clicks as conversions goals for your page. The option to enable this is present under the Settings > Analytics tab of your landing page in the dashboard. You will find all the links on the landing page listed in this panel. You can choose the specific link clicks that have to be tracked.
Analytics
The goals that you enable can be tracked within the Analytics Tab. You will find the option to filter the Analytics data based on Date range, Individual Goal, and Variant.
Reset Analytics
In addition to tracking the analytics, you also have the option to reset analytics in this panel. You can use this option in case you want to clear off any analytics data that was formed while testing the landing page before the actual launch.
| https://docs.swipepages.com/article/14-goals-and-analytics | 2021-07-24T00:24:48 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecce0232c7d3a5ea54bbe81/images/5f16c8742c7d3a10cbab0b9f/file-DZivW9UOoj.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecce0232c7d3a5ea54bbe81/images/5f16cebc04286306f80727e8/file-IBFq3d0cjG.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecce0232c7d3a5ea54bbe81/images/5f16d00e2c7d3a10cbab0bf3/file-oUgKjiYTsw.png',
None], dtype=object) ] | docs.swipepages.com |
# Audiences & Personas
# Industries
While we've worked with a wide-range of industries, most of our portfolio has been in the following verticals:
- Higher Education
- Healthcare
- Financial/Insurance
- Nonprofit/Philanthropic
# Personas
Who do we sell to?
# Marketing Director
Our typical marketing contact is a VP or Director of marketing at a mid-size corporation, higher ed institution, or nonprofit. The report directly to a C-Level (or equivelant) and have been tasked with implementing strategic objectives by their manager. Although they work for an organization that may have hundreds of employees, often they only have one or two digital implementation specialists on staff; their digital implementation needs simply aren't consistent enough to keep a whole team employed, so they need to find a partner who can implement their immediate campaigns.
The fact that the company is large enough to have a VP title is often indicative that they have projects that are complex enough to require our services and the budget to dedicate to them. Other common titles include...
- VP of Marketing
- Director of Marketing/Communications
Needs
- Increase online lead generation.
- Execute new digital marketing campaigns.
- Improve overall website performance.
- Appease internal stakeholders.
- Find reliable agency for a long term relationship.
- Digital strategy deliverables like personas, journey maps.
- Worldclass conversion-oriented design (styleguides, user testing, UX).
In the private world, we're accustom to seeing additional focus on...
- Personalization.
- CRM integrations.
In higher education and nonprofits, we've seen...
- Accessibility concerns.
- Focus on donations (donor management systems and CRMs tailored to fundraising, not sales).
# IT Manager
Our IT Manager contact has an existing dev team, but it's either too small to handle all dev needs or, more likely, lacks expertise in web technologies. They may be managing a team that works primarily on a "backend" product that's constructed in a language like Java, but need help to create a user-friendly experience through a portal, intranet, or other interface. Ocassionally they also have control over their organization's marketing website.
Title include...
- CTO
- IT Director
- Director of Business Applications
- Development Manager
- Web Manager at Financial Institutions
Needs | https://docs.thinktandem.io/guides/audiences-personas.html | 2021-07-24T02:25:10 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.thinktandem.io |
Mitaka Series Release Notes¶
2.0.0¶
This release includes a new command line utility ‘barbican-manage’ that consolidates and supersedes the separate HSM and database management scripts.
The Mitaka release includes a new API to add arbitrary user-defined metadata to Secrets.
This release includes significant improvements to the performance of the PKCS#11 Cryptographic Plugin driver. These changes will require a data migration of any existing data stored by previous versions of the PKCS#11 backend.
New Features¶
- The ‘barbican-manage’ tool can be used to manage database schema changes as well as provision and rotate keys in the HSM backend.
Known Issues¶
The service will encounter errors if you attempt to run this new release using data stored by a previous version of the PKCS#11 Cryptographic Plugin that has not yet been migrated for this release. The logged errors will look like
'P11CryptoPluginException: HSM returned response code: 0xc0L CKR_SIGNATURE_INVALID'
Upgrade Notes
Deprecation Notes¶
- The ‘barbican-db-manage’ script is deprecated. Use the new ‘barbican-manage’ utility instead.
- The ‘pkcs11-kek-rewrap’ script is deprecated. Use the new ‘barbican-manage’ utility instead.
- The ‘pkcs11-key-generation’ script is deprecated. Use the new ‘barbican-manage’ utility instead. | https://docs.openstack.org/releasenotes/barbican/mitaka.html | 2017-02-19T21:06:29 | CC-MAIN-2017-09 | 1487501170253.67 | [] | docs.openstack.org |
Trait diesel::
expression_methods::[−][src] NullableExpressionMethods
pub trait NullableExpressionMethods: Expression + Sized { fn nullable(self) -> Nullable<Self> { ... } }
Methods present on all expressions
Provided Methods
fn nullable(self) -> Nullable<Self>
Converts this potentially non-null expression into one which is treated as nullable. This method has no impact on the generated SQL, and is only used to allow certain comparisons that would otherwise fail to compile.
Example
table! { posts { id -> Integer, user_id -> Integer, author_name -> Nullable<VarChar>, } } fn main() { use self::users::dsl::*; use self::posts::dsl::{posts, author_name}; let connection = establish_connection(); let data = users.inner_join(posts) .filter(name.nullable().eq(author_name)) .select(name) .load::<String>(&connection); println!("{:?}", data); } | http://docs.diesel.rs/diesel/expression_methods/trait.NullableExpressionMethods.html | 2019-01-16T10:20:47 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.diesel.rs |
This topic describes how to create a custom theme based on the Office 2016 Colorful SE theme with the look-and-feel of an MS Excel application. In this topic, we will change the theme's blue contrast colors to green.
Run the Theme Designer. Open the Get Started Tab and select Create a New Theme.
Select a base theme and version for your new theme. In this example, we select the Office 2016 Colorful SE theme and 18.2.3 version.
Specify the theme name (Colorful_Excel in this tutorial) and location.
The Theme Designer clones the Office 2016 Colorful SE theme and places the copy in the specified directory.
Set up preview
Click Build to enable preview. Select a control to preview.
The Theme Designer allows you to see how the new theme is applied to all the standard and DevExpress controls.
The Navigation window allows you to select the required control to preview.
In this tutorial, we use the Spreadsheet control to preview changes.
Change colors
In the palette tab, select all the blue colors and change them to green. You can edit them in HSB color mode and change the H (hue) value to 147.
Refer to the WPF Theme Designer Tools for more information about advanced color editing tools.
XAML files editing
Use the View in XAML tool to find an element location in XAML. Click the ribbon icon or use the Ctrl+D shortcut to enable this tool.
Point to an element, hold down the Ctrl+Shift keys combination and click an element. Theme Designer will open this element's XAML code in the Code View window.
Set the BorderThickness to 2 and the BorderBrush to $Border. Save the XAML file to apply changes.
Refer to the Edit Theme in XAML topic for more information about theme editing in XAML files.
Save your modified theme and click Publish.
The Theme Designer builds your theme and prompts you to open the output directory with the built .DLL and .PDB files.
Click Yes to open the output directory with your built theme assembly.
Follow the steps below to apply this theme to an application.
Add a reference to the theme assembly in this solution. To do this, right-click References in the Solution Explorer and select Add Reference. In the Browse section, locate the assembly built in the previous step.
Add the following code to the App.xaml.cs file. | https://docs.devexpress.com/WpfThemeDesigner/118594/getting-started | 2019-01-16T10:27:03 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.devexpress.com |
sqlrutils package
The sqlrutils package provides a mechanism for R users to put their R scripts into a T-SQL stored procedure, register that stored procedure with a database, and run the stored procedure from an R development environment.
How to use sqlrutils
The sqlrutils library is installed as part of SQL Server Machine Learning when you add R to your installation. You get the full collection of proprietary packages plus an R distribution with its base packages and interpreters. You can use any R IDE to write R script calling functions in sqlrutils, but the script must run on a computer having SQL Server Machine Learning with R.
The workflow for using this package includes the following steps:
- Define stored procedure parameters (inputs, outputs, or both)
- Generate and register the stored procedure
- Execute the stored procedure
In an R session, load sqlrutils from the command line by typing
library(sqlrutils).
Note
You can load this library on computer that does not have SQL Server (for example, on an R Client instance) if you change the compute context to SQL Server and execute the code in that compute context.
Function list
Next steps
Add R packages to your computer by running setup for R Server or R Client:
Next, review the steps in a typical sqlrutils workflow:
See also
Package Reference
R tutorials for SQL Server | https://docs.microsoft.com/en-us/machine-learning-server/r-reference/sqlrutils/sqlrutils | 2019-01-16T10:27:43 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.microsoft.com |
API Gateway 7.5.3 Policy Developer Filter Reference XML encryption wizard Overview The following filters are involved in encrypting a message using XML encryption: Filter Role Topic Find Certificate Specifies the certificate that contains the public key to use in the encryption. The data is encrypted such that it can only be decrypted with the corresponding private key. Find certificate XML-Encryption Settings Specifies the recipient of the encrypted data, what data to encrypt, what algorithms to use, and other such options that affect the way the data is encrypted. XML encryption settings XML-Encryption Performs the actual encryption using the certificate selected in the Find Certificate filter, and the options set in the XML-Encryption Settings filter. XML encryption. Configuration. For more details, see XML encryption settings. Related Links | https://docs.axway.com/bundle/APIGateway_753_PolicyDevFilterReference_allOS_en_HTML5/page/Content/PolicyDevTopics/encryption_enc_wizard.htm | 2019-01-16T09:46:24 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.axway.com |
Performance¶
Category: Core
Enumerations¶
enum Monitor:
- TIME_FPS = 0 --- Frames per second.
- TIME_PROCESS = 1 --- Time it took to complete one frame.
- TIME_PHYSICS_PROCESS = 2 --- Time it took to complete one physics frame.
-. This also includes the root node, as well as any nodes not in the scene tree.
- RENDER_OBJECTS_IN_FRAME = 11 --- 3D objects drawn per frame.
- RENDER_VERTICES_IN_FRAME = 12 --- Vertices drawn per frame. 3D only.
- RENDER_MATERIAL_CHANGES_IN_FRAME = 13 --- Material changes per frame. 3D only
- RENDER_SHADER_CHANGES_IN_FRAME = 14 --- Shader changes per frame. 3D only.
- RENDER_SURFACE_CHANGES_IN_FRAME = 15 --- Render surface changes per frame. 3D only.
- RENDER_DRAW_CALLS_IN_FRAME = 16 --- Draw calls per frame. 3D only.
- RENDER_VIDEO_MEM_USED = 17 --- Video memory used. Includes both texture and vertex memory.
- RENDER_TEXTURE_MEM_USED = 18 --- Texture memory used.
- RENDER_VERTEX_MEM_USED = 19 --- Vertex memory used.
- RENDER_USAGE_VIDEO_MEM_TOTAL = 20
- PHYSICS_2D_ACTIVE_OBJECTS = 21 --- Number of active RigidBody2D nodes in the game.
- PHYSICS_2D_COLLISION_PAIRS = 22 --- Number of collision pairs in the 2D physics engine.
- PHYSICS_2D_ISLAND_COUNT = 23 --- Number of islands in the 2D physics engine.
- PHYSICS_3D_ACTIVE_OBJECTS = 24 --- Number of active RigidBody and VehicleBody nodes in the game.
- PHYSICS_3D_COLLISION_PAIRS = 25 --- Number of collision pairs in the 3D physics engine.
- PHYSICS_3D_ISLAND_COUNT = 26 --- Number of islands in the 3D physics engine.
- AUDIO_OUTPUT_LATENCY = 27
- MONITOR_MAX = 28 that a few of these monitors are only available in debug mode and will always return 0 when used in a release build.
Many of these monitors are not updated in real-time, so there may be a short delay between changes. | http://docs.godotengine.org/ko/latest/classes/class_performance.html | 2019-01-16T09:48:30 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.godotengine.org |
Acquisition Error Table
The error table is primarily used to hold information about errors that occur while Teradata Database is trying to redistribute the data during the acquisition phase. If Teradata Database is unable to build a valid primary index, some application phase errors may be put into this table.
Table 29 defines the Acquisition Error Table, with column entries comprising the unique primary index. | https://docs.teradata.com/reader/YeE2bGoBx9ZGaBZpxlKF4A/ZnzHkzRTfbT6Xn89n~CbNg | 2019-01-16T09:58:20 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.teradata.com |
mobileAppContentFile resource type
Note: Using the Microsoft Graph APIs to configure Intune controls and policies still requires that the Intune service is correctly licensed by the customer.
Contains properties for a single installer file that is associated with a given mobileAppContent version.
Methods
Properties
Relationships
None
JSON Representation
Here is a JSON representation of the resource.
{ "@odata.type": "#microsoft.graph.mobileAppContentFile", "azureStorageUri": "String", "isCommitted": true, "id": "String (identifier)", "createdDateTime": "String (timestamp)", "name": "String", "size": 1024, "sizeEncrypted": 1024, "azureStorageUriExpirationDateTime": "String (timestamp)", "manifest": "binary", "uploadState": "String" } | https://docs.microsoft.com/en-us/graph/api/resources/intune-apps-mobileappcontentfile?view=graph-rest-1.0 | 2019-01-16T10:43:18 | CC-MAIN-2019-04 | 1547583657151.48 | [] | docs.microsoft.com |
Making.
Previously when working in the context of one of your apps, the navigation menu for Urban Airship Engage features was on the left, as in the image below. Click the image for a larger view.
We have moved the Engage navigation just above the dashboard workspace. Access Messages, Audience, Reports, and Settings (gear icon) from the dropdown menus. Click the image for a larger view.
Product Navigation Moves Way Up
Since the release of our two major data products in Q4 2015, Connect and
Insight, we have four major product lines including our flagship mobile
engagement service, UA Engage, and our digital wallet solution, Wallet.
Access any Urban Airship product from the top navigation menu, from anywhere in the dashboard. | https://docs.urbanairship.com/whats-new/2016-03-31-new-navigation/ | 2019-01-16T10:56:30 | CC-MAIN-2019-04 | 1547583657151.48 | [array(['https://docs.urbanairship.com/images/product-nav.png',
'New Navigation New Navigation'], dtype=object) ] | docs.urbanairship.com |
The Smooth command averages the positions of curve and surface control points and mesh vertices in a specified region.
The Smooth command evens out the spacing of selected control points in small increments. This command is useful for removing unwanted detail, and for removing loops in curves and surfaces.
On mesh objects use the Weld command before smoothing in order to prevent the mesh from pulling apart.
Smooth Options
Smooths only in the specified x, y, or z direction.
Prevents edges and endpoints from being included.
Meshes: Vertices along naked edges will not be modified.
Curves: End control points will not be modified.
Surfaces: The control points along the boundaries of the surface will not be modified. The edges and trims of trimmed surfaces will be modified if they do not coincide with the surface boundary.
Use world or construction plane or object u, v, and n coordinates to determine the direction of the smoothing.
Sets an amount of smoothing.
The curve control point toward the average.
The curve control point moves past the average.
The curve control point moves away from the average (roughing).
Specifies the number of steps to iterate the smoothing factor through.
Edit curves
Edit surfaces
Rhinoceros 6 © 2010-2019 Robert McNeel & Associates. 12-Apr-2019 | http://docs.mcneel.com/rhino/6/help/en-us/commands/smooth.htm | 2019-04-18T12:51:42 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.mcneel.com |
Ankit Tiwari Full Life And Career In A Short View: This person is one of the talented personality of the Indian music industry who already placed himself in the hearts of music fans. But as he has a nice voice, he got many more offer for giving voice in the jingle. List Of More Ankit Tiwari Bollywood Songs And Download Hit Album: 01. But after 20 years, he does something that even his father and mother did not expect. Subsequently, he was offered to compose music for Do Dooni Chaar 2010 and Saheb, Biwi Aur Gangster 2011 , where he started his singing career with the song he composed for the later.
Its very fast and powerful source to find millions of songs freely available over internet. He received filmfare award and best music directer award for Aashiqui 2 album teamed with Mithoon and Jeet Ganguly. That time earning money is the main aim of him. Moreover, We do not host Song: Ankit Tiwari All Song Mp3 Pagalworld mp3. Both of his mother and father was involved with music. Click to Download button to download mp3.
And he was a good buddy of Ankit. Ankit Tiwari came to limelight after singing and composing song in Aashiqui 2 2013 film. When he was expert to lay keyboard he was too young, I mean just a kid. Most interesting part of this matter is, he missed the faded feelings right now at the time of regular stage show. In this session, we are trying to share all information about this artist and his biography. As we have say about him, there are so many young talent in the industry in the recent time and he is one of the heavyweight of them. To know about his career we are upholding the names of the songs as well as the names of the films in which he had playbacked so far: List Of Ankit Tiwari As Singer Some Song: So these are the songs by him so far and there are more to come in the recent times when person like him have to keep himself always with the new things as the gift and new entertaining theme for the music fans.
Ankit Tiwari A to Z mp3 songs download pagalworld. At the first meeting, Habib was glad to know about him and offer him to make music for his upcoming movie Do Dooni Chaar. They were own a musical band to earn some money. He was born in 6 March 1986 at Kanpur, Uttar Pradesh, India. Thus he gained his rising and take a good position as Bollywood music directer and playback singer. Ankit Tiwari composed music of Tere Jaane Se song myself while Zaeden produced music of this track.
After over his school level education from Jugal Devi Saraswati Vidya Mandir, he did not go anywhere for education. Now a day Ankit Tiwari Songs mp3 from the internet is a very common matter. Tere Jane Se new song rights acquired by Zee Music Company Inc. About Pagalworld Pagalworld is a free Music and Video search Engine where you can find your favourite songs for free. He is now a regular musician this is the reason you may think only music is his hobby.
Ankit Tiwari Songs Life and Biography In a Short View: Real Name: Ankit Tiwari. He is now one of music sensation in India. Group youtube channel presenting the video of yaad hai song by ankit tiwari and palak muchhal web. There he has a chance to make 2 songs. He was born in 1986 in Kanpur of India.
What people love to download Ankit Tiwari video songs are given there. Tiwari and mother name is Suman Tiwari. If by anyhow any of them is offensive to you, please Contact Us asking for the removal. Ankit Tiwari Mp3 Songs and Musical journey of his life: In 2008 his father was inspired him to earn money. That was enough good start on the journey to making Ankit Tiwari songs List. Ankit Tiwari is here to win your hearts with his soul-stirring voice in Musafir Ankit Tiwari is an Indian playback singer and music director.
He always wants to be a professional music composer. And importantly that was a U-turn in his career. But now a day people accept Ankit Tiwari new song as a professional singer and try to get Ankit Hindi songs to the internet and you can also get his songs on this site. His parents were also involved with music, his father had a music troupe in Kanpur and his mother was also a devotional singer. But He has another hobby without music and they are cycling, cooking and swimming. Ankit Tiwari Agar Tu Hota mp3 download 320kbps playtime of 05:29 min on PagalWorld.
But his mother and father were trained him by musical knowledge. Zee Cine Awards Best Music Director of 2014. He instituted at and won several local music competitions during the time. So this the time to take a look on the names of the awards and achievement of this man when these are also an important part of their career and also recognized as the confessional certificates of quality working with the creative things. At 2010 he was meet with director Habib Faisal. In his entire musical career, he always tries to compose music but directors are like to take him as a singer. | http://reckon-docs.com.au/download/ankit-tiwari-all-songs-download-pagalworld.html | 2019-04-18T12:22:34 | CC-MAIN-2019-18 | 1555578517639.17 | [] | reckon-docs.com.au |
4.6.7
compile(), even if the maximum delay is higher than before.
parallel_run().
add_function()were not available on CUDA devices.
dendrite.rankis deprecated, use
dendrite.pre_ranksinstead.
4.6.6
4.6.5
period_offsetto define the offset within a period to record. The default argument is 0ms to match the previous behavior (at the beginning of the period).
SpikeSourceArray: wrong update of events on the GPU device led to wrong results.
4.6.4
power(x,a)function to replace the slow cmath pow(x,a) function
4.6.3
pre_spike and
post_spike fields of a spiking neuron can now modify pre and post synaptic variables:
pre_spike = """ g_target += w post.nb_spikes += 1 """
SpikeSourceArray available on CUDA.
Monitor.mean_fr() and
Monitor.histogram() now work with PopulationViews (thanks @ilysym).
Many bug fixes with Python3 and CUDA.
4.6.2
sum()to sum over all targets, instead of specifying them.
Constantobjects can now be created to define global-level parameters which do not need to be explicitly defined in Neurons/Synapses.
init=) can use constants instead of numerical values.
report()has been improved and can now generate Markdown reports, which can be converted to html or pdf by pandoc.
4.6.1
Population.compute_firing_ratecan now be called anytime to change the window over which the mean firing rate of a spiking neuron is computed.
~/.config/ANNarchy/annarchy.json.
--gpuor
--gpu=2if you have multiple GPUs.
4.6.0
CUDA implementation for rate-code networks revised and fixed several errors.
CUDA now supports now the simulation of spiking neurons (please refer to the documentation to discover limitations).
CUDA allows now the monitoring of dendrites.
Added the command line argument –cuda to run simulations on CUDA devices.
Issue 38: user-defined functions are now available in user space:
add_function('sigmoid(x) = 1.0 / (1.0 + exp(-x))') compile() x = np.linspace(-10., 10., 1000) y = functions('sigmoid')(x)
Major reimplementation of data structures for the connectivity patterns (LIL, CSR).
Added the
TimedArray population to store the temporal evolution of a population’s firing rate internally. See Setting inputs.
Fixed structural plasticity and added an example in
examples/structural_plasticity.
Added the
projection keyword allowing to declare a single parameter/variable for the whole projection, instead of one value per synapse (the default) or one value per post-synaptic neuron (keyword
postsynaptic).
Parameters can now also be recorded by Monitors.
Projections can now be monitored, if the user knows what he does…
The global parameters of populations and projections can now be saved/loaded to/from a JSON file for manual parameterization.
Many bug fixes.
Deprecated recording functions and objects such as
RateNeuron are removed.
4.5.7
4.5.6
Spiking networks can now use variable delays, at the cost of extra computations.
Fixed bug in parser when a synapse uses pre/post variables with one name containing the other (e.g. r and r_mean).
Fixed bug when assigning postsynptic variable a random distribution.
Added the ability to access the weighted sums of a rate-coded population with:
pop.sum('exc')
Added the ability to record these weighted sums:
m = monitor(pop, ['r', 'sum(exc)'])
4.5.5
Fixed bug when loading a network saved before 4.5.3.
Added the
every decorator allowing functions to be called periodically during a simulation:
result = [] @every(period=1000.) def set inputs(n): # Set inputs to the network pop.I = Uniform(0.0, 1.0) # Save the output of the previous step result.append(pop.r) simulate(100 * 1000.)
Fixed installation with non-standard Python distribution (e.g. Anaconda).
Added a
HomogeneousCorrelatedSpikeTrains class allowing to generate homogeneous correlated spike trains:
pop = HomogeneousCorrelatedSpikeTrains(geometry=200, rates=10., corr=0.3, tau=10.)
Installing through pip does not forget CUDA files anymore.
Added
Population.clear() to clear all spiking events (also delayed) without resetting the network.
Population.reset() and
Projection.reset() now accept a list of attributes to be reset, instead of resetting all of them.
Unit tests are now performed on Travis CI to get a badge.
Bug fixed: min/max bounds on g_target was wrongly analyzed when depending on a parameter.
parallel_run() now accepts additional arbitrary arguments that can be passed to the simulation callback.
Added an
ite(cond, statement1, statement2) conditional function replicating
if cond: statement1 else: statement2, but which can be combined:
r = 1.0 + ite(sum(exc) > 1.0, sum(exc), 0.0) + ite(sum(inh) > 1.0, -sum(inh), 0.0)
The
Network class has several bugs fixed (e.g. disabled populations stay disabled when put in a network).
Populations have now an “enabled” attribute to read their status.
4.5.4:
t_laststoring the time (in ms) of the last emitted spike.
compute_firing_rate()that allows spiking neurons to compute their mean firing rate over a prefined window and store the result (in Hz) into the reserved variable
r.
unless_postcan be set in
pre_spiketo disable the evaluation of the pre-spike in that case (default behavior for simple_stdp example).
compile()now accepts a different compiler (g++ or clang++) and custom flags.
load()method when using a single weight, or for very sparse random projections.
4.5.3:
Projections can be assigned a name.
A list or Numpy array can be used to slice a Population:
neurons = [1, 4, 17, 34] subpop = pop[neurons]
Synapses can be accessed directly at the Projection level with:
proj.synapse(pre, post) # equivalent to proj[post][pre]
Bugfix: pop[0] now returns a PopulationView containing the neuron 0, not an IndividualNeuron (accessible through pop.neuron(0))
Various bugfixes in CUDA.
Bugfix: connect_from_sparse now works with popviews whose ranks are not linearly increasing (e.g. columns)
Bugfix: IO access to projection data should now be much faster.
Spiking neurons can have delayed variables.
4.5.2:
4.5.1:
4.5.0:
Networkobject has been added to run multiple simulations in parallel (
parallel_run()). See Parallel simulations and networks.
Monitorobject. Old recording methods are depreciated. See Recording with Monitors.
exactis replaced by
event-driven. Still works, but will be suppressed in future versions.
unless-refractoryflag has no effect anymore. Before,
ustarted decaying during the refractory period.
PoissonPopulationshould be used for rate-to-spike conversion,
DecodingProjectionfor spike-to-rate. See Hybrid networks.
4.4.0:
g_targetcan define min/max flags.
connect_from_sparse()method.
4.3.5:
4.3.4:
4.3.3:
4.3.2:
4.3.1:
4.3.0:
4.2.4:
4.2.3
4.2.2
4.2.1:
4.2.0:
4.1.2:
4.1.1:
4.1.0: | https://annarchy.readthedocs.io/en/stable/intro/Version.html | 2019-04-18T13:11:47 | CC-MAIN-2019-18 | 1555578517639.17 | [] | annarchy.readthedocs.io |
Setup stages
The basic principle is the insertion of web tracking tags in certain pages of your website.
There are two types of tags:
WEB: this tag tells you if the page has been visited,
TRANSACTION: operates like a Web tag, but with the possibility of adding information on the business volume generated, for example (transaction amount, number of items purchased, etc.).:
Insert the URLs corresponding to these pages in your Adobe Campaign platform, then generate and extract the associated web-tracking tags (from the Campaign execution>Resources>Web tracking tags node of the client console).
Create the web-tracking tags yourself in "on-the-fly creation" mode: the URLs corresponding to these pages will be automatically inserted in your Adobe Campaign platform.> | https://docs.campaign.adobe.com/doc/AC/en/CFG_Setting_up_web_tracking_Setup_stages.html | 2019-04-18T13:21:53 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.campaign.adobe.com |
Message-ID: <68258114.57.1555590247141.JavaMail.daemon@confluence> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_56_509562307.1555590247141" ------=_Part_56_509562307.1555590247141 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Our add-ons for Jira, Conflue= nce, Bitbucket, Cruicible and Bamboo follow an almost identical pattern. If= you have one of those products, this guide is for you.
Introductio= n
This guide with establish password-less Integrated Windows Authenticatio= n single sign-on for a JIRA instance available at https:/= /issues.example.com
The Windows uses are logged into their computers using the Active Direct= ory domain EXAMPLE.LOCAL
In this example, we assume a Microsoft Active Directory/LDA= P User Directory has already been set up for the same domain.
We also assume that you have Domain Admin rights such that you can creat= e and configure user accounts in AD. If you don't have these permissions yo= urself, you might have to ask a colleague for help
We recommend setting up a test enviro= nment before you go to production.
In our add-on configuration page, click Run Kerberos Setup = Wizard.
This= wizard helps you in the following ways:
In many cases the wizard can suggest appropriate configuration values fo= r you automatically.
If this is the case, you will be notified. You might want to jump straig= ht to the task summary= using the suggested values instead of going though each step.
For the purpose of this guide though, we will run through each steps of = the wizard.
Connecting to your Active Directory lets the wizard inspect your AD, sug= gest values and validate that your configuration is valid.
You can choose a pre-configured User Directory, or connect to an Active = Directory server of your choice:
If issues.examle.local is a DNS CNAME record, say for s= erver123.example.local, then the canonical name is: server123= .example.local
Otherwise, if it's a DNS A record, then the canonical n= ame is issues.example.com
Usually, the wizard can determine this for you by looking it up in DNS o= n the server.
If that fails, the wizard will instruct you on how to determine this man= ually on the client side.
Note that even if you access JIRA using the short name nbsp;the canonical name is always in FQDN form. (It is never just issue= s, but issues.example.com)
The Kerberos Realm name looks something like EXAMPLE.LOCAL or AC= COUNTING.COMPANY.COM
It is your Active Directory Domain name in upper case, dot-separate= d format.
If the wizard can't look this up in AD, it will instruct you on how to d= etermine this on your client.
Kerberos services need to be mapped to an Active Directory account. We r= ecommend you use a separate AD account for the purpose of mapping each Kerb= eros service.
Unless your instance is already mapped, the wizard will suggest an accou= nt name such as svc-jirasso-issues
The wizard will suggest the strongest encryption type supported by your = environment.
Some factors which may limit your choice of encryption strength:
Enabling AES 256 support in Java
In January 2018 Java 8 update 161 was released having support for AES 25=
6.
If you for some reason must use a Java-version older than this, you = can download the JCE Unlimited policy files from Oracle: k/java/javase/downloads/jce8-download-2133166.html . Full installation = instructions should be in the README, but it's basically just unzipping the= contents into the correct folder of the local Java installation. Then rest= arting Java (i.e. Jira/Confluence/etc). If the files are not picked up, you= should get a pretty clear message to that effect as soon as you try to do = something that requires AES-256.
The JCE policy files were previousl= y needed because of old US export policies for strong cryptography. Java di= sabled ciphers like AES-256 by default, requiring the installation of the a= forementioned JCE policy files to unlock. While export requirements were re= laxed more than a decade ago, it took until early this year for the first J= ava Runtime release to finally ship with the unlimited policy files include= d by default.
If your service is already mapped to account, then the strongest configu= red encryption type for that account is recommended.
In this case, the wizard has recommended AES-256:
The final page of the wizard starts by displaying the configuration dete= rmined K= erberos AES 256 bit encryption":
Step 2:
Shows you how to create a keytab file using ktpass. Again, this is a tas= k you might have to delegate to your AD team.
Step 3:
Finally, you may upload the keytab file created. After the upload has fi= nished a logon test will be performed.
Note that if you have multiple domains, then you are offered to add keys= to the existing keytab file.
A quick review of the syntax:
Running the ktpass command will output a keytab file and register i= ssues.examples.com as an HTTP Kerberos service.
Specifically, ktpass will:
servicePrincipalNameattribute on = the account with the value
HTTP/issues.example.com
userPrincipalNameattribute to
HTTP/issues.example.com
Note that ktpass must be running in a "run as administrator" cmd window = by a user with Domain Admin permissions.
After uploading the keytab file, you will be redirected to the Kerberos = Authentication Test page.
If you're lucky this test will succeed on your first try:
In our case, we got a failing test. Internet Explorer has not been&= nbsp;configured to send Kerberos tickets to issues.example.com. It fal= ls back to sending NTLM tickets instead (which is seen as a usename an= d password popup)
We need to make sure issues.example.com is placed = in the Local Intranet Security Zone, since that is a requirement for Intern= et Explorer to send Kerberos tickets.
The zone settings are usually set enterprise-wide using Group Policies. = Here's the GPO we used to place *.example.com and issues.example.com in the= Local Intranet Zone (Zone 1):
For more details on configuring Zone settings, and configuring Chrome an= d Firefox on Windows, Mac and Linux, see our Browser Configuration Guide | https://docs.kantega.no/exportword?pageId=819313 | 2019-04-18T12:24:07 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.kantega.no |
mParticle imposes certain limits on incoming data in order to protect the performance of both the mParticle dashboard and your apps. This includes limits around the length of individual data points, such as event names, how fast mParticle can receive data, and how many unique data points a workspace or account can have.
The tables below list mParticle’s Default Limits. Limits are not configurable unless otherwise specified.
mParticle can recieve data across many channels, and limits are not always enforced in the same way for each channel. Where appropriate, the details section of each table describes how limits affect SDK data - received from mParticle’s native SDKs - and S2S or ‘server-to-server’ data. S2S data includes data received via the Events API, and from partner feeds.
Was this page helpful? | https://docs.mparticle.com/guides/default-service-limits/ | 2019-04-18T12:18:05 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.mparticle.com |
Receiving Message Bus Events
Solace PubSub+ message brokers can be configured to generate events that provide management and status information and publish them to well-known topics onto the message broker message bus.
The following types of event messages can be published to the message bus:
- System—events that have a message broker- or system-wide scope (that is, they are events that concern the entire message broker—they are not limited to a Message VPN or particular client). Examples of system scope events are fan alarms, hardware failure events, and so on. Refer to Publishing System Events.
- Message VPN—events pertain to a particular Message VPN. An example of a Message VPN event is a message rate event that is generated when the set message rate threshold for a given Message VPN is exceeded. (Refer to Publishing Message VPN Events.)
- Client—events that pertain to client connections to particular Message VPN. These could be events such as client connects and disconnects. Client event messages are only sent out on the Message VPN in which the event occurred. (Refer to Publishing Client Events.)
- Subscription—events that pertain to client topic subscriptions added to or removed from the Message VPN. Subscribe and unsubscribe event messages are only sent out on the Message VPN in which the event occurred. (Refer to Publishing Subscription Events.)
Management applications using the Solace messaging APIs can receive these event messages by subscribing to specific topics that the events are published to (these topics are listed in the Summary of Standard Event Message Topics and Text table).
For a detailed listing of the possible events and descriptions of each event, refer to Solace PubSub+ Event Reference.
Configuring Message Broker Event Publishing
To be able to receive event messages published over the message broker message bus, the publishing of events over the message broker message bus must be enabled. Additional event configuration changes for the publishing of event messages can also be made.
Publishing System Events
By default, the publishing of system event messages over the message broker message bus is not enabled.
To enable a message broker to publish system event messages, a Message VPN that currently has an enabled state must be configured as the Management Message VPN. (Only one Message VPN can be configured as the Management Message VPN.) Once a Management Message VPN is configured, system event messages will then always be published to that Management Message VPN. (If no Management Message VPN is configured, then system event messages are not published for the message broker.)
Once a Management Message VPN is configured, the publishing of system-level event messages must be enabled.
- To set a Message VPN as the Management VPN, enter the following CONFIG command:
solace# configure
solace(configure)# management-message-vpn <vpn-name>
Where:
<vpn-name>is the name of the Message VPN to be designated as the Management Message VPN.
The no version of the command (no management-message-vpn) removes the Management Message VPN designation from the Message VPN.
- To turn on the publishing of system event messages, use the following Logging Event CONFIG command:
solace(configure)# logging event
solace(configure/logging/event)# publish-system
Note: The no version of the command (no publish system) turns off the publishing of system event messages.
- To view the current status of the system scope configuration, enter the following User EXEC command.
Example:
solace(configure/logging/event)# show logging event
System-tag:
Publish System Event Messages: Disabled
Publishing Message VPN-Level Events
By default, the publishing of Message VPN, client, and subscription event messages over the message broker message bus is not enabled, and must be enabled for each type you want to enable on a Message VPN by Message VPN basis.
Note: If the publishing of Message VPN, client, and/or subscription event messages over the message broker message bus is not enabled, the events are still written to syslog.
The event messages for these event types can be published to the message bus in the Message VPN in which the event occurred.
- Publishing Message VPN Events
- Setting a Topic Format for Published Events
- Publishing Client Events
- Publishing Subscription Events
Publishing Message VPN Events
To turn on Message VPN event publishing for a Message VPN, enter the following CONFIG commands:
solace(configure)# message-vpn <vpn-name>
solace(configure/message-vpn)# event
solace(configure/message-vpn/event)# publish-message-vpn
Note: The no version of the command (no publish-message-vpn) turns off the publishing of Message VPN event messages for the given Message VPN.
Setting a Topic Format for Published Events
By default, all event logs for a message broker are published to a topic that is prefixed by
#LOG/.
However, in Message Queuing Telemetry Transport (MQTT) topic syntax, the “#” character is reserved as a “zero or more levels” wildcard. Therefore, for an application to receive published event logs using an MQTT session subscription, you must configure the message broker to publish event logs to a topic that is compatible with MQTT subscriptions.
To set the topic format for published events for a Message VPN, enter the following CONFIG command:
solace(configure/message-vpn/event)# publish-topic-format [smf] [mqtt]
Where:
smf specifies that events should be published using SMF topic syntax, with the format
#LOG/<log-level>/<event-specific-content>. This is the default topic format used for published event logs.
mqtt specifies that events should be published using MQTT topic syntax, with the format
$SYS/LOG/<log-level>/<event-specific-content>.
- At least one option must be specified. If both formats are specified, event logs will be published using each of the specified topic formats.
- The no version of this command (no publish-topic-format) resets the parameter to its default value of
smfonly.
Publishing Client Events
To turn on client event publishing for a Message VPN, enter the following CONFIG commands:
solace(configure)# message-vpn <vpn-name>
solace(configure/message-vpn)# event
solace(configure/message-vpn/event)# publish-client
Note: The no version of the command (no publish-client) turns off the publishing of client scope events for the given Message VPN.
Publishing Subscription Events
To turn on subscription add/delete event message publishing for a Message VPN, enter the following CONFIG commands:
solace(configure)# message-vpn <vpn-name>
solace(configure/message-vpn)# event
solace(configure/message-vpn/event)# publish-subscription [no-unsubscribe-events-on-disconnect] [event-topic-format {v1 | v2}]
Where:
no-unsubscribe-events-on-disconnect configures the publishing of subscription events to disregard unsubscribe events for each of a client’s subscriptions when the client disconnects.
event-topic-format v1 sets the topic structure of subscription events to the form
#LOG/INFO/SUB_ADD|SUB_DEL/<subscribedTopic>
event-topic-format v2 sets the topic structure of subscription events to the form
#LOG/INFO/SUB/<routerName>/ADD|DEL/<vpnName>/<clientName>/<subscribedTopic>
- The no version of the command (no publish-subscription) turns off the publishing of subscription add/delete event messages for the given Message VPN.
- Enabling the publishing of subscription-level events to the message bus in a Message VPN may affect subscription performance on the message broker.
- Subscription event topics should not exceed the maximum topic string length. Topics strings that exceed the maximum length will be truncated. For more information on topic structure, refer to Topic Support & Syntax.
Viewing VPN Event Publishing Configuration
The current event publishing configuration and status for a Message VPN can be viewed through the show message-vpn User EXEC command.
solace> show message-vpn default
Message VPN: default
>Configuration Status: Enabled
Local Status: Up
Distributed Cache Management: Enabled
Total Local Unique Subscriptions: 0
Total Remote Unique Subscriptions: 0
Total Unique Subscriptions: 0
Maximum Subscriptions: 5000000
Export Subscriptions: Yes (100% complete)
Active Incoming Connections: 30
Service SMF: 30
Service Web-Transport: 0
Service REST: 0
Active Outgoing Connections:
Service REST: 0
Max Incoming Connections: 9000
Service SMF: 9000
Service Web-Transport: 200000
Service REST: 9000
Max Outgoing Connections:
Service REST: 6000
Basic Authentication: Enabled
Auth Type: no authentication
Auth Profile:
Radius Domain:
Client Certificate Authentication: Disabled
Maximum Chain Depth: 3
Validate Certificate Dates: Enabled
Allow API Provided Username: Disabled
Kerberos Authentication: Disabled
Allow API Provided Username: Disabled
SEMP over Message Bus: Enabled
Admin commands: Disabled
Client commands: Disabled
Distributed Cache commands: Disabled
Show commands: Disabled
Legacy Show Clear commands: Enabled
Large Message Threshold: 1024 (KB)
Event Log Tag
Publish Client Event Messages: Disabled
Publish Message VPN Event Messages: Disabled
Publish Subscription Event Messages: Disabled
No unsubscribes on disconnect: Disabled
Event topic format: N/A
Event Threshold Set Value Clear Value
---------------------------------- ---------------- ----------------
Incoming Connections 80%(7200) 60%(5400)
Service SMF 80%(7200) 60%(5400)
Service Web-Transport 80%(160000) 60%(120000)
Service REST 80%(7200) 60%(5400)
Ingress Message Rate (msg/sec) 4000000 3000000
Egress Message Rate (msg/sec) 4000000 3000000
Subscriptions (#subs) 80%(4000000) 60%(3000000) | https://docs.solace.com/System-and-Software-Maintenance/Receiving-Message-Bus-Events.htm?Highlight=receiving%20events%20message%20bus%20events | 2019-04-18T12:22:40 | CC-MAIN-2019-18 | 1555578517639.17 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.solace.com |
.
Platform Version Considerations
The following should be considered when specifying a platform version:
When specifying a platform version, you can use either the version number (for example,
1.2.0) or
LATEST.
To use a specific platform version, specify the version number when creating or updating your service. If you specify
LATEST, your tasks use the most current platform version available, which may not be the most recent platform.3.0
Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, see Fargate Task Recycling.
Beginning on March 27, 2019, any new Fargate task launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Specifying Sensitive Data.
-. | https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html | 2019-04-18T13:08:32 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.aws.amazon.com |
ElasticsearchBufferingHints
Describes the buffering to perform before delivering data to the Amazon ES destination.
Contents
- IntervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
Type: Integer
Valid Range: Minimum value of 60. Maximum value of 900.
Required: No
- SizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 100.
Required: No
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/firehose/latest/APIReference/API_ElasticsearchBufferingHints.html | 2019-04-18T12:47:54 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.aws.amazon.com |
Published: 2005-01-19
Applies to:
- Content Studio all versions
Type: Information
Symptoms
If you have a very deep folder structure in Content Studio you might not be able to create any document in this container.
Cause
Content Studio has a limitation in the total length of the path including the file name. The maximum allowed length is 255 characters. For all documents except for uploaded files the file name itself is 36 characters plus the length of the file extension.
Resolution
If you plan to use a very deep folder structure you should consider using short names for the folders (units and categories).
Status
This limitation is by design. | https://docs.contentstudio.se/Knowledgebase/CS5/Limitations%20in%20the%20Content%20Studio%20file%20name%20lengths_5.html | 2019-04-18T13:19:38 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.contentstudio.se |
Q: I'm having trouble with customers, payments or other EDD features. What should I do?
EDD Bookings uses the Easy Digital Downloads plugin's e-commerce features, so if you have any issues with payments, payment gateways, customer records or the like, please contact their support team directly here, or browse their documentation here.
Remember that EDD Bookings is a 3rd-party extension for EDD, so while we handle issues with regards to bookings specifically, any issues with the EDD core plugin or other extensions are handled by the respective developers responsible. | https://docs.eddbookings.com/article/440-q-im-having-trouble-with-customers-payments-or-other-edd-features-what-should-i-do | 2019-04-18T13:23:50 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.eddbookings.com |
.
Obtaining the SDK
In case you want to start from scratch, you can obtain the SDK through worker packages, our package management framework.
You can obtain the relevant worker package as follows:
Add the following configuration within
<root>/workers/<your_worker>/spatialos_worker_packages.jsonto download the header files and libraries to link against:
{ .
Multiple packages can be specified at once as multiple entries in the
targetsarray with different paths if desired. For example, when using Debug and Release configurations on Windows,. | https://docs.improbable.io/reference/12.0/cppsdk/setting-up | 2019-04-18T12:44:56 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.improbable.io |
Skype for Business Bot - hybrid environment support
Skype for Business bots can be connected to the Skype for Business Server users if hybrid connectivity has been deployed in the environment.. Bots will be configured as online users reachable by the on-premises users.
Getting started
For more information about how to deploy hybrid connectivity between Skype for Business Server and Skype for Business Online, see Deploy hybrid connectivity between Skype for Business Server and Skype for Business Online.
Configuring hybrid connectivity requires Active Directory synchronization to keep your on-premises and online users synchronized. Azure AD Connect is the best way to connect your on-premises directory with Azure Active Directory (Azure AD) and Office 365. For more information about using Azure AD Connect for hybrid environment configuration, see Integrate your on-premises directories with Azure Active Directory.
Read Skype for Business Bot - Common Errors to troubleshoot some of the common errors encountered during the Skype for Business Bot setup.
Bot setup for Skype for Business hybrid environment
After you have successfully deployed the hybrid environment, follow these steps to build and enable a Skype for Business bot:
Create the bot using the Microsoft Bot Framework. See Creating a Skype for Business bot section for details.
Launch the Connecting your bot to Skype for Business Online page and follow all the instructions to add your bot to Skype for Business Online. You will be required to sign in as a Tenant Administrator of the Skype for Business Online environment and run the New-CsOnlineApplicationEndpoint PowerShell cmdlet.
New-CsOnlineApplicationEndpoint -ApplicationID <AppID generated from Bot Framework Portal like 41ec7d50-ba91-1207-73ee-136b88859725> -Name <NameOfTheBot> -Uri sip:<[email protected]>
For the Skype for Business Hybrid environment, the New-CsOnlineApplicationEndpoint cmdlet will output an additional on-premises cmdlet to be run in your Skype for Business Server (on-premises) Management Shell. The additional cmdlet is covered in more detail in the next step.
Note
Read Skype for Business Bot - Common Errors to troubleshoot some of the common Bot setup issues.
Create an application endpoint on the Skype for Business Server (on-premises) Management Shell using the following on-premises cmdlet:
New-CsHybridApplicationEndpoint -ApplicationId <AppID generated from Bot Framework Portal like 41ec7d50-ba91-1208-73ee-136b88859725> -DisplayName <NameOfTheBot> -SipAddress sip:<[email protected]> –OU <ou=Redmond,dc=litwareinc,dc=com>
Note
Ensure that the New-CsHybridApplicationEndpoint parameters: ApplicationId, DisplayName, and SipAddress have the same values as (step 2) New-CsOnlineApplicationEndpoint parameters: ApplicationID, Name and Uri, respectively.
Skype for Business Server Cumulative Update 5 or greater is required to run this cmdlet.
The successful execution of the New-CsHybridApplicationEndpoint cmdlet will create a disabled user object on Active Directory and show a "Successfully initiated provisioning of application endpoint on-prem" message.
Wait for the newly created user object to be directory synced to the Azure Active Directory or start a new directory sync cycle by running the Start-ADSyncSyncCycle on the domain controller machine. To learn more about Azure AD Connect directory sync, see Azure AD Connect sync: Scheduler and Integrate your on-premises directories with Azure Active Directory.
Ensure that you wait for 8 hours before the endpoint is discovered from the Skype for Business clients for the newly created application ids. An on-premises user should be able to search for the BOT from the client and initiate the chat conversations. | https://docs.microsoft.com/en-us/skype-sdk/skype-for-business-bot-framework/docs/bot-hybrid-support | 2019-04-18T12:37:49 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.microsoft.com |
:
beginLocationTrackingmethod and let the mParticle SDK collect and update location information for you. Remember to call
endLocationTrackingwhen you no longer need to track location
locationproperty directly. In this case you are responsible for maintaining
locationupdated and setting it to
nil/nullwhen no longer needed
// Begin location tracking [[MParticle sharedInstance] beginLocationTracking:kCLLocationAccuracyThreeKilometers minDistance:1000]; // End location tracking [[MParticle sharedInstance] endLocationTracking]; // Set location directly - (void)updateLocation:(CLLocation *)newLocation { [MParticle sharedInstance].location = newLocation; }
// Begins tracking location MParticle.sharedInstance().beginLocationTracking(kCLLocationAccuracyThreeKilometers, minDistance: 1000) // Ends tracking location MParticle.sharedInstance().endLocationTracking()
The
minDistance value specifies the distance in meters the user must move before a location update will be logged.
Be aware that a user must give permission to an app to use their location. A second permission is required to enable location tracking when the app is in background. By default, if you use automatic location tracking, your app will make both of these permission requests to the user as required.
If you don’t wish to collect location data while your app is in background, you can disable
backgroundLocationTracking. This should be done before enabling location tracking.
[MParticle sharedInstance].backgroundLocationTracking = NO; [[MParticle sharedInstance] beginLocationTracking:kCLLocationAccuracyThreeKilometers minDistance:1000];
func updateLocation(newLocation: CLLocation) -> Void { MParticle.sharedInstance().location = newLocation }
This will cause the SDK to stop including location information when your app is in background and resumes the inclusion of location information when the app comes back to the foreground.
Location tracking is not supported for tvOS.
Was this page helpful? | https://docs.mparticle.com/developers/sdk/ios/location/ | 2019-04-18T13:04:50 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.mparticle.com |
Good drainage is essential to maintain the road and minimise sediment entering waterways and estuaries. Council and company earthworks’ guidelines are aimed at reducing the amount of sediment that gets into waterways. On-going sediment harms stream life.
- Drainage is onto stable ground.
- Water is cut off as often as possible, to avoid directing lots of water into one place – where it would scour.
- Culverts on the top and lower sections of the road are lined up to minimise the amount of water forced into the lower road’s water table.
- Poor water control can quickly cause a lot of damage and erosion.
- This leads to expensive road maintenance.
- It also generates fine sediment which is bad for the environment.
- Drainage is directed away from the fill.
- The fill is contained by slash and stabilised by grass seeding.
- Water and sediment control were installed as soon as was possible.
- Water volume and speed has been reduced to help prevent erosion and sediment loss.
- The amount of sediment going into waterways has been minimised.
- Large amounts of fine sediment can severely damage stream life.
- If this was the result of poor practice it could lead to prosecution. | https://docs.nzfoa.org.nz/live/nz-forest-road-engineering-manual-operators-guide/water-control/water-control-overview/ | 2019-04-18T12:57:18 | CC-MAIN-2019-18 | 1555578517639.17 | [array(['/site/assets/files/1248/pic-95.430x323.jpg', None], dtype=object)
array(['/site/assets/files/1249/pic-97.430x323.jpg', None], dtype=object)
array(['/site/assets/files/1250/pic-96.430x323.jpg', None], dtype=object)
array(['/site/assets/files/1251/pic-98.430x323.jpg', None], dtype=object)] | docs.nzfoa.org.nz |
A percentage value representing the ratio of a Progress Bar's current position value with respect to the Maximum and Minimum values is displayed within the progress bar.
Namespace: DevExpress.AspNetCore
Assembly: DevExpress.AspNetCore.Bootstrap.v18.2.dll
public const ProgressBarDisplayMode Percentage
Public Const Percentage As ProgressBarDisplayMode
Bootstrap Controls for ASP.NET Core are in maintenance mode. We don’t add new controls or develop new functionality for this product line. Our recommendation is to use the ASP.NET Core Controls suite. | https://docs.devexpress.com/ASPNETCoreBootstrap/DevExpress.AspNetCore.ProgressBarDisplayMode.Percentage | 2019-04-18T13:14:04 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.devexpress.com |
Integrations
Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.
In order to take advantage of the Amazon SNS integration, you’ll need the SNS Topic ARN and the credentials of an Identity and Access Management (IAM) user that has access to SNS.
Click here for information on SNS ARN Syntax. Sample ARN syntax for SNS is: arn:aws:sns:region:account-id:topicname.
Refer to the steps below for Amazon setup:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sns:Publish" ], "Resource": [ "arn:aws:sns:{region}:{account-id}:{topicname}" ] } ] }
Create a Custom Policy. Use one of the following methods to create the policy:
The event data will be forwarded as JSON objects. Please refer to the JSON documentation for a detailed description of the data format.
Was this page helpful? | https://docs.mparticle.com/integrations/amazonsns/ | 2019-04-18T12:17:28 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.mparticle.com |
Presentation
Multiple Features Import is a module that allows you to import multiple features using PrestaShop’s native import tool. All you need to do is indicate the separator for multiple values and import your CSV file, the module does everything else for you.
General Points and Operating Principles
Multiple Features Import requires the Presta-Module Multiple Features module in order to work. You can’t use multiple import if the multiple features provided by our module are not available. Your feature would then be displayed as Cotton|Silk|Wool, instead of being separated into 3 separate values.
What is more, the module needs you to define the multiple values separator that you have used, so as to import the information correctly. You need to use this character or your import will fail.
Finally, know that the module uses Prestashop’s native import tool and does not modify it, so you can also use it for standard imports.
Configuration
Features Separator
This allows you to set the separator used to separate the different values for a feature in the file to be imported.
Tutorial
You need to use specific formatting to import products with multiple features. Define your separator in the module configuration (the default character | is used in this example), then go to your CSV file.
The column containing the features should be in the following format:
FeatureName:Value|Value2|Value3,FeatureName2:Value|Value2|Value3.
Example: Color:Red|Black|Blue, Material:Cotton|Silk
The features and values will be automatically created if they are not already in your catalog.
PrestaShop’s import system allows you to upload a CSV file with your data (with UTF-8 or ISO-8859-1 encoding). Once the file has been imported, you can match your columns with the existing information on the product, category and supplier sheets, etc., via the simple graphic interface.
This section aims to provide an example of a product import with multiple features. It is recommended that you back up your shop before you start, so that an error does not destroy your entire catalog. Navigate to Advanced Parameters in your PrestaShop shop and click Database Backup and follow the instructions.
Once you have backed up your database, navigate to Advanced Parameters -> Menu. This takes you to this page:
You need to start by selecting the type of data to be imported. For this example we are going to import products with multiple features. Select Products from the dropdown list.
The tool then prompts you to select the language of the file that you want to import, from among all the languages currently installed on your shop.
Import your file using the button provided to this end or select a file that you already uploaded for a previous import by clicking Choose from history
State if your file uses ISO-8859-1 encoding using the slider provided.
Selecting NO, is the same as stating that the file uses UTF-8 encoding.
Also enter the different separators you use in your CSV file if they are different to the default values.
The following 4 options may have an effect on your current catalog (use with care!):
– Delete all products before import: this allows you to delete your ENTIRE catalog before proceeding with the import. Be sure to deactivate the modules that manipulate your products in order to avoid errors on deletion (such as Advanced Pack, for example).
– Use product reference as key: this allows you to use the product reference as the key, which means the references must be unique otherwise this will create a conflict in your database and generate errors on your shop.
– Do not regenerate thumbnails: this allows you to not regenerate the thumbnails fo the products imported.
– Force all ID numbers: this allows you to force the use of the ID numbers of the products imported. PrestaShop will automatically generate the ID numbers you need for your products if you do not force the ID numbers.
Click Next Step when you have done this.
Your shop will then import the CSV file selected and display the data in a table.
There are two fields at the top of the page:
– The first one allows you to save the configuration that you have just used under a specific name, so that you can use it again for future imports.
– The second one allows you to define the number of lines to be ignored when the file is imported. Enter the number of lines that do not correspond to data (header lines, for example).
The following table requires mapping for the data to be imported correctly.
There is a drop-down list at the top of each column. Just tell your shop what the given column corresponds to, from the values provided by PrestaShop.
For example, the last column in the file used in this example corresponds to the Features field. Features (Name:Value:Position:Custom) should therefore be selected from the drop-down list at the top of the column. Multiple Features Import will automatically retrieve the multiple values and assign them to the relevant product.
Simply map the different fields and then click Import .CSV data.
The data has now been imported into your shop.
PrestaShop will then tell you to re-build the search engine index for your shop so that the newly imported products can be found.
To do this, navigate to Preferences -> Search in your back office. You will see the following block in the middle of the page:
Click on Re-build the entire index and wait for a few moments. Your products will be fully operational when this operation is complete. | https://docs.presta-module.com/en/multiple-features-import-2/ | 2019-04-18T13:02:51 | CC-MAIN-2019-18 | 1555578517639.17 | [array(['https://docs.presta-module.com/wp-content/uploads/2016/08/Configuration-1.jpg',
'Configuration'], dtype=object) ] | docs.presta-module.com |
This topic presents the supported databases for the TrueSight Operations Management system:
Supported databases for the Presentation Server
Operations Management runs on the following databases:
- PostgreSQL 9.4.4
- Elasticsearch 1.3.2
The databases are embedded within Operations Management and need not be installed separately.
Supported database for App VisibilityDuring App Visibility portal and collector installations, a local PostgreSQL database is automatically installed on the same computer as the App Visibility portal and collector host. You cannot connect to a remote database.
Supported databases for Infrastructure Management
For information about supported databases for Infrastructure Management, see Supported databases for Infrastructure Management
Related topic | https://docs.bmc.com/docs/display/TSPS101/Supported+databases | 2019-04-18T13:29:00 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.bmc.com |
An Act to amend 341.05 (7) of the statutes; Relating to: operation of farm tractors on highways.
Amendment Histories
Bill Text (PDF: )
SB195 ROCP for Committee on Senate Organization (PDF: )
SB195 ROCP for Committee on Transportation, Public Safety, and Veterans and Military Affairs On 7/18/2013 (PDF: )
Wisconsin Ethics Commission information
2013 Assembly Bill 259 - Rules | https://docs.legis.wisconsin.gov/2013/proposals/sb195 | 2019-04-18T13:16:03 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.legis.wisconsin.gov |
New-Network
Controller Load Balancer Configuration
Syntax
New-NetworkControllerLoadBalancerConfiguration -ConnectionUri <Uri> -Properties <LoadBalancerManagerProperties> -ResourceId <string> [-CertificateThumbPrint <string>] [-Credential <PSCredential>] [-Etag <string>] [-Force] [-ResourceMetadata <ResourceMetadata>] [-Tags <psobject>]
Description
This cmdlet adds/updates the configuration of load balancer in Network Controller. This includes the virtual IP of load balancer service in Network Controller, different VIP pools associated with the load balancer and IP address ranges that are excluded from outbound NAT
Examples
Example 1
This example creates load balancer configuration in Network Controller, with REST endpoint as. It specifies the load balancer service IP address and a VIP pool from an existing logical network and subnet.
\\Retrieve the VIP pool $pool=Get-NetworkControllerIpPool -ConnectionUri -NetworkId ln1 -SubnetId subnet1 \\Create load balancer configuration object $lbConfig = New-Object Microsoft.Windows.NetworkController.LoadBalancerManagerProperties $lbconfig.loadBalancerManagerIpAddress = "10.0.0.23" $lbconfig.VipPools = $pool \\Add the load balancer configuration to Network Controller New-NetworkControllerLoadBalancerConfiguration -ConnectionUri -ResourceId lbconfig1 -Properties $lbconfig
Required Parameters
Specifies the Uniform Resource Identifier (URI) of the Network Controller, used by all Representational State Transfer (REST) clients to connect to Network Controller.
Specifies the properties of the load balancer configuration that can be changed:
Specifies the unique identifier for the load balancer added/changed for a load balancer configuration: | https://docs.microsoft.com/en-us/powershell/module/networkcontroller/new-networkcontrollerloadbalancerconfiguration?view=win10-ps | 2017-12-11T04:17:43 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.microsoft.com |
Language: Resource Collectors
Included in Puppet Enterprise 2016.1. A newer version is available; see the version menu above for details.
Resource.
Syntax:
- (or one of the value’s members, if the value is an array) is identical to the search key.
!= (non.. | https://docs.puppet.com/puppet/4.4/lang_collectors.html | 2017-12-11T04:06:20 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.puppet.com |
The like box is a special version of the Facebook like button designed for Facebook pages. Embedding this widget allows you to promote your Facebook page directly on your website.
Important!
Facebook like box works only with Facebook pages and not with profiles, or groups. This block requires a Facebook javascript SDK set up and activated. | http://docs.themeburn.com/burnengine/content-blocks/facebook-like-box/ | 2017-12-11T04:02:57 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.themeburn.com |
When the upload process is completed, processing starts on the STYLY side. Depending on the size of the data, this processing usually takes several minutes or within an hour.
To check if the process is completed, press “Assets” in the STYLY editor.
Then, press “3D Model”.
Press “Upload”.
The processing status of the data of the uploaded item is written as Processing Status at the bottom of the page which indicates the processing status of the data uploaded.
There are two kinds of status.
Waiting – waiting for processing
Complete – Processing completed
Once the upload is completed in this manner, the uploaded model can be inserted into the scene. | http://docs.styly.cc/uploading-original-assets/uploading-status/ | 2017-12-11T03:51:58 | CC-MAIN-2017-51 | 1512948512121.15 | [array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object) ] | docs.styly.cc |
Overview
The parking API for on-street parking. The “blocks” api returns data for a number of city blocks around a position. A block represents a road from one intersection to the next. The concept of blocks is important for the OnStreet API to be map independent and to be conflatable to any third party map library. Each block has a forecasted probability parking and datat attributes like the cost of parking.
Each block consists of multiple (1..n) segments. Segments start and end when the parking rules change. A pretthy common six-segment block in the US would have 3 segments on each side of the road: The first segment would cover parking from the corner to the close to the mid point, where there is a short segment of no parking around the alley. A final segment would cover the stretch from the alley to the other corner.
Driveways and Fire Hydrants
In order to avoid “zebra striping” in map visualizations, the API results do not include short breaks in street parking such as driveways, fire hydrates, mailboxes, crosswalks, etc. Gaps that are shorter than 25m are not included, unless they also represent a change in parking rules. We do include these short gaps in our internal representation, but the API does not surface them.
Spaces Total
The
spacesTotal field returned by the API is specific to each segment.
The number of spaces returned here reflects the number of actual spaces in the segment,
this does not take into account whether the street is currently available, “isOpen”, “No Parking”, etc.,
but an absolute measurement of spaces.
Usage
The v3 block queries are based on a circle defined by a point (lat, lon) and a radius (meters).
Entry Datetime and Duration
The
entrytime and
duration parameters not only change the rates
but also the
occupancy prediction. Selecting an entrytime in the past or more than a few hours in the future is not supported.
Probability versus isOpen
The
isOpen attribute is a boolean that summarizes whether parking is legal at this point in time. For consistency, our forecasting algorithm will predict the number of available spaces for blocks that are temporary not allowing parking (such as for
street cleaning). For most end users, a block with a high probability of an open parking spot but no legal parking at this time (isOpen equals False) should probably not be shown.
Example /blocks request
# obtain token from UAS and store in $token variable # the point is represented as lat|lon curl ""
API versions and extensions
When the API changes in a substantial way, we will bump the version number according to ‘semantic versioning’ standards and notify our customers. We expect our client software to be ‘open to extension’, that is, fields can be added to the API output without changing the version number. Client software should ignore any fields that are not described in this documentation or that were not present at the time of their implementation.
Response Attributes list
/blocks/v3
GET List on-street parking (blocks) in a given area.
Parameters= required
Responses
200 A list of blocks with their segments. In the example, only one blocks and an abbreviated list of segments is shown for clarity.
{ "result": [ { "name": "10th Street", "probability": 53, "reservations": [], "segments": [ { "end": 185, "side": "RIGHT", "start": 142, "amenities": [ { "id": 1, "name": "Free" }, { "id": 2, "name": "Metered parking" }, { "id": 3, "name": "Overnight parking" } ], "paymentID": [], "polyline6": "ezi{_A~sz~`FlOmR", "rateCards": [ "Each Hour (Mon-Tue | Thu-Sat; 9am-6pm; 9 Hour Max): $1", "1 Hour (Wed; 9am-10am): $1", "Wed (10am-12pm): No Parking", "Each Hour (Wed; 12pm-6pm; 6 Hour Max): $1", "Mon-Sat (6pm-9am): Free", "Sun: Free" ], "segmentID": "826357f3-9897-45c4-bd40-a0783aec3950", "paymentIDs": [], "spacesTotal": 16, "isOpen": true, "calculatedRates": [ { "rate_cost": 0, "quoted_duration": "1:00:00", "rate_type": "B" } ] }, { "end": 122, "side": "RIGHT", "start": 60, "amenities": [ { "id": 1, "name": "Free" }, { "id": 3, "name": "Overnight parking" }, { "id": 4, "name": "Residential parking" } ], "paymentID": [ ], "polyline6": "_{j{_A|z{~`F~Vm[", "rateCards": [ "Mon-Tue | Thu-Sun (7am-2am; Residential Permit Only): Free", "Wed (7am-10am; Residential Permit Only): Free", "Wed (10am-12pm): No Parking", "Wed (12pm-2am; Residential Permit Only): Free", "2am-7am: Free" ], "segmentID": "91819789-620d-44c4-ae77-6456f5c7e3aa", "paymentIDs": [ ], "spacesTotal": 11, "isOpen": true, "calculatedRates": [ { "rate_cost": 0, "quoted_duration": "1:00:00", "rate_type": "B" } ] } ] } ] } | http://docs.inrix.com/parking/blocks-v3/ | 2020-11-24T01:15:26 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.inrix.com |
The
videojs() function doubles as the main function for users to create a
Player instance as well as the main library namespace.
It can also be used as a getter for a pre-existing Player instance.
However, we strongly recommend using
videojs.getPlayer() for this
purpose because it avoids any potential for unintended initialization.
Due to limitations of our JSDoc template, we cannot properly document this as both a function and a namespace, so its function signature is documented here.
Arguments
id
string|Element, required
Video element or video element ID.
options
Object, optional
Options object for providing settings. See: Options Guide.
ready
Component~ReadyCallback, optional
A function to be called when the Player and Tech are ready.
Return Value
The
videojs() function returns a Player instance.
Classes
static browser :Object
A reference to the browser utility module as an object.
static dom :Object
A reference to the DOM utility module as an object.
static log :function
A reference to the log utility module as an object.
static options :Object
The global options object. These are the settings that take effect if no overrides are specified when the player is created.
static players :Object
Global enumeration of players.
The keys are the player IDs and the values are either the Player instance or
nullfor disposed players.
static TOUCH_ENABLED :boolean
Use browser.TOUCH_ENABLED instead; only included for backward-compatibility with 4.x.
- Deprecated:
- Since version 5.0, use browser.TOUCH_ENABLED instead.
static url :Object
A reference to the URL utility module as an object.
static VERSION :string
Current Video.js version. Follows semantic versioning.
Methods
static addLanguage(code, data) → {Object
Adding languages so that they're available to all players. Example:
videojs.addLanguage('es', { 'Hello': 'Hola' });
Parameters:
Returns:Object -
The resulting language dictionary object
static bind(context, fn, uidopt) → {function}
Bind (a.k.a proxy or context). A simple method for changing the context of a function.
It also stores a unique id on the function so it can be easily removed from events.
Parameters:
Returns:function -
The new function that will be bound into the context given
static computedStyle(el, prop)
A safe getComputedStyle.
This is needed because in Firefox, if the player is loaded in an iframe with
display:none, then
getComputedStylereturns
null, so, we do a null-check to make sure that the player doesn't break in these cases.
Parameters:
static createTimeRange(start, end)
Create a
TimeRangeobject which mimics an HTML5 TimeRanges instance.
Parameters:
static createTimeRanges(start, end)
Create a
TimeRangeobject which mimics an HTML5 TimeRanges instance.
Parameters:
static deregisterPlugin(name)
De-register a Video.js plugin.
Parameters:
Throws:
If an attempt is made to de-register the base plugin.
-
- Type
- Error
static extend(superClass, subClassMethodsopt) → {function}
Used to subclass an existing class by emulating ES subclassing using the
extendskeyword.
Parameters:
Returns:function -
The new class with subClassMethods that inherited superClass.
Example
var MyComponent = videojs.extend(videojs.getComponent('Component'), { myCustomMethod: function() { // Do things in my method. } });
static formatTime(seconds, guide) → {string}
Delegates to either the default time formatting function or a custom function supplied via
setFormatTime.
Formats seconds as a time string (H:MM:SS or M:SS). Supplying a guide (in seconds) will force a number of leading zeros to cover the length of the guide.
Parameters:
Returns:string -
Time formatted as H:MM:SS or M:SS
Example
formatTime(125, 600) === "02:05"
static getAllPlayers() → {Array}
Returns an array of all current players.
Returns:Array -
An array of all players. The array will be in the order that
Object.keysprovides, which could potentially vary between JavaScript engines.
static getComponent(name) → {Component}
Get a
Componentbased on the name it was registered with.
Parameters:
- Deprecated:
- In `videojs` 6 this will not return `Component`s that were not registered using Component.registerComponent. Currently we check the global `videojs` object for a `Component` name and return that if it exists.
static getPlayer(id) → {Player|undefined}
Get a single player based on an ID or DOM element.
This is useful if you want to check if an element or ID has an associated Video.js player, but not create one if it doesn't.
Parameters:
Returns:Player | undefined -
A player instance or
undefinedif there is no player instance matching the argument.
static getPlayers() → {Object}
Get an object with the currently created players, keyed by player ID
Returns:Object -
The created players
static getPlugin(name) → {function|undefined}
Gets a plugin by name if it exists.
Parameters:
Returns:function | undefined -
The plugin (or
undefined).
static getPlugins(namesopt) → {Object|undefined}
Gets an object containing multiple Video.js plugins.
Parameters:
Returns:Object | undefined -
An object containing plugin(s) associated with their name(s) or
undefinedif no matching plugins exist).
static getPluginVersion(name) → {string}
Gets a plugin's version, if available
Parameters:
Returns:string -
The plugin's version or an empty string.
static getTech(name) → {Tech|undefined}
Get a
Techfrom the shared list by name.
Parameters:
static hook(type, The)
Add a function hook to a specific videojs lifecycle.
Parameters:
static hookOnce(type, The)
Add a function hook that will only run once to a specific videojs lifecycle.
Parameters:
static hooks(type, fnopt) → {Array}
Get a list of hooks for a specific lifecycle
Parameters:
Returns:Array -
an array of hooks, or an empty array if there are none.
static isCrossOrigin(url, winLocopt) → {boolean}
Returns whether the url passed is a cross domain request or not.
Parameters:
Returns:boolean -
Whether it is a cross domain request or not.
static mergeOptions(…sources) → {Object}
Merge two objects recursively.
Performs a deep merge like lodash.merge, but only merges plain objects (not arrays, elements, or anything else).
Non-plain object values will be copied directly from the right-most argument.
Parameters:
Returns:Object -
A new object that is the merged result of all sources. parseUrl(url) → {url:URLObject}
Resolve and parse the elements of a URL.
Parameters:
Returns:url:URLObject -
An object of url details
static plugin(name, plugin)
Deprecated method to register a plugin with Video.js
Parameters:
- Deprecated:
- videojs.plugin() is deprecated; use videojs.registerPlugin() instead
static registerComponent(name, comp) → {Component}
Register a component so it can referred to by name. Used when adding to other components, either through addChild
component.addChild('myComponent')or through default children options
{ children: ['myComponent'] }.
NOTE: You could also just initialize the component before adding.
component.addChild(new MyComponent());
Parameters:
static registerPlugin(name, plugin) → {function}
Register a Video.js plugin.
Parameters:
Returns:function -
For advanced plugins, a factory function for that plugin. For basic plugins, a wrapper function that initializes the plugin.
static registerTech(name, tech)
Registers a
Techinto a shared list for videojs.
Parameters:
static removeHook(type, fn) → {boolean}
Remove a hook from a specific videojs lifecycle.
Parameters:
Returns:boolean -
The function that was removed or undef
static resetFormatTime()
Resets formatTime to the default implementation.
static setFormatTime(customImplementation)
Replaces the default formatTime implementation with a custom implementation.
Parameters:
static trigger(elem, event, hashopt) → {boolean|undefined}
Trigger an event for an element
Parameters:
Returns:boolean | undefined -
Returns the opposite of
defaultPreventedif default was prevented. Otherwise, returns
undefined
static use(type, middleware)
Define a middleware that the player should use by way of a factory function that returns a middleware object.
Parameters:
static xhr(options) → {XMLHttpRequest|XDomainRequest}
A cross-browser XMLHttpRequest wrapper.
Parameters:
Returns:XMLHttpRequest | XDomainRequest -
The request object. | https://docs.videojs.com/module-videojs-videojs.html | 2020-11-24T01:08:19 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.videojs.com |
Evaluation Sessions Grid
ImportantThis content may not be the latest Genesys Engage cloud content. To find the latest content, go to Recording in Genesys Engage cloud..
ImportantTo return the grid columns to their default state, click Reset to defaults from the Select Columns list
This page was last edited on October 2, 2020, at 12:39.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PSAAS/latest/Recording/evaluationsessiongrid | 2020-11-24T00:18:19 | CC-MAIN-2020-50 | 1606141169606.2 | [array(['/images/5/5e/Smicon_resetview.png', 'Smicon resetview.png'],
dtype=object) ] | docs.genesys.com |
Vehicle Fixed Costs
If you wish to account for vehicle fixed costs (licensing, insurance, and depreciation) incurred before you started using the RTA system, you may enter it using the Audit File Adjustments option.
- Select System > Utilities > Vehicles > Audit File Adjustments from the RTA main menu (STVA).
- Select the cost type from the drop down list: license, insurance, or depreciation.
- Select a radio button to specify in which period(s) to post the cost. Select Year to post the cost to the vehicle's year and life costs. Select Life to post the cost to the vehicle's life costs.
- Accept the current meter reading displayed or enter the meter reading associated with this transaction.
- Accept the default date displayed or enter the date associated with this transaction.
- Enter the cost amount to post.
- Enter up to 12 characters to describe the transaction for audit purposes and choose Post to complete the transaction. | https://docs.rtafleet.com/rta-manual/vehicle-inventory/vehicle-fixed-costs/ | 2020-11-24T00:50:27 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.rtafleet.com |
Data Explorer¶
The Data Explorer is available when modifying layouts on the Subject Viewer. This tool is helpful as a way to navigate FHIR JSON data in order to extract values for display.
The power behind the Data Explorer is JMESPath (JSON Matching Expression paths). JMESPath is a query language for searching JSON documents. It allows you to declaratively extract elements from a JSON document.
Example: Configure Birth Place¶
Prerequisites¶
- A user who has ABAC policy named
layoutAdmin.
- A project with subjects (patients) who have birthPlace stored.
- In this example:
- Project:
Clinical T2D LifeExtend
- Subject:
Ms. Lavelle Vandervort
1. "Not Available" for Birth Place¶
A layout is configured to display Birth Place in the header, however the value is showing as Not Available.
Not Available could mean this subject is missing data, however in this example the header configuration for Patient JMESPath Query is incorrectly using a query expression of
extensions.
2. Open Data Explorer¶
Open the Data Explorer using the icon next to Birth Place:
3. Modify the JMESPath Query Expression¶
Using the Input data as a guide, navigate the JSON structure to locate birthPlace inside of extension.
After reviewing the JSON structure for the birthPlace element, craft a JMESPath query to extract the
city,
state,
country of birthPlace.
Example
extension[?url == ''].valueAddress | values([0]) | join(', ', @)
The example above, annotated below, shows off how helpful JMESPath query expressions can be:
// Find extension data element, if available, related to `birthPlace` FHIR patient extension. // See: <> extension[?url == ''].valueAddress // The extension calls for the keys of: city, state, country for birthPlace. // Extract the object key values from the birthPlace object into an array. // See: <> | values([0]) // Join the array of key values with a comma // See: <> | join(', ', @)
4. Apply change¶
Apply the change and immediately the value for birthPlace will populate if this data exists for this subject.
Save the layout to share this change with all users of this project.
5. Output Format Types and Data Display¶
Previously the above steps have shown off JMESPath's power and ability to extract and render data using the default Default Raw JMESPath Result output format type.
However the web console offers Output Format Type display utilities of:
- Address
- Annotation
- Codeable Concept
- Coding
- Contact Point
- DateTime
- Human Name
- Identifier
- Period
- Quantity
- Range
- Ratio
- Timing
We can simplify our JMESPath expression and use the Address output format type:
extension[?url == ''].valueAddress
When the queried data renders correctly, modify the Result Type in the layout configuration to use Address as the output format type.
Save the layout to share this change with all users of this project. | https://docs.us.lifeomic.com/user-guides/subject-viewer/data-explorer/ | 2020-11-24T01:08:58 | CC-MAIN-2020-50 | 1606141169606.2 | [array(['../../images/data-explorer-1-header-not-displaying-data-for-birthplace.png',
'Data Explorer Header Missing Birthplace'], dtype=object)
array(['../../images/data-explorer-2-open.png', 'Data Explorer Open'],
dtype=object)
array(['../../images/data-explorer-3-modify-jmespath-query.png',
'Data Explorer Modify JMESPath Query Expression'], dtype=object)
array(['../../images/data-explorer-4-query-modified.png',
'Data Explorer Query Modified'], dtype=object)
array(['../../images/data-explorer-5-output-format-types.png',
'Data Explorer Output Format Types'], dtype=object) ] | docs.us.lifeomic.com |
Use XFF IP Address Values in Security Policy and Logging
You can configure the firewall to use the IP address in the X-Forwarded-For (XFF) field of the HTTP header to enforce security policy. If the packet passes through a single proxy server before reaching the firewall, the XFF field contains the IP address of the originating endpoint and the firewall can use that IP address to enforce security policy. However, if the packet passes through multiple upstream devices, the firewall uses the most-recently added IP address to enforce policy or use other features that rely on IP information.
Use XFF Values in Policy
Complete the following procedure to use the client IP address in the XFF header when enforcing security policy.
In Microsoft Azure, by default, an application gateway inserts the original source IP address and port in the XFF header. To use XFF headers in policy on your firewall, you must configure the application gateway to omit the port from the XFF header. For more information, see Azure documentation.
- Log in to your firewall.
- Select Device > Setup > Content-ID > X-Forwarded-For Headers.
- Click the edit icon.
- Select Enabled for Security Policy from the Use X-Forwarded-For Header drop-down. You cannot enable Use X-Forwarded-For Header for security policy and User-ID at the same time.
- (Optional) Select Strip X-Forwarded-For Header. Selecting this option removes the XFF header before the firewall forwards the request. This option does not disable the use of XFF headers; the firewall still uses the XFF header for policy enforcement and logging.
- Click OK.
- Commit your changes.
Display XFF Values in Logs
In addition to XFF header usage in security policy, you can view the XFF IP address in various logs, reports, and the Application Command Center (ACC) to aid in monitoring and troubleshooting. You can add the X-Forwarded-For column in Traffic, Threat, Data Filtering, and Wildfire Submissions logs.
To view the XFF IP address in your logs, complete the following steps.
- Log in to your firewall.
- Select Monitor > Logs.
- Select Traffic, Threat, Data Filtering, or Wildfire Submissions.
- Click the arrow to the right of any column header and select Columns.
- Select X-Forwarded-For IP to display the XFF IP in your log.
Display XFF Values in Reports
Predefined reports generated by the firewall do not contain XFF values. To view XFF IP addresses in reports, use the built-in report templates that include XFF information.
- Log in to your firewall.
- Select Monitor > Manage Custom Reports > Add.
- Click Load Template.
- Enter XFF into the search bar and click the search button to locate the built-in XFF report templates.
- Click Load.
- Configure your custom report Time Frame, Sort By, and Group By to display the XFF information in the manner best suited to your needs.
- (Optional) Click Run Now to generate your report on demand instead of, or in addition to, a Scheduled Time.
Recommended videos not found. | https://docs.paloaltonetworks.com/pan-os/10-0/pan-os-admin/policy/identify-users-connected-through-a-proxy-server/use-xff-values-for-ip-based-security-policy-and-logging.html | 2020-11-24T01:49:12 | CC-MAIN-2020-50 | 1606141169606.2 | [array(['/content/dam/techdocs/en_US/dita/_graphics/10-0/policy/x-forwarded-for-diagram.png',
'x-forwarded-for-diagram.png'], dtype=object) ] | docs.paloaltonetworks.com |
Application-related options are referred to as environment options. If you are using MagicDraw, they are saved in the global.opt file that is located in <user home directory>\AppData\Local\.magicdraw\<version number>\data. The location for other modeling tools developed by No Magic Inc. is analogous.
You can add custom environment options for your modeling tool.
To add your own environment options
- Extend the com.nomagic.magicdraw.core.options.AbstractPropertyOptionsGroup class.
- Add the extending class to application environment options.
Example of adding custom environment options
class MyOptionsGroup extends AbstractPropertyOptionsGroup {
    ...
}

Application application = Application.getInstance();
EnvironmentOptions options = application.getEnvironmentOptions();
options.addGroup(new MyOptionsGroup());
Example of accessing environment options
Application application = Application.getInstance();
EnvironmentOptions options = application.getEnvironmentOptions();
int imageDpi = options.getGeneralOptions().getImageResolutionDpi();
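In a real deployment the group is usually registered once at startup, typically from a plugin's init() method. The sketch below assumes the standard MagicDraw plugin skeleton (init, close, isSupported); the class names are illustrative, and the package names should be verified against the OpenAPI javadoc of your installation.

import com.nomagic.magicdraw.core.Application;
import com.nomagic.magicdraw.core.options.EnvironmentOptions;
import com.nomagic.magicdraw.plugins.Plugin;

public class MyOptionsPlugin extends Plugin {

    @Override
    public void init() {
        // Register the custom options group once, when the plugin loads.
        EnvironmentOptions options = Application.getInstance().getEnvironmentOptions();
        options.addGroup(new MyOptionsGroup());
    }

    @Override
    public boolean close() {
        return true;
    }

    @Override
    public boolean isSupported() {
        return true;
    }
}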
You can find the code examples in
- <programinstallation directory>\openapi\examples\environmentoptions | https://docs.nomagic.com/display/MD190SP1/Environment+Options | 2020-11-24T00:48:03 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.nomagic.com |
If you are moving from Data ONTAP running in 7-Mode to clustered Data ONTAP, you might find it handy to refer to the command maps, which show the clustered Data ONTAP equivalents of 7-Mode commands, options, and configuration files.
The Command Map for 7-Mode Administrators includes the following mappings of 7-Mode commands, options, and configuration files to their clustered Data ONTAP equivalents:
Although the Data ONTAP command-line interface (CLI) is significantly reorganized for cluster operations, many of the commands have 7-Mode-compatible shortcut versions that require no change to scripts or other automated tasks. These shortcut versions are listed first and in bold in the tables here. Shortcut versions that are not 7-Mode-compatible are listed next, followed by the full, long-form version of the commands:
If no bold shortcut is listed, a 7-Mode-compatible version is not available. Not all forms of the commands are shown in the table. The CLI is extremely flexible, allowing multiple abbreviated forms.
A cluster has three different shells for CLI commands:
The clustershell provides all the commands you need to configure and manage the cluster.
The nodeshell contains commands that take effect only at the node level. You can switch from the clustershell to a nodeshell session to run nodeshell commands interactively, or you can run a single nodeshell command from the clustershell. You can recognize a command as a nodeshell command if it has the (long) form system node run -node {nodename|local} commandname.
The systemshell is not intended for general administrative purposes. Access the systemshell only with guidance from technical support.
When you see a 7-Mode-compatible shortcut version of a nodeshell command, it is assumed that you are running the command from the nodeshell. To switch to the nodeshell, enter the following: system node run -node {nodename|local}
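For example, the following runs a typical 7-Mode command (sysconfig) on the local node and then returns you to the clustershell prompt:

system node run -node local sysconfig

Entering system node run -node local with no command name opens an interactive nodeshell session instead; type exit to return to the clustershell.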
Other forms of the nodeshell command must be run from the clustershell. | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-7mapc/GUID-64D7ED18-1918-4B73-B535-A483584C8BFF.html?lang=en | 2020-11-24T02:05:29 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.netapp.com |
Deletes a logging level.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
delete-v2-logging-level --target-type <value> --target-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--target-type (string)
The type of resource for which you are configuring logging. Must be THING_GROUP.
Possible values:
- DEFAULT
- THING_GROUP
--target-name (string)
The name of the resource for which you are configuring logging.
Example: To delete the logging level for a thing group
The following delete-v2-logging-level example deletes the logging level for the specified thing group.
aws iot delete-v2-logging-level \ --target-type THING_GROUP \ --target-name LightBulbs
This command produces no output. | https://docs.aws.amazon.com/cli/latest/reference/iot/delete-v2-logging-level.html | 2020-11-24T00:38:08 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.aws.amazon.com |
Class: Aws::ECS::Types::ListAccountSettingsResponse
- Defined in:
- (unknown)
Overview
Returned by:
Instance Attribute Summary
- #next_token ⇒ String
The nextToken value to include in a future ListAccountSettings request.
- #settings ⇒ Array<Types::Setting>
The account settings for the resource.
Instance Attribute Details
#next_token ⇒ String
The nextToken value to include in a future ListAccountSettings request. When the results of a ListAccountSettings request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.
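For example, a minimal pagination loop against this response type in the version 2 SDK might look like the following sketch; the region is an assumption and error handling is omitted:

ecs = Aws::ECS::Client.new(region: 'us-east-1')

settings = []
params = {}
loop do
  resp = ecs.list_account_settings(params)
  settings.concat(resp.settings)           # accumulate this page of settings
  break if resp.next_token.nil?            # no more pages
  params[:next_token] = resp.next_token    # request the next page
end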
#settings ⇒ Array<Types::Setting>
The account settings for the resource. | https://docs.aws.amazon.com/sdkforruby/api/Aws/ECS/Types/ListAccountSettingsResponse.html | 2019-11-12T07:53:05 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.aws.amazon.com |
Cloud Accounts
License Admins and Account Admins will have the rights to manage Account level configurations.
Select Settings on the left menu and click on Manage Accounts.
Select relevant License
Click on CSP name to open up a list of Cloud Accounts onboarded to the Cloudneeti application
Select the Cloud Account you want to configure.
The Add Cloud Account button initiates the Cloud Account onboarding process
The Configure Account button provides a dropdown with options for further configurations.
The dropdown of the Configure Account button provides multiple configuration options.
Configure Notifications allows configurations of notification settings.
Configure Data Collection allows users to set the frequency of data collection and other configurations.
Update Cloud Account allows users to update the Cloud Account name
Re-Scan allows users to initiate a new scan of the cloud account separately from scheduled scans.
The same configuration options can be found in the upper-right corner of a report when the License and Cloud Account are selected at the top.
Configure Notifications
Configure Notifications lets you add the users who should receive notifications. Cloudneeti sends notifications when a new scan completes and when major configuration changes are made at the License or Account level.
Configure Data Collection
Configure Data Collection allows users to set the frequency of data collection and other configurations.
Update Cloud Account
Update Cloud Account allows users to update the Cloud Account name.
Re-Scan
Re-Scan allows users to initiate a new scan of the cloud account separately from scheduled scans.
| https://docs.cloudneeti.com/administratorGuide/manageAccounts/ | 2019-11-12T08:18:04 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['../../images/administratorGuide/Configuration_Options.png#thumbnail',
'Configuration Options'], dtype=object)
array(['../../images/administratorGuide/Configure_Notifications.png#thumbnail',
'Configure Notifications'], dtype=object)
array(['../../images/administratorGuide/Configure_Data_Collection.png#thumbnail',
'Configure Data Collection'], dtype=object)
array(['../../images/administratorGuide/Update_Cloud_Account.png#thumbnail',
'Update Cloud Account'], dtype=object)
array(['../../images/administratorGuide/Re-Scan.png#thumbnail', 'Re-Scan'],
dtype=object) ] | docs.cloudneeti.com |
November 2017
Volume 32 Number 11
[Upstart]
Creativity Under Fire
By Krishnan Rangachari | November 2017
Recently, a reader e-mailed me about a simple software engineering project that had become too complex. He felt that he didn’t have the skills or resources to handle it himself anymore, and he didn’t know what to do.
In such a situation, there are really two problems: how to escape from feeling overwhelmed by a complex project, and how to be creative enough to find a solution. Let’s tackle the two problems separately.
When it comes to feeling overwhelmed, here are 10 strategies I rely on to free myself:
1. Draft It: I create a preliminary, rough version of my deliverable, based on what I know. Then I get it ready for review from a low-risk colleague, like a trusted peer. I challenge myself to get this done within 20 percent of the time it would take me to do the whole project.
2. Redefine It: I may be overwhelmed because the project’s goal is too vague, or I’m trying to do too much. So I make the goal simultaneously more concrete and simpler.
3. Leverage It: I reuse as much as I can from other people’s work. I focus only on where I can add value with my own unique additions and insights. If it’s not unique, I copy from others.
4. Credit It: I invite another person to collaborate on the project with me, and I give them most of the credit. The more credit I give to others, the less pressure I’ll feel to be the project’s lone-star savior, though I still give the project the very best I can. Interestingly, I’ve found that when I collaborate with others and share credit, we don’t divide the resulting rewards; we multiply them.
5. Atomize It: I break the project into its fundamental, most uncomfortable, smallest parts, then I pick one, high-impact part to focus on. Fulfillment in a project doesn’t come merely from “getting things done.” It comes from working on its important, uncomfortable parts non-compulsively.
6. Fake It: Sometimes, the overwhelmed feeling is just a trick of the mind. I ignore it and act as if I were an amazing engineer who knows exactly what to do. Then I go do it.
7. Game It: I change the rules, sometimes to an extreme. Instead of trying to do a project in four weeks, I ask myself how I can get the whole thing done in four hours. (Have you ever finished an overwhelming project at the very last minute? Well, you don’t have to wait until deadline to channel that laser-like focus; you can train yourself to do it anytime.)
8. Skip It: I might be subjecting myself to a futile exercise in pointlessness. If I suspect this is the case, I simply stop working on the project and move on.
9. Partition It: I may have said “yes” to work that’s not mine to do. So, I identify only the parts of the project that are mine, do those, and reassign, delegate, eliminate or disown the rest.
10. Review It: I come up with two or three solutions in my mind, then present them to my colleagues and ask for their thoughts. Through their iterative feedback, I crowdfund my solution.
Now, let’s discuss the second problem: creativity. Perhaps you’re comfortable with fixing bugs, but don’t feel as comfortable with large-scale, open-ended problems. How can you be creative enough to craft architectural designs, devise technical strategy, and make proposals that currently feel out of your reach? There are three strategies I use:
1. Transfer It: When I express my creativity outside work—whether through singing, dancing, acting, improv or writing—I find that I’m more creative at work. Over time, my mind learns to believe that I’m creative.
2. Detach It: When I feel stuck, sometimes what holds me back is wanting a particular result. Even wanting a little bit of recognition can dampen my creativity, because it introduces fear—and fear and creativity can’t co-exist. So, I redefine my goal to only my input (that is, the work I do) on the project; the output (that is, what praise I get) is no longer in the picture. It also helps to reframe the project in terms of the value I’d like to give others. Whether others indeed derive value is out of my control.
3. Sandbox It: I focus on being extra-creative in an area where I’m already deeply comfortable. This could be a side project, a private skunkworks project or even a non-work project. This gives me a safe space to push my limits and build my own sense of creative comfort. Over time, I can start to expand beyond these limits.
Krishnan Rangachari helps engineering managers have more impact. Visit RadicalShifts.com for his free course.
Discuss this article in the MSDN Magazine forum | https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/november/upstart-creativity-under-fire | 2019-11-12T08:41:45 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.microsoft.com |
Navigator
From Xojo Documentation
The Navigator is the area on the left sidebar that allows you to move around (and organize) the items in your project. The Navigator consists of the Filter and Jump Bar, plus the Contents, Run, Profiles and Build Settings sections.
Contents
- 1 Filter
- 2 Jump Bar
- 3 Contents
- 4 Run
- 5 Profiles
- 6 Build Settings
- 7 Go To Location
- 8 Using Tabs
- 9 Adding Project Items
- 10 Adding Images and Pictures
- 11 Working with Project Items
- 12 Printing
- 13 Keyboard Shortcuts
- Filter: The Filter is used to only show specific items in the Navigator's Contents section.
- Jump Bar: The Jump Bar is used to control the level of detail you see for an item in the project.
- Contents: The Contents section contains the items in your project that were added using the Insert button or menu, such as windows, web pages, classes, menus and folders. You can click on project items in the Contents section to view them or edit them. When an item is selected, you can use the arrow keys to move the selection between items.
- Run: The Run section is only visible in the Debugger tab when you are running your project.
- Profiles: The Profiles section only appears when you have enabled the Profiler and have run your apps in debug mode to get profiler data.
- Build Settings: The Builds Settings section contains all build-related information, including the build targets (and their settings) and active build steps used by Build Automation.
Filter
The Navigator has a Filter field at the top that can be used to filter what is displayed in the Contents section. Use the Filter to quickly show specific project items based on your criteria. For example, if you know you have a method that is called “LoadCustomer”, but you don’t recall what class or window it is on, you can just type “Custom” in the Filter field to have the Navigator display any project items that have something containing “Custom” in its name (e.g. a method, control name, constant or property). You can then click on the item to see it or edit it.
This also allows you to type the "location" of a project item and member to quickly jump to it, similar to what you can do with "Go To Location". For example: "MyClass.Save" would quickly go to the Save method on MyClass.
Advanced Filtering
You can use the "%" character as a wildcard for filtering the Navigator.
You can also include a "type" operator to limit the filter to only find things of a certain type, which you can also combine with the wildcard. Here are some examples:
Jump Bar
The Navigator displays items using a hierarchical list. The scope of what is displayed is controlled by the Jump Bar. By default, your entire project is in scope, so the Jump Bar displays your project name.
When you double-click on an item that is a parent, the Jump Bar changes to show the parent (this is referred to as “drilling into” the project item). Now only the items that are its children are displayed in the Navigator.
The “Double click opens item in new tab” setting in Preferences alters the above behavior. When that setting is checked, double-click opens the item in a new tab, so use Option+⌘ when double-clicking on a project item to drill into it (on Windows and Linux, use Shift-Control).
Click the name in the Jump Bar to see the entire history and to jump quickly to a specific item in the history. The Jump Bar is incredibly powerful when used with tabs because it allows you to focus the tab on just the specific item you are working with.
Contents
The Contents section shows the items in your project. You can click on project items in the Contents section to view them or edit them. When an item is selected, you can use the arrow keys to move the selection between items. You can do the usual things such as deleting, copying or pasting project items. You can also drag project items to reorganize them or to move them between project items.
For example, you can drag a method from one class to another class to move it. In addition, objects can be multi-selected. For example, you can select a Class and a Window, and drag them both into a Folder.
Run
The Run section contains your app name and only appears in the Debugger tab when you run your project.
Profiles
If you run your app with Profiling enabled, the Profiles section appears with the results when the app quits normally (not if it crashes or has an exception). This section has a separate entry for each time you run with the Profiler enabled. Click an entry to review the methods that were called with their elapsed times. Refer to the Code Profiler topic for more information about the Profiler.
Build Settings
The Build Settings section shows the various build OS targets available to you and is used to view and change the information needed to build and run your project.
For Desktop projects, the item “This Computer” is checked by default and contains the settings for the platform you are currently using.
For Web projects, the item "Xojo Cloud" is checked by default.
Check the check box next to other targets to create a build for it the next time you build the project. Click on a build target name to change its settings using the Inspector. For more about build settings for the various project types, refer to the App Structure section.
The Build section also is where you manage your Build Steps, which are described in the Build Automation section.
Go To Location
If you know its name, you can directly jump to a specific project item using the Go To Location feature. Select Project ↠ Go To Location to display the Go To Location window. Enter the name of the project item you want to jump to and press Return (or click Go). The Navigator will select the item for you.
Using Tabs
A tab is simply another view into your project. When you create a new tab, you get new Navigator and Editor areas. You can navigate anywhere you want within a tab, just as you can when there are no tabs. Everything works exactly the same; you just now have multiple views into your project, each of which can show different information. Tabs can be a great way to keep frequently used items available, particularly when used in conjunction with the Jump Bar.
To open a project item in a tab, you can use the contextual menu and select “Open in New Tab”. You can also use Option+⌘ when double-clicking on a project item to open it in a new tab (on Windows and Linux, use Shift-Control). There is also a preference to have project items open in a new tab when double-clicked.
The “Double click opens item in new tab” setting in Preferences alters the above behavior. When that setting is checked, double-click opens the item in a new tab. Using Option+⌘ when double-clicking on a project item allows you to drill into it (on Windows and Linux, use Shift-Control).
Tabs can be locked or unlocked. A locked tab will not have its contents changed when you click on Filter or Search results, nor will it be changed when you use Go To Location. In those cases, a new tab (or the next available unlocked tab) are used. Click the small “x” in the tab to close it. Hold down Option (on Mac) or Alt (on Windows or Linux) when clicking the "x" to close all tabs but the left-most one. You can also use View↠Unlock Current Tab or View↠Lock Current Tab (along with its keyboard shortcut).
If you open more tabs than can be displayed in the tab bar, an "overflow" icon appears for you to click to get a drop-down list of the remaining tabs. Selecting a tab from the list replaces the currently selected tab. You can switch between tabs (wrapping around at the beginning or end) using ⌘+Shift+} or ⌘+Shift+{.
A new tab is added when you run your project. This tab has the App name and displays the Debugger when selected. If you navigate to another section of your project, clicking on this tab while the project is running returns to the debugger. Closing this tab is the same as clicking the "Stop" button in the Debugger.
Adding Project Items
Use the Insert button on the toolbar or the Insert menu to add new project items (which appear in the Contents area). You can also use the contextual menu when clicking on the "Contents" header in the Navigator to add project items.
You can also add new project items by dragging a control directly from the Library to the Navigator Contents area. This creates a subclass of the control. Learn about subclasses in the Object-Oriented Programming chapter.
Project items cannot be named “Xojo”.
Common Project Items
Desktop Project Items
Web Project Items
iOS Project Items
Adding Images and Pictures
You can add images and pictures to your projects by simply dragging the files onto the Navigator. This creates an Image Set and puts the picture at the 1x size, displaying the Image Set Editor. You can drag different pictures (with the appropriate DPI to the 2x and 3x sizes). All the pictures in an Image Set must have the same aspect ratio.
You can manually add an Image Set (Insert ↠ Image) and then drag an image or picture to each size.
When your app is run, the appropriate image for the current screen resolution will be used.
These contextual menu items are available when an image in an Image Set is selected:
- Open in External Editor: Opens the picture in the default picture viewer for the OS.
- Show on Disk: Displays the picture file using the OS file system viewer (Finder or Windows Explorer).
On Desktop and Web projects, you can also drag a picture directly into the project (without creating an Image Set) by holding down Option on MacOS (or Ctrl+Alt on Windows and Linux). Clicking on the Picture displays a preview of it in the center area of the Workspace.
These contextual menu items are available when a Picture is selected:
- Open File: Opens the picture in the default picture viewer for the OS.
- Show on Disk: Displays the picture file using the OS file system viewer (Finder or Windows Explorer).
- Convert To Image: Converts the Picture to an Image Set with the picture placed in the 1x slot. You can multi-select pictures to convert more than one at a time. Note: The Supports HiDPI property (in Shared Build Settings) must be set to ON in order for this option to appear.
Working with Project Items
You can right-click (Control-Click on Mac) on any project item in the Navigator to display the contextual menu. From the contextual menu you have these options (not all options appear for every project item):
- Add to
- Use this command to add code items such as event handlers, methods, properties, etc. to the project item.
- Inspect
- Displays the Inspector for the project item.
- Cut/Copy
- Use to cut or copy the project item to the clipboard.
- Paste
- Pastes a project item in the clipboard into the Navigator, adding it to your project.
- Delete
- Deletes the project item. To put it in the clipboard, use Cut.
- Duplicate
- Creates a copy of the project item in the Navigator, adding a number suffix.
- Make External/Internal
- External items can be used to share project items between your projects. Learn more in the Sharing Code section.
- Encrypt/Decrypt
- Refer to Encrypting Project Items for information on encrypting and decrypting project items.
- Export
- Use to export a project item to a file. This is a useful way to share a project item with someone else.
- Print
- Prints all the source code for the project item (see Printing).
- New Subclass
- Creates a new class in the Navigator that uses the project item as its superclass. Learn about classes in Object-Oriented Programming.
- Extract Interface
- Displays a dialog that allows you to create a new interface using the specified name and selected methods from the class. The class has the newly created interface added to it.
- Extract Superclass
- Displays a dialog that allows you to create a new super class using the specified name and selected members from the class. The class has the newly created class set as its Super.
- Implement Interface
- Lets you select an interface that contains methods to implement for the class. When you select the interface (or interfaces) to add, the methods from the interfaces are added to the project item. Learn about Interfaces in Object-Oriented Programming.
- Inspector Behavior
- Displays the Inspector Behavior window where you can customize the properties that display in the Inspector for your own classes and controls that you add to layouts. You can choose to display properties for your custom classes and subclasses and you can choose to hide properties that would normally be displayed. Add new properties using the “+” button on the left below the list of properties. Check or uncheck the check box next to the property name to determine if it is visible in the Inspector. You can also specify default values for any properties that are displayed. Drag properties in the list to reorder them or change their grouping in the Inspector. Use the Enumerations section to add a list of values that the user can select with a drop-down menu.
- Edit Super Class
- Displays the parent (super) class for the currently selected class. This item only appears for subclasses. Learn about classes in Object-Oriented Programming.
Printing
You can print your source code using File ↠ Print from the menu:
- When you have project items selected, only the selected items print.
- If you do not have a project item selected, the entire project prints. The easiest way to print the entire project is to select something in the Build Settings.
Refer to the Printing preferences for other settings.
Keyboard Shortcuts
In order for these to work, you must have focus in the Navigator, which means that the row selection is blue.
- Home: Move selection to first row, scrolling if necessary.
- End: Move selection to last row, scrolling if necessary.
- Right Arrow: Expand currently selected row.
- Left Arrow: Collapse currently selected row.
- Up Arrow: Move selection to previous row. If you are now selecting a code item, then focus moves to the Code Editor.
- Down Arrow: Move selection to next row. If you are now selecting a code item, then focus moves to the Code Editor.
- Page Up: Move selection up by one page, scrolling if necessary.
- Page Down: Move selection down by one page, scrolling if necessary.
- Shift + Up/Down Arrows: Select multiple rows. | http://docs.xojo.com/UserGuide:Navigator | 2019-11-12T09:49:58 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.xojo.com |
DependencyPropertyKey.OverrideMetadata(Type, PropertyMetadata) Method
Definition
Overrides the metadata of a read-only dependency property that is represented by this dependency property identifier.
public: void OverrideMetadata(Type ^ forType, System::Windows::PropertyMetadata ^ typeMetadata);
public void OverrideMetadata (Type forType, System.Windows.PropertyMetadata typeMetadata);
member this.OverrideMetadata : Type * System.Windows.PropertyMetadata -> unit
Public Sub OverrideMetadata (forType As Type, typeMetadata As PropertyMetadata)
Parameters
- forType
- Type
The type where this dependency property exists and where the metadata should be overridden.
- typeMetadata
- PropertyMetadata
Metadata supplied for this type.
Exceptions
Attempted metadata override on a read-write dependency property (cannot be done using this signature).
Metadata was already established for the property as it exists on the provided type.
Examples
The following example overrides metadata for an existing read-only dependency property that a class inherits. In this case, the scenario goal was to add a coerce value callback that the base property metadata did not have. You could also override metadata for any of the other reasons that overriding metadata is typically appropriate (changing default value, adding FrameworkPropertyMetadataOptions values, etc.)
static Fishbowl()
{
    Aquarium.AquariumSizeKey.OverrideMetadata(
        typeof(Aquarium),
        new PropertyMetadata(
            double.NaN,
            null,
            new CoerceValueCallback(CoerceFishbowlAquariumSize)));
}

static object CoerceFishbowlAquariumSize(DependencyObject d, Object baseValue)
{
    // Aquarium is 2D; a Fishbowl is a round Aquarium, so the Size we return is
    // the ellipse of that height/width rather than the rectangle.
    Fishbowl fb = (Fishbowl)d;
    // Other constraints assure that H,W are positive.
    return Convert.ToInt32(Math.PI * (fb.Width / 2) * (fb.Height / 2));
}
Shared Sub New()
    Aquarium.AquariumSizeKey.OverrideMetadata(GetType(Aquarium), New PropertyMetadata(Double.NaN, Nothing, New CoerceValueCallback(AddressOf CoerceFishbowlAquariumSize)))
End Sub

Private Shared Function CoerceFishbowlAquariumSize(ByVal d As DependencyObject, ByVal baseValue As Object) As Object
    ' Aquarium is 2D; a Fishbowl is a round Aquarium, so the Size we return is
    ' the ellipse of that height/width rather than the rectangle.
    Dim fb As Fishbowl = CType(d, Fishbowl)
    ' Other constraints assure that H,W are positive.
    Return Convert.ToInt32(Math.PI * (fb.Width / 2) * (fb.Height / 2))
End Function
Remarks
Overriding metadata on a read-only dependency property is done for similar reasons as overriding metadata on a read-write dependency property, and is restricted to access at the key level because behaviors specified in the metadata can change the set behavior (the default value, for instance).
As with read-write dependency properties, overriding metadata on a read-only dependency property should only be done prior to that property being placed in use by the property system (this equates to the time that specific instances of objects that register the property are instantiated). Calls to OverrideMetadata should only be performed within the static constructors of the type that provides itself as the
forType parameter of this method, or equivalent initialization for that class.
This method effectively forwards to the OverrideMetadata method, passing the DependencyPropertyKey instance as the key parameter. | https://docs.microsoft.com/en-gb/dotnet/api/system.windows.dependencypropertykey.overridemetadata?view=netframework-4.8 | 2019-11-12T08:04:02 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.microsoft.com |
VirtualQueryEx function
Retrieves information about a range of pages within the virtual address space of a specified process.
Syntax
SIZE_T VirtualQueryEx(
  HANDLE                    hProcess,
  LPCVOID                   lpAddress,
  PMEMORY_BASIC_INFORMATION lpBuffer,
  SIZE_T                    dwLength
);
Parameters
hProcess
A handle to the process whose memory information is queried. The handle must have been opened with the PROCESS_QUERY_INFORMATION access right, which enables using the handle to read information from the process object. For more information, see Process Security and Access Rights.
lpAddress

A pointer to the base address of the region of pages to be queried. This value is rounded down to the next page boundary.

lpBuffer

A pointer to a MEMORY_BASIC_INFORMATION structure in which information about the specified page range is returned.

dwLength

The size of the buffer pointed to by the lpBuffer parameter, in bytes.
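A minimal usage sketch follows. It walks another process's address space region by region; error handling is reduced to the essentials, and the caller is assumed to supply a valid process ID and to have sufficient privileges.

#include <windows.h>
#include <stdio.h>

void DumpCommittedRegions(DWORD processId)
{
    // PROCESS_QUERY_INFORMATION is the access right this function requires.
    HANDLE process = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, processId);
    if (process == NULL)
        return;

    MEMORY_BASIC_INFORMATION mbi;
    const BYTE *address = NULL;

    // A return value of zero means the query failed or the address is above
    // the highest address accessible to the process.
    while (VirtualQueryEx(process, address, &mbi, sizeof(mbi)) != 0)
    {
        if (mbi.State == MEM_COMMIT)
            printf("%p  %Iu bytes  protection 0x%08lX\n",
                   mbi.BaseAddress, mbi.RegionSize, mbi.Protect);

        // Advance to the first address past the current region.
        address = (const BYTE *)mbi.BaseAddress + mbi.RegionSize;
    }

    CloseHandle(process);
}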
Memory Management Functions | https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-virtualqueryex | 2019-11-12T09:29:55 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.microsoft.com |
Security
Exploring The Windows Firewall
Steve Riley
At a Glance:
- Inbound vs. outbound protection
- The Windows Filtering Platform
- The Advanced Security interface
- Network profiles.
Think about this: of the total amount of time your laptop is powered up and connected to some network, what percentage of that time is it connected to your corporate network? If you’re anything like me, maybe 20 percent max. That means that only 20 percent of the time is my laptop safely within the confines of the Microsoft corporate network, protected by the network’s perimeter defenses from outside attack. But what about the 80 percent of time when my laptop is, for all practical purposes, connected directly to the Internet? (And I’m quite often connected to the most dangerous network in the world: the hotel LAN at a computer security conference!) And those times when I am connected to the corporate network, how about threats posed by other computers within that environment?
Security controls evolve to follow—sometimes too far behind—the threats. Viruses were a client problem because people traded floppy disks, so antivirus programs first appeared on clients. Then as e-mail became popular and malware matured into worms that relied on e-mail distribution, anti-malware programs evolved and appeared on e-mail gateways. With the rise of the Web, malware matured into Trojans and anti-malware followed onto Internet access proxy servers. This is a well-understood evolutionary path that no one quibbles with.
Now let’s apply the same logic to firewalls. While a firewall at your network’s edge was sufficient protection against the threats of yesterday, this is no longer the case as threats are different, they’re more sophisticated, and they’re more prevalent. Not to mention that devices and work styles differ significantly from the past. Many computers carry sensitive information stored locally, and they spend a lot of time away from the corporate network (that is, outside the edge). Therefore, the firewall must evolve into an individual client protection mechanism. Make no mistake: client firewalls are no longer optional. To protect your computers from your own corpnet and from the Internet, client firewalls are required.
Client Firewalls and Security Theater
Many people didn’t realize that the initial release of Windows® XP included a client firewall. That’s not really surprising since the firewall was switched off by default and it was buried behind too many mouse clicks. In its own rather stealthy way, the firewall just showed up without any real indication of its purpose or guidance on how to use it. But it did work. If you had enabled that firewall, it would have saved you from Nimda, Slammer, Blaster, Sasser, Zotob, and anything else that tried to hurl unsolicited traffic at your network port. Realizing the importance of client protection, Windows XP Service Pack 2 (SP2) enabled the firewall by default, created two profiles (Internet and corpnet), and allowed for Group Policy enablement.
Unfortunately, two barriers slowed the adoption of the Windows XP SP2 firewall: application concerns and security theater. Many people worried that the firewall would stop their applications from working correctly. This was rarely the case, though, because of the firewall’s design. The firewall allowed all outbound traffic to leave your computer, but blocked all inbound traffic that wasn’t in reply to some previous outbound request. The only time this design would break an application on a client was if the application created a listening socket and expected to receive inbound requests. The Windows XP firewall allowed for simple configurations of exceptions for programs or ports (but, unfortunately, not through Group Policy).
The bigger deterrent was the security theater performed by manufacturers of other client firewalls. Some people believed that the design of the Windows XP firewall—namely allowing all outbound traffic to leave unfettered—was insufficient functionality for a client firewall. The argument was that a sufficient client firewall should block all traffic, inbound and outbound, unless the user has specifically granted permission.
Now, let’s think this through for a moment. Two scenarios emerge.
- If you’re running as a local administrator and you are infected by malware, the malware will simply disable the firewall. You’re 0wn3d.
- If you aren’t running as a local administrator and you are infected by malware, the malware can’t simply disable the firewall, but it doesn’t need to. It just keeps triggering the firewall’s permission dialog until you finally click "Sure, whatever, stop harassing me!" And once that dialog goes away, so does your security. Or, more commonly, the malware will simply hijack an existing session of a program you’ve already authorized, and you won’t even see the dialog. Again, you’re 0wn3d. And when the answer comes from a computer that’s already compromised, how can you be sure that the computer is really doing what you ask? The answer: you can’t. Outbound protection is security theater—it’s a gimmick that only gives the impression of improving your security without doing anything that actually does improve your security. This is why outbound protection didn’t exist in the Windows XP firewall and why it doesn’t exist in the Windows Vista firewall. (I’ll talk more about outbound control in Windows Vista in a bit.)
What’s New in Windows Vista?
The Windows Filtering Platform, part of the new network stack, is the foundation for the Windows Vista firewall. Like Windows XP, Windows Vista blocks inbound traffic by default. Depending on which profile your computer is using, there might be some default exceptions for network services (I’ll discuss profiles later). You can, if you wish, write rules to allow inbound connections. Also like Windows XP, Windows Vista by default allows all outbound traffic from interactive processes, but restricts outbound traffic from services that participate in service restriction. And, again, you can write rules to block additional outbound connections.
The big difference between Windows XP and Windows Vista is the new Advanced Security interface and full Group Policy support for configuration and rules (see Figure 1). The old Control Panel UI is still there and it’s mostly unchanged except for logging and Internet Control Message Protocol (ICMP) settings, which are now in the new UI. This new UI, the Advanced Security MMC snap-in, offers all the new features and flexibility. There’s also a new context in the netsh command, netsh advfirewall, through which you can script rule addition and deletion, set and show global and per-profile policies, and display the firewall’s active state. And for you developers, FirewallAPI.dll and Netfw.h provide programmatic control over all of the firewall’s settings.
Figure 1 Windows Firewall with Advanced Security
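The same configuration is fully scriptable. For example, running the following from an elevated command prompt displays the per-profile settings and the currently defined rules (the output, of course, varies from system to system):

netsh advfirewall show allprofiles
netsh advfirewall firewall show rule name=all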
The Advanced Security MMC is wizard-driven. When creating a rule, you can choose one of four types: program, port, predefined, or custom. These are explained in Figure 2.
Figure 2 Four types of rules
There are many elements that you can reference when writing rules, all of which are available for local rules and rules applied through Group Policy. These include: Active Directory® user and computer accounts and groups, source and destination IP addresses, source and destination TCP and UDP ports, IP protocol numbers, programs and services, types of interfaces (wired, wireless, or remote access), and ICMP types and codes.
Once configured, the firewall processes rules in the following order:
Service Restrictions Some of the services in Windows Vista will restrict themselves to limit the likelihood of another Blaster-style attack. One of the restrictions is a list of ports the service requires. The firewall enforces this and prevents the service from using (or being told to use) any other port.
Connection Security Rules The Advanced Security MMC incorporates IPsec as well as the firewall. Any rules that include IPsec policies are processed next.
Authenticated Bypass These allow specified authenticated computers to bypass other rules.
Block Rules These explicitly block specified incoming or outgoing traffic.
Allow Rules These explicitly allow specified incoming or outgoing traffic.
The firewall rules are stored in the registry, but I’m not going to tell you exactly where. Oh, OK, you’ll find them at these locations:
- HKEY_LOCAL_MACHINE\SYSTEM\ CurrentControlSet\Services\SharedAccess\Defaults\FirewallPolicy\FirewallRule
- HKEY_LOCAL_MACHINE\SYSTEM\ CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\FirewallRules
- HKEY_LOCAL_MACHINE\SYSTEM\ CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\RestrictedServices\Static\System
Just please don’t edit the rules directly in the registry. If you do, we’ll find you and sell your pet on eBay! Well, maybe not, but the only supported way to edit rules is to use the Advanced Security MMC.
Network Profiles
Windows Vista defines three network profiles: domain, private, and public. When the computer is domain-joined and has successfully logged into the domain, the computer automatically applies the domain profile—you never get to make this choice on your own. When the computer is connected to an internal network that lacks a domain (like a home or small office network), you (or an administrator) should apply the private profile. Finally, when the computer is directly connected to the Internet, you should apply the public profile.
How does Windows Vista decide where to place your computer? Whenever there’s a network change (say it receives a new IP address or sees a new default gateway or gets a new interface), a service called Network Location Awareness (NLA) detects the change. It builds a network profile—which includes information about existing interfaces, whether the computer authenticated to a domain controller, the gateway’s MAC address, and so on—and assigns it a GUID. NLA then notifies the firewall and the firewall applies the corresponding policy (there’s a policy defined for each of the three profiles).
If this is a new interface that the computer hasn’t seen before and NLA didn’t choose the domain profile, then you’ll see a dialog box that asks you to indicate what kind of network you’re connecting to. Mysteriously, there are three choices: Home, Work, and Public. You might think that Work means the domain profile, but that’s not actually the case. Remember, you never see the domain profile because NLA automatically selects that when the computer logs onto a domain. In reality, both Home and Work correspond to the private profile. Functionally, they’re equivalent—only the icons are different. (Note: you must be a local administrator, or able to elevate to a local administrator, to select the private profile.) As you’d expect, public corresponds to the public profile.
In Windows Vista, a network profile applies to all the interfaces in the computer. Here’s a rundown of the NLA decision tree:
- Examine all connected networks.
- Is any interface connected to a network classified as public? If yes, set the computer’s profile to public and exit.
- Is any interface connected to a network classified as private? If yes, set computer’s profile to private and exit.
- Do all interfaces see a domain controller and did the computer successfully log on? If yes, set computer’s profile to domain and exit.
- Else set computer’s profile to public.
The goal is to select the most restrictive profile possible. There are two obvious side effects, however. First, if your computer’s Ethernet port is connected to your corpnet and its wireless NIC is connected to the Starbucks downstairs, the computer will select the public profile, not the domain profile. Second, if your computer is directly connected to the Internet (in the public profile) or is connected to your home LAN (in the private profile) and you make a VPN connection to your corpnet, your computer will remain in the public or private profile.
What might this mean? The firewall’s policy for the domain profile includes rules for remote assistance, remote administration, file and print sharing, and so on. If you rely on these rules in order to get to a client remotely, you won’t be able to if the client has chosen some other profile. But don’t despair—you can write firewall rules to allow whatever inbound connections you need and then apply them only to VPN connections. Now you can still administer your clients over the VPN even when they aren’t in the domain profile.
Controlling Outbound Connections
Earlier, I said that the typical form of outbound protection in client firewalls is just security theater. However, one form of outbound control is very useful: administratively controlling certain types of traffic that you know you don’t want to permit. The Windows Vista firewall already does this for service restrictions. The firewall allows a service to communicate only on the ports it says it needs and blocks anything else that the service attempts to do. You can build on this by writing additional rules that allow or block specific traffic to match your organization’s security policy (see Figure 3).
Figure 3 New Inbound Rule Wizard
Say, for instance, that you want to prohibit users from running a particular instant messaging client. You can create a rule (in Group Policy, of course) to block connections to the login servers for that client.
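Such a rule can also be created from the command line or scripted for deployment. In this sketch the remote address is a placeholder for whichever login servers your policy targets:

netsh advfirewall firewall add rule name="Block IM login" dir=out action=block protocol=TCP remoteip=203.0.113.25 remoteport=1863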
There are practical limitations to this approach, though. For example, Windows Live™ Messenger (which you might still know as MSN® Messenger) has a variety of servers it can use to log in, and the list is always changing. Plus, it’ll fall back to port 80/tcp if the default port 1863/tcp is blocked. A rule to block Windows Live Messenger from connecting to its login servers would be too complex and always in flux. I mention this to illustrate that administrative outbound control can be useful, but isn’t a substitute for Software Restriction Policies if you need to maintain tight control over the software users are allowed to install and run.
Protect Your Computer
The perimeter is gone. Every computer must now take responsibility for its own protection. Just as anti-malware moved from the client to the edge, so firewalls must move from the edge to the client. You can take immediate action by enabling the firewall you’ve already got installed.
Whether your systems are running Windows XP or you’re already transitioning to Windows Vista, the Windows firewall is available to all your clients and will provide the protection you need to improve security within your organization—even when your mobile workers are thousands of miles away from the office.
Where to Learn More
- Windows Vista TechCenter: Getting Started with Windows Firewall with Advanced Security
- Windows Vista TechCenter: Windows Firewall with Advanced Security—Diagnostics and Troubleshooting
- Steve Riley’s TechEd Europe Presentation: Windows Vista Firewall and IPSec Enhancements
- The Cable Guy: The New Windows Firewall in Windows Vista and Windows Server "Longhorn"
- Windows Vista Firewall Virtual Lab
Steve Riley, a senior security strategist in the Microsoft Trustworthy Computing Group. | https://docs.microsoft.com/en-us/previous-versions/technet-magazine/cc138010(v=msdn.10)?redirectedfrom=MSDN | 2019-11-12T09:06:36 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['images/cc138010.fig01.gif',
'Figure 1 Windows Firewall with Advanced Security'], dtype=object)
array(['images/cc138010.fig03.gif', 'Figure 3 New Inbound Rule Wizard'],
dtype=object) ] | docs.microsoft.com |
If your log files begin with a static header, configure the following settings on your forwarder:
- Use the initCrcLength attribute in inputs.conf to increase the number of characters used for the CRC calculation, and make it longer than your static header.
- Use the crcSalt attribute when configuring the file in inputs.conf. The crcSalt attribute, when set to <SOURCE>, ensures that each file has a unique CRC because the full source path is factored into the CRC calculation.
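Put together, a minimal inputs.conf stanza on the forwarder might look like the following; the monitored path is hypothetical, and the initCrcLength value simply needs to exceed the length of your static header:

[monitor:///var/log/myapp/app.log]
initCrcLength = 1024
crcSalt = <SOURCE>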
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/SplunkCloud/7.2.7/Data/Howlogfilerotationishandled | 2019-11-12T09:12:48 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Not finding the events you're looking for?
This documentation applies to the following versions of Splunk: 4.0 , 4.0.1 , 4.0.2 , 4.0.3 , 4.0.4 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/4.0.1/Admin/Cantfindthedatayourelookingfor | 2012-05-27T02:45:16 | crawl-003 | crawl-003-017 | [] | docs.splunk.com |