My Azure scripts on GitHub
Hi!
I've decided to put my Azure scripts on GitHub. That keeps them in one place, and I can update them whenever I find bugs.
I have more scripts in the queue, but I first need to remove credentials, hostnames, etc. before I put them on GitHub.
Hope it helps,
H.
Struct syn::token::Lt

pub struct Lt {
    pub spans: [Span; 1],
}

Represents the < token.

Don't try to remember the name of this type -- use the Token! macro instead.

Fields

spans: [Span; 1]
Trait Implementations

impl Token for Lt
    fn peek(cursor: Cursor) -> bool
    fn display() -> &'static str

impl Parse for Lt
    fn parse(input: ParseStream) -> Result<Self>

impl Clone for Lt
    fn clone(&self) -> Lt
        Returns a copy of the value.
    fn clone_from(&mut self, source: &Self)
        Performs copy-assignment from source.

impl Copy for Lt

impl Eq for Lt

impl Default for Lt
    fn default() -> Self
        Returns the "default value" for a type.

impl PartialEq<Lt> for Lt
    fn eq(&self, _other: &Lt) -> bool
        This method tests for self and other values to be equal, and is used by ==.
    #[must_use]
    fn ne(&self, other: &Rhs) -> bool
        This method tests for !=.

impl Debug for Lt
    fn fmt(&self, f: &mut Formatter) -> Result
        Formats the value using the given formatter.

impl Hash for Lt
    fn hash<H: Hasher>(&self, _state: &mut H)
        Feeds this value into the given Hasher.
    fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher
        Feeds a slice of this type into the given Hasher.

impl ToTokens for Lt
    fn to_tokens(&self, tokens: &mut TokenStream)
        Write self to the given TokenStream.
    fn into_token_stream(self) -> TokenStream
        Convert self directly into a TokenStream object.
Auto Trait Implementations

impl !Send for Lt
impl !Sync for Lt

Blanket Implementations

impl<K> Token for K where K: CustomKeyword
    fn peek(Cursor) -> bool

impl<T> Spanned for T where T: ToTokens
    fn span(&Self) -> Span
        Returns a Span covering the complete contents of this syntax tree node, or Span::call_site() if this node is empty.

impl<T> Any for T where T: 'static + ?Sized
    fn get_type_id(&self) -> TypeId
        Gets the TypeId of self. Note: this method will likely be replaced by an associated static.

impl<E> SpecializationError for E
    fn not_found<S, T>(trait_name: &'static str, method_name: &'static str) -> E where T: ?Sized
        Create an error for a missing method specialization. Defaults to panicking with type, trait & method names. S is the encoder/decoder state type, T is the type being encoded/decoded, and the arguments are the names of the trait and method that should've been overridden.

impl<T> Erased for T

impl<T> Send for T where T: ?Sized
impl<T> Sync for T where T: ?Sized
Verified Users Setup
Overview
This document provides instructions on the Verified Users extension.
Verified Users
When the extension is activated, you can view all users, verified and unverified, who registered from your site on the WP-Admin > Users page. It also adds a Verified Users tab under Ultimate Member > Settings > Extensions.
Settings page
[Ultimate Member > Settings > Extensions > Verified Users]
- Content Lock Redirect - Add a URL where unverified users who try to access verified areas will be redirected
[Ultimate Member > Settings > Email]
- Account is verified E-mail - template for the member's email
- Verification Request E-mail - template for the admin's email
Press the gear icon to see the template's options. You can switch the email on or off and edit the email subject and body.
User Request Verification
Once the Verified Users extension is activated on your site, users will see a Request Verification link on their profiles and accounts. They also have the ability to cancel the request on their end.
[Profile]
[Account]
When a user sends a verification request, you can view it at the top of the WP-Admin > Users page.
You can either Approve or Reject the verification request when you hover over the user.
Manual Account Verification
One key feature of the Verified Users extension is that the admin can manually verify any user from the admin page, with or without a verification request from the user. To verify a user, simply click Verify when you hover over a user on the WP-Admin > Users page. Alternatively, you can click Edit to verify a user.
When you click on Edit, scroll down to Account Verification and choose Verified Account on the drop down menu. Click on Update User at the bottom of the page to save changes.
Once you have verified a user, you will receive a notice at the top of the page saying "Users have been updated". You can also see the Unverify link when you hover over the verified user.
Automatic Verification of User Community Role
You can also automatically verify users with a specific role once they register on your site. To do this, navigate to WP-Admin > Ultimate Member > User Roles and click on the Role Title that needs automatic verification. On the right side of the page, you will see a settings section for Verified Accounts. Click on Yes to automatically verify users with the role.
[Ultimate Member > User Roles > Edit role]
Bulk Verification
If you need to verify more than one user, you can do bulk verification. To do this, click the checkbox of the users you need to verify. Then select Mark Account as verified from the UM Action drop-down menu. Click the Apply button to save changes.
Once a user is verified, a blue circle with a check mark badge will appear on his/her profile and on the member directory.
Configuring advanced incoming mailbox properties
During advanced configuration, you enter information about associated mailboxes, templates, and forms, and information related to mailbox security. You can do this by using the Advanced Configuration tab of the AR System Email Mailbox Configuration form as shown in the following figure.
Advanced configuration for incoming mailboxes
Note
Review the information about advanced configuration settings in Creating and using email templates.
To create an advanced configuration for your incoming mailbox
- In the Advanced Configuration tab of the AR System Email Mailbox Configuration form, select an outgoing mailbox from the Associated Mailbox Name list to reply to incoming emails that require responses, such as queries.
- In the Action Configuration section, specify:
- Email Action — To enable the Email Engine to detect and process instructions included in an incoming email message, select Parse. If you use templates to perform Submit, Modify, or Query actions, you must select Parse.
For more information about templates and parsing, see Using label-value pairs in templates and Types of email templates.
- Use Original Template Format (enabled for upgrades from BMC Remedy Mail Server) — To enable original parsing system processing, select Yes.
Original parsing ignores special HTML fields, XML formats, and data entered in an invalid format, such as a character string in a number field. If you use this option, the Email Engine displays an error message when it encounters these types of fields or formats. To use normal parsing, select No.
Note
If you select No, make sure that multiple lines in emails are encapsulated with the [$$ and $$] multiple-line delimiters.
- Reply with Result — To enable the Email Engine to return the results of an action in an email, select Yes.
This option allows the email sender to know if the incoming email succeeded or failed. For more information, see Sharing a database without using a server group.
- Reply with Entry — To return the complete entry of a submit or modify action, select Yes.
- Enable Modify Actions — To enable the Email Engine to modify existing entries, select Yes.
- Default Workflow Form — Enter the name of the default form on which the Email Engine executes instructions such as queries, form-entry modifications, and form submittals, from the incoming email message.
Note
If you define a default workflow form, incoming templates do not require the Form (or Schema) label. For more information, see Form label.
- Force Default Workflow Form — To confine all instructions from the incoming email message to the form that you specified in the Default Workflow Form field, select Yes.
Note
If an incoming template specifies a schema, the schema will not be processed and the default workflow form will be used instead.
- In the Incoming Security Configuration section, specify the level of security to be applied to email messages to this mailbox. This information is used to determine which AR System user information to apply when executing instructions parsed from an incoming email.
Depending on the level of security that you want, apply one of the following security options:
- Use Security Key — Select Yes to enable a security key for incoming email.
The information is added to the Email Security form, so you do not have to supply the user name and password in the incoming email. If you use this option, you must create and configure the security key. See Configuring incoming mailbox security.
If you select No, the security key will be disabled for incoming email containing the modify action. In case of multiple recipients, the outgoing email message for this modify action will not be sent.
- Use Supplied User Information — To use AR System server login information from the incoming email message to execute instructions in the incoming message, such as instructions to modify requests or submit queries, select Yes.
For more information about login syntax, see Login, Password, and TCP Port labels.
- Use Email From Address — To use the sender's email address as a form of authentication, select Yes.
The Email Engine displays an error message if the sender's email address is different from the email address stored in the AR System User form.
Note
Apply only one of the given security options.
- Click Save.
All content with label as5+batching+distribution+ec2+expiration+gridfs+infinispan+interactive+jboss_cache+jpa+jta+loader+lock_striping+notification+release+state_transfer.
Related Labels:
publish, datagrid, coherence, interceptor, server, rehash, replication, recovery, transactionmanager, dist, partitioning, query, deadlock, pojo_cache, archetype, jbossas, nexus, demos, guide,
schema, listener, cache, s3, amazon, grid, test, api, xsd, maven, documentation, youtube, write_behind, hibernate, aws, interface, custom_interceptor, setup, clustering, eviction, fine_grained, concurrency, out_of_memory, import, index, events, configuration, batch, hash_function, buddy_replication, colocation, xa, pojo, write_through, cloud, mvcc, tutorial, murmurhash2, xml, read_committed, jbosscache3x, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, transaction, async, xaresource, build, hinting, scala, installation, client, migration, non-blocking, rebalance, filesystem, tx, gui_demo, eventing, client_server, testng, murmurhash, infinispan_user_guide, standalone, snapshot, webdav, hotrod, repeatable_read, docs, consistent_hash, store, faq, 2lcache, jsr-107, jgroups, lucene, locking, rest, hot_rod
All content with label async+buddy_replication+data_grid+documentation+gridfs+hotrod+infinispan+locking+lucene+nexus+notification+setup.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, partitioning, query, deadlock, contributor_project, archetype, lock_striping, jbossas, guide, listener,
cache, s3, amazon, memcached, grid, jcache, test, api, xsd, ehcache, maven, userguide, write_behind, ec2, 缓存, streaming, hibernate, aws, interface, clustering, eviction, large_object, out_of_memory, concurrency, jboss_cache, import, events, batch, hash_function, configuration, loader, xa, write_through, cloud, remoting, mvcc, tutorial, murmurhash2, jbosscache3x, read_committed, xml, distribution, meeting, cachestore, cacheloader, resteasy, hibernate_search, cluster, development, br, websocket, transaction, interactive, xaresource, build, searchable, demo, scala, cache_server, installation, ispn, client, non-blocking, migration, filesystem, jpa, tx, user_guide, gui_demo, eventing, student_project, client_server, testng, infinispan_user_guide, murmurhash, webdav, snapshot, repeatable_read, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jsr-107, jgroups, rest, hot_rod
Hosting Chatter - Tell us what you think!
The Hosting Technology Specialist blog has been an excellent source of information over the last year; now it is time to get more active involvement and feedback on the hosting industry! This will center around a topic a week that we will submit. Each topic will hopefully create great discussion on issues, technology trends, thoughts, and perspectives on the issues and ideas that are at the center of this great market! Whether you represent a web hosting provider, dedicated managed hosting provider, dedicated or colocation ISP, Telco, SI, or anything else, we want to hear from you!
Thanks!
Microsoft Hosting Technology Specialist Team
ProcessThread.IdealProcessor Property
Definition
Sets the preferred processor for this thread to run on.
public: property int IdealProcessor { void set(int value); };
[System.ComponentModel.Browsable(false)] public int IdealProcessor { set; }
member this.IdealProcessor : int
Public Property IdealProcessor As Integer
Property Value
The preferred processor for the thread, used when the system schedules threads, to determine which processor to run the thread on.
Attributes
BrowsableAttribute
Exceptions
The system could not set the thread to start on the specified processor.
The process is on a remote computer.
Examples
The following example demonstrates how to set the IdealProcessor and ProcessorAffinity properties for the first thread of a process.

Imports System.Diagnostics

Class Program
    Shared Sub Main(ByVal args() As String)
        ' Make sure there is an instance of notepad running.
        Dim notepads As Process() = Process.GetProcessesByName("notepad")
        If notepads.Length = 0 Then
            Process.Start("notepad")
        End If
        ' Get the thread collection of the first notepad process.
        Dim threads As ProcessThreadCollection = Process.GetProcessesByName("notepad")(0).Threads
        ' Prefer processor 0 for the first thread and restrict it to processor 0.
        threads(0).IdealProcessor = 0
        threads(0).ProcessorAffinity = CType(1, IntPtr)
    End Sub
End Class
The Web Dashboard control provides the capability to create calculated fields that allow you to apply complex expressions to data fields obtained from the dashboard's data source. As a result, you can use these fields in data visualizations as regular data source fields.
Note that calculated fields are not supported for the OLAP data source.
You can add a new calculated field based on the existing data source fields after you have created a data source.
You can create calculated fields both in the Data Sources page and from the Binding panel.
Go to the dashboard menu and open the Data Sources page. Select a required data source (and the required query/data member, if applicable) and click the Add Calculated Field button to create a calculated field.
Open the Binding panel, go to the Binding section, and click the Add calculated field button.
This invokes the Edit Calculated Field dialog, which allows you to construct the required expression.
The following elements are available for creating expressions:
- Fields: Contains available fields and dashboard parameters.
- Constants: Contains Boolean variables.
- Functions: Contains different types of functions, including aggregate functions. To learn how to use aggregate functions, see Aggregations. The Expression Operators, Functions and Constants topic lists common functions (DateTime, Math, String, etc.) supported by expressions.
- Operators: Allows you to select operators from the list.
You can add a comment to your expression to explain it and make the expression more readable. Comments can span multiple lines; they begin with /* and end with */. For example, [Price] * [Quantity] /* extended price */ multiplies two hypothetical data fields and documents the result.
After creating the expression, click Save to create the new calculated field and display it in the Field List. This type of field is indicated with the f glyph.
You can configure calculated fields both on the Data Sources page and from the Binding panel. Editing a calculated field invokes the Edit Calculated Field dialog, where you can change the calculated field's name and type, or edit the current expression.
To delete a calculated field, use the calculated field's Delete button.
File:App Events.png
Original file (1,365 × 711 pixels, file size: 166 KB, MIME type: image/png)
App Events
October 2017
Volume 32 Number 10
[C++]
From Algorithms to Coroutines in C++
By Kenny Kerr
There’s a C++ Standard Library algorithm called iota that has always intrigued me. It has a curious name and an interesting function. The word iota is the name of a letter in the Greek alphabet. It’s commonly used in English to mean a very small amount and often the negative, not the least amount, derived from a quote in the New Testament Book of Matthew. This idea of a very small amount speaks to the function of the iota algorithm. It’s meant to fill a range with values that increase by a small amount, as the initial value is stored and then incremented until the range has been filled. Something like this:
#include <numeric>

int main()
{
  int range[10]; // Range: Random missile launch codes

  std::iota(std::begin(range), std::end(range), 0);
  // Range: { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }
}
It’s often said that C++ developers should expunge all naked for loops and replace them with algorithms. Certainly, the iota algorithm qualifies as it takes the place of the for loop that any C or C++ developer has undoubtedly written thousands of times. You can imagine what your C++ Standard Library implementation might look like:
namespace std
{
  template <typename Iterator, typename Type>
  void iota(Iterator first, Iterator last, Type value)
  {
    for (; first != last; ++first, ++value)
    {
      *first = value;
    }
  }
}
So, yeah, you don’t want to be caught in a code review with code like that. Unless you’re a library developer, of course. It’s great that the iota algorithm saves me from having to write that for loop, but you know what? I’ve never actually used it in production. The story usually goes something like this: I need a range of values. This is such a fundamental thing in computer science that there must be a standard algorithm for it. I again scour the list over at bit.ly/2i5WZRc and I find iota. Hmm, it needs a range to fill with values. OK, what’s the cheapest range I can find … I then print the values out to make sure I got it right using … a for loop:
#include <numeric>
#include <stdio.h>

int main()
{
  int range[10];
  std::iota(std::begin(range), std::end(range), 0);

  for (int i : range)
  {
    printf("%d\n", i);
  }
}
To be honest, the only thing I like about this code is the range-based for loop. The problem is that I simply don’t need nor want that range. I don’t want to have to create some container just to hold the values so that I can iterate over them. What if I need a lot more values? I’d much rather just write the for loop myself:
#include <stdio.h>

int main()
{
  for (int i = 0; i != 10; ++i)
  {
    printf("%d\n", i);
  }
}
To add insult to injury, this involves a lot less typing. It sure would be nice, however, if there were an iota-like function that could somehow generate a range of values for a range-based for loop to consume without having to use a container. I was recently browsing a book about the Python language and noticed that it has a built-in function called range. I can write the same program in Python like this:
for i in range(0, 10):
  print(i)
Be careful with that indentation. It’s how the Python language represents compound statements. I read that Python was named after a certain British comedy rather than the nonvenomous snake. I don’t think the author was kidding. Still, I love the succinct nature of this code. Surely, I can achieve something along these lines in C++. Indeed, this is what I wish the iota algorithm would provide but, alas. Essentially, what I’m looking for is a range algorithm that looks something like this:
template <typename T>
generator<T> range(T first, T last)
{
  return{ ... };
}

int main()
{
  for (int i : range(0, 10))
  {
    printf("%d\n", i);
  }
}
To my knowledge, no such function exists, so let’s go and build it. The first step is to approximate the algorithm with something reliable that can act as a baseline for testing. The C++ standard vector container comes in handy in such cases:
#include <vector>

template <typename T>
std::vector<T> range(T first, T last)
{
  std::vector<T> values;

  while (first != last)
  {
    values.push_back(first++);
  }

  return values;
}
It also does a good job of illustrating why you don’t want to build a container in the first place, or even figure out how large it should be, for that matter. Why should there even be a cap? Still, this is useful because you can easily compare the output of this range generator to a more efficient alternative. Well, it turns out that writing a more efficient generator isn’t that difficult. Have a look at Figure 1.
Figure 1 A Classical Generator
template <typename T>
struct generator
{
  T first;
  T last;

  struct iterator{ ... };

  iterator begin()
  {
    return{ first };
  }

  iterator end()
  {
    return{ last };
  }
};

template <typename T>
generator<T> range(T first, T last)
{
  return{ first, last };
}
The range function simply creates a generator initialized with the same pair of bounding values. The generator can then use those values to produce lightweight iterators via the conventional begin and end member functions. The most tedious part is spitting out the largely boilerplate iterator implementation. The iterator can simply hold a given value and increment it as needed. It must also provide a set of type aliases to describe itself to standard algorithms. This isn't strictly necessary for the simple range-based for loop, but it pays to include this as a bit of future-proofing:
template <typename T>
struct generator
{
  struct iterator
  {
    T value;

    using iterator_category = std::input_iterator_tag;
    using value_type = T;
    using difference_type = ptrdiff_t;
    using pointer = T const*;
    using reference = T const&;
Incrementing the iterator can simply increment the underlying value. The post-increment form can safely be deleted:
iterator& operator++()
{
  ++value;
  return *this;
}

iterator operator++(int) = delete;
The other equally important function provided by an iterator is that of comparison. A range-based for loop will use this to determine whether it has reached the end of the range:
bool operator==(iterator const& other) const
{
  return value == other.value;
}

bool operator!=(iterator const& other) const
{
  return !(*this == other);
}
Finally, a range-based for loop will want to dereference the iterator to return the current value in the range. I could delete the member call operator, because it isn’t needed for the range-based for loop, but that would needlessly limit the utility of generators to be used by other algorithms:
T const& operator*() const
{
  return value;
}

T const* operator->() const
{
  return std::addressof(value);
}
It might be that the generator and associated range function are used with number-like objects rather than simple primitives. In that case, you might also want to use the address of helper, should the number-like object be playing tricks with operator& overloading. And that’s all it takes. My range function now works as expected:
template <typename T>
generator<T> range(T first, T last)
{
  return{ first, last };
}

int main()
{
  for (int i : range(0, 10))
  {
    printf("%d\n", i);
  }
}
Of course, this isn’t particularly flexible. I’ve produced the iota of my dreams, but it’s still just an iota of what would be possible if I switched gears and embraced coroutines. You see, with coroutines you can write all kinds of generators far more succinctly and without having to write a new generator class template for each kind of range you’d like to produce. Imagine if you only had to write one more generator and then have an assortment of range-like functions to produce different sequences on demand. That’s what coroutines enable. Instead of embedding the knowledge of the original iota generation into the generator, you can embed that knowledge directly inside the range function and have a single generator class template that provides the glue between producer and consumer. Let’s do it.
I begin by including the coroutine header, which provides the definition of the coroutine_handle class template:
#include <experimental/coroutine>
I’ll use the coroutine_handle to allow the generator to interact with the state machine represented by a coroutine. This will query and resume as needed to allow a range-based for loop—or any other loop, for that matter—to direct the progress of the coroutine producing a pull- rather than push-model of data consumption. The generator is in some ways similar to that of the classical generator in Figure 1. The big difference is that rather than updating values directly, it merely nudges the coroutine forward. Figure 2 provides the outline.
Figure 2 A Coroutine Generator
template <typename T>
struct generator
{
  struct promise_type{ ... };

  using handle_type = std::experimental::coroutine_handle<promise_type>;
  handle_type handle{ nullptr };

  struct iterator{ ... };

  iterator begin()
  {
    ...
    handle.resume();
    ...
  }

  iterator end()
  {
    return nullptr;
  }
};
So, there's a little more going on here. Not only is there an iterator that allows the range-based for loop to interact with the generator from the outside, but there's also a promise_type that allows the coroutine to interact with the generator from the inside. First, some housekeeping: Recall that the function generating values won't be returning a generator directly, but rather allow a developer to use co_yield statements to forward values from the coroutine, through the generator, and to the call site. Consider the simplest of generators:
generator<int> one_two_three()
{
  co_yield 1;
  co_yield 2;
  co_yield 3;
}
Notice how the developer never explicitly creates the coroutine return type. That’s the role of the C++ compiler as it stitches together the state machine represented by this code. Essentially, the C++ compiler looks for the promise_type and uses that to construct a logical coroutine frame. Don’t worry, the coroutine frame will likely disappear after the C++ compiler is done optimizing the code in some cases. Anyway, the promise_type is then used to initialize the generator that gets returned to the caller. Given the promise_type, I can get the handle representing the coroutine so that the generator can drive it from the outside in:
generator(promise_type& promise) : handle(handle_type::from_promise(promise)) { }
Of course, the coroutine_handle is a pretty low-level construct and I don’t want a developer holding onto a generator to accidentally corrupt the state machine inside of an active coroutine. The solution is simply to implement move semantics and prohibit copies. Something like this (first, I’ll give it a default constructor and expressly delete the special copy members):
generator() = default;
generator(generator const&) = delete;
generator &operator=(generator const&) = delete;
And then I’ll implement move semantics simply by transferring the coroutine’s handle value so that two generators never point to the same running coroutine, as shown in Figure 3.
Figure 3 Implementing Move Semantics
generator(generator&& other) : handle(other.handle)
{
  other.handle = nullptr;
}

generator &operator=(generator&& other)
{
  if (this != &other)
  {
    handle = other.handle;
    other.handle = nullptr;
  }

  return *this;
}
Now, given the fact that the coroutine is being driven from the outside, it's important to remember that the generator also has the responsibility of destroying the coroutine:
~generator()
{
  if (handle)
  {
    handle.destroy();
  }
}
This actually has more to do with the result of final_suspend on the promise_type, but I’ll save that for another day. That’s enough bookkeeping for now. Let’s now look at the generator’s promise_type. The promise_type is a convenient place to park any state such that it will be included in any allocation made for the coroutine frame by the C++ compiler. The generator is then just a lightweight object that can easily move around and refer back to that state as needed. There are only two pieces of information that I really need to convey from within the coroutine back out to the caller. The first is the value to yield and the second is any exception that might have been thrown:
#include <variant>

template <typename T>
struct generator
{
  struct promise_type
  {
    std::variant<T const*, std::exception_ptr> value;
Although optional, I tend to wrap exception_ptr objects inside std::optional because the implementation of exception_ptr in Visual C++ is a little expensive. Even an empty exception_ptr calls into the CRT during both construction and destruction. Wrapping it inside optional neatly avoids that overhead. A more precise state model is to use a variant, as I just illustrated, to hold either the current value or the exception_ptr because they’re mutually exclusive. The current value is merely a pointer to the value being yielded inside the coroutine. This is safe to do because the coroutine will be suspended while the value is read and whatever temporary object may be yielded up will be safely preserved while the value is being observed outside of the coroutine.
When a coroutine initially returns to its caller, it asks the promise_type to produce the return value. Because the generator can be constructed by giving a reference to the promise_type, I can simply return that reference here:
promise_type& get_return_object() { return *this; }
A coroutine producing a generator isn’t your typical concurrency-enabling coroutine and it’s often just the generator that dictates the lifetime and execution of the coroutine. As such, I indicate to the C++ compiler that the coroutine must be initially suspended so that the generator can control stepping through the coroutine, so to speak:
std::experimental::suspend_always initial_suspend() { return {}; }
Likewise, I indicate that the coroutine will be suspended upon return, rather than having the coroutine destroy itself automatically:
std::experimental::suspend_always final_suspend() { return {}; }
This ensures that I can still query the state of the coroutine, via the promise_type allocated within the coroutine frame, after the coroutine completes. This is essential to allow me to read the exception_ptr upon failure, or even just to know that the coroutine is done. If the coroutine automatically destroys itself when it completes, I wouldn’t even be able to query the coroutine_handle, let alone the promise_type, following a call to resume the coroutine at its final suspension point. Capturing the value to yield is now quite straight forward:
std::experimental::suspend_always yield_value(T const& other)
{
  value = std::addressof(other);
  return {};
}
I simply use the handy address of helper again. A promise_type must also provide a return_void or return_value function. Even though it isn’t used in this example, it hints at the fact that co_yield is really just an abstraction over co_await:
void return_void() { }
More on that later. Next, I’ll add a little defense against misuse just to make it easier for the developer to figure out what went wrong. You see, a generator yielding values implies that unless the coroutine completes, a value is available to be read. If a coroutine were to include a co_await expression, then it could conceivably suspend without a value being present and there would be no way to convey this fact to the caller. For that reason, I prevent a developer from writing a co_await statement, as follows:
template <typename Expression>
Expression&& await_transform(Expression&& expression)
{
  static_assert(sizeof(expression) == 0,
    "co_await is not supported in coroutines of type generator");
  return std::forward<Expression>(expression);
}
Wrapping up the promise_type, I just need to take care of catching, so to speak, any exception that might have been thrown. The C++ compiler will ensure that the promise_type’s unhandled_exception member is called:
void unhandled_exception() { value = std::current_exception(); }
I can then, just as a convenience to the implementation, provide a handy function for optionally rethrowing the exception in the appropriate context:
void rethrow_if_failed()
{
  if (value.index() == 1)
  {
    std::rethrow_exception(std::get<1>(value));
  }
}
Enough about the promise_type. I now have a functioning generator—but I’ll just add a simple iterator so that I can easily drive it from a range-based for loop. As before, the iterator will have the boilerplate type aliases to describe itself to standard algorithms. However, the iterator simply holds on to the coroutine_handle:
struct iterator
{
  using iterator_category = std::input_iterator_tag;
  using value_type = T;
  using difference_type = ptrdiff_t;
  using pointer = T const*;
  using reference = T const&;

  handle_type handle;
Incrementing the iterator is a little more involved than the simpler iota iterator as this is the primary point at which the generator interacts with the coroutine. Incrementing the iterator implies that the iterator is valid and may in fact be incremented. Because the “end” iterator holds a nullptr handle, I can simply provide an iterator comparison, as follows:
bool operator==(iterator const& other) const
{
  return handle == other.handle;
}

bool operator!=(iterator const& other) const
{
  return !(*this == other);
}
Assuming it’s a valid iterator, I first resume the coroutine, allowing it to execute and yield up its next value. I then need to check whether this execution brought the coroutine to an end, and if so, propagate any exception that might have been raised inside the coroutine:
iterator &operator++()
{
  handle.resume();

  if (handle.done())
  {
    promise_type& promise = handle.promise();
    handle = nullptr;
    promise.rethrow_if_failed();
  }

  return *this;
}

iterator operator++(int) = delete;
Otherwise, the iterator is considered to have reached its end and its handle is simply cleared such that it will compare successfully against the end iterator. Care needs to be taken to clear the coroutine handle prior to throwing any uncaught exception to prevent anyone from accidentally resuming the coroutine at the final suspension point, as this would lead to undefined behavior. The generator’s begin member function performs much the same logic, to ensure that I can consistently propagate any exception that’s thrown prior to reaching the first suspension point:
iterator begin()
{
  if (!handle)
  {
    return nullptr;
  }

  handle.resume();

  if (handle.done())
  {
    handle.promise().rethrow_if_failed();
    return nullptr;
  }

  return handle;
}
The main difference is that begin is a member of the generator, which owns the coroutine handle, and therefore must not clear the coroutine handle. Finally, and quite simply, I can implement iterator dereferencing simply by returning a reference to the current value stored within the promise_type:
T const& operator*() const
{
  return *std::get<0>(handle.promise().value);
}

T const* operator->() const
{
  return std::addressof(operator*());
}
And I’m done. I can now write all manner of algorithms, producing a variety of generated sequences using this generalized generator. Figure 4 shows what the inspirational range generator looks like.
Figure 4 The Inspirational Range Generator
template <typename T>
generator<int> range(T first, T last)
{
  while (first != last)
  {
    co_yield first++;
  }
}

int main()
{
  for (int i : range(0, 10))
  {
    printf("%d\n", i);
  }
}
Who needs a limited range, anyway? As I now have a pull model, I can simply have the caller decide when they've had enough, as you can see in Figure 5.
Figure 5 A Limitless Generator
template <typename T>
generator<int> range(T first)
{
  while (true)
  {
    co_yield first++;
  }
}

int main()
{
  for (int i : range(0))
  {
    printf("%d\n", i);

    if (...)
    {
      break;
    }
  }
}
The possibilities are endless! There is, of course, more to generators and coroutines and I’ve only just scratched the surface here. Join me next time for more on coroutines in C++. You can find the complete example from this article over on Compiler Explorer: godbolt.org/g/NXHBZR.
Kenny Kerr is an author, systems programmer, and the creator of C++/WinRT. He is also an engineer on the Windows team at Microsoft where he is designing the future of C++ for Windows, enabling developers to write beautiful high-performance apps and components with incredible ease.
Thanks to the following technical expert for reviewing this article: Gor Nishanov
Discuss this article in the MSDN Magazine forum
Campaign goals
Track how your campaign is performing against particular metrics with default and custom campaign goals. By default, Swrve automatically tracks the following metrics:
- Engagement rate
- Time in app
- Number of sessions
- Revenue
Additionally, select up to two custom campaign goals based on in-app events or purchases that you want to measure in relation to the campaign. For example, you might want to track purchases related to promotional items or events related to features promoted in your campaign.
Setting campaign goals
To set goals for your campaign from the campaign build screen, on the Goals block, select add +.
Step 1: On the Set your campaign goal screen, select the Primary goal for your campaign. This is the initial event or purchase you want users to make after being exposed to the campaign.
- Event: Use to track engagement with a specific feature you are promoting in your message. Select an event from the Select event list.
- In-app Purchase: Use to track real-world currency purchases of promotional items featured in your message. Select any item or a specific item, and if required, select the specific item from the Select an item list.
- Purchase: Use to track virtual currency purchases of promotional items featured in your message. Select any item or a specific item, and if required, select the specific item from the Select an item list.
Step 2: If required, select a Secondary goal for your campaign. This might be an event or purchase you want users to make further downstream. For example, the primary goal of the message might be to persuade users to add an item to a wish list in your app and a secondary goal could be to track how many users then buy the item on their wish list.
Step 3: To adjust the Attribution window, enter the required value and select Hours, Days, or Weeks from the list. Campaign goals are attributed back to the user if they achieve the goal within the set amount of time from when they first receive or interact with the campaign. The default attribution window is seven days.
Step 4: To set your goals and return to the campaign build screen, select Save. The “Goals info has been saved!” notification confirms your campaign goals are set.
Next steps
- Add your campaign content.
- Define your campaign audience. See Targeting campaign audiences.
- Schedule and launch your campaign. See Scheduling your campaigns.
What's in the Release Notes
The release notes cover the following topics:
- Products that can upgrade to VMware Identity Manager 3.3.1
- What's New in 3.3.1
- Internationalization
- Compatibility, Installation, and Upgrade
- Documentation
- Known Issues
VMware Products that can upgrade to VMware Identity Manager 3.3.1
VMware vRealize products such as vRealize Automation, vRealize Suite Lifecycle Manager (vRSLCM), vRealize Operations, vRealize Business, vRealize Log Insight, and vRealize Network Insight for Authentication and SSO
Only the vRealize products that are deployed and managed through vRSLCM can consume VMware Identity Manager 3.3.1.
vRSLCM will now handle a brand-new installation of VMware Identity Manager 3.3.1 or upgrade to 3.3.1 from an earlier version of Identity Manager.
- VMware NSX-T Data Center for Authentication and SSO
- NSX-T can be deployed with VMware Identity Manager 3.3.1 or upgraded to 3.3.1 from an earlier version.
What's New for VMware Identity Manager 3.3.1
- Enhancements to the IDP hostname validation
  The IDP hostname is validated to make sure that it is a valid fully qualified domain name.
- Administrators can control the visibility of "Change to a different domain" on the login page
  End users will not see the option to "Change to a different domain" on the login pages if the administrator toggles a checkbox.
- Extended support for group membership matching logic to a SAML attribute that is a string array
  Allows multiple user groups to be sent over in the SAML assertion. This offers support for "usergroups" sent as an array with a multi-value attribute, as part of the SAML assertion.
- Support for RADIUS authentication per directory
  Allows you to configure RADIUS authentication per directory.
- REST API to revoke refresh tokens
  OAuth2 refresh tokens are long-lived. This API allows a user to revoke a refresh token, or an admin to revoke the refresh token on behalf of the user.
Internationalization
VMware Identity Manager 3.3 is available in the following languages.
- German
- Spanish
- Japanese
- Simplified Chinese
- Korean
- Taiwan
- Russian
- Italian
- Portuguese (Brazil)
- Dutch
Compatibility, Installation, and Upgrade
VMware vCenter™ and VMware ESXi™ Compatibility
VMware Identity Manager appliance supports the following versions of vSphere and ESXi.
- 6.5 U3, 6.7 U2, and 6.7 U3
Component Compatibility
Windows Server Supported
- Windows Server 2008 R2
- Windows Server 2012
- Windows Server 2012 R2
- Windows Server 2016
Web Browser Supported
- Mozilla Firefox, latest version
- Google Chrome 42.0 or later
- Internet Explorer 11
- Safari 6.2.8 or later
- Microsoft Edge, latest version
Database Supported
- Postgres 9.6.15
- MS SQL 2012, 2014, and 2016
LDAP
- IBM Security Directory Server 6.3.1
VMware Product Interoperability Matrix provides details about the compatibility of current and previous versions of VMware products and components, such as VMware vCenter Server, VMware ThinApp, and Horizon 7.
Verified VMware Identity Manager integration with Citrix Virtual Apps & Desktops (previously XenApp & XenDesktop) versions 7 1808 and 7.18. Tested use case was with the end users doing internal and external launches (via Netscaler) of their entitled Citrix resources from the Workspace ONE portal.
For other system requirements, see the VMware Identity Manager Installation guides for 3.3 on the VMware Identity Manager Documentation center.
Upgrading to VMware Identity Manager 3.3 (Linux)
To upgrade to VMware Identity Manager for Linux 3.3.1, see Upgrading VMware Identity Manager 3.3.1 (Linux) on the VMware Identity Manager Documentation center. During the upgrade, all services are stopped, so if only one connector is configured plan the upgrade with the expected downtime in mind.
You must upgrade to VMware Identity Manager version 3.3 and then upgrade to VMware Identity Manager 3.3.1.
Important: Before you start the upgrade to 3.3.1, edit the /etc/init.d/horizon-workspace script. Replace the line
# Should-Start: $named $remote_fs $time hzn-sysconfig elasticsearch thinapprepo
with
# Should-Start: $named $remote_fs $time hzn-sysconfig thinapprepo
Save the file and proceed with the upgrade.
Note: When you upgrade to VMware Identity Manager 3.3.1 for Linux, if you see the following error message and the upgrade is aborted, follow these steps to update the certificate. After the certificate is updated, restart the upgrade.
"Certificate auth configuration update required for tenant <tenantName> prior to upgrade. Pre-update check failed, aborting upgrade."
- Log in to the VMware Identity Manager console.
- Navigate to Identity & Access Management > Setup.
- In the Connectors page, click the link in the Worker column
- Click the Auth Adapters tab, then click CertificateAuthAdapter.
- In the Uploaded CA Certificates section, click the red X next to the certificate to remove it.
- In the Root and intermediate CA Certificates section, click Select File to re-add the certificate.
- Click Save.
VMware Identity Manager Connector 3.3.1 (Windows)
A new installer is available for VMware Identity Manager Connector for Windows. Use the installer to upgrade from VMware Enterprise System Connector or to install the VMware Identity Manager Connector.
You need to set the version to upgrade to 3.3.0.100 before you run the upgrade. To do this, run the following commands before you upgrade.
- /usr/local/horizon/update/configureupdate.hzn provider --url
- /usr/local/horizon/update/configureupdate.hzn manifest --set-version 3.3.0.100
- You are now ready to run the upgrade commands.
Documentation
The VMware Identity Manager 3.3 documentation is in the VMware Identity Manager Documentation center. The 3.3.1 upgrade guide can be found under VMware Identity Manager 3.3 in the Installation & Architecture section.
Known Issues
- User portal is not enabled in clustered setup
In case of Clustered Setup, the catalog is not enabled by default. To access the catalog in a clustered setup, the API needs to be called using the load balancer hostname.
To enable the catalog, call the API using the load balancer hostname: a GET request with an "Authorization: HZN" header.
- Unable to add an IWA AD, if an LDAP AD(different domain) is already associated with the given connector
When we add an IWA directory, the /etc/hosts file is modified in case the domain is different than the server domain. Subsequent to that if another IWA directory is created, creation fails. You will need to edit the /etc/hosts file to create IWA the next time.
You need to manually edit the host file entries. See KB article 67773 at https://ikb.vmware.com/s/article/67773.
- Errors seen in Horizon logs when database failover happens
When a database failover happens, EH cache-related errors are seen in horizon.log. Restarting the server resolves the error.
Restart the service of the old master database node.
What's in the Release Notes
These release notes cover the following topics:
What's New
vCloud Availability 3.5 introduces the following new features:
- Integration with VMware vCloud Usage Meter to collect product consumption data and generate reports for the VMware Cloud Provider Program, see Add vCloud Availability in the vCloud Usage Meter documentation.
- Support for multiple vCloud Availability instances per a vCloud Director instance. The service providers can control the accessible Provider VDCs in each vCloud Availability instance, see Cloud Deployment Architecture and Manage the Accessible Provider VDCs.
- Cloud to cloud pairing is now possible over private networks, without allowing public administrative access and without providing the remote site credentials, see Pair Cloud Sites.
- For on-premises to cloud replications, multiple virtual machines can be grouped and managed as a single unit, to specify boot order and boot delay settings, see Grouping VMs in a Replication.
- Replication of additional vCloud Director settings: vApp networks, VM Guest OS Customization, and VM Guest properties, see Using Replications.
- Traffic monitoring for each replication and each tenant; exporting of usage and traffic data, see Traffic Monitoring.
- Datastore evacuation, vCloud Availability Replicator maintenance mode, and rebalancing replications, see vCloud Availability Maintenance.
- Exclusion or inclusion of virtual machine hard drives in replications, see Selecting Replicated Disks.
- Configuring the replications network settings in the target cloud site, see Configuring Network Settings of Replications to the Cloud.
Configuration Maximums
For the tested scale and concurrency limits, see VMware Configuration Maximums.
Upgrade
vCloud Availability 3.5 supports an in-place upgrade from vCloud Availability 3.0.x, see Upgrading vCloud Availability in the Cloud and Upgrading vCloud Availability On-Premises.
Caveats and Limitations
For interoperability between vCloud Availability and other VMware products, see VMware Product Interoperability Matrices.
vCloud Availability 3.x can not pair with vCloud Availability for Cloud-to-Cloud DR 1.5.x and vCloud Availability for vCloud Director 2.x. You can migrate the protected workloads from vCloud Availability for vCloud Director 2.x to vCloud Availability 3.x, see Migrate from vCloud Availability for vCloud Director 2.0.
Note: The vCloud Availability vSphere Client Plug-In requires vSphere Client support. Use the vCloud Availability Portal if your vSphere does not support vSphere Client.
Supported Browsers
vCloud Availability 3.5 supports the following browsers:
- Google Chrome 78 and later
- Mozilla Firefox 69 and later
- Microsoft Edge 44 and later
- Safari 13 and later
Known Issues
- The vCloud Availability vSphere Client Plug-In displays a 503 Service Unavailable screen if the browser session remains idle
After you perform an operation by using the vSphere Client Plug-In and leave the browser session idle for a longer time, usually more than 30 minutes, the vSphere Client Plug-In times out and returns a 503 Service Unavailable error.
Workaround: Logging out and logging back in to the vSphere Web Client does not fix the issue. To renew the vSphere Client Plug-In session, wait for several minutes and refresh the browser.
- After upgrading the cloud site to vCloud Availability 3.5, on-premises tenants running vCloud Availability 3.0.x receive an error message when using the vCloud Availability vSphere Client Plug-In
When on-premises tenants running vCloud Availability 3.0.x are paired with a cloud site running vCloud Availability 3.5, attempting to use the vCloud Availability vSphere Client Plug-In shows an error:
Operation aborted due to an unexpected error.
Workaround:
- Upgrade all on-premises tenants to vCloud Availability 3.5.
- Alternatively, in the cloud site you can add the following property to the vCloud Availability Cloud Replication Management Appliance:
- Open an SSH connection to the Appliance-IP-Address and authenticate as the root user.
- In the /opt/vmware/h4/cloud/config/application.properties file, add api.strict.deserialization = false.
- Restart the service: systemctl restart cloud.service.
- When selecting virtual machines to group in a vApp, advancing the pagination list clears the selection
In the inventory list of virtual machines, if you select virtual machines from another page to group in a vApp, the selected virtual machines on previous pages are deselected.
Workaround: Select to display more items per page and select virtual machines from the same inventory list page.
- Deselecting a disk for replication removes the disk from the interface
If you deselect a disk from a replication, you can no longer select this disk, as it is removed from the user interface.
Workaround: n/a
- Test failover intermittently fails, leaving a pending vApp in vCloud Director
Performing a test failover might not fail over all virtual machines successfully.
When virtual machine operations fail, the resulting vApp is not cleaned from vCloud Director. This results in inability to import the corresponding virtual machines.
Workaround:
- Execute test cleanup.
- Manually remove any pending vApps in vCloud Director.
- If you cannot delete the target vApps, change the vApp name in the replication.
- In vSphere Client 6.7 Update 1, when you right-click on a virtual machine you cannot use Configure Protection or Configure Migration
In vSphere Client 6.7 Update 1, when browsing the inventory of the virtual machines, if you right-click and select Configure Protection or select Configure Migration, the corresponding wizards do not open.
Workaround:
To configure virtual machine protection or migration, use the vCloud Availability vSphere Client Plug-in or the vCloud Availability Portal.
- Cannot configure the DNS servers by using the service management interface
By default, the primary DNS server is set to 127.0.0.53 and the secondary DNS server is not configured. Attempting to modify the DNS configuration results in no change to the DNS servers in the service management interface. This is a display issue and the DNS functionality is not affected.
Workaround: To verify that the DNS configuration is applied, open an SSH session to the appliance and run the following command
resolvectl status
- Cannot modify the domain name by using the service management interface
In the Network Settings window, entering a domain name adds it to the Domain Search Path instead.
Workaround: None
- Cannot monitor traffic of vCloud Availability 3.0.x instances
In vCloud Availability 3.5, when you click the Traffic tab for replications to vCloud Availability 3.0.x instances you see a
Permission deniederror message.
Workaround: n/a | https://docs.vmware.com/en/VMware-vCloud-Availability/3.5/rn/VMware-vCloud-Availability-35-Release-Notes.html | 2020-01-18T00:56:33 | CC-MAIN-2020-05 | 1579250591431.4 | [] | docs.vmware.com |
Header File
sys\stat.h, tchar.h
Category
Directory Control Routines
Prototype
int stati64(const char *pathname, struct stati64 *buff);
int _stati64(const char *__path, struct stati64 *__statbuf);
int _wstati64(const wchar_t *pathname, struct stati64 *buff);
// From tchar.h
#define _tstati64 _stati64
Description
Gather statistics about the file named by pathname and place them in the buffer pointed to by buff.
The statistics fields are set thus:
st_dev: set to -1 if S_IFCHR, else set to the drive holding the file
st_ino: 0
st_mode: Unix-style bit-set for file access rights
st_nlink: 1
st_uid: 0
st_gid: 0
st_rdev: same as st_dev
st_size: file size (0 if S_IFDIR or S_IFCHR)
st_atime: time the file was last changed (seconds since 1970)
st_mtime: same as st_atime
st_ctime: same as st_atime
The file access rights bit-set may contain S_IFCHR, S_IFDIR, S_IFREG, S_IREAD, S_IWRITE, or S_IEXEC.
If the name is for a device, the time fields will be zero and the size field is undefined.
Return Value
The return value is 0 if the call was successful, otherwise -1 is returned and errno contains the reason. The buffer is not touched unless the call is successful.
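A minimal usage sketch (the file name is an assumption; it simply illustrates the call and the return-value check described above):
#include <stdio.h>
#include <sys/stat.h>
int main(void)
{
  struct stati64 statbuf;
  /* Gather statistics for a file and check the return value. */
  if (_stati64("test.txt", &statbuf) == 0)
  {
    printf("size : %ld bytes\n", (long)statbuf.st_size);  /* file size, 0 for directories and devices */
    printf("mode : %o\n", statbuf.st_mode);               /* Unix-style access-rights bit-set */
  }
  else
    perror("_stati64");                                    /* errno contains the reason */
  return 0;
}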
Portability | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/_stati64_xml.html | 2012-05-26T19:50:10 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
Header File
stdio.h
Category
Memory and String Manipulation Routines
Syntax
int _snprintf(char* buffer, size_t nsize, const char* format, ...);
int _snwprintf(wchar_t* buffer, size_t nsize, const wchar_t* format, ...);
Description
Sends formatted output to a string of a maximum length specified by nsize. _snprintf and _snwprintf are compatible with the Microsoft functions of the same names.
If nsize is large enough, the entire formatted string is written, including the terminating null character. If nsize is too small, the return value is -1, and only nsize characters are written, with no terminating '\0' character.
Return Value
Number of bytes output or –1 if nsize is too small. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/_snprinft_xml.html | 2012-05-26T19:49:56 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
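A short illustrative example (the buffer size and format string are arbitrary) showing the truncation rule above:
#include <stdio.h>
int main(void)
{
  char buffer[8];
  int  result;
  /* The formatted output is limited to sizeof(buffer) bytes. */
  result = _snprintf(buffer, sizeof(buffer), "x = %d", 12345);
  if (result == -1)
    printf("nsize was too small; the output was truncated without a '\\0'\n");
  else
    printf("wrote %d bytes: %s\n", result, buffer);
  return 0;
}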
You can use a unidirectional dataset even if the query or stored procedure it represents does not return any records. Such commands include statements that use Data Definition Language (DDL) or Data Manipulation Language (DML) statements other than SELECT statements. The language used in commands is server-specific, but usually compliant with the SQL-92 standard for the SQL language. The SQL command you execute must be acceptable to the server you are using. Unidirectional datasets neither evaluate the SQL nor execute it, but pass the command to the server for execution. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/executingcommands_xml.html | 2012-05-26T22:59:50 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
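For instance, a minimal Delphi sketch (the component name and SQL text are assumptions, not from the original topic):
// Assign a DDL statement to a unidirectional dataset and execute it on the server.
SQLDataSet1.CommandText := 'CREATE TABLE NewCustomers (CustNo INTEGER, Name CHAR(40))';
SQLDataSet1.ExecSQL;   // no result set is returned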
The order in which the compiler evaluates the subexpressions of an expression is not defined, so an expression that both modifies and uses the same object can be ambiguous:
i = v[i++]; // i is undefined
The value of i depends on whether i is incremented before or after the assignment. Similarly,
int total = 0; sum = (total = 3) + (++total); // sum = 4 or sum = 7 ??
is ambiguous for sum and total. The solution is to revamp the expression, using a temporary variable:
int temp, total = 0; temp = ++total; sum = (total = 3) + temp;
Where the syntax does enforce an evaluation sequence, it is safe to have multiple evaluations:
sum = (i = 3, i++, i++); // OK: sum = 4, i = 5
Each subexpression of the comma expression is evaluated from left to right, and the whole expression evaluates to the rightmost value.
The compiler regroups expressions, rearranging associative and commutative operators regardless of parentheses, in order to create an efficiently compiled expression; in no case will the rearrangement affect the value of the expression.
Configure user language and locale:
transforms.conf
The following are the spec and example files for transforms.conf.
transforms.conf.spec
# Copyright (C) 2005-2011 Splunk Inc. All Rights Reserved. Version. * REGEX results. * Required for index-time field extractions where WRITE_META = false or is not set. * For index-time searches, DEST_KEY = _meta, which is where Splunk stores indexed fields. For other potential DEST_KEY values see the KEYS section at the bottom of this file. * When you use DEST_KEY = _meta you should also add $0 to the start of your FORMAT attribute. $0 represents the DEST_KEY value before Splunk performs the REGEX (in other words, _meta). * The $0 value is in no way derived *from* the REGEX. (It does not represent a captured group.) * KEYs. *]. * NOTE: Any KEY (field name) prefixed by '_' is not indexed by Splunk, in general.
transforms.conf.example
# Copyright (C) 2005-2011 Splunk Inc. All Rights Reserved. Version 4 ':'.
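Because the example file above was truncated during extraction, the following stanza is an illustration only (the stanza name, regular expression, and field name are assumptions) of the kind of index-time extraction transforms.conf defines:
[my_error_code]
REGEX = err_code=(\d+)
FORMAT = err_code::$1
WRITE_META = true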
This error message occurs when the compiler can determine that a constant is outside the legal range. This can occur for instance if you assign a constant to a variable of subrange type.
program Produce;
var
  Digit: 1..9;
begin
  Digit := 0;    (*Get message: Constant expression violates subrange bounds*)
end.

program Solve;
var
  Digit: 0..9;
begin
  Digit := 0;
end.
If you don't specify a "snap to" time unit, Splunk snaps automatically to the second.
Separate the time amount from the "snap to" time unit with an "@" character. You can use any of time units listed in Step 2. Additionally, you can "snap to" a specific day of the week, such as last Sunday or last Monday. To do this, use @w0 for Sunday, @w1 for Monday, etc.
Important: When snapping to the nearest or latest time, Splunk always snaps backwards or rounds down to the latest time not after the specified time. For example, if it is 11:59:00 and you "snap to" hours, you will snap to 11:00 not 12:00.
Important:.
Examples of relative time modifiers
For these examples, the current time is Wednesday, 05 February 2009, 01:37:05 PM. Also note that 24h is usually but not always equivalent to 1d because of Daylight Saving Time boundaries.
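The example table itself did not survive extraction; the following are typical relative time modifiers of the kind the original page lists (the descriptions follow from the rules above):
-60m : 60 minutes ago
-24h@h : 24 hours ago, then snapped back to the hour
-7d@w0 : 7 days ago, then snapped back to the previous Sunday
@d : the start of the current day
+1d@d : the start of tomorrow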
File system change monitor
Use Splunk's file system change monitor to track changes to your file system. You cannot currently use both monitor and file system change monitor to follow the same directory or file. If you want to see changes in a directory, use file system change monitor. If you want to index new events in a directory, use monitor. By default, the file system change monitor generates events whenever the contents of
$SPLUNK_HOME/etc/ are affected in any way. When you start Splunk for the first time, an
add audit event is generated for each file in the
$SPLUNK_HOME/etc/ directory and all sub-directories. Any time after that, any change in configuration (regardless of origin) generates an audit event for the affected file(s). The audit event is indexed into the audit index (
index=_audit). Configure the file system change monitor by adding [fschange] stanzas to inputs.conf, using the attributes described below. If you do not set a value for an attribute, Splunk uses the default.
Note: Additions or changes to the
[fschange] stanza require a restart of the Splunk Server.
[fschange:<directory or file to monitor>]
index = <indexname>
recurse = true | false
followLinks = true | false
pollPeriod = <integer>
hashMaxSize = <integer>
fullEvent = true | false
sendEventMaxSize = <integer>
signedaudit = true | false
filters = <filter1>,<filter2>,...<filterN>
Possible attribute/value pairs
[fschange:<directory or file to monitor>]
- Specify a directory or file and Splunk monitors all changes.
- If you specify a directory, Splunk also recurses into sub-directories.
- Splunk indexes any changes as events.
- Defaults to
$SPLUNK_HOME/etc/.
index = <indexname>
- The index to store all events generated.
- Defaults to _audit, unless you do not set
signedaudit (below) or set
signedaudit = false, in which case events go into the default index.
pollPeriod = <integer>
- Check this directory for changes every N seconds.
- Defaults to 3600.
- If you make a change, the file system audit events could take anywhere between 1 and 3600 seconds to be generated and become available in audit search.
hashMaxSize = <integer>
- Calculate a SHA1 hash for every file that is less than or equal to N bytes in size. This hash is used as an additional method for detecting changes in the file or directory.
- Defaults to -1 (no hashing used for change detection).
signedaudit = true | false
- Send cryptographically signed add audit events.
- Setting signedaudit = true generates events in the _audit index; otherwise events go to your default index.
- You must set signedaudit = false if you want to set the index attribute yourself.
- Defaults to false.
sendEventMaxSize = <integer>
- Only send the full event if the size of the event is less than or equal to N bytes.
- This limits the size of indexed file data.
- Defaults to -1, which is unlimited.
- Note: This setting is ignored in versions of Splunk earlier than 3.3.3. Make sure to set
fullEvent = false if you are indexing large events.
sourcetype = <string>
- Set the sourcetype for events from this input.
- "sourcetype=" is automatically prepended to <string>.
- Defaults to audittrail (if
signedaudit=true) or fschange (if
signedaudit=false).
Example (excerpt):
regex1 = .*\.h
[fschange:/etc]
filters = backups,code
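As a fuller illustration (the monitored path and values are assumptions, not from the original page), a stanza combining several of the attributes above might look like:
[fschange:/var/log]
pollPeriod = 60
fullEvent = true
sendEventMaxSize = 10000
signedaudit = false
index = main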
Create a group
When you create a group, you automatically become the administrator for the group.
Depending on your wireless service provider or organization, you might not be able to use the BlackBerry® Groups feature.
- On the Home screen or in the Instant Messaging folder, click the BlackBerry Messenger icon.
- On the contact list screen, press the Menu key.
- Click Create New Group.
- Type a name for the group.
- Type a description that people see when they receive the invitation to the group.
- Change the Group Icon field.
- Perform any of the following actions:
- Click Create Group.
Create an appointment
Appointments that you create in a group appear in every member's calendar.
- On the Home screen or in the Instant Messaging folder, click the BlackBerry Messenger icon.
- On the contact list screen, in the BlackBerry Groups category, click a group.
- Click Calendar.
- Click New Shared Appointment.
- Type the appointment information.
- If necessary, change the Recurrence field.
- Press the Menu key.
- Click Save.
Delete the chat history
- On the Home screen or in the Instant Messaging folder, click the BlackBerry Messenger icon.
- On the contact list screen, in a contact category, highlight a contact.
- Press the Menu key.
- Click View History.
- Press the Menu key.
- Perform one of the following actions:
- Click Delete.
To descendant of IAppServer. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/multiconfiguringtremotedatamodule_xml.html | 2012-05-26T23:42:19 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
To descendant of IAppServer. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/multiconfiguringtmtsdatamodule_xml.html | 2012-05-26T23:42:13 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
An InternetExpress application uses the InternetExpress architecture to act as the client of an application server. InternetExpress applications generate HTML pages that contain a mixture of HTML, XML, and JavaScript. The HTML governs the layout and appearance of the pages seen by end users in their browsers. The XML encodes the data packets and delta packets that represent database information. The JavaScript allows the HTML controls to interpret and manipulate the data in these XML data packets on the client machine.
If the InternetExpress application uses DCOM to connect to the application server, you must take additional steps to ensure that the application server grants access and launch permissions to its clients. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/multibuildingbrowserbasedclientsusingxmldatapackets_xml.html | 2012-05-26T23:41:57 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
If you are not using SOAP, you can call an interface method using AppServer by writing a statement that calls the method directly on the AppServer variable.
When you are using DCOM as a communications protocol, you can use early binding of AppServer calls. Use the as operator to cast the AppServer variable to the IAppServer descendant you created when you created the remote data module. For example:
with MyConnection.AppServer as IMyAppServer do SpecialMethod(x,y);
To use early binding under DCOM, the server's type library must be registered on the client machine. You can use TRegsvr.exe, which ships with Delphi to register the type library.
When you are using TCP/IP or HTTP, you can't use true early binding, but because the remote data module uses a dual interface, you can use the application server's dispinterface to improve performance over simple late binding. The dispinterface has the same name as the remote data module's interface, with the string 'Disp' appended. You can assign the AppServer property to a variable of this type to obtain the dispinterface. Thus:
var
  TempInterface: IMyAppServerDisp;
begin
  TempInterface := IMyAppServerDisp(IDispatch(MyConnection.AppServer));
  ...
  TempInterface.SpecialMethod(x,y);
  ...
end;
If you are using SOAP, you can't use the AppServer property. Instead, you must obtain the server's interface by calling the GetSOAPServer method. Before you call GetSOAPServer, however, you must take the following steps:
with MyConnection.GetSOAPServer as IMyAppServer do SpecialMethod(x,y);
IDispatch* disp = (IDispatch*)(MyConnection->AppServer);
IMyAppServerDisp TempInterface((IMyAppServer*)disp);
TempInterface.SpecialMethod(x,y);
This error message results when the compiler expected two types to be compatible (or similar), but they turned out to be different.
program Produce;
procedure Proc(I: Integer);
begin
end;
begin
  Proc( 22 / 7 );    (*Result of / operator is Real*)
end.
Here a C++ programmer thought the division operator / would give him an integral result - not the case in Delphi.
program Solve;
procedure Proc(I: Integer);
begin
end;
begin
  Proc( 22 div 7 );    (*The div operator gives result type Integer*)
end.
The solution in this case is to use the integral division operator div - in general, you have to look at your program very careful to decide how to resolve type incompatibilities. | http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_comp_types_2_xml.html | 2012-05-27T03:02:00 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
When importing type information from a .NET assembly, the compiler may encounter symbols that do not conform to CLS specifications. One example of this is case-sensitive versus case-insensitive identifiers. Another example is having a property in a class with the same name as a method or field in the same class. This error message indicates that same-named symbols were found in the same scope (members of the same class or interface) in an imported assembly and that only one of them will be accessible from Delphi syntax. | http://docs.embarcadero.com/products/rad_studio/radstudio2007/RS2007_helpupdates/HUpdate4/EN/html/devcommon/cm_cls_id_redeclared_type_xml.html | 2012-05-27T03:01:40 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
The techniques in this section pertain to models of particularly complex composite states and substates.
You can resize the main state. You can also create a substate by drawing a state diagram within another state diagram and indicating start, end, and history states as well as transitions.
Create a composite state by nesting one or more levels of states within one state. You can also place start/end states and a history state inside of a state, and draw transitions among the contained substates.
Using the Shortcuts command on the context menu of the diagram, you can reuse existing elements in other state diagrams. Right-click the diagram, choose Add > Shortcuts, navigate within the pane containing the tree view of the available project contents for the project group, and select the elements you want to reference.
One of the most important aspects of passing mobile websites through a quality assurance funnel is the ability to interact with a website manually like real-world users would do. With the ever-growing variety of mobile devices of various makes and operating system versions, it is becoming harder and harder to manually test the site on different devices in order to validate cross-browser behavior.
Experitest aims to solve just that. Using only your browser, you can get access to a large set of mobile devices to test your site with a focus on manual interaction.
Let's look at an example:
Viewing and Interacting with a website on a device
To begin, open a device from the device screen. You can then start interacting with the device as if you were holding it in your own hands.
Begin by opening the browser
Click on the address bar and type the url of the site you want to test
Hit enter and start interacting with the site to test how it is rendered visually and how it functions. | https://docs.experitest.com/display/LT/Web+Manual+Testing | 2020-09-18T09:54:05 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.experitest.com |
AWS AMI Deployments Overview
This topic describes the concept of a Harness AWS AMI deployment by describing the high-level steps involved.
For a quick tutorial, see the AWS AMI Quickstart.
For detailed instructions on using AWS AMI in Harness, see the AWS AMI How-tos.
Before You Begin
Before learning about Harness AWS AMI deployments, you should have an understanding of Harness Key Concepts.
What Does Harness Need Before You Start?
A Harness AWS AMI deployment requires the following:
- A working AWS AMI that Harness will use to create your instances.
- A working Auto Scaling Group (ASG) that Harness will use as a template for the ASG that Harness will create. The template ASG is referred to as the base ASG in Harness documentation.
- An AWS Instance or ECS cluster in which to install a Harness Delegate.
- IAM Role for the Harness Cloud Provider connection to AWS.
What Does Harness Deploy?
Harness takes the AMI and base ASG you provide, and creates a new ASG and populates it with instances using the AMI. You can specify the desired, min, and max instances for the new ASG, resize strategy, and other settings in Harness.
What Does a Harness AWS AMI Deployment Involve?
The following list describes the major steps of a Harness AWS AMI deployment:
How Does Harness Downsize Old ASGs?
Harness upscales and downsizes in two states, setup and deploy.
- Setup — The setup state is when your new ASG is created.
- Deploy — The deploy phase(s) is when your new ASG is upscaled to the number of new instances you requested. This is either a fixed setting (Min, Max, Desired) or the same number as the previous ASG.
During setup state:
- The previous ASG is kept with non-zero instance count (highest revision number, such as _7). Any older ASGs are downsized to 0.
- New ASG is created with 0 count.
- For ASGs that had 0 instances, Harness keeps 3 old ASGs and deletes the rest.
During deploy phases:
- New ASG is upscaled to the number of new instances you requested.
- Previous ASG is gradually downsized. In the case of a Canary deployment, the old ASG is downsized in the inverse proportion to the new ASG's upscale. If the new ASG is upscaled 25% in phase 1, the previous ASG is downsized 25%.
At the end of deployment:
- New ASG has the number of new instances you requested. In Canary, this is always 100%.
- Previous ASG is downsized to 0.
Rollback
If rollback occurs, the previous ASG is upscaled to its pre-setup number of instances using new instances.
Next Steps
Read the following topics to build on what you've learned: | https://docs.harness.io/article/aedsdsw9cm-aws-ami-deployments-overview | 2020-09-18T11:32:54 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.harness.io |
Contact the Rapid7 Support Team
If you need assistance with your InsightVM product, the Rapid7 Support team is here to help.
Support Team Services
Our Support Engineers offer the following services for your InsightVM product:
Case severity assignment helps the Support team prioritize and address issues based on their business impact. You can select one of the following severity levels when creating your case:
Schedule a meeting using the scheduling tool
The Scheduler in the Customer Portal gives you the ability to schedule a time to meet with a Rapid7 Support Engineer. This reduces the amount of time spent on coordinating schedules between you and Rapid7. We want to help you focus on case resolution instead. You can request your Support Engineer's availability after you create a case and during the case lifecycle. Before you schedule a meeting, you will need to provide some context for the issue you are experiencing. This provides our team with the context needed to make progress on this issue more quickly.
Request a meeting
Once your case has been assigned to an engineer and they have reviewed the case, you can request a meeting with them.
If you received an error when you attempt to request a meeting, it could mean one of the following things:
- The product you have does not allow for meetings to be scheduled.
- The case is with an engineer that has an issue with their scheduler. Please email or comment in the case if this occurs to let us know you require a meeting to be scheduled.
Stay Informed
We publish release notes for InsightVM. | https://docs.rapid7.com/insightvm/support-technical-support-and-customer-care/ | 2020-09-18T09:32:12 | CC-MAIN-2020-40 | 1600400187390.18 | [array(['/areas/docs/_repos//product-documentation__master/8530310ab3d18000d6ee88af0df22930a6ba3a7d//insightvm/images/e71fb97-case.png',
'Request a meeting'], dtype=object) ] | docs.rapid7.com |
Role management
Abstract
The security model employed by the new role management allows granular and transparent control over the user permissions.
The security model is employed by the new role management, which allows granular and transparent control over the user permissions across all Hierarchical Entities, Devices, Signals and other general operations.
As we shall see in the upcoming articles, the role management is based on the following concepts: | https://docs.webfactory-i4.com/i4connected/en/role-management.html | 2020-09-18T10:46:48 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.webfactory-i4.com |
To begin, connect a TapNLink and connect IoTize Studio to it.
Watson IoT Platform includes a device twin feature: a cloud-based digital representation of your device. In IoTize Studio, open the MQTT relay settings and set up the configuration:
- Set Enable Relay to Yes. This allows the Tap to use MQTT to receive LWM2M commands.
- Select IBM Watson in IoT Platform
- Cloud Profile: A specific profile to control access privileges of the connected 'IoT Platform'.
IBM Watson information: Provide the previously created Device Twin information to enable TapNLink to connect directly to your Watson IoT Platform.
Organization ID
- Device Type
- Device ID
- Authentication Token
- IBM Watson messaging root certificate: If you set up a Root CA to authenticate your devices, set it here. Leave it empty otherwise.
Use SSL protocol
IBM Broker login summary (MQTT): This shows the actual MQTT connection information that will be used by the Tap. These are created from the IBM Watson information. It also shows the topics used to receive commands and send answers.
Step 4: Set up the WiFi Settings and Configure TapNLink
Set Incoming communication (Wireless) | WiFi:
- Network mode to 'Network(Station)'
- SSID to your WiFi network
- WEP key to your WiFi network's security key,
Click on the "Configure" button to re-configure TapNLink, then use Test|Reboot Tap to restart TapNLink. Now TapNLink will dynamically connect to your IBM Watson IoT Platform and wait for any incoming LWM2M request.
Step 5: Connect IoTize Studio to IBM Watson
In order to connect IoTize Studio to IBM Watson, we need to create an application API Key. We only need to create it once: the API Key will be able to communicate with every device you registered in your IBM Watson cloud service.
- In the Platform Service dashboard, go to Apps > API Keys.
- Click on Generate API Key. You will then see the API Key and its token. It is very important that you keep a note of your token when it appears on the summary screen, as when you proceed past the summary stage, the token will not appear again.
- Click on Finish.
Now, go to Studio | Connection to Tap in IoTize Studio:
- Set Protocol to MQTT Relay
- Set Adapt broker information from Tap MQTT settings to Yes.
- Set Application API Key and API Key Token to the one you created on your IBM Watson dashboard.
Click on Monitor. IoTize Studio will connect to the MQTT Broker, and communicate with the Tap. You are now able to communicate with your Tap through your IBM Watson MQTT broker.
To go further
- To learn more about IoTize TapNLink products, refer to IoTize documentation center.
- To learn more about IBM Watson IoT platform, refer to IBM Knowledge Center. | http://docs.iotize.com/Technologies/IBMWatson/ | 2022-01-16T19:36:54 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['res/architecture.png', 'IBM Watson integration'], dtype=object)
array(['res/account.png', 'IBM Cloud Account'], dtype=object)
array(['res/services.png', 'IBM Watson IoT'], dtype=object)
array(['res/devices.png', 'Watson Io Platform Devices'], dtype=object)
array(['res/deviceID.png', 'Device Information'], dtype=object)
array(['./res/configuration.png', None], dtype=object)
array(['./res/iotize-configuration.png', None], dtype=object)] | docs.iotize.com |
ReportDesignerBase.DocumentSaved Event
Occurs every time a report displayed in a designer document has been saved.
Namespace: DevExpress.Xpf.Reports.UserDesigner
Assembly: DevExpress.Xpf.ReportDesigner.v21.2.dll
Declaration
public event EventHandler<ReportDesignerDocumentEventArgs> DocumentSaved
Public Event DocumentSaved As EventHandler(Of ReportDesignerDocumentEventArgs)
Event Data
The DocumentSaved event's data class is ReportDesignerDocumentEventArgs. The following properties provide information specific to this event:
Remarks
Handle the DocumentSaved event to respond to saving a report to a storage. The current designer document is specified by the ReportDesignerDocumentEventArgs.Document property of the event parameter.
If the attempt to save a report fails, the ReportDesignerBase.DocumentSaveFailed event is raised.
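A minimal C# handler sketch (the designer instance name and handler body are assumptions, not part of the documented API):
reportDesigner.DocumentSaved += (sender, e) =>
{
    var savedDocument = e.Document;   // the designer document whose report was saved
    // React here, e.g. log the save or refresh a recent-documents list.
};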
See Also
K8ssandra Roadmap
K8ssandra roadmap ideas for community consideration.
K8ssandra today is deployed as an entire stack. This open-source technology currently assumes your deployment uses the entire stack. Trading out certain components for others is not supported at this time. As part of the roadmap, one goal is to support a la carte composition of components.
The roadmap is currently tracked in GitHub on a project board. Here’s a quick preview:
Last modified May 3, 2021: Documentation Information Architecture Updates (#700) (99ae1dc) | https://docs.k8ssandra.io/roadmap/ | 2022-01-16T18:47:24 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['roadmap-gh-project.png', 'Roadmap Preview'], dtype=object)] | docs.k8ssandra.io |
18.2.3. The 3D Map Item
The 3D Map item is used to display a
3D map view.
Use the
Add 3D Map button, and follow
items creation instructions to add a new
3D Map item that you can later manipulate the same way as demonstrated
in Interacting with layout items.
By default, a new 3D Map item is empty. You can set the properties of the 3D view and customize it in the Item Properties panel. In addition to the common properties, this feature has the following functionalities (Fig. 18.22):
18.2.3.1. Scene settings
Press Copy Settings from a 3D View… to choose the 3D map view to display.
The 3D map view is rendered with its current configuration (layers, terrain, lights, camera position and angle…).
18.2.3.2. Camera pose
Center X sets the X coordinate of the point the camera is pointing at
Center Y sets the Y coordinate of the point the camera is pointing at
Center Z sets the Z coordinate of the point the camera is pointing at
Distance sets the distance from the camera center to the point the camera is pointing at
Pitch sets the rotation of the camera around the X-axis (vertical rotation). Values from 0 to 360 (degrees). 0°: terrain seen straight from above; 90°: horizontal (from the side); 180°: straight from below; 270°: horizontal, upside down; 360°: straight from above.
Heading sets the rotation of the camera around the Y-axis (horizontal rotation - 0 to 360 degrees). 0°/360°: north; 90°: west; 180°: south; 270°: east.
The Set from a 3D View… pull-down menu lets you populate the items with the parameters of a 3D View. | https://docs.qgis.org/3.16/fi/docs/user_manual/print_composer/composer_items/composer_map3d.html | 2022-01-16T19:20:12 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.qgis.org |
...
split col:myCol positions:20,55,80
Tip: Numeric values for positions do not need to be in sorted order.
Output: Splits the myCol column into four separate columns, where:
myCol
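The rest of the output description was truncated; assuming the positions act as character offsets within each myCol value (an assumption), the four resulting columns would contain:
- column1: characters before position 20
- column2: characters from position 20 up to position 55
- column3: characters from position 55 up to position 80
- column4: characters from position 80 to the end of the value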
These commands are used when running the editor executable (UE4Editor.exe) to run as the game or a server using uncooked content.
These commands are not case sensitive.
Example:
UE4Editor.exe -game
The map file extension (.umap) here is optional. To load a map not found in the Maps directory, an absolute path or a relative path from the Maps directory can be used. In this case, the inclusion of the file extension is mandatory. The server IP address is a standard four-part IP address consisting of four values between 0 and 255 separated by periods. The additional options are specified by appending them to the map name or server IP address. Each option is prefaced by a '?', and can set a value with '='. Starting an option with '-' will remove that option from the cached URL options.
Examples:
MyGame.exe /Game/Maps/MyMap
UE4Editor.exe MyGame.uproject /Game/Maps/MyMap?game=MyGameInfo -game
UE4Editor.exe MyGame.uproject /Game/Maps/MyMap?listen -server
MyGame.exe 127.0.0.1
General Options
Server Options
Switches
These arguments can be passed to either the game or the editor, depending on the specific keyword and its intended usage. Some arguments are plain switches (-UNATTENDED) while others are setting switches that are "key=value" pairs (-LOG=MyLog.txt). These commands are not case sensitive. The syntax for passing plain switches is to preface each argument with a minus '-' and then the argument immediately afterward. Setting switches need no leading '-', with the exception of the server switches.
Example:
UE4Editor.
AutomatedMapBuild: Perform an automated build of a specified map.
BIASCOMPRESSIONFORSIZE: Override compression settings with respect to size.
BUILDMACHINE: Set as build machine. Used for deciding if debug output is enabled.
BULKIMPORTINGSOUNDS: Use when importing sounds in bulk. (Content.
DEVCON: Disable secure connections for developers. (Uses unencrypted sockets.)
DUMPFILEIOSTATS: Track and log File IO statistics..
INSTALLED: For development purposes, run the game as if installed.
INSTALLFW/
UNINSTALLFW: Set whether the handling of the firewall integration should be performed.
INSTALLGE: Add the game to the Game Explorer.
CULTUREFORCOOKING: Set language to be used for cooking.
LIGHTMASSDEBUG: Launch lightmass manually with -debug and allow lightmass to be executed multiple times.
LIGHTMASSSTATS: Force all lightmass agents to report detailed stats to the log.
LOG: When used as a switch (-log), opens a separate window to display the contents of the log in real time. When used as a setting (LOG=filename.log), tells the engine to use the log filename of the string that immediately follows.
LOGTIMES: Print time with log output. (Default, same as setting LogTimes=True in the [LogFiles] section of *Engine.ini.)
NOCONFORM: Tell the engine not to conform packages as they are compiled.
NOCONTENTBROWSER: Disable the Content Browser.
NOINNEREXCEPTION: Disable.)
NOPAUSE: Close the log window automatically on exit.
NOPAUSEONSUCCESS: Close the log window automatically on exit as long as no errors were present.
NORC: Disable the remote control. Used for dedicated servers.
NOVERIFYGC: Do not verify garbage compiler assumptions.
NOWRITE: Disable output to log..
VADEBUG: Use the Visual Studio debugger interface.
VERBOSE: Set compiler to use verbose output.
VERIFYGC: Force garbage compiler assumptions to be verified.
WARNINGSASERRORS: Treat warnings as errors.
Rendering
ConsoleX: Set the horizontal position for console output window.
ConsoleY: Set the vertical position for console output window.
WinX: Set the horizontal position of the game window on the screen.
WinY: Set the vertical position of the game window on the screen.
ResX: Set horizontal resolution for game window.
ResY: Set vertical resolution for game window.
VSync: Activate VSYNC via command line. (Prevents tearing of the image, but costs frame rate and causes input latency.)
EXEC: Executes the specified exec file.
FPS: Set the frames per second for benchmarking.
FULLSCREEN: Set game to run in fullscreen mode.
SECONDS: Set the maximum tick time.
WINDOWED: Set game to run in windowed mode.
Network
LANPLAY: Tell the engine to not cap client bandwidth when connecting to servers. Causes double the amount of server updates and can saturate client's bandwidth.
Limitclientticks: Force throttling of network updates.
MULTIHOME: Tell the engine to use a multihome address for networking.
NETWORKPROFILER: Enable network profiler tracking.
NOSTEAM: Set steamworks to not be used.
PORT: Tell the engine to use a specific port number.
PRIMARYNET: Affect how the engine handles network binding.
User
NOHOMEDIR: Override use of My Documents folder as home directory.
NOFORCEFEEDBACK: Disable force feedback in the engine.
NOSOUND: Disable any sound output from the engine.
NOSPLASH: Disable use of the splash image when loading the game.
NODATABASE: Do not use database, and ignore database connection errors.
NOLIVETAGS: Skip loading unverified tag changes from SQL database. Only load for current user..
Use another command-line argument to temporarily override which INIs are loaded by the game or editor. For example, if a custom 'MyGame.ini' is to be used instead of 'MyOldGame.ini', the argument would be -GAMEINI=MyGame.ini. This table lists the arguments used to override the different INI files used in UE4:
Redirects
Because the VIP Platform uses Nginx (not Apache), there are no .htaccess files. Redirects for URLs must be handled by alternative methods. The most suitable solution(s) for a site’s redirects should be chosen based on the amount and types of redirects needed.
- The Safe Redirect Manager plugin is a useful option for a small number of redirects (fewer than 300), and for redirects that will change frequently.
- To create a large number of redirects (greater than 300) for old, or "legacy", URLs that now return
404 HTTP response status codes, use the WPCOM Legacy Redirector plugin.
- Some redirects can be written directly into a site’s theme code.
- To use more than one domain per site, set up vip-config.php to handle redirecting secondary domains to the desired primary domain for a site. This is particularly useful for mapping domains on a multisite, where redirects between non-
www domains and
www variants do not occur automatically (see the sketch below).
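A minimal sketch of such a redirect in vip-config.php (the domain names are placeholders, and the logic should be adapted per site):
// Redirect a secondary domain to the primary www domain, preserving the request path.
if ( isset( $_SERVER['HTTP_HOST'] ) && 'secondary-domain.com' === $_SERVER['HTTP_HOST'] ) {
    header( 'Location: https://www.primary-domain.com' . $_SERVER['REQUEST_URI'], true, 301 );
    exit;
}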
Redirects with a
302 HTTP status are cached by VIP’s page cache for 1 minute, and redirects with a
301 HTTP status are cached for 30 minutes. | https://docs.wpvip.com/technical-references/redirects/ | 2022-01-16T18:44:36 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.wpvip.com |
...:
/etc/resolv.conf
/etc/hosts
127.0.0.1 localhost
<ip_address> <machine_name> localhost
<ip_address>
<machine_name> localhost
You are now ready to run the product.
Minimum Requirements for Using Continia Document Output
There's a significant difference between running Document Output in the on-premises version of Microsoft Dynamics 365 Business Central and using it online in the cloud version. This article describes the minimum requirements and things to note when running Document Output in Business Central online. You can find the on-premises requirements.
See also
Overview of Business Functionality
Business Central website | https://docs.continia.com/en-us/continia-document-output/development-and-administration/online/minimum-requirements | 2022-01-16T18:52:26 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.continia.com |
Synchronizing the Actuals data set
The Actuals data set contains Interaction History (IH) records after you synchronize it with the Interaction History database. You synchronize Actuals one time after upgrading Pega Platform from version earlier than 7.3. You might need to synchronize Actuals when you have cleaned up IH database tables by deleting records older than a given time stamp.
- In Dev Studio, click .
- Click Synchronize. Synchronization time depends on the number of IH records that are synchronized.
Automatic synchronization
Automatic synchronization takes place when you start Visual Business Director for the first time after upgrading Pega Platform from version earlier than 7.3. Interaction History data is loaded eagerly, aggregated, and the results of the aggregation are written to Cassandra. As a result, the first start might take longer. During subsequent starts, the Actuals data set and other VBD data sets are loaded lazily. | https://docs.pega.com/decision-management/84/synchronizing-actuals-data-set | 2022-01-16T20:19:54 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.pega.com |
Here you will learn how the TA/JSA/SWMS feature works in the mobile app for our new 2021 release
Index:
Please watch the video below which details our new TA/JSA/SWMS mobile app release
TA/JSA/SWMS 2021 Mobile Release (6:26)
If you still need assistance after watching this video then please keep on reading...
Accessing a Task Analysis via the mobile app
To access a Task Analysis on the SiteConnect Mobile app, log in and go to the three horizontal lines on the upper left hand corner, then scroll down and click TA/JSA/SWMS in the menu that appears.
Then select the Site that the TA is relevant to by clicking Change Site.
If there are TA's assigned to that Site, they will then appear on this list like so.
If you have been assigned as an Approver, Authorizer, Supervisor or an Assignee for this TA, then this will flash Required next to the position that you have been assigned.
Pending means that somebody else has been assigned this role and has yet to fulfill this.
To Approve, Authorize, Acknowledge or complete the TA, just click this TA in this list to access it. Then read through the TA and work through any steps that have been outlined.
Once completed, you can click Approve, Authorize or Acknowledge at the top of the TA (depending on what role you have been assigned via your account admins).
If you have just been assigned this TA to complete and the other appointed staff members have not Authorized and Approved the TA yet, you will not be able to acknowledge (or edit) this TA until this has been done.
You can also export the TA to a PDF via the orange prompt down the bottom of the screen if you wish.
When you check this TA again for this Site, you will now see that instead of flashing as Required it will now be Green and reflect what you have just done (again depending on your assigned role)
This will also be visible for your account admins in their web portal accounts.
Please note that if your admins have created a new version of this TA then progress will reset so you will need to Approve, Authorize or Acknowledge this again.
Some TA's may also have uploaded files that you can view. You will see these down the bottom of the TA.
Editing a TA
When you are viewing a TA within the mobile app, you can also edit it by clicking Edit in the upper right hand corner.
From here you will be able to edit the content of this TA by clicking in each field and filling in your own information.
Once you have finished editing the TA you can then Save it by clicking the Save prompt in the upper right hand corner.
Once you have Saved your edited TA you will have created a NEW VERSION of this TA. Because of this, you will now be prompted to add an Amendment Title and a reason for the amendment which will be visible to your account admins. These are compulsory to fill in as highlighted by the asterisks.
This will also mean that it will need to be Approved, Authorized and Acknowledged by whoever is responsible for this.
Once you have filled these out you can click Continue and this will create this new version of the TA which will then replace the original one. This version one will also be present in the My Forms list when you reenter this.
Workflows
Some TAs will also have Workflows, which are step-by-step breakdowns of the task itself.
Click on each step to enlarge it and see all relevant information
You can also Edit Workflow steps or Remove them entirely by clicking Edit on the upper right hand corner of the TA (once it has been Approved and Authorized), then scrolling down to the workflow field and clicking the relevant option.
When editing a Workflow, you will also be asked to add an amendment title as this creates a new Version much like editing the TA as a whole in the previous section of this article.
Task Analysis Documents
You can also view Task Analysis Documents by going back to the three horizontal lines on the upper right hand corner and press Task Analysis Documents in this menu. This will bring up a list of loaded documents for the Site that you can also change by clicking Change Site.
Once pressed, you can then view the file by clicking it on the next screen under Uploaded Files.
If there is no document present, then your account admins will need to be contacted and they will need to add the document on their end.
If you need any further help or have any questions please contact the support team by email [email protected] or Ph: 0800 748 763 | https://docs.sitesoft.com/ta-jsa-swms-in-the-mobile-app-2021 | 2022-01-16T19:02:40 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://play.vidyard.com/Ct9MNaqn6NyT3z3siLW8qD.jpg',
'TA 2021 Mobile Release'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Accessing%20TAs.gif?width=169&name=Accessing%20TAs.gif',
'Accessing TAs'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Change%20Site.png?width=213&name=Change%20Site.png',
'Change Site'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Vehciel%20inspection.png?width=212&name=Vehciel%20inspection.png',
'Vehciel inspection'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Required%20status.png?width=273&name=Required%20status.png',
'Required status'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/TA%20Approved.gif?width=179&name=TA%20Approved.gif',
'TA Approved'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Export.png?width=368&name=Export.png',
'Export'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Approver%20accepted.png?width=262&name=Approver%20accepted.png',
'Approver accepted'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/new%20version.png?width=192&name=new%20version.png',
'new version'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Uploaded%20files.png?width=192&name=Uploaded%20files.png',
'Uploaded files'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Edit%20TA.png?width=223&name=Edit%20TA.png',
'Edit TA'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Edit%20TA-1.png?width=202&name=Edit%20TA-1.png',
'Edit TA-1'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Save%20Movile.png?width=224&name=Save%20Movile.png',
'Save Movile'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Amendment%20menu.png?width=276&name=Amendment%20menu.png',
'Amendment menu'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/V3.png?width=207&name=V3.png',
'V3'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Workflow.png?width=227&name=Workflow.png',
'Workflow'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Workflow%20stepp.gif?width=211&name=Workflow%20stepp.gif',
'Workflow stepp'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Editing%20workflow.gif?width=220&name=Editing%20workflow.gif',
'Editing workflow'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/TA%20docs.gif?width=204&name=TA%20docs.gif',
'TA docs'], dtype=object)
array(['https://docs.sitesoft.com/hs-fs/hubfs/Upoaded%20files.png?width=181&name=Upoaded%20files.png',
'Upoaded files'], dtype=object) ] | docs.sitesoft.com |
When you move a vApp to another virtual datacenter, the vApp is removed from the source virtual datacenter.
Prerequisites
You are at least a vApp author.
Your vApp is stopped.
Procedure
- From the Navigator, select Compute > vApps.
- Select the vApp you want to move.
- From the More menu, select Move to...
- Select the virtual datacenter where you want to move the vApp.
- Click OK.
Results
The vApp is removed from the source datacenter and moved to the target datacenter. | https://docs.vmware.com/en/VMware-Cloud-Director/9.0/com.vmware.vcloud.tenantportal.doc/GUID-405347C9-181F-46EB-B6DB-84BD5AF3E708.html | 2022-01-16T19:38:09 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.vmware.com |
The instructions on this page explain how plain text passwords in configuration files can be encrypted using the secure vault implementation that is built into WSO2 products. Note that you can customize the default secure vault configurations in the product by implementing a new secret repository, callback handler etc. See the related topics for information about secure vault.
... topics given below for instructions.
...
Passwords are encrypted by executing the Cipher Tool. You must install and configure the Cipher Tool as explained below:
...For example, see that are created for Carbon Kernel.
Update the
cipher-tool.properties file and the
cipher-text.properties file with information about the passwords that you want to encrypt.
Follow the steps given below.
Open the
cipher-tool.properties
file stored in the
<PRODUCT_HOME>/repository/conf/security folder. This file should contain information about the configuration files in which the passwords (that require encryption) are located. The following format is used:
Example 1: Consider the admin user's password in the
user-mgt.xmlfile shown below.'.
Example 2: Consider the password that is used to connect to an LDAP user store (configured in the
user-mgt.xmlfile) shown below.:
Using the
UserManager.Configuration.Property.ConnectionPassword alias:
Example 3: Consider the keystore password specified in the
catalina-server.xml file shown below.
rss-config.xmlfile.
Add the following to the
cipher-tool.propertiesfile:
Add the following to the
cipher-text.propertiesfile:
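The actual entries were lost from this page; as an illustration only (the alias, file path, and xpath are assumptions), the two files typically hold matching entries such as:
# cipher-tool.properties: alias mapped to the configuration file and xpath of the password
Datasources.RSS.Configuration.Password=repository/conf/etc/rss-config.xml//rss-configuration/password,false
# cipher-text.properties: the plain-text password for the same alias, wrapped in square brackets
Datasources.RSS.Configuration.Password=[your_plain_text_password]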
...
Open a command prompt and go to the
<PRODUCT_HOME>/bin directory, where the
ciphertool.sh script is stored. Run the
ciphertool.sh script using one of the following commands:
Use the command given below to simply execute the script. You will be required to provide the keystore password (for authentication) in a later step.
the cipher tool scripts (for Windows and Linux) are stored.
Execute the cipher tool script from the command prompt using the command relevant to your OS:
On Windows:
./ciphertool.bat -Dconfigure
On Linux:
./ciphertool.sh -Dconfigure
Use the command given below if you want to provide the keystore password as you run the script. The default keystore password is "wso2carbon".
- The Cipher Tool reads the alias values and their corresponding plain text passwords from the
cipher-text.properties file.
This step is required only if you did not provide the keystore password in step 1. The following message will be prompted, requesting for the keystore password: "[Please Enter Primary KeyStore Password of Carbon Server : ]". Enter the keystore password (which is "wso2carbon" for the default keystore).
If the script execution completed successfully, the passwords are encrypted. For example, consider the file given below, which does not use xpath notations. As shown below, the password of the
LOGEVENT appender is set to
admin:
Related Topics
... | https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=45950313&originalVersion=24&revisedVersion=66 | 2022-01-16T19:15:00 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.wso2.com |
InitializeAuthenticateManagementKeyResponse Class
Namespace: Yubico.YubiKey.Piv.Commands
Assembly: Yubico.YubiKey.dll
The response to the initialize authenticate management key command.
public sealed class InitializeAuthenticateManagementKeyResponse : PivResponse, IYubiKeyResponseWithData<(bool, ReadOnlyMemory<byte>)>, IYubiKeyResponse
Implements
Remarks
This is the partner Response class to InitializeAuthenticateManagementKeyCommand.
The data returned is a tuple consisting of a boolean and a
ReadOnlyMemory<byte>. The boolean indicates if this is mutual
authentication or not,
true for mutual auth,
false for
single. The byte array is "Client Authentication Challenge".
See the comments for the class InitializeAuthenticateManagementKeyCommand, there is a lengthy discussion of the process of authenticating the management key, including descriptions of the challenges and responses.
It is likely that you will never need to call
GetData in this
class. You will pass an instance of this class to the constructor for
CompleteAuthenticateManagementKeyCommand, which will process the
challenge. | https://docs.yubico.com/yesdk/yubikey-api/Yubico.YubiKey.Piv.Commands.InitializeAuthenticateManagementKeyResponse.html | 2022-01-16T18:28:15 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.yubico.com |
7. Troubleshooting
7.1. MacOs Catalina Compatibility
With the release of MacOs Catalina, Apple requires apps to be signed and/or notarized. This helps users make sure they open apps from trusted developers. It appears that the transition files that are needed to upgrade older IDF files to the latest version of EnergyPlus are not signed. Thus, users using archetypal on the MacOs platform might encounter an error of this type: “Transition cannot be opened because the developer cannot be verified”. (see Missing transition programs for more details on downloading and installing older transition programs).
It seems that clicking “cancel” will still work, although the prompt will appear for all older transition files repetitively. An issue has been submitted here on the EnergyPlus github repository. Hopefully, the developers of EnergyPlus will be able to address this issue soon.
7.2. Missing transition programs
For older EnergyPlus file versions (< 7-1-0), the necessary transition files are not included with the EnergyPlus installer. Users must download and install missing transition programs manually. This can be acheived in two simple steps:
Navigate to the EnergyPlus Knowledgebase and download the appropriate transition programs depending on the platform you are using (MacOs, Linux or Windows). These programs come in the form of a zipped folder.
Extract all the files from the zipped folder to your EnergyPlus installation folder at ./PreProcess/IDFVersionUpdater/. For example, on MacOs with EnergyPlus version 8-9-0, this path is
/Applications/EnergyPlus-8-9-0/PreProcess/IDFVersionUpdater/.
| https://archetypal.readthedocs.io/en/stable/troubleshooting.html | 2022-01-16T20:06:53 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['_images/unsigned_app_error.png', '_images/unsigned_app_error.png'],
dtype=object)
array(['_images/extract_transition_EP_files.gif',
'_images/extract_transition_EP_files.gif'], dtype=object)] | archetypal.readthedocs.io |
Define signoff roles and schemes
This topic applies to the Canadian version of CaseWare AnalyticsAI, part of the CaseWare Cloud suite.
CaseWare AnalyticsAI offers the ability to customize signoff roles and schemes. Custom signoff roles allow you to define labels for preparer and reviewer roles that can be used to create custom sign-off schemes. You can assign custom sign-off schemes to documents to track the required reviewer and preparer workflows.
For more information, see Define signoff roles, Set up signoff schemes and Review and customize signoff schemes. | https://docs.caseware.com/2020/webapps/31/29/Explore/AnalyticsAI-CAN/Define-signoff-roles-and-schemes.htm?region=us | 2022-01-16T18:08:12 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/documentation_files/2020/webapps/31/Content/en/Resources//CaseWare_Logos/casewarelogo.png',
None], dtype=object) ] | docs.caseware.com |
Eggplant Functionalの使用方法.
SUTに接続する
使用する接続方法は接続するシステムのタイプ、すなわちモバイルデバイス、デスクトップコンピュータ、その他に応じて変わります。 The Connecting to SUTs section includes information on how to use the Eggplant Functional interface to connect to Android devices, iOS devices, as well as different desktop operating systems.
スクリプトを作成する
スクリプトを作成して実行するには多くの時間がかかることがたまにあります。. SenseTalkスクリプト言語とデバッグのテクニックの基本もこの章で解説しています。
スクリプトを実行する
You can run scripts from the Eggplant Functional interface as well as from the command line. 複数のスクリプトを実行するスケジュールの設定も可能です。 In the Running Scripts section, you'll also find information about reading test results, about using keyword-driven testing with Eggplant Functional, and about using Eggdrive to incorporate external tests with Eggplant Functional.
設定
Go to the Preferences section to learn about customizing the Eggplant Functional environment by adjusting the Preference settings.
Using Source Control Management (SCM)
Go to SCM to understand how to use Eggplant Functional to work with repositories in SCM. | https://docs.eggplantsoftware.com/ja/ePF/using/epf-using-eggplant-functional.htm | 2022-01-16T18:30:19 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.eggplantsoftware.com |
Why Duration and not Period?Source:
vignettes/time-span-objects.Rmd
time-span-objects.Rmd
This article explains why the
mctq package uses
Duration instead of
Period (objects from the lubridate package) as the default object for time spans.
Duration versus
Period objects
The
lubridate package offers three types of objects for storing and manipulating time spans:
Duration,
Period, and
Interval.
To understand the difference between
Duration and
Period objects you must first remember that the timeline is not always consistent, as it can have irregularities caused by, for example, leap years, DST (Daylight Saving Time), or leap seconds. That’s when
Period objects differ from
Duration objects.
Duration objects represent time spans by their exact number of seconds. That is, a
Duration object of 1 hour will always represent a 1-hour time span, even with possible timeline irregularities.
start <- lubridate::ymd_hms("2020-01-01 10:00:00", tz = "America/New_York") #> Warning in system("timedatectl", intern = TRUE): running command 'timedatectl' #> had status 1 start + lubridate::duration(1, units = "hour") #> [1] "2020-01-01 11:00:00 EST"
Period objects work a little bit differently. They are a special type of object developed by the
lubridate team that represents “human units”, ignoring possible timeline irregularities. That is to say that 1 day as
Period can have different time spans when looking to a timeline after an irregular event.
To illustrate this behavior, take the case of a DST event, starting at 2016-03-13 01:00:00 EST.
start <- lubridate::ymd_hms("2016-03-13 01:00:00", tz = "America/New_York") start + lubridate::duration(1, units = "hour") #> [1] "2016-03-13 03:00:00 EDT" start + lubridate::period(1, units = "hour") #> [1] NA
You might ask: why the result is
NA when adding 1 hour as a
Period object? That’s because
Period objects ignore time irregularities. When the DST starts at
01:00:00 the timeline “jumps” to
03:00:00, so the period from
02:00:00 to
02:59:59 doesn’t exist.
base: 2016-03-13 01:00:00, tz = "America/New_York" DST + 1 hour -----|---------------| |---------------|-----> 01:00 NA 03:00 04:00 From the `Duration` perspective: base + 1 hour = 2016-03-13 03:00:00 |-------------------------------|---------------| 1 hour 1 hour From the `Period` perspective: base + 1 hour = NA |---------------|---------------|---------------| 1 hour 1 hour 1 hour
Period objects are useful when you need to consider the human units of time. For example:
start <- lubridate::ymd_hms("2016-03-13 01:00:00", tz = "America/New_York") start + lubridate::duration(1, units = "day") #> [1] "2016-03-14 02:00:00 EDT" start + lubridate::period(1, units = "day") #> [1] "2016-03-14 01:00:00 EDT"
In this case,
1 day, by human standards, represents the same
time of day of the next day. But, considering the DST event, that
1 day has a time span of 23 hours.
You can learn more about
lubridate time span objects in the Dates and times chapter from Wickham & Grolemund’s book “R for Data Science”.
The MCTQ context
At first glance you might think that, since MCTQ was made for human respondents, the best representation for time spans would be the one that better represents “human units”, right? That would be fine if we were talking about a time span in a timeline irregularity context, but MCTQ doesn’t deal with this scenario.
When using MCTQ, the interest is to measure the exact time span between one local time to another. By ignoring irregularities in the timeline,
Periods produce a fluctuating time span, hence
Period objects are not compatible with other time spans like objects (e.g.,
hms).
hms::parse_hm("10:00") + lubridate::period(1, units = "hours") #> Error: Incompatible classes: <hms> + <Period>
In summary,
Period objects were made considering a very specific context that doesn’t apply to MCTQ. That’s why
Duration objects are the default object for time spans. | https://docs.ropensci.org/mctq/articles/time-span-objects.html | 2022-01-16T18:27:26 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['../logo.png', None], dtype=object)] | docs.ropensci.org |
an Okta admin account with the correct permissions. For more information on enabling multi-factor authentication on Okta, please click the link below.
Okta Multi-Factor Authentication Overview
1. Log in to Okta1. Log in to Okta
2. Navigating Security Features2. Navigating Security Features
Locate Security and then click on Multifactor followed by Factor Types
3. Selecting the Factor Type3. Selecting the Factor Type
You will be shown several options for implmenting Multi-Factor Authentication. Please select the Google Authenticator app option. The Trusona app will work just as well in place of it.
Users Signing in Using MFAUsers Signing in Using MFA
4. App Installation & Sign-in4. App Installation & Sign-in
Before a user logs into their account, make sure they have the Trusona app installed on their mobile device. When they log in with their regular credentials, they will be asked to enter additional credentials. Click on Set
6. Finalize6. Finalize
Enter the code from the app into the screen, then click to submit it. The user should now be able to log into their account
Setup complete! The next time someone logs in to their Okta Account and are prompted for a One-time passcode, they can use the Trusona app to log in. | https://docs.trusona.com/totp/okta/ | 2022-01-16T18:09:07 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.trusona.com/images/totp-integration-images/okta/mfa_options.png',
'Select the Google Authenticator option'], dtype=object) ] | docs.trusona.com |
FCS Files¶
Uploading FCS Files¶
Howto
- Drag and drop folders or files into your experiment. Uploads will be tracked in the upper-right corner. You can navigate to other pages within CellEngine during the upload.
If you attempt to upload FCS files to an experiment that already has files with the same filenames uploaded, you will be given the option of skipping the duplicate files and only uploading the new files.
FCS File Compatibiltiy¶
CellEngine has industry-leading data file compatibility, with support for FCS2.0, FSC3.0, FCS3.1 and FCS4.0 files. We continually validate with data files from:
- Acea Bio: NovoCyte
- Apogee Flow: Micro, Universal, Auto40
- Attune: Attune (original and new)
- Beckman Coulter: Cyan, Epics XL-MCL, CytoFLEX, Gallios, Navios, MoFlo Astrios, MoFlo XDP
- Beckton Dickinson (BD): Accuri C6, FACSAria, FACSCalibur, FACSCantoII, FACSort, FACSVerse, LSRFortessa, LSRII, Symphony X50
- Cytek: Aurora, Northern Lights, dxP10, dxP8, xP5
- Fluidigm: CyTOF1, CyTOF2, Helios
- Luminex: Amnis ImageStream Mk II
- Millipore: easyCyte 6HT, Guava
- Miltenyi Biotec: MACSQuant
- Partec: CyFlo Cube 6, PAS
- Propel Labs: YETI/ZE5
- Sony: iCyt Eclipse, SA3800 Spectral Analyzer
- Stratedigm: S1400
CellEngine tolerates invalid FCS files in most cases, and has the ability to correct invalid spill strings (compensation matrices), unescaped delimiters, empty keywords, incorrect segment coordinates and other violations.
CellEngine also generally applies the same adjustments that the instrument manufacturers apply in their own software, which includes vendor-specific scaling and transformations not captured in the FCS specification.
Viewing FCS File Metadata¶
You can view meta-information about FCS files, such as FCS header values, laser delays and PMT voltages, by clicking on the three-dot menu for a file in the list of FCS files on the experiment summary page, and then selecting view details.
For CyTOF Helios files that have only been processed with the Fluidigm Helios software, this dialog also displays the last tuning results and other instrument performance information.
Concatenating FCS Files¶
Concatenating merges two or more files into one.
Howto
- In the FCS file list on either the experiment summary page or the annotation page, select the files that you want to concatenate.
- In the menu bar above the file list, select concatenate files. A dialog will open.
- Select whether or not to add a file number column to the output file.
- Select whether or not to delete the input files after successfully concatenating.
- Click concatenate. The concatenated file will be added to your experiment.
You can elect to add a file number column (channel) to the output file so that you can see which file each event (cell) came from. The values in this column have a uniform random spread (±0.25 of the integer value) to ease visualization. While this column can be useful for analysis, it will cause your experiment to have FCS files with different panels unless you delete all FCS files that have not been concatenated.
During concatenation, any FCS header parameters that do not match between files will be removed, with some special exceptions:
$BTIM(clock time at beginning of acquisition) and
$DATEwill be set to the earliest value among the input files.
$ETIM(cock time at end of acquisition) will be set to the latest value among the input files.
$PnR(range for parameter
n) will be set to the highest value among the input files.
All channels present in the first selected file must also be present in the other selected files.
Importing FCS Files from Other Experiments¶
Importing copies an FCS file from another experiment into the current experiment.
Howto
- In the FCS file list on the experiment summary page, click import FCS file.
- Select the experiment from which you want to import the file.
- Select the FCS file that you want to import.
- Click import.
Importing and Exporting FCS Files from and to S3-Compatible Services¶
FCS files can be imported from and exported to S3-compatible services (AWS S3, Google Cloud Storage, etc.) using the CellEngine API.
Control Files¶
Files set as "control files" in the annotation page will only be visible in the compensation editor, the experiment summary and the annotations page. This lets you exclude compensation controls, calibration beads and other files that you don't want to see in your analysis. | http://docs.cellengine.com/fcsfiles/ | 2022-01-16T19:42:49 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/images/fileNumber.png',
'example of concatenated file with file number parameter'],
dtype=object) ] | docs.cellengine.com |
Couchbase
Detailed information on the Couchbase state store component
Component format
To setup Couchbase state store create a component of type
state.couchbase. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1 kind: Component metadata: name: <NAME> namespace: <NAMESPACE> spec: type: state.couchbase version: v1 metadata: - name: couchbaseURL value: <REPLACE-WITH-URL> # Required. Example: "" - name: username value: <REPLACE-WITH-USERNAME> # Required. - name: password value: <REPLACE-WITH-PASSWORD> # Required. - name: bucketName value: <REPLACE-WITH-BUCKET> # Required.
WarningThe above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Setup Couchbase
You can run Couchbase locally using Docker:
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 couchbase
You can then interact with the server using
localhost:8091 and start the server setup.
The easiest way to install Couchbase on Kubernetes is by using the Helm chart:
helm repo add couchbase helm install couchbase/couchbase-operator helm install couchbase/couchbase-cluster) | https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-couchbase/ | 2022-01-16T18:50:29 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.dapr.io |
- 1. Prerequisites
- 2. Getting Started
- 3. Create an Identity Provider
- 4. Add Origin
- 5. Create new Sign-On policy
- 6. Create Routing Rule
- 7. Create a Trusona Registration application
- 8. Customizing your Trusona experience
- 9. Okta Identifier Registration
1. Prerequisites1. Prerequisites
Before proceeding, ensure that you have the following steps completed:
- Admin access to Okta Cloud IAM.
- Have admin access to the Trusona Dashboard. If your company does not have an account, visit the Trusona Dashboard to create one. Otherwise, consult with the owner of your company’s Trusona Dashboard account in order to create the integration.
2. Getting Started2. Getting Started
2.1. Log into the Okta admin portal2.1. Log into the Okta admin portal
If you are logged into the developer portal by default than select the dropdown that reads Developer Console and click Classic UI
If you see this page, click on the Admin button.
2.2. Create API token2.2. Create API token
Navigate to Security > API and then click the Create Token button.
Copy your API token (Token Value) and save it somewhere safe. You will be using it in later steps
2.3. Create a group2.3. Create a group
Navigate to Directory > Groups > click Add Group and create a name and a description.
- Name the group Trusona
- Provide a group description
- Click Add Group
This group is used to prevent users, who are using Trusona for passwordless login, from being prompted for an additional second factor of authentication.
You don’t need to maintain the membership of this group. Group membership is automatically managed by Trusona via the Okta API. Do not add any members to the group.
2.4. Navigating the dashboard2.4. Navigating the dashboard
From the Trusona Integration dashboard, navigate to Okta Integrations & click on Create Okta Integration.
2.5. Inputing Data2.5. Inputing Data
4 different input fields will be shown.
Name
- Okta Tenant URL
- This will look similar to.
- API Token
- The value from the token you made in Step 2.
- Group ID
- This is the value from the URL you copied in Step 3.
2.6. Accessing generated data2.6. Accessing generated data
Click on Save after entering all relevant information. Trusona will generate data that you will use in the Okta platform. Don’t worry about the warning message regarding “Missing metadata“ for now.
3. Create an Identity Provider3. Create an Identity Provider
Navigate to Security > Identity Providers > Click Add Identity Provider > Click SAML 2.0 IdP.
Note: If the “Add Identity Provider” button does not have a drop down then click “Add Identity Provider” and continue with the steps below.
Complete the form to add the new SAML IdP using the information below:
3.1. General Settings3.1. General Settings
3.2. Authentication Settings3.2. Authentication Settings
3.3. JIT Settings3.3. JIT Settings
3.4. SAML Protocol Settings3.4. SAML Protocol Settings
Click the ‘View’ button on the Okta integration in your Trusona Dashboard to view your IdP Issuer URL, IdP Single Sign-On URL, and Signature Certificate.
Once the information in the tables above has been entered into the form, click the Add identity provider button to continue.
4. Add Origin4. Add Origin
Navigate to Security > API > Trusted Origins and click the Add Origin button.
- Name your Origin Trusona.
- To create your Origin url, copy your IDP Single Sign-On from the Okta integration in the Trusona Dashboard then delete the
/saml. Example:
example.gateway.trusona.net
- Enter you newly created Origin URL.
- Check both CORS and Redirect checkboxes.
5. Create new Sign-On policy5. Create new Sign-On policy
5.1. Navigate to “Security” > “Okta Sign-on Policy”5.1. Navigate to “Security” > “Okta Sign-on Policy”
To create the new policy, click the Add New Okta Sign-on Policy button.
- Enter TrusonaUsers for the Policy Name.
- Choose a meaningful description for the Policy Description.
- Add the group you created in step 5 in the Assign to Groups section.
- Click Create Policy and Add Rule.
- Rule Name: Name rule (This rule allows users to authenticate from anywhere).
- Ensure that Require secondary factor is unchecked. (If “Require secondary factor” is checked, users may see unnecessary 2FA prompts after using Trusona to login to Okta.)
- After creating a rule make sure the new rule is activated.
6. Create Routing Rule6. Create Routing Rule
Note: Do not move onto step 10 until you have completed step 9. Otherwise you may be locked out of your account.
- Navigate to Security > Identity Providers > Routing Rules.
- Click the Adding Routing Rule button.
- Match the fields below.
- Click Create Rule.
Trusona recommends that this newly created routing rule be placed above existing routing rules. This ensures that users are redirected to the Trusona IdP for authentication. Your specific implementation and/or deployment needs may require the rule to be placed somewhere other than first in the list.
7. Create a Trusona Registration application7. Create a Trusona Registration application
The Trusona Registration application helps your users link their Okta account to their Trusona Account. This process guarantees that users are identified by the Trusona IdP with a known and valid Okta identifier. All users that intend to use Trusona to login with Okta should complete the registration process described below before attempting to use Trusona to login to Okta.
7.1. General Settings7.1. General Settings
- Applications > Applications > Add Application.
- Click Create New App.
- Choose the SAML 2.0 optioon.
- Click Create.
- Click Next.
7.2. Configure SAML7.2. Configure SAML
- Navigate to the Okta integration in the Trusona Dashboard -> Click Actions -> Show -> Under “Trusona Registration Application”, copy the IdP Single Sign-on URL. The url will end in
/registrations. Example:**/registrations
- In Okta, re-open the Trusona application and enter the IdP Single Sign-On URL.
- Check on “Use this for Recipient URL and Destination URL”.
- Audience URL (SP Entity ID): Enter
- Click Next.
7.3. Upload the Okta X.509 Certificate to Trusona7.3. Upload the Okta X.509 Certificate to Trusona
- In Okta, Applications -> Applications -> Trusona -> Sign On -> Click View Setup Instructions under “SAML 2.0 is not configure until you complete the setup instructions” prompt -> Scroll down to X.509 Certificate -> Click Download Certificate.
- Go to the Trusona Dashboard -> On the left hand side, click on Generic SAML integration and the Okta integration you created will be listed.
- Select Actions -> Edit -> Under Certificate click Choose File -> Upload the X.509 Okta Certificate -> Click Save in the bottom left corner.
7.4. Feedback7.4. Feedback
- Click the radio button “I’m an Okta customer adding an internal app”.
- Click Finish.
7.5. Create an Assignment7.5. Create an Assignment
Within the new Trusona application > Assignment > Assign.
- Assign to Groups.
- Select Everyone.
- Click Assign.
- Click Done.
8. Customizing your Trusona experience8. Customizing your Trusona experience
The Trusona Gateway (pictured below) includes default styling that will be familiar to your users using the Trusona App.
:
8.1. Provide images8.1. Provide images
- Hero image: 1440 x 1800 px
- Logo image: 500 x 500 px
8.2. Provide hex values8.2. Provide hex values
- Animated dot color: this is the color dots that animate
- List of QR colors: multiples of the same color will appear more (provide 2 hex values)
- Link color: also changes the Okta widget button colors
- Text color:
- Background color: affects background behind the QR, usually we just do pure white (#FFFFFF)
9. Okta Identifier Registration9. Okta Identifier Registration
Users who intend to use Trusona to login to Okta must complete these required one-time steps.
- Download and install the Trusona App.
- Register in the Trusona App.
- Login to Okta using their existing username and password.
- Find, and click on, the Trusona application “chiclet” created in Step 10.
- Scan the QR code with the Trusona App.
- Accept and complete the Trusonafication.
The user’s Okta identifier has now been linked to their Trusona account and they are now ready to use Trusona to login with Okta.
Please see Integrating Trusona and Okta SCIM for SCIM provisioning. | https://docs.trusona.com/integrations/okta-integration/ | 2022-01-16T18:16:34 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.trusona.com/images/integration-images/okta/step-1.png',
'Switch to Classic UI'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/step-1-2.png',
'Click admin button'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/step-2.png',
'Navigate to Security'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/step-2-2.png',
'Create and copy API token'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/step-3.png',
'Create a new group'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/4.png',
'Navigate the Trusona Integration dashboard'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/5.png',
'Input the required data'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/6.png',
'View data'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/7.png',
'Create an identity provider'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/8.png',
'Create a new sign-on policy'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/9.png',
'Create a new routing rule'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/10.png',
'Create a new application'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/11.png',
'Choose SAML 2.0'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/12.png',
'Select the correct configuration'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/13.png',
'Assign to Groups'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/14.png',
'Assign to Everyone'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/step10-image1.png',
'Customize'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/step10-image3.jpg',
'Customize'], dtype=object)
array(['https://docs.trusona.com/images/integration-images/okta/step10-image2.jpg',
'Customize'], dtype=object) ] | docs.trusona.com |
# API Principles
The SensorWeb API allows access to all resources available on an OSH hub, including access to historical data, real-time data feeds and tasking.
In addition to the traditional REST operations, this API also exposes Websocket and MQTT endpoints to retrieve real-time events corresponding to resource additions, modifications and deletions, as well as push real-time observations into the system.
# REST API
This API loosely follows REST principles, by providing read/write access to the following hierarchy of resources:
- /systems
- /details
- /history
- /fois
- /datastreams
- /controls
- /tasks
- /status
- /commands
- /featuresOfInterest (sampling features or refs to sampled/domain feature)
- /members (for procedure groups)
- /datastreams
- /observations
- /observations
- /fois
- /history
- /members (for feature collections)
REST calls are implemented with the 4 traditional HTTP operations + the PATCH operation for more efficient partial updates:
- GET to retrieve individual resources or resource collections
- POST to create new resources in a parent collection
- PUT and PATCH to modify an existing resource
- DELETE to delete an existing resource
GET operations support query parameters to further filter the retrieved content. See the OpenAPI specification or the request examples for more details.
The full OpenAPI documentation is available here (opens new window)
# Websocket Binding
# Subscription
A websocket request can be issued on all ressource collections to get notified of resource changes. The URLs to use are the same as the URLs used for normal GET requests, except that they use the
ws:// or
wss:// protocol. Most query parameters used to filter collections are also supported.
Additional query parameters allow controling the kind of events to subscribe to. These additional parameters are:
eventTypes: The type of event(s) to subscribe to. Must be one or more string from the following enum [
ADDED, MODIFIED, REMOVED, ENABLED]
replaySpeed: This OSH extension allows replaying historical data at the desired speed. If this value is equal to
1.0, the requested data is replayed at the same rate the phenomenon actually happened (as indicated by the phenomenonTime property). If greather than
1.0, the playback will be accelerated by the corresponding factor. If lower than
1.0, the playback will be slowed down by the corresponding factor.
TIP
Although it is simpler to use than the MQTT binding, one restriction of the Websocket API is that it doesn't allow a client to subscribe to multiple collections at a time in the same connection.
When subscribing to a websocket on an observation collection, the default time parameter is
now/.., which corresponds to a request for real-time data. By changing the time parameter, it is possible to request a replay of historical data as well.
The JSON object sent through a websocket connection includes extra property providing information about the event itself:
{ '@eventType': 'ADDED', '@eventTime': '2020-03-06T15:23:46.132Z' 'id': 'ef4c5a2', 'name': 'Weather station', 'description': 'Weather station', ... }
The client can use a
select filter (e.g.
select=id,name) to strip some information and receive a minimal event object.
# Data Push
The Websocket incoming channel can also be used to push observations and commands into the system.
Observation data can be ingested by opening a channel on a
datastream/{id}/observations sub-collection. The payload format must be indicated by the
format query parameter or the
Content-Type HTTP header.
Likewise, commands can be submitted by opening a channel on a
controls/tasks sub-collection.
# MQTT Binding
The MQTT binding works slightly differently as it is available through it's own TCP port, separate from OSH's embedded HTTP server port. The MQTT endpoint is thus always the same and the resource URLs (including any query parameters) are used as MQTT topics instead.
# Subscribe
An example MQTT SUBSCRIBE request is given below:
The topic name can include filtering parameters:
# Publish
MQTT PUBLISH requests can also be used to post new observation resources. They must target a specific datastream by using its nested
observations collection, like so:
The datastream itself must have been previously created with the HTTP JSON API.
# MQTT over Websocket
In order to allow the MQTT endpoint to be used by web clients written in Javascript, the SensorWeb API implementation also supports MQTT over websocket.
The websocket endpoint to use is a sub-resource of the API root URL, for example:
wsx://demo.opensensorhub.org/api/mqtt
The MQTT.js (opens new window) library can be used to connect to OSH SensorWeb API endpoint using this protocol. | http://docs.opensensorhub.org/v2/web/sensorweb-api/intro.html | 2022-01-16T19:53:14 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.opensensorhub.org |
The Compile Menu¶
More details about map compiling can be found here.
Compile Options¶
- Save Map: Saves the map before compiling.
- Run CaBSP: Runs the CaBSP compiler when compiling (this can’t be deactivated since it is the essential part of compiling a map).
- Run CaPVS: Runs the CaPVS (Potentially Visibility Set) compiler when compiling a map.
- Run CaLight: Precomputes static lighting when compiling a map.
- Start Engine: Starts the engine when the map is compiled. | https://cafu.readthedocs.io/en/latest/mapping/cawe/menureference/compile.html | 2022-01-16T18:56:59 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['../../../_images/menucompile.png', 'image0'], dtype=object)] | cafu.readthedocs.io |
The YARN registry is a location into which statically and dynamically deployed applications can register service endpoints; client applications can look up these entries to determine the URLs and IPC ports with which to communicate with a service.
It is implemented as a zookeeper tree: services register themselves as
system services, under the registry path
/system, or
user services, which are registered under
/users/ where
USERNAME
is the name of the user
registering the service.
USERNAME
As the purpose of the mechanism is to allow arbitrary clients to look up a service, the entries are always world readable. No secrets should be added to service entries.
In insecure mode, all registry paths are world readable and writeable: nothing may be trusted.
In a secure cluster, the registry is designed to work as follows:
Kerberos + SASL provides the identification and authentication.
/systemservices can only be registered by designated system applications (YARN, HDFS, etc)/
User-specific services can only be registered by the user deploying the application.
If a service is registered under a user's path, it may be trusted, and any published public information (such as HTTPS certifications) assumed to have been issued by the user.
All user registry entries should also be registered as world writeable with the list of system accounts defined in
hadoop.registry.system.accounts; this is a list of ZK SASL-authenticated accounts to be given full access. This is needed to support system administration of the entries, especially automated deletion of old entries after application failures.
The default list of system accounts are
yarn,
mapred,
hdfs, and
hadoop; these are automatically associated with the Kerberos realm of the process interacting with the registry, to create the appropriate
sasl:account@REALM ZKentries.
If applications are running from different realms, the configuration option
hadoop.registry.kerberos.realmmust be set to the desired realm, or
hadoop.registry.system.accountsconfigured with the full realms of the accounts.
There is support for ZooKeeper
id:digestauthentication; this is to allow a user's short-lived YARN applications to register service endpoints without needing the Kerberos TGT. This needs active use by the launching application (which must explicitly create a user service node with an id:digest permission, or by setting
hadoop.registry.user.accounts, to the list of credentials to be permitted.
System services must not use id:digest authentication —nor should they need to; any long-lived service already needs to have a kerberos keytab.
The per-user path for their user services,
/users/, is created by the YARN resource manager when users launch services, if the RM is launched with the option
USERNAME
hadoop.registry.rm.enabledset to
true.
When
hadoop.registry.rm.enabledis true, the RM will automatically purge application and container service records when the applications and containers terminate.
Communication with ZK is over SASL, using the
java.security.auth.login.configsystem property to configure the binding. The specific JAAS context to use can be set in
hadoop.registry.jaas.contextif the default value,
Client, is not appropriate.
ZK Paths and Permissions:
All paths are world-readable; permissions are set up when the RM creates the root entry and user paths and hadoop.registry.secure=true.
Configuration options for secure registry access | https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.1/bk_security/content/zookeeper_acls_bp_yarn_registry.html | 2022-01-16T18:31:56 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.cloudera.com |
Support for file import from third party payment services
Business value
Today when importing customer payments generated from third party payment services, all payments are imported as one amount. This obscures the chance of identifying from where of the payments origin. The importing service will be extended to support import of payment files from third party payment services such as Klarna and Teller, so that each payment will be created with a payment line in the cash receipt journal.
Feature details
Payment Management will support file import from third party payment services, making it possible to import the customer payment files directly from these parties into the cash receipt journal. | https://docs.continia.com/en-us/continia-payment-management/new-and-planned/payment-and-cash-receipts/support-for-file-import-from-third-party-payment-services | 2022-01-16T18:37:22 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.continia.com |
<<
Joyent Two-Factor Authentication Doc
1. Log in to Joyent1. Log in to Joyent
Log in to the Joyent Compute Service Portal
2. Username & Two-Factor Authentication2. Username & Two-Factor Authentication
From the main page, click on your username in the upper-righthand corner. Select Two-Factor Authentication
3. Enabling Two-Factor Authentication3. Enabling Two-Factor Authentication
A popup should appear informing you that the security feature is disabled. Make sure you have the Trusona App installed on your mobile device and then click on the Enable button.
4. Scan the QR Code4. Scan the QR Code
While Joyent says to use Google Authenticator or Duo, the Trusona App will work just as well in their place. It should now show that Two-Factor Authentication is Enabled.
Setup complete! The next time you log in to Joyent and are prompted for a One-time passcode, you can use the Trusona app to log in.
Please note that enabling Two-Factor Authentication will only protect your personal account. It will not apply to any instances you may create through Joyent. | https://docs.trusona.com/totp/joyent/ | 2022-01-16T19:31:04 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['https://docs.trusona.com/images/totp-integration-images/joyent/joyent_logo.png',
'Joyent'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/joyent/enable.png',
'Click on the Enable button'], dtype=object)
array(['https://docs.trusona.com/images/totp-integration-images/joyent/joyent_scan.png',
'Scanning the code'], dtype=object) ] | docs.trusona.com |
Enjoy the Holiday season!
Book Creator
Add this page to your book
Add this page to your book
Book Creator
Remove this page from your book
Remove this page from your book
This is an old revision of the document!
Table of Contents
HOWTO articles - Security
Securing your computer is an ongoing process. The following guides will help you secure your Slackware installation, be it for server, workstation or laptop needs. Make sure you subscribe to the slackware-security mailing list. All security announcements since 1999 are available on.
This section contains articles related to securing your Slackware based system and network.
Inspired? Want to write a Security HOWTO page yourself?
Type a new page name (no spaces - use underscores instead) and start creating! You are not allowed to add pages
Type a new page name (no spaces - use underscores instead) and start creating! You are not allowed to add pages
Security
Physical security
Network security
- Firewall
- Protecting SSH connections from brute-force attacks: Install DenyHosts on Slackware
- Use only SSH keys instead of passwords for SSH connections: Using SSH keys
- Network services: the following services can be tweaked:
File System Security
- Encryption
- Encrypt swap space to protect sensitive contents Enabling Encrypted Swap
- File Permissions
- Track system changes with OSSEC
Overview of Security HOWTOS | https://docs.slackware.com/doku.php?id=howtos:security:start&rev=1358908408 | 2022-01-16T19:26:13 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.slackware.com |
Example: Returned Expression Errors for NULLIFZERO: Returned Request Errors for NULLIFZERO ; | https://docs.teradata.com/r/ITFo5Vgf23G87xplzLhWTA/55vtFKubdBCHyZJys3RhHg | 2022-01-16T20:16:13 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.teradata.com |
Edit an Atlas Search Index
You can change the index definition of an existing Atlas Search index. You cannot rename an index; if you need to change an index's name, you must create a new index and delete the old one.
Permissions required
_2<<
1
4
Click and choose one of the following from the dropdown.
Edit with Visual Editor for a guided experience.Note
The Visual Editor doesn't support custom analyzers or synonym mapping definitions.
- Edit with JSON Editor to edit the raw index definition.
5
6
Click Save to apply the changes.
The index's status changes from Active to Building. In this state, you can continue to use the old index because Atlas Search does not delete the old index until the updated index is ready for use. Once the status returns to Active, the modified index is ready to use. | https://docs.atlas.mongodb.com/atlas-search/edit-index/ | 2022-01-16T18:58:39 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)] | docs.atlas.mongodb.com |
equals
On this page
Definition
Syntax
equals has the following syntax:
Options
equals uses the following terms to construct a query:
Behavior
equals uses constant scoring. Each matching document receives
a score of
1 for each search clause matched. A document that matches
one search clause receives a score of
1, while a document that matches
three search clauses receives a score of
3. See the
Examples section for scoring examples.
Examples
The examples on this page use a collection named
users containing the
following three documents:
The
users collection is indexed with the following index definition:
Basic Examples
The following example uses the
equals operator to search the
users
collection for documents in which the
verified_user field is set to
true.
The above query returns the following results:
The documents for "Jim Hall" and "Ellen Smith" each receive a score of
1
because those documents have the
verified_user field set to
true.
The following example uses the
equals operator to search the
users
collection for documents in which the
teammates field contains the value
ObjectId("5a9427648b0beebeb69589a1").
The above query returns the document for "Fred Osgood", because that document
contains
ObjectId("5a9427648b0beebeb69589a1") in the
teammates array.
Compound Examples
The following example uses the compound operator in
conjunction with
must,
mustNot, and
equals to search for documents
in which the
region field is
Southwest and the
verified_user field
is not
false.
The above query returns the document for "Ellen Smith", which is the only one in the collection which meets the search criteria.
The following example query has these search criteria:
- The
verified_userfield must be set to
true
One of the following must be true:
- The
teammatesarray contains the value
ObjectId("5a9427648b0beebeb69579d0")
- The
regionfield is set to
Northwest
The above query returns the following results:
The document for "Jim Hall" receives a score of
2 because it meets the
requirements for the
must clause and the first of the two
should clauses.
You can search for multiple ObjectIDs with a compound query. The following
example query uses the
compound operator with a
should clause
to search for three different ObjectIDs, at least two of which must appear
to satisfy the query.
The above query returns the following results:
The document for "Ellen Smith" receives a score of
2 because it contains
two of the specified ObjectIDs in its
teammates array. | https://docs.atlas.mongodb.com/atlas-search/equals/ | 2022-01-16T19:06:36 | CC-MAIN-2022-05 | 1642320300010.26 | [array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)
array(['/assets/link.png', 'icons/link.png'], dtype=object)] | docs.atlas.mongodb.com |
ESB.
Changing an endpoint reference
Once the endpoint has been created, you can update it using any one of the options listed below. The options below describe how you can update the endpoint value for QA environment.
Option 1: Using ESB Tooling
- Open the
HelloWorldEP.xmlfile under HelloWorldQAResources project and replace the URL with the QA URL.
- Save all changes.
Your CApp can be deployed to your QA EI server. For details on how to deploy the CApp project, see Running the ESB profile via Tooling.
Option 2: From Command Line
- Open a Terminal window and navigate to
<ESB_TOOLING_WORKSPACE>/HelloWorldQAResources/src/main/synapse_configendpoints/HelloWorldEP.xmlfile.
Edit the HelloWorldEP.xml (e.g. using gedit or vi) under HelloWorldResources/QA and replace the URL with the QA one.
... <address uri=""/> ...
Navigate to
<ESB_TOOLING_WORKSPACE>/HelloWorldQAResourcesand build the ESB Config project using the following command:
mvn clean install
Navigate to
<ESB_TOOLING_WORKSPACE>/HelloWorldQACAppand build the CApp project using the following command:
mvn clean install
- The resulting CAR file can be deployed directly to the QA ESB server. For details, see Running the ESB profile via Tooling.
- To build the projects using the above commands, you need an active network connection.
- Creating a Maven Multi Module project that contains the above projects, allows you to projects in one go by simply building the parent Maven Multi Module project.
Option 3: Using a Script
Alternatively you can have a CAR file with dummy values for the endpoint URLs and use a customized shell script or batch script. The script created would need to do the following:
- Extract the CAR file.
- Edit the URL values.
- Re-create the CAR file with new values.
The resulting CAR file can be deployed directly to the QA ESB server. For details, see Running the ESB profile via Tooling. | https://docs.wso2.com/display/EI640/Working+with+Endpoints | 2022-01-16T19:19:53 | CC-MAIN-2022-05 | 1642320300010.26 | [] | docs.wso2.com |
scipy.special.lambertw¶
- scipy.special.lambertw(z, k=0, tol=1e-8)[source]¶
Lambert W function [R286]. [R287]. \(z^{z^{z^{\ldots}}}\):
>>> def tower(z, n): ... if n == 0: ... return z ... return z ** tower(z, n-1) ... >>> tower(0.5, 100) 0.641185744504986 >>> -lambertw(-np.log(0.5)) / np.log(0.5) (0.64118574450498589+0j) | https://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.special.lambertw.html | 2021-07-24T05:30:52 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.scipy.org |
Manually Deleting Checkpoint Files
Instead of using twbrmcp, you can delete the files manually. The procedure for manually deleting files varies depending on the operating system.
On UNIX or Windows Systems
rm TPT_install_directory/checkpoint/*
del TPT_install_directory\checkpoint\*.*
rm <user-defined directory>/*
del <user-defined directory>\*.*
On Z/OS
On z/OS, you can remove checkpoint files with either of the following two methods:
Method 1:
1 Go to the Data Set Utility panel (panel 3.2) in the Primary Options Menu of the TSO System Productivity Facility.
2 Enter the name of each checkpoint file in the name entry fields provided on this panel.
3 Type D (for “delete”) for the requested dataset option.
4 Hit Enter
Method 2:
Add a step to the beginning of your next Teradata PT job, with the following Job Control Language statements:
//DELETE PGM=IEFBR14
//CPD1 DD DISP=(OLD,DELETE),DSNAME=<high-level qualifier>.CPD1
//CPD2 DD DISP=(OLD,DELETE),DSNAME=<high-level qualifier>.CPD2
//LVCP DD DISP=(OLD,DELETE),DSNAME=<high-level qualifier>.LVCP
where <high-level qualifier> is the high-level qualifier you supplied to the TBUILD JCL PROC when you submitted the job that created these checkpoint files. Or substitute the names of your checkpoint datasets for everything to the right of DSNAME= above, if you have a different convention for naming them.
For examples of naming and using checkpoint datasets on z/OS “JCL Examples,” see the Teradata Parallel Transporter Reference. | https://docs.teradata.com/r/j9~8T4F8ZcLkW7Ke0mxgZQ/ZptQoxRd5GTvZnTg9Ahofg | 2021-07-24T03:57:55 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.teradata.com |
The following methods can be used to monitor VHA6.
To see your VHA6 statistics:
varnishstat -1 -f *vha6_stats*
The following metrics are logged:
broadcast_candidates- Requests evaluated for broadcasting.
broadcasts- Requests that have been broadcasted.
broadcast_skip- Requests that were skipped via a request override. See skip.
broadcast_nocache- Requests that could not be broadcasted due to uncacheability.
broadcast_lowttl- Request that could not be broadcasted due to low TTL and grace. See min_ttl and min_grace.
broadcast_toolarge- Requests that could not be broadcasted due to being too large. See max_bytes.
error_rate_limited- Transactions which hit the rate limit. See max_requests_sec.
fetch_peer- Broadcasts which hit this node and were converted to a peer fetch.
fetch_peer_hit- Broadcasts which hit this node and were a cache hit.
fetch_self- Broadcasts which hit the broadcasting node. These are expected for each broadcast.
fetch_origin- Fetches which hit this node as an origin from a peer.
fetch_origin_deliver- Fetches which successfully delivered an origin object to the peer.
fetch_peer_insert- Peer fetches which successfully inserted an object.
error_fetch- Peer fetches which resulted in an network or server error.
error_fetch_insert- Peer fetches which resulted in an origin VHA error.
error_origin_mismatch- Origin recieved the wrong fetch.
error_origin_miss- Origin does not have the object.
error_version_mismatch- VHA6 versions do not match across nodes.
error_no_token- No token present in transaction.
error_bad_token- Invalid token present in transaction.
error_stale_token- A valid token is present but its expired. See token_ttl.
legacy_vha- A legacy
vha-agentrequest was detected.
Use the following 1 liner to aggregate VHA6 stats across all VCLs:
varnishstat -1 -f *vha6_stats* | awk -F'[ .]+' '{t[$4]+=$5}END{for(f in t)print f " " t[f]}'
To see all VHA6 transactions:
varnishlog -g request -q "ReqMethod ~ VHA"
To capture VHA6 errors:
varnishlog -g request -q "VHA6 ~ ERROR"
All peer responses have a
vha6-origin header containing the origin hostname (server.identity).
To remove this header, add the following VCL:
sub vcl_deliver { unset resp.http.vha6-origin; } | https://docs.varnish-software.com/varnish-high-availability/monitoring/ | 2021-07-24T04:15:31 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.varnish-software.com |
SearchQuery API¶
The
SearchQuery class acts as an intermediary between
SearchQuerySet’s
abstraction and
SearchBackend’s actual search. Given the metadata provided
by
SearchQuerySet,
SearchQuery builds the actual query and interacts
with the
SearchBackend on
SearchQuerySet’s.
You can either hook it up in a
BaseEngine subclass or
SearchQuerySet
objects take a kwarg parameter
query where you can pass in your class.
SQ Objects¶.
Backend-Specific Methods¶
When implementing a new backend, the following methods will need to be created:
Inheritable Methods¶
The following methods have a complete implementation in the base class and can largely be used unchanged.
build_query¶
Interprets the collected query metadata and builds the final query to be sent to the backend.
build_params¶
Generates a list of params to use when searching.
clean¶
Provides a mechanism for sanitizing user input before presenting the value to the backend.
A basic (override-able) implementation is provided.
run¶
Builds and executes the query. Returns a list of search results.
Optionally passes along an alternate query for spelling suggestions.
Optionally passes along more kwargs for controlling the search query.
run_mlt¶
Executes the More Like This. Returns a list of search results similar to the provided document (and optionally query).
run_raw¶
Executes a raw query. Returns a list of search results.
get_count¶
Returns the number of results the backend found for the query.
If the query has not been run, this will execute the query and store the results.
get_results¶
Returns the results received from the backend.
If the query has not been run, this will execute the query and store the results.
get_facet_counts¶
Returns the results received from the backend.
If the query has not been run, this will execute the query and store the results.
boost_fragment¶
Generates query fragment for boosting a single word/value pair.
matching_all_fragment¶
Generates the query that matches all documents.
add_filter¶
Narrows the search by requiring certain conditions.
clear_order_by¶
Clears out all ordering that has been already added, reverting the query to relevancy.
add_model¶
Restricts the query requiring matches in the given model.
This builds upon previous additions, so you can limit to multiple models by chaining this method several times.
set_limits¶
Restricts the query by altering either the start, end or both offsets.
add_boost¶
Adds a boosted term and the amount to boost it to the query.
raw_search¶
Runs a raw query (no parsing) against the backend.
This method causes the
SearchQuery to ignore the standard query-generating
facilities, running only what was provided instead.
Note that any kwargs passed along will override anything provided
to the rest of the
SearchQuerySet.
more_like_this¶
Allows backends with support for “More Like This” to return results similar to the provided instance.
add_stats_query¶
Adds stats and stats_facets queries for the Solr backend.
add_within¶
SearchQuery.add_within(self, field, point_1, point_2):
Adds bounding box parameters to search query.
add_dwithin¶
SearchQuery.add_dwithin(self, field, point, distance):
Adds radius-based parameters to search query.
add_distance¶
SearchQuery.add_distance(self, field, point):
Denotes that results should include distance measurements from the point passed in.
add_field_facet¶
Adds a regular facet on a field.
add_date_facet¶
Adds a date-based facet on a field.
add_narrow_query¶
Narrows a search to a subset of all documents per the query.
Generally used in conjunction with faceting.
set_result_class¶
Sets the result class to use for results.
Overrides any previous usages. If
None is provided, Haystack will
revert back to the default
SearchResult object. | https://django-haystack.readthedocs.io/en/latest/searchquery_api.html | 2021-07-24T05:31:18 | CC-MAIN-2021-31 | 1627046150129.50 | [] | django-haystack.readthedocs.io |
OEMedChem TK 0.9.3¶
New features¶
New experimental classes and a new function for Matched Molecular Pair analysis were added. These APIs are experimental and they will likely change in future versions.
New functions were added to return type to name conversions for new parameterized constants.
New functions were added to return information for belief theory usage based on [Muchmore-2008].
New functions were added based on the implementation in
OEGetRingLinkerSideChainFragmentsto return annotated Bemis Murcko regions and types from the perception as defined in [Bemis-1996] .
New functions were added to return the graph-edit similarity used in clustering BROOD hitlists.
New functions were added which return individual terms that make up the Molecular Complexity as defined by [Bertz-1981].
New functions were added for a ring complexity measure similar to that defined by [Gasteiger-1979].
New functions were added which return a measure of stereo complexity as defined by [Boda-2007].
A new function was added to return a measure of the total molecular complexity which is a sum of individual complexity measures above.
Known bugs¶
The beta Matched Pair analyzer api internally removes stereochemistry from the internally indexed structures. It should more properly treat the presence of stereochemistry for duplicate checking and indexing equivalently. This issue will be repaired in the next release.
Documentation fixes¶
Added documentation for | https://docs.eyesopen.com/toolkits/java/medchemtk/releasenotes/version0_9_3.html | 2021-07-24T04:32:24 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.eyesopen.com |
Find an idea you like? Want to propose your own? See the application_process.adoc.. can be used for getting help with programming problems.
If you are new to the Fedora Project, the following material will help you to get started. You should also follow the application_process.adoc)
Radka (rhea) Janek (C#, webserver or dotnet related stuff on Linux, general support and help with the program)
Corey Sheldon (Python, 2Factor/Multi-Factor Auth, QA Testing, general mentoring, security,). | https://docs.fedoraproject.org/ro/mentored-projects/gsoc/2017/ideas/ | 2021-07-24T05:51:56 | CC-MAIN-2021-31 | 1627046150129.50 | [] | docs.fedoraproject.org |
.
String variable interpolation may use %-formatting, f-strings, or
str.format()as appropriate, with the goal of maximizing code readability.
Final judgments of readability are left to the Merger’s discretion. As a guide, f-strings should use only plain variable and property access, with prior local variable assignment for more complex cases:
# Allowed f'hello {user}' f'hello {user.name}' f'hello {self.user.name}' # Disallowed f'hello {get_user()}' f'you are {user.age * 365.25} days old' # Allowed with local variable assignment user = get_user() f'hello {user}' user_days_old = user.age * 365.25 f'you are {user_days_old} days old'
f-strings should not be used for any string that may require translation, including error and logging messages. In general
format()is more verbose, so the other formatting methods are preferred.
Don’t waste time doing unrelated refactoring of existing code to adjust the formatting method.
Avoid use of “we” in comments, e.g. “Loop over” rather than “We loop over”.
Use underscores, not camelCase, for variable, function and method names (i.e.
poll.get_unique_voters(), not
poll.getUniqueVoters()).
Use
InitialCapsfor.
Use
assertIs(…, True/False)for testing boolean values, rather than
assertTrue()and
assertFalse(), so you can check the actual boolean value, not the truthiness of the expression.:
$ python -m pip install isort >= 5.1.0 $ isort -rc .
...\> py -m pip install isort >= 5.1.0 ...\> isort -rc .
This runs
isortrec modulestatements before
from module import objectsin each section. Use absolute imports for other Django components and relative imports for local components.
On each line, alphabetize the items with the upper case items grouped before the lowercase Metashouldis defined for a given model field, define each choice as a list whichstatements that are no longer used when you change code. flake8 will identify these imports for you. If an unused import needs to remain for backwards-compatibility, mark the end of with
# NOQAtofile. | http://docs.djangoproject.com/en/dev/internals/contributing/writing-code/coding-style/ | 2020-11-24T06:55:14 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.djangoproject.com |
Project (Microsoft)
Microsoft Project is a project management software product. It is designed to assist a project manager in developing a schedule, assigning resources to tasks, tracking progress, managing the budget, and analyzing workloads.
Information Stored
The Microsoft Project Integration pulls an Application Roster and Application Access.
Note:The information stored is subject to change as enhancements are made to the product.
Application Roster
Application Access
Minimum Permissions Required
To grant the above permissions, the user must have Application Administrator access.
Note:To fetch Microsoft Project sign-in events, you must have an Azure AD Premium P1 or Premium P2 license assigned per tenant (for details, refer to Azure Active Directory editions), and you must ensure the Office 365 audit log is turned on (for details, refer to Turn Office 365 audit log search on or off).
Authentication Method
OAuth2
Credentials Required
Integrating Microsoft Project with SaaS Manager
To integrate Microsoft Project with SaaS Manager, perform the following steps.
To integrate Microsoft Project with SaaS Manager:
API Endpoints
Application Roster
Application Access
Note:Due to the limitations in Microsoft Graph APIs, we are not able to capture Suspicious Activities for the Microsoft Project integration. | https://docs.flexera.com/flexera/EN/SaaSManager/MicrosoftProjectIntegration.htm | 2020-11-24T06:55:49 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.flexera.com |
Cloudpath User Experience Cloudpath provides the prompts that guide the user through the sequence of steps that make up the enrollment workflow. During this process, the user enters information as requested, and makes selections about user type, device type, among others. User PromptsThis section displays the user prompts for a typical enrollment workflow. | https://docs.commscope.com/bundle/cloudpath-52-windows-phones-user-experience-guide/page/GUID-816CF29D-0A3A-42D1-8ABA-23A8DD06F455.html | 2020-11-24T07:12:01 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.commscope.com |
Glossary¶
This page gives definitions of domain-specific terms that we use a lot inside pretix and that might be used slightly differently elsewhere, as well as their official translations to other languages. In some cases, things have a different name internally, which is noted with a 🔧 symbol. If you only use pretix, you’ll never see these, but if you’re going to develop around pretix, for example connect to pretix through our API, you need to know these as well. | https://docs.pretix.eu/en/latest/user/glossary.html | 2020-11-24T06:40:00 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.pretix.eu |
There is a possibility to make changes in the look and feel of a slider.
For this you need to take the following steps:
<style>
    .my_first_class {
        /* some styles */
    }
    .my_second_class {
        /* some styles */
    }
</style>
var slider = new dhx.Slider({ css:"my_first_class my_second_class" });
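As a concrete illustration, the custom classes could, for example, constrain the widget's size and spacing; the property values below are arbitrary and assume the default slider markup:

.my_first_class {
    max-width: 300px;
    margin: 24px;
}
.my_second_class {
    opacity: 0.9;
}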
Related sample: Slider. Custom ColorsBack to top | https://docs.dhtmlx.com/suite/slider__customization.html | 2020-11-24T06:34:35 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.dhtmlx.com |
Set-VpnAuthProtocol
Configures the authentication method for incoming site-to-site (S2S) VPN interfaces on a Routing and Remote Access (RRAS) server.
Syntax
Set-VpnAuthProtocol [-UserAuthProtocolAccepted <String[]>] [-TunnelAuthProtocolsAdvertised <String>] [-RootCertificateNameToAccept <X509Certificate2>] [-CertificateAdvertised <X509Certificate2>] [-SharedSecret <String>] [-PassThru] [-CertificateEKUsToAccept <String[]>] [-CimSession <CimSession[]>] [-ThrottleLimit <Int32>] [-AsJob] [-WhatIf] [-Confirm] [<CommonParameters>]
Examples
EXAMPLE 1
PS C:\> Set-VpnAuthProtocol -UserAuthProtocolAccepted Certificate -PassThru
WARNING: Configuration parameters will be modified after the Remote Access service is restarted.
UserAuthProtocolAccepted      : {Certificate}
TunnelAuthProtocolsAdvertised : Certificates
RootCertificateNameToAccept   :
CertificateAdvertised         :
This example changes the authentication method used by the server for incoming connections to Certificate and advertises certificates as authentication mechanism to the peer computers.
EXAMPLE 2
PS C:\>$cert1 = ( Get-ChildItem -Path cert:LocalMachine\root | Where-Object -FilterScript { $_.Subject -Like "*CN=Contoso Root Certification Authority,*" } )
PS C:\>Set-VpnAuthProtocol -RootCertificateNameToAccept $cert1 -PassThru
This example sets the root certificate against which all of the incoming connections computer certificates are matched.
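A further illustrative call, not taken from the original examples, switches the advertised tunnel authentication to a pre-shared key; the secret value is a placeholder:

PS C:\> Set-VpnAuthProtocol -TunnelAuthProtocolsAdvertised PSK -SharedSecret "Contoso!Psk1" -PassThru

As in the first example, the change takes effect after the Remote Access service is restarted.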
Parameters
Runs the cmdlet as a background job. Use this parameter to run commands that take a long time to complete.
Specifies the certificate to be sent to a peer computer. Applicable only if the TunnelAuthProtocolsAdvertised parameter is set to Certificate.
Specifies an array of Certificate Extended Key Usage (EKU) extensions to allow. This parameter is only valid if the UserAuthProtocolAccepted parameter contains certificates.
Returns an object representing the item with which you are working. By default, this cmdlet does not generate any output.
Specifies the root certificates that are allowed. Applicable only if the UserAuthProtocolAccepted parameter contains certificates.
Specifies the text of the shared secret for the connection. Applicable only if the TunnelAuthProtocolsAdvertised parameter is set to PSK..
Specifies the local authentication protocols that are allowed.
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Inputs
None
Outputs
The Microsoft.Management.Infrastructure.CimInstance object is a wrapper class that displays Windows Management Instrumentation (WMI) objects. The path after the pound sign (
#) provides the namespace and class name for the underlying WMI object. | https://docs.microsoft.com/en-us/powershell/module/remoteaccess/set-vpnauthprotocol?view=win10-ps | 2020-11-24T07:19:09 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.microsoft.com |
Visualization mod development
Although Spotfire offers many different visualization types, you might miss a certain way to visualize your data. To fill that gap, Spotfire provides a framework making it possible for a developer to extend Spotfire with new customized visualizations. Through the provided Spotfire mod API, these customized visualizations can be created using JavaScript or TypeScript, and they are called visualization mods.
The area chart below is an example of a visualization mod that has been created using this framework.
How does a visualization mod work?
You can think of a visualization mod as a visualization, whose appearance is specified by the mod developer who created the visualization mod, but still all data related functionality is handled in the same way as in any of the native Spotfire visualizations. An end user feels that the visualization mod is integrated with Spotfire, because the visualization responds to interactions in the same way as native visualizations do. For example, it is possible to drag a visualization mod to the visualization canvas, and change what is selected on the various visualization axes. Moreover, filtering of the data works as usual across all visualizations, no matter if they are native visualizations or visualization mods. The end user might not even notice there is a difference.
The visualization mods can be based on any of the data sources supported by Spotfire; in-memory data, in-database data, streaming data, and data-on-demand.
Sharing visualization mods with others
A visualization mod can be shared with others in either of two ways:
- saved to the Spotfire library.
Once saved to the library, the visualization mod can be added to analyses and also pinned to the visualization flyout. Users can browse and search the library for visualization mods.
- embedded in an analysis.
The analysis can then be saved to the library, or saved as a local file.
Developing visualization mods
A developer of a visualization mod needs a running instance of a Spotfire client, and a source code editor. Examples of visualization mods, which can serve as starting point for developers, are available for download from Spotfire Mods on GitHub. The examples to download are built using the Visual Studio Code editor. When using Visual Studio Code as editor, it is possible to get a live preview of the mod within the Spotfire client while developing.
For more information about the actual development and the tools, see Getting Started.
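To give a feel for the code involved, the entry point of a mod in the public examples follows roughly this shape (a simplified sketch; verify the exact helper names against the mod API version you target):

// main.js - simplified sketch of a mod entry point
Spotfire.initialize(async (mod) => {
    const context = mod.getRenderContext();
    const reader = mod.createReader(mod.visualization.data(), mod.windowSize());

    reader.subscribe(async (dataView, windowSize) => {
        const rows = await dataView.allRows();
        // ...render the rows with plain JavaScript, SVG, or canvas...
        context.signalRenderComplete();
    });
});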
Version handling
By default, if a visualization mod is resaved to the Spotfire library, all instances of the visualization mod are updated in all analyses, where it is used.
See. | https://docs.spotfire.cloud.tibco.com/spotfire/GUID-6A2910C6-A44D-48B5-B80A-1D959DA3E63C.html | 2020-11-24T05:59:23 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.spotfire.cloud.tibco.com |
Diagnostic variables are available for device operation diagnostics. They are represented as OPC UA items and described below.
Only diagnostic variables that are common for all types of devices are described here
- GoodResponses (cyclic driver only): contains the total number of successful responses from the data source
- LastGoodResponseUtcTime (cyclic driver only): contains the time of the last successful response from the data source in the UTC format
- BadResponses (cyclic driver only): contains the total number of unsuccessful or unreceived (during loss of connection) responses from the data source
- LastBadResponseUtcTime (cyclic driver only): contains the time of the last unsuccessful or unreceived response from the data source in the UTC format
- CommunicationEstablished: contains that communication channel between the device and the data source has been established (True or False)
- CommunicationLost: inverted value of _CommunicationEstablished
- DemoIsExpired: contains that demonstration period for the device is expired
- MaxPollingDuration (cyclic driver only): contains the longest reading time for a group of blocks in the device
- MaxScanDuration (event-oriented driver only): specifies the maximum time (in milliseconds) it takes to parse data when listening to the channel
The values of the diagnostic variables can be overwritten by the user. This requires that the properties of the server’s objects must be available for writing, while the user must have the necessary rights. Read more about this here. | https://docs.monokot.io/hc/en-us/articles/360034746551-Device-Diagnostics | 2020-11-24T06:12:45 | CC-MAIN-2020-50 | 1606141171126.6 | [array(['/hc/article_attachments/360039986472/image-0.png', None],
dtype=object) ] | docs.monokot.io |
Get access to and back up a former user's data
Tip
Need help with the steps in this topic? We’ve got you covered. Make an appointment at your local Microsoft Store with an Answer Desk expert to help resolve your issue. Go to the Microsoft Stores page and choose your location to schedule an appointment.
When an employee leaves your organization, you probably want to access their data (documents and emails) and either review it, back it up, or transfer ownership to a new employee.
Access a former user's OneDrive documents
If you remove a user's license but don't delete the account, you retain access to the content in the user's OneDrive site. If you delete their account you have 30 days to access a former user’s OneDrive data. If you don't restore a user account within 30 days their OneDrive content is deleted. Before you delete the account, you should move the content from their OneDrive to another location.
To preserve a former user's OneDrive for Business documents you first access their OneDrive site and then move the files.
Use the new admin center to access a former user's OneDrive documents
The new admin center is available to all Microsoft 365 admins. You can opt in by selecting the Try the new admin center toggle located at the top of the Home page. For more information, see About the new Microsoft 365 admin center.
In the admin center, go to the Users > Active users page.
Select a user.
In the right pane, select OneDrive. Under Get access to files, select Create link to files.
Select the link to open the file location and download and copy the files to your own OneDrive for Business, or a common location. You can also share the link with another user to download the files.
Use the old admin center to access a former user's OneDrive documents
- In the admin center, go to the Users > Active users page.
- In the admin center, go to the Users > Active users page.
- In the admin center, go to the Users > Active users page.
Select a user.
In the right pane, expand OneDrive Settings, and then next to Access, select Access files.
Select the link to open the file location and download and copy the files to your own OneDrive for Business, or a common location. You can also share the link with another user to download the files.
Note
You can move up to 500 MB of files and folders at a time.
When you use Move to with documents that have version history, only the latest version is moved. To move earlier versions, you need to restore and move each one.
Revoke admin access to a user’s OneDrive site
As global admin you have access to the content in a user’s OneDrive site, but you may want to remove your access to a user’s documents. By default, the OneDrive Site Collection Administrator is the owner of the OneDrive account. The following steps describe how to remove a Site Collection Admin from a user’s OneDrive site.
If you get a message that you don't have permission to access the admin center, then you don't have administrator permissions in your organization.
In the left pane, select Admin centers > SharePoint.
In the left pane, select User Profiles.
Under People, select Manage User Profiles.
Enter the user's name and select Find.
Right-click the user, and then choose Manage site collection owners.
Remove the person who no longer needs access to the user's data, then select OK.
Learn more about how to add or remove site collection admins in the new SharePoint admin center, or in the classic SharePoint admin center.
Access the Outlook data of a former user
In Outlook, select File.
Select Open & Export > Import/Export.
Select Export to a file, and then select Next.
Select Outlook Data File (.pst), and then select Next.
Select the account you want to export by selecting its name or email address.
Select Next.
Select Browse to select where to save the Outlook Data File (.pst). Type a file name, and then select OK to continue.
Note
If you've used export before, the previous folder location and file name appear. Type a different file name before selecting OK.
If you are exporting to an existing Outlook Data File (.pst), under Options, specify what to do when exporting items that already exist in the file.
Select OK. In the Outlook Data File Password dialog box, type the password, and then select OK.
If you're exporting to an existing Outlook Data File (.pst) that is password protected, in the Outlook Data File Password dialog box, type the password, and then select OK.
See how to Export or backup email, contacts, and calendar to an Outlook .pst file in Outlook 2010.
Give another user access to a former user's email
To give access to the email messages, calendar, tasks, and contacts of the former employee to another employee, import the information to another employee's Outlook inbox.
Note
You can also convert the former user's mailbox to a shared mailbox or forward a former employee's email to another employee.
In Outlook, go to File > Open & Export > Import/Export.
This starts the Import and Export Wizard.
Select Import from another program or file, and then select Next.
Select Outlook Data File (.pst), and select Next.
Browse to the .pst file you want to import.
Under Options, choose how you want to deal with duplicates.
Select Next.
If a password was assigned to the Outlook Data File (.pst), enter the password, and then select OK.
Set the options for importing items. The default settings usually don't need to be changed.
Select Finish.
Tip
If you want to import or restore only a few items from an Outlook Data File (.pst), you can open the Outlook Data File. Then, in the navigation pane, drag the items from Outlook Data File folders to your existing Outlook folders.
Related Topics
Remove a former employee from Office 365
Add and remove admins on a OneDrive account
Manage site collection administrators
OneDrive retention and deletion
Feedback | https://docs.microsoft.com/en-us/office365/admin/add-users/get-access-to-and-back-up-a-former-user-s-data?redirectSourcePath=%252flv-lv%252farticle%252fieg%2525C5%2525ABt-piek%2525C4%2525BCuvi-un-biju%2525C5%2525A1%2525C4%252581-lietot%2525C4%252581ja-datu-dubl%2525C4%252593%2525C5%2525A1ana-a6f7f9ad-e3f5-43de-ade5-e5a0d7531604&view=o365-worldwide | 2019-10-13T23:06:01 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.microsoft.com |
When filtering is applied to the control, if a currently selected item does not pass the filtering criteria, it will be deselected.. | https://docs.telerik.com/devtools/wpf/controls/radgridview/selection/basics | 2019-10-13T23:05:06 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['images/RadGridView_BasicSelection_1.png',
'Telerik WPF DataGrid BasicSelection 1'], dtype=object)
array(['images/RadGridView_BasicSelection_2.png',
'Telerik WPF DataGrid BasicSelection 2'], dtype=object)] | docs.telerik.com |
An Act to amend 448.02 (3) (a); and to create 253.10 (3) (c) 2. es. and 253.103 of the statutes; Relating to: sex-selective, disability-selective, and other selective abortions and providing a penalty.
Bill Text (PDF: )
SB173 ROCP for Committee on Health and Human Services On 5/9/2019 (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2019 Assembly Bill 182 - A - Vetoed | http://docs.legis.wisconsin.gov/2019/proposals/reg/sen/bill/sb173 | 2019-10-13T22:19:52 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.legis.wisconsin.gov |
Account reaper¶
The purpose of the account reaper is to remove data from the deleted accounts.
A reseller marks an account for deletion by issuing a
DELETE request
on the account’s storage URL. This action sets the
status column of
the account_stat table in the account database and replicas to
DELETED, marking the account’s data for deletion.
Typically, a specific retention time or undelete are not provided. However, you can set a delay_reaping value in the [account-reaper] section of the account-server.conf file to delay the actual deletion of data. At this time, to undelete you have to update the account database replicas directly, set the status column to an empty string and update the put_timestamp to be greater than the delete_timestamp.
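For illustration, the delay could be configured like this in account-server.conf; the one-week value is an arbitrary example, not a default:

[account-reaper]
# wait 7 days (in seconds) before actually reaping a deleted account
delay_reaping = 604800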
Note
It is on the development to-do list to write a utility that performs this task, preferably through a REST call.
The account reaper runs on each account server and scans the server occasionally for account databases marked for deletion. It only fires up on the accounts for which the server is the primary node, so that multiple account servers aren’t trying to do it simultaneously. Using multiple servers to delete one account might improve the deletion speed but requires coordination to avoid duplication. Speed really is not a big concern with data deletion, and large accounts aren’t deleted often.
Deleting an account is simple. For each account container, all objects are deleted and then the container is deleted. Deletion requests that fail will not stop the overall process but will cause the overall process to fail eventually (for example, if an object delete times out, you will not be able to delete the container or the account). The account reaper keeps trying to delete an account until it is empty, at which point the database reclaim process within the db_replicator will remove the database files.
A persistent error state may prevent the deletion of an object or container. If this happens, you will see a message in the log, for example:
Account <name> has not been reaped since <date>
You can control when this is logged with the reap_warn_after value in the [account-reaper] section of the account-server.conf file.
The default value is 30 days. | https://docs.openstack.org/swift/latest/admin/objectstorage-account-reaper.html | 2019-10-13T22:40:40 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.openstack.org |
DK11 for Delphi | GisFileS57.TGIS_FileS57.AddPartEvent | Constructors | Methods | Properties | Events
AddPart event. Will be fired when a new part is added to shape.
Available also on: .NET | Java.
// Delphi published property AddPartEvent : T_ExecAddPartEvent read write;
// C++ Builder published: __property T_ExecAddPartEvent* AddPartEvent = {read, write}; | https://docs.tatukgis.com/DK11/api:dk11:delphi:gisfiles57.tgis_files57.addpartevent | 2019-10-13T22:59:47 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.tatukgis.com |
The Trim Curve brushes (Trim Curve, Trim Lasso, Trim Rectangle and Trim Circle) are similar to the Clip Curve brush by removing the part of the model which is located on the shadowed side of the curve. There is a fundamental difference, however: these brushes totally remove the polygons rather than simply pushing them toward the curve.
The Trim Curves brush is selected by holding Ctrl+Shift and clicking the Brush thumbnail to access the pop-up selector. Once chosen, Trim Curves will always be activated when holding Ctrl+Shift until changed by choosing a new brush type via the same method.
These brushes work only on models without multiple subdivision levels.
Stroke options for Different Trim Results
For optimum predictable results, keep in mind that the position of the stroke over the model can produce different results.
- Open Curve: Your curve must cut through the entire model. If you stop the curve partway through a model then ZBrush will do its best to continue the curve to the edge, following the final path of your stroke.
- Close Curve (Lasso, Rectangle and Circle): When the stroke is entirely on the model, a new slice is created at the stroke location. This is exactly like the Clip brushes except that the topology outside the stroke is not pushed to the stroke edge. Instead, it is replaced with new topology, using the optimal number of polygons necessary to close the hole. When the stroke is not completely over the surface of the model then the polygons are cut out along the curve and the borders are filled with new polygons.
Do more with these brushes by enabling the BRadius option in the Brush>Clip Brush Modifiers. See the section below about Creating or Removing Thickness with the Trim Curve Brush.
The comparison between a Clip Curve brush on the left which pushes the polygons and theTrim Curve brush on the right which removes the polygons and then rebuilds the surface.
Important Note About the Shape of the Curve
The Trim Curve brush removes polygons which are unnecessary for cleanly capping the remaining mesh.
This hole closing function is able to close complex holes, but it is designed for creating flat caps. This means that the Trim Curve brush will generate optimum results when drawing straight lines or lines with sharp angles rather than rounded curves.
In addition to the TrimCurve brush you can also use the TrimCircle, TrimRect or TrimLasso for more control.
Creating or Removing Thickness With the Trim Curve Brush
It’s possible to cut away a path of geometry by activating the Brush >> Clip Brush Modifiers >> Brush Radius (BRadius) option.
This option uses the size of the brush (the brush radius) to keep only the polygons located within the brush radius relative to the curve. In effect, you’ll be drawing a swath of geometry that is the size of your Draw Size setting.
Holding the ALT key during the curve creation will delete the polygons within the brush radius, keeping the rest of the model instead.
On the left, the original Mesh and Trim Curve. In the center, the result of using the BRadius option. On the right, the same BRadius option, but with the ALT key pressed while releasing the brush cursor. | http://docs.pixologic.com/user-guide/3d-modeling/hard-surface/clip-brushes/trim-curve/ | 2019-10-13T22:57:53 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['http://docs.pixologic.com/wp-content/uploads/2013/06/4R6-37.jpg',
'4R6-37'], dtype=object)
array(['http://docs.pixologic.com/wp-content/uploads/2013/06/4R6-38.jpg',
'4R6-38'], dtype=object) ] | docs.pixologic.com |
This for the components that have already been installed.
- The apigee user is created on each VM.
- The apigee-service utility:
- Developer Services portal
- Apigee SSO (Install Edge first and verify that it is working, then enable Apigee SSO and apply that change to the Edge installation). See Configuring Apigee.
Developer Services portal requires an SMTP server. If installed, it uses the same SMTP server as configured for Edge.
Prerequisites
Before you can install Edge, you must first meet the following prerequisites.
Edge License
Each installation of Edge requires a unique license file that you obtain from Apigee. If the license file is valid, the management server validates the expiry and allowed Message Processor (MP) count.
Complete Edge installation before enabling Apigee SSO or Monetization
Ops Manager 2.3 or 2.4
Ensure that you are using Ops Manager version 2.3 or 2.4. Only a subset of the Edge components are externally accessible and therefore require load balancers to control access; you must create load balancers for those components. The following table lists the Edge components that require a load balancer.
- Under Apigee Edge on PCF, enter:
- System admin's e-mail address and password
- MP Pod name: default is "gateway"
- Region: default is "dc-1".
- Edge license.
- Apigee SSO: disable SSO for Management Server and Edge UI, BaaS, and Dev portal. After the Edge installation completes, you can then enable Apigee SSO.
See Configuring Apigee.
- Edge requires you to configure an SMTP server, as described in the following step. Some SMTP servers require you to set the sender's e-mail address used when generating e-mails. For example, it is required when installing Edge on AWS. To set the sender's e-mail address, set the following property:
- Under Edge UI Config Overrides set:
conf_apigee_apigee.mgmt.mailfrom="Apigee [email protected]" conf/application.conf+trustxforwarded=true
The second property specifies to use TLS in the URL sent to the user when resetting their password.
- If you installed the Developer Services portal, under Apigee Drupal Devportal Config Overrides set:
conf_devportal_default_sender_address="[email protected]"
- Select SMTP to configure the e-mail server used for e-mail messages sent from Edge. For example, when a user requests a new password. SMTP e-mail is disabled by default.
For the SMTP port, the value can be different based on the selected encryption protocol. For example, for Gmail, the port is 465 when using SSL and 587 for TLS.
If installed, the Developer Services portal also uses two servers each.
Note: Do not check any of the INTERNET CONNECTED boxes. All externally accessible Edge components use a load balancer to provide the Internet access.
- In Resource Config, ensure that you select a VM TYPE that matches the system requirements of the component as defined at Hardware Requirements.
- In Resource Config, specify the load balancer names in the LOAD BALANCERS column for the Management Server and Router.
- If you are installing the Apigee Developer Services portal (called Drupal Devportal in Ops Manager), specify the load balancer for the server.
See Configure load balancers.
Test the installation
This section describes how to test the Edge installation.
Log in to the Edge UI
The Edge UI lets you perform most of the tasks necessary to create, configure, and manage API proxies, API products, apps, and users. When you installed Edge, you enabled the Apigee Validation Errand. This errand creates an organization named VALIDATE on Edge.
After installing Edge, log in to the VALIDATE organization in the Edge UI by using the following procedure:
- Open the following URL in a browser:
In this URL, edge_ui_domain is the domain name of the Edge UI as defined by the load balancer for the Management Server component.
- When prompted, enter the system admin's e-mail address and password that you specified in the Ops Manager when you installed Edge.
The Edge UI appears.
Make calls the the Edge API
Test the installation by making calls to the Edge management API.
- Create a virtual host by running the following cURL command. This virtual host lets you use API calls to validate the installation:
curl -X PUT -u sysAdminEmail:passwd \ \ .
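As an extra sanity check, not part of the original steps, you can list the organizations known to the management server; if the validation errand ran, VALIDATE should appear in the response. The host below is assumed to be the Management Server load balancer address:

curl -u sysAdminEmail:passwd \
  "https://management_server_domain/v1/organizations"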
See Provisioning organizations for more. | https://docs.apigee.com/private-cloud/v4.19.06/installing-edge-using-ops-manager | 2019-10-13T22:53:54 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.apigee.com |
Difference between revisions of "Running a contest"
From Win-Test Wiki
Latest revision as of 00:43, 10 December 2016
Contents
Logging QSOs
SO1R specifics
- Automated CW generation, RTTY messages and voice keyers
- Radio Control
- Use of Cluster spots
- Taking advantage of the Band Map feature
SO2R specifics
- Learning the second radio window
- Using the 'shift binds second radio' option
This feature changes some key assignments in Win-Test
- Using the 'caps lock binds to secondary radio' option
- Using 'advanced SO2R' mode and setting up scenarios
- Win-Test and | http://docs.win-test.com/w/index.php?title=Running_a_contest&diff=cur&oldid=2431&printable=yes | 2020-05-25T14:29:59 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.win-test.com |
Configuration Overview
Learn how to configure SRM with the
srm.properties configuration file.
SRM configuration is managed through a single configuration file. The default location of the
configuration file for .rpm based installs is
/opt/streams-replication-manager/config. For ZIP or TAR based installations
the default location is
$SRM_CONF_DIR/srm.properties. The default name of the
file is
srm.properties.
Command line tools provided with SRM read configuration properties from the configuration
file. All tools automatically collect configuration information from the default file. If
required it is possible to store the configuration file at a different location. In this case
however, you need to specify the configuration file and its location when using the tools.
Specifying an alternate location for the file is achieved with the
--config
option which is supported by all SRM command line tools.
The
srm.properties file accepts all Kafka client properties available in the
version of Kafka that you are using. Additionally, it supports a number of SRM specific
configuration properties. For a comprehensive list of SRM specific properties, see
Configuration Properties Reference.
- Top level: Top level or global configuration is achieved by adding the property on its own. For example:
replication.factor=3
- Cluster level: Cluster level configuration can be achieved by prepending the configuration property with a cluster alias. For example:
primary.replication.factor=3
- Replication level: Replication level configuration can be achieved by prepending the configuration property with the name of the replication. For example:
primary->secondary.replication.factor=3
Minimum Configuration
At minimum the configuration file has to contain cluster aliases, cluster connection
information, and at least one
cluster->cluster replication that is enabled.
Cluster aliases are defined with the clusters property. Aliases are arbitrary names defined by the user. They are used in other configuration properties as well as with the SRM command line tools to refer to the clusters added for replication. For example:
#Kafka cluster aliases
clusters = primary, backup
Connection information is added with the bootstrap.servers property. You add connection information by prepending the bootstrap.servers property with a cluster alias and adding the address of the Kafka broker as the value. When configuring connection information, add each cluster to a new line. If a cluster has multiple hosts, add them to the same line but delimit them with commas. For example:
#Kafka broker addresses
primary.bootstrap.servers = primary-cluster1.vpc.example.com:9092, primary-cluster2.vpc.example.com:9092, ...
backup.bootstrap.servers = backup-cluster1.vpc.example.com:9092, backup-cluster1.vpc.example.com:9092, ...
Cluster replications can be set up and enabled as follows:
primary->backup.enabled = true
backup->primary.enabled = true
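Putting the pieces together, a minimal srm.properties combining the fragments above might look like this (the host names are the same placeholders used above):

clusters = primary, backup
primary.bootstrap.servers = primary-cluster1.vpc.example.com:9092
backup.bootstrap.servers = backup-cluster1.vpc.example.com:9092
primary->backup.enabled = true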
The default
srm.properties shipped with SRM contains examples for cluster
aliases, connection information, and
cluster->cluster replications. In
addition, it also contains a number of other pre-set properties. These properties however
are not required for SRM to function, they are included to give users a basic example of how
the configuration file can be set up. Cloudera recommends that you study the example
configuration provided in Configuration Examples to find out more about how SRM
should be set up for different replication scenarios. | https://docs.cloudera.com/srm/1.0.0/configuration/topics/srm-configuration-overview.html | 2020-05-25T14:36:51 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.cloudera.com |
Documentation Updates for the Period Ending August 13, 2016
Recently Edited & Reviewed Docs
The following documents were edited by request, as part of the normal doc review cycle, or as a result of updates to how the Fastly web interface and API operate:
- API Authentication
- Authenticating before returning a request
- Enabling URL token validation
- Paying your bill
Docs for Fastly's Next Web Interface
As part of the features being created and updated for Fastly's next web interface, we've added or significantly updated the following guides:
- About the web interface controls
- Curl and other caching verification methods
- Enabling and disabling two-factor authentication
- Glossary of terms
- Submitting support requests
Our documentation archive contains PDF snapshots of docs.fastly.com site content as of the above date. Previous updates can be found in the archive as well. | https://docs.fastly.com/changes/2016/08/13/changes | 2020-05-25T15:24:11 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.fastly.com |
How to Update Layers Child Themes
Layers Child Themes are best updated manually unless the author has provided you with an auto-update plugin that will not interfere with the Layers parent theme.
Get Notified
If you purchased your Child Theme on ThemeForest, ensure your email notifications are setup so you are emailed when a new version is uploaded. You may also subscribe to our Updates RSS feed here to get notified on updates of Layers and products created by Obox.
Get Access to Updates
If you have purchased a Child Theme created by Obox, redeem your purchase code on layerswp.com to gain access to updates, special bundles and discounts, and dedicated support. You can view our list of products here.
To access updates from our site, login and click the Download button under the item you want to save the updated zip file to your desktop.
If you have purchased your Child Theme from an independent website or author on Themeforest, check with the author for update info. Examples of independently authored themes are:
Update Layers First
- Go to→ and activate the main Layers theme.
- Go to Layers → and verify you have the Layers Updater Plugin installed. Refer to How to Update Layers
- Go to→ from your main WordPress admin menu. If an update for Layers is available, you will see it listed in the Themes section. Apply any updates needed.
Reinstall via WordPress
Often the quickest route. Note this will overwrite any modifications you may have made directly to child theme files.
- Go to→ and ensure the main Layers theme is activated.
- Click on the child theme’s thumbnail and click Delete in the lower-right corner. Confirm deletion. Don’t worry, this does not affect your content, presets or widgets!
- Install the Child Theme using the updated zip and Activate it.
Update Manually via FTP (advanced)
If you need to update outside of WordPress for any reason, you can do so by overwritin the child theme folder with the new one via FTP. Your host provides you with FTP file access to your webspace via a free program such as FileZilla, or through your hosting control panel. If this option sounds daunting, see the Reinstall method above.
- Download your updated Child Theme package and unzip it to your desktop. You should have a theme folder (ex. layers-coffee) containing a style.css and other theme content.
- If downloading from themeforest, open the themename-package folder to find the updated theme zip (themeforest).
- Connect to your site via FTP
- Upload the child theme’s themename folder you unzipped in step 2 to wp-content/themes
- You should be asked to confirm overwriting of this folder and its contents. Confirm the overwrite.
| https://docs.layerswp.com/doc/how-to-update-layers-child-themes/ | 2020-05-25T13:58:55 | CC-MAIN-2020-24 | 1590347388758.12 | [array(['https://refer.wordpress.com/wp-content/uploads/2018/02/leaderboard-light.png',
'Jetpack Jetpack'], dtype=object) ] | docs.layerswp.com |
Application Management
Application Management
The Application Management API provides a way to manage Glue42 Enterprise applications. It offers abstractions for:
- Application - a program as a logical entity, registered in Glue42 Enterprise with some metadata (name, description, icon, etc.) and with all the configuration needed to spawn one or more instances of it. The Application Management API provides facilities for retrieving application metadata and for detecting when an application is started.
On how to define and configure an application, see the Configuration section of the documentation.
- Instance - a running copy of an application. The Application Management API provides facilities for starting/stopping application instances and tracking application related events.
Application Stores
Glue42 Enterprise can obtain application configurations from a path to a local app configurations folder, as well as from a remote REST service. The settings for defining application configuration stores can be edited in the
system.json file of Glue42 Enterprise, located in
%LocalAppData%\Tick42\GlueDesktop\config.
In the standard Glue42 Enterprise deployment model, application definitions are not stored locally on the user machine but are served remotely. If Glue42 Enterprise is configured to use a remote FDC3 App Directory compatible application store, it will poll it periodically and discover new application definitions. The store implementation is usually connected to an entitlement system based on which different users can have different applications or versions of the same application. In effect, Glue42 Enterprise lets users run multiple versions of the same application simultaneously and allows for seamless forward/backward application rolling.
Local Path App Stores
If you want to add an app store, add an object like the one below in the
appStores array property:
"appStores": [ { "type": "path", "details": { "path": "path to a folder with app configurations" } }, { "type": "path", "details": { "path": "path to another folder with app configurations" } } ]
REST Service App Stores
Application configurations can also be hosted on a server and obtained from a REST service.
For a reference implementation of a remote application configurations store, see our Node.js Application and Layout REST Server Example that implements the FDC3 App directory and is compatible with Glue42 Enterprise. This basic implementation does not take the user into account and returns the same set of data for all requests. For instructions on running the sample server on your machine, see the
README.md in the repository.
For a .NET implementation of a remote application configurations store, see our .NET Application and Layout REST Server Example.
If your Glue42 Enterprise copy is not configured to retrieve its configuration from a remote source, you will need to edit the
system.json file (located in the
%LOCALAPPDATA%\Tick42\GlueDesktop\config directory).
Connecting to the REST Service
To configure a connection to the REST service providing the application store, you only need to add a new entry to the
appStores top-level key:
"appStores": [ { "type": "rest", "details": { "url": "", "auth": "no-auth", "pollInterval": 30000, "enablePersistentCache": true, "cacheFolder": "%LocalAppData%/Tick42/UserData/%GLUE-ENV%-%GLUE-REGION%/gcsCache/" } } ]
The only required properties are
type, which should be set to
rest, and
url, which is the address of the remote application store. You can also set the authentication, polling interval, cache persistence and cache folder.
auth - authentication configuration;
pollInterval - interval at which to poll the REST service for updates;
enablePersistentCache - whether to cache and persist the layouts locally (e.g., in case of connection interruptions);
cacheFolder- where to keep the persisted layout files; | https://docs.glue42.com/glue42-concepts/application-management/overview/index.html | 2020-05-25T15:54:05 | CC-MAIN-2020-24 | 1590347388758.12 | [array(['../../../images/app-management/app-management.gif',
'App Management'], dtype=object)
array(['../../../images/configuration-stores/app-stores.png',
'App Stores'], dtype=object) ] | docs.glue42.com |
Updated Windows Azure Training Kit
Microsoft released the latest version of the Windows Azure platform training kit. The kit is now in 2 versions, one for vs2008 developers and one for 2010 developers. You can download the kit from.
What’s New:
- "Asynchronous Workload Handling"
- Added a new exercise to the "Deploying Applications in Windows Azure" hands-on lab to show how to use the new tools to directly deploy from Visual Studio 2010.
- Added a new exercise to the "Introduction to the AppFabric Service Bus" hands-on lab to show how to connect a WCF Service in IIS 7.5 to the Service Bus
- Updated the "Introduction to AppFabric Service Bus" hands-on lab based on feedback and split the lab into 2 parts
- All of the presentations have also been updated and refactored to provide content for a 3 day training workshop.
- Updated the training kit navigation pages to include a 3 day agenda, a change log, and an improved setup process for hands-on labs.
Join.
To Get these Great Benefits Visit this Link: Microsoft Platform Ready | https://docs.microsoft.com/en-us/archive/blogs/usisvde/updated-windows-azure-training-kit | 2020-05-25T15:22:52 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.microsoft.com |
Assigning values from process results
You can set a field to the stdout value that results from running a specified process. The process used to set the field can be located on one of the following:
- The client system (active links only).
- The server system on which a BMC Remedy AR System server has been installed.
Warning
If the process runs on the server, it uses the permissions of the user who started the BMC Remedy AR System server. If the process runs on the client, it uses the permissions of the user who started the mid tier. This can have security implications for your system.
The syntax identifies where the process that you want to run is located.
For active links, you can run a process in the following ways:
- On the client computer — To access a process located on the same computer as the client, use the following syntax:
$PROCESS$ <processToRun>
- On the current BMC Remedy AR System server — To run a process on the current BMC Remedy AR System server and set a field in a form that resides on that server, use the following syntax:
$PROCESS$ @@:<processToRun>
- On any specific BMC Remedy AR System server — To run a process located on a specific BMC Remedy AR System server and set a field located in a form that resides on the current BMC Remedy AR System server, use the following syntax:
$PROCESS$ @<ARSserver:processToRun>
where ARSserver is the name of a specific BMC Remedy AR System server where the process runs.
For filters or escalations, the syntax for loading the return of a process is as follows:
$PROCESS$ <processToRun>
The $PROCESS$ tag indicates that all text that follows is a command line. The command line can include substitution parameters from the current screen to enable values to be placed into the command line before it is executed. The command cannot exceed 4096 bytes after the substitution parameters are expanded. The actual maximum length is limited by the operating system in use with BMC Remedy AR System server. Select substitution parameters (and the $PROCESS$ string) from the Value list.
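For example, a Set Fields value might run a hypothetical lookup script on the current server and pass in a field value; the script path and field name below are purely illustrative:

$PROCESS$ @@:/opt/remedy/scripts/lookup_owner.sh "$Asset ID$"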
For a list of available $PROCESS$ commands, see Process commands.
When the action is executed:
- The specified command line is executed.
- The calling program waits for the process to be completed.
- The program reads and processes all data returned to stdout according to the exit status code that the process returns.
- If the process returns an exit status code of 0, the returned data is used as the value for the field.
The data is expected in text format and is converted, as needed, to match the data type of the target field. If the process returns a code other than 0, it is assumed that there was an error and the process failed. In this case, the returned value is treated as the text of an error message and is returned to the user.
If the process is located on a server, activity for that server thread is blocked until the process is completed or the process interval is exceeded. If the process has timed out, the server stops its blocking action but does not stop the process. However, the server ignores any process response after the time-out. You can use the Timeouts tab of the AR System Administration: Server Information form to configure the process interval so that the server continues processing other tasks, even if the requested process response has not been received. For more information, see the timeouts and configuration information in Configuring AR System servers.
For active links, when you design an active link that loads field values from a process that is run on the client, be aware of the hardware platforms and operating systems that your clients might be using. The process that you are specifying might not be available on all platforms and operating systems. If your users run the client tools on more than one type of platform or operating system, you can build a qualification for the active link by using the $HARDWARE$ and $OS$ keywords to verify that the client is running on an appropriate platform and operating system at the time the active link executes. See Using buttons and menu bar items to execute active links for more information.
When assigning values from process results, remember the following tips:
- Adjust your command syntax appropriately for the platform on which your server is running and include the explicit path to the command; for example, /home/jim/bin/command. In a Windows environment, you also must specify the drive; for example, d:\home\jim\bincommand.bat.
- On a Windows server, you can only run a process that runs in a console (such as a .bat script or runmacro.exe ).
- In a UNIX environment, the process runs under a Bourne shell.
- Use double quotation marks around substituted fields when the values might contain spaces or other special characters; for example, /bin/cmd "$ field$".
- Substituted field values that contain hard returns or other special characters can have unexpected results. | https://docs.bmc.com/docs/ars1805/assigning-values-from-process-results-804715578.html | 2020-05-25T15:22:21 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.bmc.com |
Access point for the People custom configuration
The custom configuration for people is accessed from the Foundation > People expandable command list on the Custom tab of the Application Administration Console. Remedy administrators can access this application setting.
The following figure shows the access point.
Tip
Carefully read the instructions for configuring people in the rest of this section before using the configuration commands in the Foundation > People expandable list.
Access point for the People custom configuration
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/itsm1805/access-point-for-the-people-custom-configuration-804709462.html | 2020-05-25T15:38:25 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.bmc.com |
We're improving your VCL documentation experience! Check out the future home of our VCL reference at the Fastly Developer Hub.
Fastly VCL reference. Fastly has included a number of extensions to VCL that won't be covered by any other documentation.
Reference
Functions
Functions available in Fastly VCL.
Variables
VCL variables supported by Fastly.
Local variables
Fastly VCL supports variables for storing temporary values during request processing.
Operators
Fastly VCL provides various arithmetic and conditional operators.
Types
Fastly VCL is a statically typed language. Several types are available.
Directors
Fastly's directors contain a list of backends to direct requests to.
Rounding modes
Fastly VCL provides access to rounding modes by way of independent functions for rounding values.
VCL Snippets
About VCL Snippets
VCL Snippets are short blocks of VCL logic that can be included directly in your service configurations.
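For a sense of scale, a snippet body is typically only a few lines of VCL; for example, a snippet destined for vcl_recv might tag API traffic (illustrative only):

if (req.url.path ~ "^/api/") {
  set req.http.X-Api-Request = "1";
}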
Using dynamic VCL Snippets
Dynamic VCL Snippets are versionless sections of VCL logic that can be inserted into your service configuration without requiring custom VCL. API only.
Using regular VCL Snippets
Regular VCL Snippets are versioned sections of VCL logic that can be inserted into your service configuration without requiring custom VCL.
Custom VCL
Creating custom VCL
Create your own Varnish Configuration Language (VCL) files with specialized configurations.
Uploading custom VCL
Upload custom VCL files to use custom VCL and Fastly VCL together at the same time.
Previewing and testing VCL
Preview and test custom VCL prior to activating a new version of a service.
Categories
Content negotiation
Functions for selecting a response from common content negotiation request headers.
Cryptographic
Functions for cryptographic- and hashing-related purposes.
Date and time
Variables and functions that provide flexibility when dealing with dates and times.
Edge Side Includes (ESI)
Variables that allow you to track and control requests with ESI.
Floating point classifications
Floating point classification functions.
Geolocation
Variables that provide the ability to search a geolocation database for a given host or IP address.
Math constants and limits
Features that support various math constants and limits.
Math rounding
Rounding of numbers.
Math trigonometric
Trigonometric functions.
Miscellaneous
Miscellaneous features that don't easily fit into other categories.
Query string manipulation
Functions for query string manipulation based on Dridi Boukelmoune's vmod-querystring for Varnish.
Randomness
Functions that support the insertion of random strings, content cookies, and decisions into requests.
Segmented Caching
Variables related to controlling range requests via Segmented Caching.
Server
Variables relating to the server receiving the request.
Size
Variables that give more insight into what happened in a request.
String manipulation
Functions manipulating strings of arbitrary text.
Table
Functions that provide a means to declare a constant dictionary and to efficiently look up values in the dictionary.
TCP info
Variables that provide TCP information.
TLS and HTTP
Variables that expose information about the TLS and HTTP attributes of a request.
UUID
Functions that provide interfaces for generating and validating unique identifiers. | https://docs.fastly.com/vcl | 2020-05-25T15:27:01 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.fastly.com |
We don't capture, store or use any team room session video or voice transmitted data.
Enhanced security controls allow teams to seamlessly manage user and team room access permissions. For more information on managing users refer to Add/Delete Users/Team Members.
Team members are managed via Settings=> Manage Users page within the EmuCast wen app. Team rooms are managed via the EmuCast client app. | https://docs.emucast.com/security-and-privacy | 2020-05-25T15:00:17 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.emucast.com |