content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
To show that the document we inserted in the previous step is stored in the database, we can do a simple MongoCollection::findOne() operation to get a single document from the collection. This method is useful when there is only one document matching the query or you are only interested in one result.
<?php
$connection = new MongoClient();
$collection = $connection->database->collectionName;
$document = $collection->findOne();
var_dump( $document );
?>
The above example will output:
array(6) { ["_id"]=> object(MongoId)#8 (1) { ["$id"]=> string(24) "4e2995576803fab768000000" } ["name"]=> string(7) "MongoDB" ["type"]=> string(8) "database" ["count"]=> int(1) ["info"]=> array(2) { ["x"]=> int(203) ["y"]=> int(102) } ["versions"]=> array(3) { [0]=> string(5) "0.9.7" [1]=> string(5) "0.9.8" [2]=> string(5) "0.9.9" } }
Note that there is an _id field that has been added automatically to your document. _id is the "primary key" field. If your document does not specify one, the driver will add one automatically.
If you specify your own _id field, it must be unique to the collection. See the example here:
<?php
$connection = new MongoClient();
$db = $connection->database;
$db->foo->insert(array("_id" => 1));
// this will throw an exception
$db->foo->insert(array("_id" => 1));
// this is fine, as it is a different collection
$db->bar->insert(array("_id" => 1));
?>
By default the driver will ensure the server has acknowledged the write before returning. You can optionally turn this behaviour off by passing array("w" => 0) as the second argument. This means that the driver should not wait for the database to acknowledge the write and would not throw the duplicate _id exception.
MongoCollection::findOne() for more information about finding data.
MongoId goes into more detail on unique ids.
The writes section covers writes in more depth, and the Write Concerns chapter goes into details of the various Write Concern options. | http://docs.php.net/manual/en/mongo.tutorial.findone.php | 2015-11-25T00:20:33 | CC-MAIN-2015-48 | 1448398444138.33 | [] | docs.php.net |
JCacheStorage::getAll
From Joomla! Documentation
Revision as ofAll
Description
Get all cached data.
Description:JCacheStorage::getAll [Edit Descripton]
public function getAll ()
- Returns mixed Boolean false on failure or a cached data object
- Defined on line 150 of libraries/joomla/cache/storage.php
- Since
See also
JCacheStorage::getAll source code on BitBucket
Class JCacheStorage
Subpackage Cache
- Other versions of JCacheStorage::getAll
SeeAlso:JCacheStorage::getAll [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JCacheStorage::getAll&oldid=94854 | 2015-11-25T01:15:03 | CC-MAIN-2015-48 | 1448398444138.33 | [] | docs.joomla.org |
Information for "Templates" Basic information Display titleCategory:Templates Default sort keyTemplates Page length (in bytes)914 Page ID227 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page0 Category information Number of pages118 Number of subcategories14 Number of files0 Page protection EditAllow all users MoveAllow all users Edit history Page creatorChris Davenport (Talk | contribs) Date of page creation17:44, 14 January 2008 Latest editorTom Hutchison (Talk | contribs) Date of latest edit08:04, 16 December 2014 Total number of edits9 Total number of distinct authors5 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Template:Dablink (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Category:Templates&action=info | 2015-11-25T00:11:25 | CC-MAIN-2015-48 | 1448398444138.33 | [] | docs.joomla.org |
Revision history of "JDatabaseMySQLi:: construct/1.6"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 13:09, 3 May 2013 Wilsonge (Talk | contribs) deleted page JDatabaseMySQLi:: construct/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JDatabaseMySQLi::__construct== ===Description=== Database object constructor. {{Description:JDatabaseMySQLi::__construct}} <span class=..." (and the only contributor was "Doxiki2")) | https://docs.joomla.org/index.php?title=JDatabaseMySQLi::_construct/1.6&action=history | 2015-11-25T00:48:04 | CC-MAIN-2015-48 | 1448398444138.33 | [] | docs.joomla.org |
Frequency-swept cosine generator.
In the following, ‘Hz’ should be interpreted as ‘cycles per time unit’; there is no assumption here that the time unit is one second. The important distinction is that the units of rotation are cycles, not radians.
See also
scipy.signal.waveforms.sweep_poly
Notes
There are four options for the method. The following formulas give the instantaneous frequency (in Hz) of the signal generated by chirp(). For convenience, the shorter names shown below may also be used.
linear, lin, li:
f(t) = f0 + (f1 - f0) * t / t1
quadratic, quad, q:
The graph of the frequency f(t) is a parabola through (0, f0) and (t1, f1). By default, the vertex of the parabola is at (0, f0). If vertex_zero is False, then the vertex is at (t1, f1). The formula is:
if vertex_zero is True:f(t) = f0 + (f1 - f0) * t**2 / t1**2
else:f(t) = f1 - (f1 - f0) * (t1 - t)**2 / t1**2
To use a more general quadratic function, or an arbitrary polynomial, use the function scipy.signal.waveforms.sweep_poly.
logarithmic, log, lo:
f(t) = f0 * (f1/f0)**(t/t1)
f0 and f1 must be nonzero and have the same sign.
This signal is also known as a geometric or exponential chirp.
hyperbolic, hyp:
f(t) = f0*f1*t1 / ((f0 - f1)*t + f1*t1)
f1 must be positive, and f0 must be greater than f1. | http://docs.scipy.org/doc/scipy-0.11.0/reference/generated/scipy.signal.chirp.html | 2015-11-25T00:13:13 | CC-MAIN-2015-48 | 1448398444138.33 | [] | docs.scipy.org |
[This is preliminary documentation and is subject to change.]
On this page we'll outline some tips and tricks that could come in handy when integrating with the ADM
A common use-case is that you'll want to retrieve all objects of a certain type from an AnalysisModel
This can be achieved by using LINQ queries, because the model implements IEnumerable and as such, can be treated as a collection.
AnalysisModel model = new AnalysisModel(); // Get all nodes from the model StructuralPointConnection[] nodesFromModel = model.OfType<StructuralPointConnection>().ToArray(); // Find a material by name, will be NULL when not found StructuralMaterial material = model.OfType<StructuralMaterial>().FirstOrDefault(x => x.Name == "MyNodeName"); // Creating a lookup dictionary for all beams in a model by name Dictionary<string, StructuralCurveMember> beamsLookup = model.OfType<StructuralCurveMember>().ToDictionary(x => x.Name); | https://docs.calatrava.scia.net/html/f7f7f755-b592-4645-a497-02494df62e9b.htm | 2022-08-07T19:50:46 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.calatrava.scia.net |
ScyllaDB was originally designed, following Apache Cassandra, to use gossip for topology and schema updates and the Paxos consensus algorithm for strong data consistency (LWT). To achieve stronger consistency without performance penalty, ScyllaDB 5.0 is turning to Raft - a consensus algorithm designed as an alternative to both gossip and Paxos.
Raft is a consensus algorithm that implements a distributed, consistent, replicated log across members (nodes). Raft implements consensus by first electing a distinguished leader, then giving the leader complete responsibility for managing the replicated log. The leader accepts log entries from clients, replicates them on other servers, and tells servers when it is safe to apply log entries to their state machines.
Raft uses a heartbeat mechanism to trigger a leader election. All servers start as followers and remain in the follower state as long as they receive valid RPCs (heartbeat) from a leader or candidate. A leader sends periodic heartbeats to all followers to maintain his authority (leadership). Suppose a follower receives no communication over a period called the election timeout. In that case, it assumes no viable leader and begins an election to choose a new leader.
Leader selection is described in detail in the raft paper.
Scylla 5.0 uses Raft to maintain schema updates in every node (see below). Any schema update, like ALTER, CREATE or DROP TABLE, is first committed as an entry in the replicated Raft log, and, once stored on most replicas, applied to all nodes in the same order, even in the face of a node or network failures.
Following Scylla 5.x releases will use Raft to guarantee consistent topology updates similarly.
Raft requires at least a quorum of nodes in a cluster to be available. If multiple nodes fail and the quorum is lost, the cluster is unavailable for schema updates. See Handling Failures for information on how to handle failures.
Note that when you have a two-DC cluster with the same number of nodes in each DC, the cluster will lose the quorum if one of the DCs is down. We recommend configuring three DCs per cluster to ensure that the cluster remains available and operational when one DC is down.
Note
In ScyllaDB 5.0:
Raft is an experimental feature.
Raft implementation only covers safe schema changes. See Safe Schema Changes with Raft.
If you are creating a new cluster, add
raft to the list of experimental features in your
scylla.yaml file:
experimental_features: - raft
If you upgrade to ScyllaDB 5.0 from an earlier version, perform a rolling restart
updating the
scylla.yaml file for each node in the cluster to enable the experimental Raft feature:
experimental_features: - raft
When all the nodes in the cluster and updated and restarted, the cluster will begin to use Raft for schema changes.
Warning
Once enabled, Raft cannot be disabled on your cluster. The cluster nodes will fail to restart if you remove the Raft feature.
You can verify that Raft is enabled on your cluster in one of the following ways:
Retrieve the list of supported features by running:
cqlsh> SELECT supported_features FROM system.local;
With Raft enabled, the list of supported features in the output includes
SUPPORTS_RAFT_CLUSTER_MANAGEMENT. For example:
supported_features ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CDC,CDC_GENERATIONS_V2,COMPUTED_COLUMNS,CORRECT_COUNTER_ORDER,CORRECT_IDX_TOKEN_IN_SECONDARY_INDEX,CORRECT_NON_COMPOUND_RANGE_TOMBSTONES,CORRECT_STATIC_COMPACT_IN_MC,COUNTERS,DIGEST_FOR_NULL_VALUES,DIGEST_INSENSITIVE_TO_EXPIRY,DIGEST_MULTIPARTITION_READ,HINTED_HANDOFF_SEPARATE_CONNECTION,INDEXES,LARGE_PARTITIONS,LA_SSTABLE_FORMAT,LWT,MATERIALIZED_VIEWS,MC_SSTABLE_FORMAT,MD_SSTABLE_FORMAT,ME_SSTABLE_FORMAT,NONFROZEN_UDTS,PARALLELIZED_AGGREGATION,PER_TABLE_CACHING,PER_TABLE_PARTITIONERS,RANGE_SCAN_DATA_VARIANT,RANGE_TOMBSTONES,ROLES,ROW_LEVEL_REPAIR,SCHEMA_TABLES_V3,SEPARATE_PAGE_SIZE_AND_SAFETY_LIMIT,STREAM_WITH_RPC_STREAM,SUPPORTS_RAFT_CLUSTER_MANAGEMENT,TOMBSTONE_GC_OPTIONS,TRUNCATION_TABLE,UDA,UNBOUNDED_RANGE_TOMBSTONES,VIEW_VIRTUAL_COLUMNS,WRITE_FAILURE_REPLY,XXHASH
Retrieve the list of experimental features by running:
cqlsh> SELECT value FROM system.config WHERE name = 'experimental_features'
With Raft enabled, the list of experimental features in the output includes
raft.
In ScyllaDB, schema is based on Data Definition Language (DDL). In earlier ScyllaDB versions, schema changes were tracked via the gossip protocol, which might lead to schema conflicts if the updates are happening concurrently.
Implementing Raft eliminates schema conflicts and allows full automation of DDL changes under any conditions, as long as a quorum of nodes in the cluster is available. The following examples illustrate how Raft provides the solution to problems with schema changes.
A network partition may lead to a split-brain case, where each subset of nodes has a different version of the schema.
With Raft, after a network split, the majority of the cluster can continue performing schema changes, while the minority needs to wait until it can rejoin the majority. Data manipulation statements on the minority can continue unaffected, provided the quorum requirement is satisfied.
Two or more conflicting schema updates are happening at the same time. For example, two different columns with the same definition are simultaneously added to the cluster. There is no effective way to resolve the conflict - the cluster will employ the schema with the most recent timestamp, but changes related to the shadowed table will be lost.
With Raft, concurrent schema changes are safe.
In summary, Raft makes schema changes safe, but it requires that a quorum of nodes in the cluster is available.
Raft requires a quorum of nodes in a cluster to be available. If one or more nodes are down, but the quorum is live, reads, writes, and schema udpates proceed unaffected. When the node that was down is up again, it first contacts the cluster to fetch the latest schema and then starts serving queries.
The following examples show the recovery actions depending on the number of nodes and DCs in your cluster.
The Raft Consensus Algorithm
Achieving NoSQL Database Consistency with Raft in ScyllaDB - A tech talk by Konstantin Osipov
Making Schema Changes Safe with Raft - A Scylla Summit talk by Konstantin Osipov (register for access)
The Future of Consensus in ScyllaDB 5.0 and Beyond - A Scylla Summit talk by Tomasz Grabiec (register for access) | https://docs.scylladb.com/stable/architecture/raft.html | 2022-08-07T18:31:36 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.scylladb.com |
Note
This feature is only available with Scylla Enterprise. If you are using Scylla Open Source, this feature will not be available.
Scylla Enterprise customers can manage and authorize users’ privileges via an LDAP server. LDAP is an open, vendor-neutral, industry-standard protocol for accessing and maintaining distributed user access control over a standard IP network. If your users are already stored in an LDAP directory, you can now use the same LDAP server to regulate their roles in Scylla.
New in version Scylla: Enterprise 2021.1.2
Scylla can use LDAP to manage which roles a user has. This behavior is triggered by setting the
role_manager entry in scylla.yaml to com.scylladb.auth.LDAPRoleManager.
When this role manager is chosen, Scylla forbids
GRANT and
REVOKE role statements (CQL commands) as all users get their roles from the contents in the LDAP directory.
Note
Scylla still allows
GRANT and
REVOKE permission statements, such as
GRANT permission ON resource TO role, which are handled by the authorizer, not role manager.
This allows permissions to be granted to and revoked from LDAP-managed roles. In addition, if you have nested Scylla roles, LDAP authorization does not allow them. A role cannot be a member of another role.
In LDAP only login users can be members of a role.
When LDAP Authorization is enabled and a Scylla user authenticates to Scylla, a query is sent to the LDAP server, whose response sets the user’s roles for that login session. The user keeps the granted roles until logout; any subsequent changes to the LDAP directory are only effective at the user’s next login to Scylla.
The precise form of the LDAP query is configured by Scylla administrator in the scylla.yaml configuration file.
This configuration takes the form of a query template which is defined in the scylla.yaml configuration file using the parameter
ldap_url_template.
The value of
ldap_url_template parameter should contain a valid LDAP URL (e.g., as returned by the ldapurl utility from OpenLDAP) representing an LDAP query that returns entries for all the user’s roles.
Scylla will replace the text
{USER} in the URL with the user’s Scylla username before querying LDAP.
Before you begin On your LDAP server, create LDAP directory entries for Scylla users and roles.
Workflow
Ensure Scylla has the same users and roles as listed in the LDAP directory.
Enable LDAP as the role manager in Scylla
Make Scylla reload the configuration (SIGHUP or restart)
Use this example to create a query that will retrieve from your LDAP server the information you need to create a template.
For example, this template URL will query LDAP server at
localhost:5000 for all entries under
base_dn that list the user’s username as one of their
uniqueMember attribute values:
ldap://localhost:5000/base_dn?cn?sub?(uniqueMember={USER})
After Scylla queries LDAP and obtains the resulting entries, it looks for a particular attribute in each entry and uses that attribute’s value as a Scylla role this user will have.
The name of this attribute can be configured in scylla.yaml by setting the
ldap_attr_role parameter there.
When the LDAP query returns multiple entries, multiple roles will be granted to the user. Each role must already exist in Scylla, created via the CREATE ROLE CQL command beforehand.
For example, if the LDAP query returns the following results:
# extended LDIF # # LDAPv3 # role1, example.com dn: cn=role1,dc=example,dc=com objectClass: groupOfUniqueNames cn: role1 scyllaName: sn1 uniqueMember: uid=jsmith,ou=People,dc=example,dc=com uniqueMember: uid=cassandra,ou=People,dc=example,dc=com # role2, example.com dn: cn=role2,dc=example,dc=com objectClass: groupOfUniqueNames cn: role2 scyllaName: sn2 uniqueMember: uid=cassandra,ou=People,dc=example,dc=com # role3, example.com dn: cn=role3,dc=example,dc=com objectClass: groupOfUniqueNames cn: role3 uniqueMember: uid=jdoe,ou=People,dc=example,dc=com
If
ldap_attr_role is set to cn, then the resulting role set will be { role1, role2, role3 } (assuming, of course, that these roles already exist in Scylla).
However, if
ldap_attr_role is set to scyllaName, then the resulting role set will be { sn1, sn2 }.
If an LDAP entry does not have the
ldap_attr_role attribute, it is simply ignored.
Before Scylla attempts to query the LDAP server, it first performs an LDAP bind operation, to gain access to the directory information.
Scylla executes a simple bind with credentials configured in scylla.yaml.
The parameters
ldap_bind_dn and
ldap_bind_passwd must contain, respectively, the distinguished name and password that Scylla uses to perform the simple bind.
Enables Scylla to use LDAP Authorization. LDAP will manage the roles, not Scylla. See Note above
Open the scylla.yaml file in an editor. The file is located in /etc/scylla/scylla.yaml by default.
Edit the
role_manager section. Change the entry to
com.scylladb.auth.LDAPRoleManager. If this section does not exist, add it to the file.
Configure the parameters according to your organization’s IT and Security Policy.
role_manager: "com.scylladb.auth.LDAPRoleManager" ldap_url_template: "ldap://localhost:123/dc=example,dc=com?cn?sub?(uniqueMember=uid={USER},ou=People,dc=example,dc=com)" ldap_attr_role: "cn" ldap_bind_dn: "cn=root,dc=example,dc=com" ldap_bind_passwd: "secret"
Restart the scylla-server service or kill the scylla process.
sudo systemctl restart scylla-server
docker exec -it some-scylla supervisorctl restart scylla
(without restarting some-scylla container)
Open the scylla.yaml file in an editor. The file is located in /etc/scylla/scylla.yaml by default.
Comment out or delete the role_manager section.
Restart the scylla-server service or kill the scylla process.
sudo systemctl restart scylla-server
docker exec -it some-scylla supervisorctl restart scylla
(without restarting some-scylla container)
Before configuring Scylla, it is a good idea to validate the query template by manually ensuring that the LDAP server returns the correct entries when queried. This can be accomplished by using an LDAP search tool such as ldapsearch.
If manual querying does not yield correct results, then Scylla cannot see correct results, either. Try to adjust ldapsearch parameters until it returns the correct role entries for one user.
Once that works as expected, you can use the ldapurl utility to transform the parameters into a URL providing a basis for the ldap_url_template.
Tip
Always provide an explicit
-s flag to both
ldapsearch and
ldapurl; the default
-s value differs among the two tools.
Remember to replace the specific user name with
{USER} in the URL template.
You can turn on debug logging in the LDAP role manager by passing the following argument to the Scylla executable:
--logger-log-level ldap_role_manager=debug.
This will make Scylla log useful additional details about the LDAP responses it receives.
If ldapsearch yields expected results but Scylla queries do not, first check the host and port parts of the URL template and make sure both ldapsearch and Scylla are actually querying the same LDAP server. Then check the LDAP logs and see if there are any subtle differences between the logged queries of ldapsearch and Scylla. | https://docs.scylladb.com/stable/operating-scylla/security/ldap-authorization.html | 2022-08-07T18:20:13 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.scylladb.com |
Purpose
The Abort parcel terminates processing of a specific request. It is created by CLIv2 in response to an abort request or to a logoff of a session in which a request is active.
Usage Notes
The Abort parcel is sent in a simplex request asynchronously with the start request carrying the Teradata SQL request to be aborted. If the database is processing the specified request, the request is aborted and the Failure parcel is returned. If the request had been completed upon receipt of the abort, the abort is ignored.
Parcel Data
The following table lists field information for the Abort parcel. | https://docs.teradata.com/r/Teradata-Call-Level-Interface-Version-2-Reference-for-Workstation-Attached-Systems/October-2021/Parcels/Parcel-Descriptions/Abort | 2022-08-07T19:15:35 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.teradata.com |
Drops the definition for an empty database from the Data Dictionary.
You must drop all the objects contained by that database before you can drop the database itself.
The drop operation verifies that the database is empty, verifies that the database does not own any other databases or users, drops the database, and adds the PERM and TEMP space that the drop makes available to that of the immediate owner database or user.
After a database is dropped, you cannot recover it by using the Dump and Restore utility unless it is restored.
To delete objects from a database, use the DELETE DATABASE statement. See DELETE DATABASE. To delete objects from a user, use the DELETE USER statement. See DELETE USER.
Required Privileges
You must have the DROP DATABASE privilege on the database to be dropped. | https://docs.teradata.com/r/Teradata-VantageTM-SQL-Data-Definition-Language-Syntax-and-Examples/September-2020/Database-Statements/DROP-DATABASE | 2022-08-07T18:23:44 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.teradata.com |
.
Importing the GRE "Smart Cluster" Template
From Genesys Administrator
- Navigate to Application Templates.
- Click Upload Templates (upper right corner).
- Choose the Genesys_Rules_Engine_Application_Cluster_900.apd file.
--synchronization options: | https://docs.genesys.com/Documentation/GRS/latest/Deployment/ImportingGREClusterTemplate | 2022-08-07T18:36:53 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.genesys.com |
SPEND THE TREASURY
The default set to 6 days. The Treasury attempts to spend as many proposals in the queue as it can without running out of funds.
If the Treasury ends a budget period without spending all of its funds, it suffers a burn of a percentage of its funds -- thereby causing deflationary pressure. This percentage is currently 0.2% on Kusari, with the amount currently going to Society rather than being burned.
When a stakeholder wishes to propose a spend from the Treasury, they must reserve a deposit of at least 5% of the proposed spend (see below for variations). This deposit will be slashed if the proposal is rejected, and returned if it Council, and how the funds will be spent is up to their judgment.
Funding the Treasury
The Treasury is funded from different sources:
- Slashing: When a validator is slashed for any reason, the slashed amount is sent to the Treasury with a reward going to the entity that reported the validator (another validator). The reward is taken from the slash amount and varies per offence and number of reporters.
- Transaction fees: A portion of each block's transaction fees goes to the Treasury, with the remainder going to the block author.
- Staking inefficiency: Inflation is designed to be 20% in the first year, and the ideal staking ratio is set at 50%, meaning half of all KSI should be locked in staking. - Any deviation from this ratio will cause a proportional amount of the inflation to go to the Treasury. In other words, if 50% of all KSI are staked, then 100% of the inflation goes to the validators as reward. If the staking rate is greater than or less than 50%, then the validators will receive less, with the remainder going to the Treasury.
Creating a Treasury Proposal
The proposer has to deposit 5% of the requested amount or 0.067 KSI (whichever is higher) as an anti-spam measure. This amount is burned if the proposal is rejected, or refunded otherwise. These values are subject to governance so they may change in the future.
Please note that there is no way for a user to revoke a treasury proposal after it has been submitted. The Council will either accept or reject the proposal, and if the proposal is rejected, the bonded funds are burned.
Announcing the Proposal
To minimize storage on chain, proposals don't contain contextual information. When a user submits a proposal, they will probably need to find an off-chain way to explain the proposal. Most discussion takes place on Discord.
Spreading the word about the proposal's explanation is ultimately up to the proposer - the recommended way is using our official channels like Discord or Telegram.
Creating the Proposal
One way to create the proposal is to use the Substrate Explorer App. From the website, use the Governance tab and select the Treasury, then click on submit proposal and enter the desired amount and recipient.
The system will automatically take the required deposit, picking the higher of the two values mentioned above.
Once created, your proposal will become visible in the Treasury screen and the Council can start voting on it.
Hint
Remember that the proposal has no metadata, so it's up to the proposer to create a description and purpose that the Council could study and base their votes on.
At this point, a Council member can create a motion to accept or to reject the treasury proposal. It is possible that one motion to accept and another motion to reject are both created. The proportions to accept and reject Council proposals vary between accept or reject, and possibly depend on which network the Treasury is implemented.
The threshold for accepting a treasury proposal is at least three-fifths of the Council. On the other hand, the threshold for rejecting a proposal is at least one-half of the Council.
Tipping
Next to the proposals process, a separate system for making tips exists for the Treasury. Tips can be suggested by anyone and are supported by members of the Council. Tips do not have any definite value; the final value of the tip is decided based on the median of all tips issued by the tippers.
Currently, the tippers are the same as the members of the Council. However, being a tipper is not the direct responsibility of the Council, and at some point the Council and the tippers may be different groups of accounts.
A tip will enter a closing phase when more than a half plus one of the tipping group have endorsed a tip. During that timeframe, the other members of the tipping group can still issue their tips, but do not have to. Once the window closes, anyone can call the close_tip extrinsic, and the tip will be paid out.
There are two types of tips: public and tipper-initiated. With public tips, a small bond is required to place them. This bond depends on the tip message length, and a fixed bond constant defined on chain, currently 0.166. Public tips carry a finder's fee of 20% which is paid out from the total amount. Tipper-initiated tips, i.e. tips that a Council member published, do not have a finder's fee or a bond.
To better understand the process a tip goes through until it is paid out, let's consider an example.
Example
Bob has done something great for Kusari. Alice has noticed this and decides to report Bob as deserving a tip from the Treasury. The Council is composed of three members Charlie, Dave, and Eve.
Alice begins the process by issuing the
report_awesome extrinsic. This extrinsic requires two arguments, a reason and the address to tip. Alice submits Bob's address with the reason being a UTF-8 encoded URL to a post on Discord that explains her reasoning for why Bob deserves the tip.
As mentioned above, Alice must also lock up a deposit for making this report. The deposit is the base deposit as set in the chain's parameter list plus the additional deposit per byte contained in the reason. This is why Alice submitted a URL as the reason instead of the explanation directly, it was cheaper for her to do so.
For her trouble, Alice is able to claim the eventual finder's fee if the tip is approved by the tippers.
Since the tipper group is the same as the Council, the Council must now collectively (but also independently) decide on the value of the tip that Bob deserves.
Charlie, Dave, and Eve all review the report and make tips according to their personal valuation of the benefit Bob has provided to Kusari.
Charlie tips 1 KSI. Dave tips 3 KSI. Eve tips 10 KSI.
The tip could have been closed out with only two of the three tippers. Once more than half of the tippers group have issued tip valuations, the countdown to close the tip will begin. In this case, the third tipper issued their tip before the end of the closing period, so all three were able to make their tip valuations known.
Now the actual tip that will be paid out to Bob is the median of these tips, so Bob will be paid out 3 KSI from the Treasury.
In order for Bob to be paid his tip, some account must call the close_tip extrinsic at the end of the closing period for the tip. This extrinsic may be called by anyone.
Bounties Spending
There are practical limits to Council Members curation capabilities when it comes to treasury proposals: Council members likely do not have the expertise to make a proper assessment of the activities described in all proposals. Even if individual Councillors have that expertise, it is highly unlikely that a majority of members are capable in such diverse topics.
Bounties Spending proposals aim to delegate the curation activity of spending proposals to experts called Curators: They can be defined as addresses with agency over a portion of the Treasury with the goal of fixing a bug or vulnerability, developing a strategy, or monitoring a set of tasks related to a specific topic: all for the benefit of the Kusari ecosystem.
A proposer can submit a bounty proposal for the Council to pass, with a curator to be defined later, whose background and expertise is such that they are capable of determining when the task is complete. Curators are selected by the Council after the bounty proposal passes, and need to add an upfront payment to take the position. This deposit can be used to punish them if they act maliciously. However, if they are successful in their task of getting someone to complete the bounty work, they will receive their deposit back and part of the bounty reward.
When submitting the value of the bounty, the proposer includes a reward for curators willing to invest their time and expertise in the task: this amount is included in the total value of the bounty. In this sense, the curator's fee can be defined as the result of subtracting the value paid to the bounty rewardee from the total value of the bounty.
In general terms, curators are expected to have a well-balanced track record related to the issues the bounty tries to resolve: they should be at least knowledgeable on the topics the bounty touches, and show project management skills or experience. These recommendations ensure an effective use of the mechanism. A Bounty Spending is a reward for a specified body of work - or specified set of objectives - that needs to be executed for a predefined treasury amount to be paid out. The responsibility of assigning a payout address once the specified set of objectives is completed is delegated to the curator.
After the Council has activated a bounty, it delegates the work that requires expertise to the curator who gets to close the active bounty. Closing the active bounty enacts a delayed payout to the payout address and a payout of the curator fee. The delay phase allows the Council to act if any issues arise.
To minimize storage on chain in the same way as any proposal, bounties don't contain contextual information. When a user submits a bounty spending proposal, they will probably need to find an off-chain way to explain the proposal (any of the available community forums serve this purpose). We will provide a template that can help as a checklist of all needed information for the Council to make an informed decision.
The bounty has a predetermined duration of 90 days with the possibility of being extended by the curator. Aiming to maintain flexibility on the tasks’ curation, the curator will be able to create sub-bounties for more granularity and allocation in the next iteration of the mechanism.
Creating a Bounty Proposal
Anyone can create a Bounty proposal using Substrate Explorer App: Users are able to submit a proposal on the dedicated Bounty section under Governance. The development of a robust user interface to view and manage bounties in the Substrate Explorer App is still under development and it will serve Council members, Curators and Beneficiaries of the bounties, as well as all users observing the on-chain treasury governance. For now, the help of a Councillor is needed to open a bounty proposal as a motion to be voted.
To submit a bounty, please visit Substrate Explorer App and click on the governance tab in the options bar on the top of the site. After, click on 'Bounties' and find the button '+ Add Bounty' on the upper-right side of the interface. Complete the bounty title, the requested allocation (including curator's fee) and confirm the call.
After this, a Council member will need to assist you to pass the bounty proposal for vote as a motion. You can contact the Council by joining our Discord server and publishing a short description of your bounty, with a link to one of the forums for contextual information.
A bounty can be cancelled by deleting the earmark for a specific treasury amount or be closed if the tasks have been completed. On the opposite side, the 90 days life of a bounty can be extended by amending the expiry block number of the bounty to stay active.
Closing a Bounty
The curator can close the bounty once they approve the completion of its tasks. The curator should make sure to set up the payout address on the active bounty beforehand. Closing the Active bounty enacts a delayed payout to the payout address and a payout of the curator fee.
A bounty can be closed by using the extrinsics tab and selecting the Treasury pallet, then
Award_bounty, making sure the right bounty is to be closed and finally sign the transaction. It is important to note that those who received a reward after the bounty is completed, must claim the specific amount of the payout from the payout address, by calling Claim_bounty after the curator closed the allocation.
What prevents the Treasury from being captured by a majority of the council?
The majority of the Council can decide the outcome of a treasury spend proposal. In an adversarial mindset, we may consider the possibility that the Council may at some point go rogue and attempt to steal all of the treasury funds. It is a possibility that the treasury pot becomes so great, that a large financial incentive would present itself.
For one, the Treasury has deflationary pressure due to the burn that is suffered every spend period. The burn aims to incentivize the complete spend of all treasury funds at every burn period, so ideally the treasury pot doesn't have time to accumulate mass amounts of wealth. However, it is the case that the burn on the Treasury could be so little that it does not matter.
However, it is the case on Kusari that the Council is composed of mainly well-known members of the community. Remember, the Council is voted in by the KSI holders, so they must do some campaigning or otherwise be recognized to earn votes. In the scenario of an attack, the Council members would lose their social credibility. Furthermore, members of the Council are usually externally motivated by the proper operation of the chain. This external motivation is either because they run businesses that depend on the chain, or they have direct financial gain (through their holdings) of the token value remaining steady.
Concretely, there are a couple on-chain methods that resist this kind of attack. One, the Council majority may not be the token majority of the chain. This means that the token majority could vote to replace the Council if they attempted this attack - or even reverse the treasury spend. They would do this through a normal referendum. Two, there are time delays to treasury spends. They are only enacted every spend period. This means that there will be some time to observe this attack is taking place. The time delay then allows chain participants time to respond. The response may take the form of governance measures or - in the most extreme cases a liquidation of their holdings and a migration to a minority fork. However, the possibility of this scenario is quite low.
Further Reading
- Substrate's Treasury Pallet - The Rust implementation of the Treasury (Docs)
Written by Masterdubs & Petar | https://docs.kusari.network/what-to-try/treasury/ | 2022-08-07T18:18:43 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['../assets/treasury-proposal-01.png#center', 'treasury-proposal'],
dtype=object)
array(['../assets/bounty-process.jpg#center', 'bounty-process'],
dtype=object) ] | docs.kusari.network |
Introduction to the Cloud Monitoring Console
The Cloud Monitoring Console (CMC) lets Splunk Cloud Platform administrators view information about the status of your Splunk Cloud Platform deployment. CMC dashboards provide insight into how the following areas of your Splunk Cloud Platform deployment are performing:
- Data ingestion and data quality
- Forwarder connections
- HTTP Event Collection tokens
- Indexing
- Indexer clustering and search head clustering, if applicable
- License usage
- Search
- User behavior
- Workload management
The Cloud Monitoring Console does not store or retain any customer data displayed in the dashboards. Customer data remains local to the customer stack.
Locate the Cloud Monitoring Console
To locate the CMC app in your Splunk Cloud Platform deployment, follow these steps:
- From anywhere in Splunk Web, select Apps.
- Select Cloud Monitoring Console.
On the Apps page that you access through Apps > Managed Apps, the CMC is named splunk_instance_monitoring.
Select the correct documentation version for your deployment
Ensure that you are viewing the correct CMC documentation version for your Splunk Cloud Platform deployment.
To determine your Splunk Cloud Platform deployment version, follow these steps:
- In the CMC app, select Support & Services > About. The CURRENT APPLICATION area at the bottom of the About page shows the app's version and build numbers.
- In this documentation, select the correct version from the Version dropdown menu in the upper right corner.
Set your default time zone
The CMC app displays time-based data in panels, charts, and tables based on the default time zone set for your user profile. To review or reset your current time zone setting, perform the following steps:
- In the CMC app, select your user profile adjacent to Support & Services, then select Preferences.
- In the Preferences page, select Global.
- Specify an option for the Time zone field and select Apply., log in and file a new case using the Splunk Support Portal. Otherwise, contact Splunk Customer Support.
Do not modify any part of a CMC dashboard. Any local changes that you make might Platform documentation site
For more information about features and functionality of Splunk Cloud Platform, see the Splunk Cloud Platform Platform products.! | https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Admin/MonitoringIntro | 2022-08-07T19:18:52 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
A collection of nanite buoy assets that can be used for games!
This project includes everything pictured with all assets, maps, and materials created in the Unreal Engine. Each asset was created for realistic AAA quality visuals, style, and budget.
Special thanks as this project was supported by Epic Games MegaGrants!
NOTE: These assets all are constructed using nanite for high-quality fidelity polycounts.
Art created by Dekogon Studios Artists.
Features:
Scaled to Epic Skeleton: Yes
Collision: Yes, automatically generated and per-poly mix based on the complexity of the asset
Vertex Count: Displayed in Documentation
LODs: None (All Nanite Meshes)
Number of Meshes: 42
Number of Materials and Material Instances: 8
Number of Textures: 24
Supported Development Platforms: Windows
Supported Target Build Platforms: Window/Mac/PS4/Xbox
Documentation and Credits: | https://docs.unrealengine.com/marketplace/ja/product/seaside-docks-vol-3-buoys-nanite | 2022-08-07T18:42:39 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.unrealengine.com |
Package codedeploy provides the client and types for making API requests to AWS CodeDeploy.
AWS.
Use the information in this guide to help you work with the following AWS CodeDeploy components:
* Application: A name that uniquely identifies the application you want to deploy. AWS CodeDeploy uses this name, which functions as a container, to ensure the correct combination of revision, deployment configuration, and deployment group are referenced during a deployment. * Deployment group: A set of individual instances, CodeDeploy Lambda deployment configuration settings, or an Amazon ECS service and network details. A Lambda deployment group specifies how to route traffic to a new version of a Lambda function. An Amazon ECS deployment group specifies the service created in Amazon ECS to deploy, a load balancer, and a listener to reroute production traffic to an updated containerized application. An EC2/On-premises deployment group contains individually tagged instances, Amazon EC2 instances in Amazon EC2 Auto Scaling groups, or both. All deployment groups can specify optional trigger, alarm, and rollback settings. * Deployment configuration: A set of deployment rules and deployment success and failure conditions used by AWS CodeDeploy during a deployment. * Deployment: The process and the components used when updating a Lambda function, a containerized application in an Amazon ECS service, or of installing content on one or more instances. * Application revisions: For an AWS Lambda deployment, this is an AppSpec file that specifies the Lambda function to be updated and one or more functions to validate deployment lifecycle events. For an Amazon ECS deployment, this is an AppSpec file that specifies the Amazon ECS task definition, container, and port where production traffic is rerouted. For an EC2/On-premises deployment, this is an archive file that contains source content—source code, webpages, executable files, and deployment scripts—along with an AppSpec file. Revisions are stored in Amazon S3 buckets or GitHub repositories. For Amazon S3, a revision is uniquely identified by its Amazon S3 object key and its ETag, version, or both. For GitHub, a revision is uniquely identified by its commit ID. ()
See for more information on this service.
See codedeploy package documentation for more information.
To contact AWS CodeDeploy CodeDeploy client CodeDeploy for more information on creating client for this service.
The stub package, codedeployiface, can be used to provide alternative implementations of service clients, such as mocking the client for testing. | https://docs.aws.amazon.com/sdk-for-go/api/service/codedeploy/#TargetGroupPairInfo | 2022-08-07T19:30:24 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.aws.amazon.com |
Subscribes to reference quotes.
The calculation takes into account mid prices from every exchange, with a maximum lookback of 1 minute, meaning if a pair for an exchange has not been updated in the most recent 60 seconds, it is dropped and not used in the calculation of the current price.
The reference quote is calculated as the average of mid prices across all exchanges which support this pair.
Mid is defined as (bid + ask) / 2 (best bid and best ask).
This feed returns point-in-time data, meaning it returns real-time data, and is not retroactively changed after the fact due to some reconciliation phase or some delayed data for example.
Make sure you're connected.
Request
Default feed:
{"jsonrpc":"2.0","id":1,"method":"subscribe","params":["market:spot:reference-quote:1s",{"pair":"btc_usdt"}]}
Choose the exchanges participating in the price calculation:
{"jsonrpc":"2.0","id":1,"method":"subscribe","params":["market:spot:reference-quote:1s",{"pair":"btc_usdt", "sources":"binance,huobi"}]}
Response
{ "result": { "timestamp" : 1652979512953, "pair": "btc_usdt", "price": 30124.69504984455, "sources": ["kraken","poloniex","bitfinex","gdax","bitstamp","zb","huobi","bybit","ftx","okex","binance"] } }
Example
const WebSocket = require('ws'); const ws = new WebSocket('wss://ws.web3api.io/quotes', {headers: {x-api-key:'<api_key>'}}); ws.on('open', () => { ws.send(JSON.stringify({ jsonrpc: '2.0', method: 'subscribe', params: ["market:spot:reference-quote:1s", { "pair": "btc_usdt"}], id: 1, })); }); ws.on('message', data => { console.log(JSON.stringify(JSON.parse(data), null, 2)); }); | https://docs.amberdata.io/reference/ws-market-spot-reference-quotes-1s | 2022-08-07T19:00:25 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.amberdata.io |
The following examples demonstrate common scenarios that can be solved using WebCopy. These examples can also be adapted and extended to cover other scenarios.
Unless otherwise specified, all examples can be tested using the WebCopy demonstration website, available at.
Some examples include rule lists. Although not explicitly mentioned, the Enabled option for each rule is set (which is also the default value when creating a rule). If this option is not set, the examples will not work as expected. | https://docs.cyotek.com/cyowcopy/1.4/examples.html | 2022-08-07T20:02:46 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.cyotek.com |
Collation and Unicode support
Applies to:
SQL Server (all supported versions)
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
Analytics Platform System (PDW)
Collations in SQL Server provide sorting rules, case, and accent sensitivity properties for your data. Collations that are used with character data types, such as char and varchar, dictate the code page and corresponding characters that can be represented for that data type.
Whether you're installing a new instance of SQL Server, restoring a database backup, or connecting server to client databases, it's important to understand the locale requirements, sorting order, and case and accent sensitivity of the data that you're working with. To list the collations that are available on your instance of SQL Server, see sys.fn_helpcollations (Transact-SQL).
When you select a collation for your server, database, column, or expression, you're assigning certain characteristics to your data. These characteristics affect the results of many operations in the database. For example, when you construct a query by using
ORDER BY, the sort order of your result set might depend on the collation that's applied to the database or dictated in a
COLLATE clause at the expression level of the query.
To best use collation support in SQL Server, you should understand the terms that are defined in this topic and how they relate to the characteristics of your data.
Collation terms
Collation
A collation specifies the bit patterns that represent each character in a dataset. Collations also determine the rules that sort and compare data. SQL Server supports storing objects that have different collations in a single database. For non-Unicode columns, the collation setting specifies the code page for the data and which characters can be represented. The data that you move between non-Unicode columns must be converted from the source code page to the destination code page.
Transact-SQL statement results can vary when the statement is run in the context of different databases that have different collation settings. If possible, use a standardized collation for your organization. This way, you don't have to specify the collation in every character or Unicode expression. If you must work with objects that have different collation and code page settings, code your queries to consider the rules of collation precedence. For more information, see Collation Precedence (Transact-SQL).
The options associated with a collation are case sensitivity, accent sensitivity, kana sensitivity, width sensitivity, and variation-selector sensitivity. SQL Server 2019 (15.x) introduces an additional option for UTF-8 encoding.
You can specify these options by appending them to the collation name. For example, the collation Japanese_Bushu_Kakusu_100_CS_AS_KS_WS_UTF8 is case-sensitive, accent-sensitive, kana-sensitive, width-sensitive, and UTF-8 encoded. As another example, the collation Japanese_Bushu_Kakusu_140_CI_AI_KS_WS_VSS is case-insensitive, accent-insensitive, kana-sensitive, width-sensitive, variation-selector-sensitive, and it uses non-Unicode encoding.
The behavior associated with these various options is described in the following table:
1 If Binary or Binary-code point is selected, the Case-sensitive (_CS), Accent-sensitive (_AS), Kana-sensitive (_KS), and Width-sensitive (_WS) options are not available.
Examples of collation options
Each collation is combined as a series of suffixes to define case-, accent-, width-, or kana-sensitivity. The following examples describe sort order behavior for various combinations of suffixes.. For more information, see the UTF-8 Support section in this article.
Collation sets
SQL Server supports the following collation sets:
Windows collations
Windows collations define rules for storing character data that's based on an associated Windows system locale. For a Windows collation, you can implement a comparison of non-Unicode data by using the same algorithm as that for Unicode data. The base Windows collation rules specify which alphabet or language is used when dictionary sorting is applied. The rules also specify the code page that's used to store non-Unicode character data. Both Unicode and non-Unicode sorting are compatible with string comparisons in a particular version of Windows. This provides consistency across data types within SQL Server, and it lets developers sort strings in their applications by using the same rules that are used by SQL Server. For more information, see Windows Collation Name (Transact-SQL).
Binary collations
Binary collations sort data based on the sequence of coded values that are defined by the locale and data type. They're case-sensitive. A binary collation in SQL Server defines the locale and the ANSI code page that's used. This enforces a binary sort order. Because they're relatively simple, binary collations help improve application performance. For non-Unicode data types, data comparisons are based on the code points that are defined on the ANSI code page. For Unicode data types, data comparisons are based on the Unicode code points. For binary collations on Unicode data types, the locale isn't considered in data sorts. For example, Latin_1_General_BIN and Japanese_BIN yield identical sorting results when they're used on Unicode data. For more information, see Windows Collation Name (Transact-SQL).
There are two types of binary collations in SQL Server:
The legacy BIN collations, which performed an incomplete code-point-to-code-point comparison for Unicode data. These legacy binary collations compared the first character as WCHAR, followed by a byte-by-byte comparison. In a BIN collation, only the first character is sorted according to the code point, and remaining characters are sorted according to their byte values.
The newer BIN2 collations, which implement a pure code-point comparison. In a BIN2 collation, all characters are sorted according to their code points.).
During SQL Server setup, the default installation collation setting is determined by the operating system (OS) locale. You can change the server-level collation either during setup or by changing the OS locale before installation. For backward compatibility reasons, the default collation is set to the oldest available version that's associated with each specific locale. Therefore, this isn't always the recommended collation. To take full advantage of SQL Server features, change the default installation settings to use Windows collations. For example, for the OS locale "English (United States)" (code page 1252), the default collation during setup is SQL_Latin1_General_CP1_CI_AS, and it can be changed to its closest Windows collation counterpart, Latin1_General_100_CI_AS_SC.
Note
When you upgrade an English-language instance of SQL Server, you can specify SQL Server collations (SQL_*) for compatibility with existing instances of SQL Server. Because the default collation for an instance of SQL Server is defined during setup, make sure that you specify the collation settings carefully when the following conditions are true:
- Your application code depends on the behavior of previous SQL Server collations.
- You must store character data that reflects multiple languages.
Collation levels
Setting collations are supported at the following levels of an instance of SQL Server:
- Server-level collations
- Database-level collations
- Column-level collations
- Expression-level collations
Server-level collations
The default server collation is determined during SQL Server setup, and it becomes the default collation of the system databases and all user databases.
The following table shows the default collation designations, as determined by the operating system (OS) locale, including their Windows and SQL Language Code Identifiers (LCID):
After you've assigned a collation to the server, you can change it only by exporting all database objects and data, rebuilding the master database, and importing all database objects and data. Instead of changing the default collation of an instance of SQL Server, you can specify the desired collation when you create a new database or database column.
To query the server collation for an instance of SQL Server, use the
SERVERPROPERTY function:
SELECT CONVERT(nvarchar(128), SERVERPROPERTY('collation'));
To query the server for all available collations, use the following
fn_helpcollations() built-in function:
SELECT * FROM sys.fn_helpcollations();
You can not change or set the instance level collation on Azure SQL Database. For information about SQL Managed Instance and SQL Server, see: Set or Change the Server Collation.
Database-level collations
When you create or modify a database, you can use the
COLLATE clause of the
CREATE DATABASE or
ALTER DATABASE statement to specify the default database collation. If no collation is specified, the database is assigned the server collation.
You can't change the collation of system databases unless you change the collation for the server.
The database collation is used for all metadata in the database, and the coll might fail if the collations cause a conflict in evaluating the character data. You can resolve this issue by specifying the
COLLATE clause in the query. For more information, see COLLATE (Transact-SQL).
You can change the collation of a user database by using an
ALTER DATABASE statement that's similar to the following:
ALTER DATABASE myDB COLLATE Greek_CS_AI;
Important
Altering the database-level collation doesn't affect column-level or expression-level collations.
You can retrieve the current collation of a database by using a statement that's similar to the following:
SELECT CONVERT (nvarchar(128), DATABASEPROPERTYEX('database_name', 'collation'));
Column-level collations
When you create or alter a table, you can specify collations for each character-string column by using the
COLLATE clause. If you don't specify a collation, the column is assigned the default collation of the database.
You can change the collation of a column by using an
ALTER TABLE statement that's similar to the following:
ALTER TABLE myTable ALTER COLUMN mycol NVARCHAR(10) COLLATE Greek_CS_AI;
Expression-level collations
Expression-level collations are set when a statement is run, and they affect the way a result set is returned. This enables
ORDER BY sort results to be locale-specific. To implement expression-level collations, use a
COLLATE clause such as the following:
SELECT name FROM customer ORDER BY name COLLATE Latin1_General_CS_AI;
Locale
A locale is a set of information that's associated with a location or a culture. The information can include the name and identifier of the spoken language, the script that a charset. Code pages are used to provide support for the character sets and keyboard layouts that are used by different Windows system locales.
Sort order
Sort order specifies how data values are sorted. The order affects the results of data comparison. Data is sorted by using collations, and it can be optimized by using indexes.
Unicode support
Unicode is a standard for mapping code points to characters. Because it's designed to cover all the characters of all the languages of the world, you don't need different code pages to handle different sets of characters.
Unicode basics
Storing data in multiple languages within one database is difficult to manage when you use only character data and code pages. It's also difficult to find one code page for the database that can store all the required language-specific characters. Additionally, it's difficult to guarantee the correct translation of special characters when they're being read or updated by a variety of clients that are ensure that the database is installed with a code page that will handle the characters of all three languages. You must also take care to guarantee the correct translation of characters from any of the languages when the characters are read by clients that are running a code page for another language.
Note
The code pages that a client uses are determined by the operating system (OS) settings. To set client code pages on the Windows operating system, use Regional Settings in Control Panel.
It would be difficult to select a code page for character data types that will support all the characters that are required by a worldwide audience. The easiest way to manage character data in international databases is to always use a data type that supports Unicode.
Unicode data types
If you store character data that reflects multiple languages in SQL Server (SQL Server 2005 (9.x) and later), use Unicode data types (nchar, nvarchar, and ntext) instead of non-Unicode data types (char, varchar, and text).
Note
For Unicode data types, the Database Engine can represent up to 65,536 characters using UCS-2, or the full Unicode range (1,114,112 characters) if supplementary characters are used. For more information about enabling supplementary characters, see Supplementary Characters.
Alternatively, starting with SQL Server 2019 (15.x), if a UTF-8 enabled collation (_UTF8) is used, previously non-Unicode data types (char and varchar) become Unicode data types using UTF-8 encoding. SQL Server 2019 (15.x) doesn't change the behavior of previously existing Unicode data types (nchar, nvarchar, and ntext), which continue to use UCS-2 or UTF-16 encoding. For more information, see Storage differences between UTF-8 and UTF-16.
Unicode considerations
Significant limitations are associated with non-Unicode data types. This is because a non-Unicode computer is limited to using a single code page. You might experience performance gain by using Unicode, because it requires fewer code-page conversions. Unicode collations must be selected individually at the database, column, or expression level because they aren't supported at the server level..
Tip
You can also try to use a different collation for the data on the server. Choose a collation that maps to a code page on the client.
To use the UTF-16 collations that are available in SQL Server (SQL Server 2012 (11.x) and later) to improve searching and sorting of some Unicode characters (Windows collations only), you can select either one of the supplementary characters (_SC) collations or one of the version 140 collations.
To use the UTF-8 collations that are available in SQL Server 2019 (15.x), and to improve searching and sorting of some Unicode characters (Windows collations only), you must select UTF-8 encoding-enabled collations(_UTF8).
The UTF8 flag can be applied to:
- Linguistic collations that already support supplementary characters (_SC) or variation-selector-sensitive (_VSS) awareness
- BIN2 binary collation
The UTF8 flag can't be applied to:
- Linguistic collations that don't support supplementary characters (_SC) or variation-selector-sensitive (_VSS) awareness
- The BIN binary collations
- The SQL_* collations
To evaluate issues that are related to using Unicode or non-Unicode data types, test your scenario to measure performance differences in your environment. It's a good practice to standardize the collation that's used on systems across your organization, and to deploy Unicode servers and clients wherever possible.
In many situations, SQL Server interacts with other servers or clients, and your organization might use multiple data-access standards between applications and server instances. SQL Server clients are one of two main types:
- Unicode clients that use OLE DB and Open Database Connectivity (ODBC) version 3.7 or later.
- Non-Unicode clients that use DB-Library and ODBC version 3.6 or earlier.
The following table provides information about using multilingual data with various combinations of Unicode and non-Unicode servers:
Supplementary characters
The Unicode Consortium allocates to each character a unique code point, which is a value in the range 000000–10FFFF. The most frequently used characters have code point values in the range 000000–00FFFF (65,536 characters) which fit into an 8-bit or 16-bit word in memory and on-disk. This range is usually designated as the Basic Multilingual Plane (BMP).
But the Unicode Consortium has established 16 additional "planes" of characters, each the same size as the BMP. This definition allows Unicode the potential to represent 1,114,112 characters (that is, 216 * 17 characters) within the code point range 000000–10FFFF. Characters with code point values larger than 00FFFF require two to four consecutive 8-bit words (UTF-8), or two consecutive 16-bit words (UTF-16). These characters located beyond the BMP are called supplementary characters, and the additional consecutive 8-bit or 16-bit words are called surrogate pairs. For more information about supplementary characters, surrogates, and surrogate pairs, refer to the Unicode Standard.
SQL Server provides data types such as nchar and nvarchar to store Unicode data in the BMP range (000000–00FFFF), which the Database Engine encodes using UCS-2.
SQL Server 2012 (11.x) introduced a new family of supplementary character (_SC) collations that can be used with the nchar, nvarchar, and sql_variant data types to represent the full Unicode character range (000000–10FFFF). For example: Latin1_General_100_CI_AS_SC or, if you're using a Japanese collation, Japanese_Bushu_Kakusu_100_CI_AS_SC.
SQL Server 2019 (15.x) extends supplementary character support to the char and varchar data types with the new UTF-8 enabled collations (_UTF8). These data types are also capable of representing the full Unicode character range.
Note
Starting with SQL Server 2017 (14.x), all new collations automatically support supplementary characters.
If you use supplementary characters:
Supplementary characters can be used in ordering and comparison operations in collation versions 90 or greater.
All version 100 collations support linguistic sorting with supplementary characters.
Supplementary characters aren't supported for use in metadata, such as in names of database objects.
The SC flag can be applied to:
- Version 90 collations
- Version 100 collations
The SC flag can't be applied to:
- Version 80 non-versioned Windows collations
- The BIN or BIN2 binary collations
- The SQL* collations
- Version 140 collations (these don't need the SC flag, because they already support supplementary characters)
The following table compares the behavior of some string functions and string operators when they use supplementary characters with and without a supplementary character-aware (SCA) collation:
GB18030 support
GB18030 is a separate standard that's're stored in the server, they're treated as Unicode characters in any subsequent operations.
You can use any Chinese collation, preferably the latest 100 version. All version 100 collations support linguistic sorting with GB18030 characters. If the data includes supplementary characters (surrogate pairs), you can use the SC collations that are available in SQL Server to improve searching and sorting.
Note
Ensure that your client tools, such as SQL Server Management Studio, use the Dengxian font to correctly display strings that contain GB18030-encoded characters.
Complex script support
SQL Server can support inputting, storing, changing, and displaying complex scripts. Complex scripts include the following types:
-.
Japanese collations added in SQL Server 2017 (14.x)
Starting with SQL Server 2017 (14.x), new Japanese collation families are supported, with the permutations of various options (_CS, _AS, _KS, _WS, and _VSS), as well as _BIN and _BIN2.
To list these collations, you can query the SQL Server Database Engine:
SELECT name, description FROM sys.fn_helpcollations() WHERE COLLATIONPROPERTY(name, 'Version') = 3;
All the new collations have built-in support for supplementary characters, so none of the new 140 collations has (or needs) the SC flag.
These collations are supported in Database Engine indexes, memory-optimized tables, columnstore indexes, and natively compiled modules.
UTF-8 support
SQL Server 2019 (15.x) introduces full support for the widely used UTF-8 character encoding as an import or export encoding, and as database-level or column-level collation for string data. UTF-8 is allowed in the char and varchar data types, and it's enabled when you create or change an object's collation to a collation that has a UTF8 suffix. One example is changing LATIN1_GENERAL_100_CI_AS_SC to LATIN1_GENERAL_100_CI_AS_SC_UTF8.
UTF-8 is available only to Windows collations that support supplementary characters, as introduced in SQL Server 2012 (11.x). The nchar and nvarchar data types allow UCS-2 or UTF-16 encoding only, and they remain unchanged.
Azure SQL Database and Azure SQL Managed Instance also support UTF-8 on database and column level, while Managed Instance supports this on a server level as well.
Storage differences between UTF-8 and UTF-16
The Unicode Consortium allocates to each character a unique code point, which is a value in the range 000000–10FFFF. With SQL Server 2019 (15.x), both UTF-8 and UTF-16 encodings are available to represent the full range:
- With UTF-8 encoding, characters in the ASCII range (000000–00007F) require 1 byte, code points 000080–0007FF require 2 bytes, code points 000800–00FFFF require 3 bytes, and code points 0010000–0010FFFF require 4 bytes.
- With UTF-16 encoding, code points 000000–00FFFF require 2 bytes, and code points 0010000–0010FFFF require 4 bytes.
The following table lists the encoding storage bytes for each character range and encoding type:
1 Storage bytes refers to the encoded byte length, not the data-type on-disk storage size. For more information about on-disk storage sizes, see nchar and nvarchar and char and varchar.
2 The code point range for supplementary characters.
Tip
It's common to think, in CHAR(n) and VARCHAR(n) or in NCHAR(n) and NVARCHAR(n), that n defines the number of characters. This is because, in the example of a CHAR(10) column, 10 ASCII characters in the range 0–127 can be stored by using a collation such as Latin1_General_100_CI_AI, because each character in this range uses only 1 byte.
However, in CHAR(n) and VARCHAR(n), n defines the string size in bytes (0–8,000), and in NCHAR(n) and NVARCHAR(n), n defines the string size in byte-pairs (0–4,000). n never defines numbers of characters that can be stored.
As you've just seen, choosing the appropriate Unicode encoding and data type might give you significant storage savings or increase your current storage footprint, depending on the character set in use. For example, when you use a Latin collation that's UTF-8 enabled, such as Latin1_General_100_CI_AI_SC_UTF8, a
CHAR(10) column stores 10 bytes and can hold 10 ASCII characters in the range 0–127. But it can hold only 5 characters in the range 128–2047 and only 3 characters in the range 2048–65535. By comparison, because a
NCHAR(10) column stores 10 byte-pairs (20 bytes), it can hold 10 characters in the range 0–65535.
Before you choose whether to use UTF-8 or UTF-16 encoding for a database or column, consider the distribution of string data that will be stored:
- If it's mostly in the ASCII range 0–127 (such as English), each character requires 1 byte with UTF-8 and 2 bytes with UTF-16. Using UTF-8 provides storage benefits. Changing an existing column data type with ASCII characters in the range 0–127 from
NCHAR(10)to
CHAR(10), and using an UTF-8 enabled collation, translates into a 50 percent reduction in storage requirements. This reduction is because
NCHAR(10)requires 20 bytes for storage, compared with
CHAR(10), which requires 10 bytes for the same Unicode string representation.
- Above the ASCII range, almost all Latin-based script, and Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac, Tāna, and N’Ko, require 2 bytes per character in both UTF-8 and UTF-16. In these cases, there aren't significant storage differences for comparable data types (for example, between using char or nchar).
- If it's mostly East Asian script (such as Korean, Chinese, and Japanese), each character requires 3 bytes with UTF-8 and 2 bytes with UTF-16. Using UTF-16 provides storage benefits.
- Characters in the range 010000–10FFFF require 4 bytes in both UTF-8 and UTF-16. In these cases, there aren't storage differences for comparable data types (for example, between using char or nchar).
For other considerations, see Write International Transact-SQL Statements.
Converting to UTF-8
Because in CHAR(n) and VARCHAR(n) or in NCHAR(n) and NVARCHAR(n), the n defines the byte storage size, not the number of characters that can be stored, it's important to determine the data type size you must convert to, in order to avoid data truncation.
For example, consider a column defined as NVARCHAR(100) that stores 180 bytes of Japanese characters. In this example, the column data is currently encoded using UCS-2 or UTF-16, which uses 2 bytes per character. Converting the column type to VARCHAR(200) is not enough to prevent data truncation, because the new data type can only store 200 bytes, but Japanese characters require 3 bytes when encoded in UTF-8. So the column must be defined as VARCHAR(270) to avoid data loss through data truncation.
Therefore, it's required to know in advance what's the projected byte size for the column definition before converting existing data to UTF-8, and adjust the new data type size accordingly. Refer to the Transact-SQL script or the SQL Notebook in the Data Samples GitHub, which use the DATALENGTH function and the COLLATE statement to determine the correct data length requirements for UTF-8 conversion operations in an existing database.
To change the column collation and data type in an existing table, use one of the methods described in Set or Change the Column Collation.
To change the database collation, allowing new objects to inherit the database collation by default, or to change the server collation, allowing new databases to inherit the system collation by default, see the Related tasks section of this article.
Related tasks
Related content
For more information, see the following related content:
- SQL Server Best Practices Collation Change
- Use Unicode Character Format to Import or Export Data (SQL Server)
- Write International Transact-SQL Statements
- SQL Server Best Practices Migration to Unicode (no longer maintained)
- Unicode Consortium
- Unicode Standard
- UTF-8 Support in OLE DB Driver for SQL Server
- SQL Server Collation Name (Transact-SQL)
- Windows Collation Name (Transact-SQL)
- Introducing UTF-8 support for SQL Server
- COLLATE (Transact-SQL)
- Collation Precedence
See also
Contained Database Collations
Choose a Language When Creating a Full-Text Index
sys.fn_helpcollations (Transact-SQL)
Single-Byte and Multibyte Character Sets
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/sql/relational-databases/collations/collation-and-unicode-support?view=sql-server-ver16 | 2022-08-07T20:41:35 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.microsoft.com |
Projects#
A dicehub project is a space where you can find all the things related to a specific problem.
Projects contain applications and can be shared with other users by adding them as members.
Things related to a project on this page:
- Create a project
- Create an application
- Add members to your project
- Remove members from your project
- Change project settings
Create a project#
Projects can be used for work on a specific topic (for example "Car aerodynamics"). In a project you can create applications to solve your simulation problem.
To create a project in dicehub:
- In the left navigation, select Projects.
- Select New Project.
- On the New Project page, edit the following details:
- Project name: This is the name of your project. You can use spaces, hyphens and underscores. Special characters are not allowed.
- Project slug: The slug is used as the path in your project URL. (The project slug is automatically generated when you type in the project name. You can change the slug after you have selected the project name)
- Project visibility: The visibility level determines who can see your project.
- Select Create project.
Create an application#
- Open a Project.
- Select New Application.
- On the New Applications Page: Select application template.
Add members to your project#
To view, add or remove members in your project, open your project and go to the Members section.
To add a member to the project:
- Go to Project > Members.
- In the input field with the label
Find a membertype the username or email of the person you want to add.
- Select the Visibility level and Role.
- Click on Add.
Remove members from your project#
To remove a member from the project:
- Go to Project > Members.
- Click on the Remove button next to the user you would like to remove.
Change project settings#
You can access the project settings by selecting the Settings tab in your project.
The following settings can be changed:
- Profile picture: The image to identify your project.
- General information: The general information of the project such as title and description can be changed here.
- Namespace: Namespace of the project. All contents of the project are in this namespace.
- Privacy settings: Here you can manage privacy settings by adjusting the visibility level.
- Delete project: Deleting this project deletes all its resources and can not be restored. | https://docs.dicehub.com/latest/guide/essentials/projects/ | 2022-08-07T19:57:22 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['../images/create_project_1.png', 'Create project 1'], dtype=object)
array(['../images/create_project_2.png', 'Create project 2'], dtype=object)
array(['../images/create_app_1.png', 'Create application 1'], dtype=object)
array(['../images/create_app_2.png', 'Create application 2'], dtype=object)
array(['../images/project_members.png', 'Project members section'],
dtype=object)
array(['../images/project_settings.png', 'Project settings'], dtype=object)] | docs.dicehub.com |
- 16 Feb 2022
- 5 Minutes to read
-
- DarkLight
Index model
- Updated on 16 Feb 2022
- 5 Minutes to read
-
- DarkLight
Overview
Graylog is transparently managing one or more sets of Elasticsearch indices to optimize search and analysis operations for speed and low resource consumption.
To enable managing indices with different mappings, analyzers, and replication settings Graylog is using so-called index sets which are an abstraction of all these settings.
Each index set contains the necessary settings for Graylog to create, manage, and fill Elasticsearch indices and handle index rotation and data retention for specific requirements.
Graylog is maintaining an index alias per index set which is always pointing to the current write-active index from that index set. There is always exactly one index to which new messages are written until the configured rotation criterion (number of documents, index size, or index age) has been met.
A background task continuously checks if the rotation criterion of an index set has been met and a new index is created and prepared when that happens. Once the index is ready, the index alias is atomically switched to it. That means that all Graylog nodes can write messages into the alias without even knowing what the currently write-active index of the index set is.
Almost every read operation is performed with a given time range. Because Graylog is writing messages sequentially into Elasticsearch it can keep information about the time range each index covers. It selects a lists of indices to query when having a time range provided. If no time range was provided, it will search in all indices it knows.
Eviction of indices and messages
There are configuration settings for the maximum number of indices Graylog is managing in a given index set.
Depending on the configured retention strategy, the oldest indices of an index set will automatically be closed, deleted, or exported when the configured maximum number of indices has been reached.
The deletion is performed by the Graylog master node in a background thread which is continuously comparing the number of indices with the configured maximum:
INFO : org.graylog2.indexer.rotation.strategies.AbstractRotationStrategy - Deflector index <graylog_95> should be rotated, Pointing deflector to new index now! INFO : org.graylog2.indexer.MongoIndexSet - Cycling from <graylog_95> to <graylog_96>. INFO : org.graylog2.indexer.MongoIndexSet - Creating target index <graylog_96>. INFO : org.graylog2.indexer.indices.Indices - Created Graylog index template "graylog-internal" in Elasticsearch. INFO : org.graylog2.indexer.MongoIndexSet - Waiting for allocation of index <graylog_96>. INFO : org.graylog2.indexer.MongoIndexSet - Index <graylog_96> has been successfully allocated. INFO : org.graylog2.indexer.MongoIndexSet - Pointing index alias <graylog_deflector> to new index <graylog_96>. INFO : org.graylog2.system.jobs.SystemJobManager - Submitted SystemJob <f1018ae0-dcaa_96>.
Index Set Configuration
Index sets have a variety of different settings related to how Graylog will store messages into the Elasticsearch cluster.
- Title : A descriptive name of the index set.
- Description : A description of the index set for human consumption.
- Index prefix : A unique prefix used for Elasticsearch indices managed by the index set. The prefix must start with a letter or number, and can only contain letters, numbers,
_,
-and
+. The index alias will be named accordingly, e. g.
graylog_deflector
if the index prefix was
graylog.
- Analyzer : (default:
standard) The Elasticsearch analyzer for the index set.
- Index shards : (default: 4) The number of Elasticsearch shards used per index.
- Index replicas : (default: 0) The number of Elasticsearch replicas used per index.
- Max. number of segments : (default: 1) The maximum number of segments per Elasticsearch index after index optimization (force merge) , see Segment Merging for details.
- Disable index optimization after rotation : Disable Elasticsearch index optimization (force merge) after index rotation. Only activate this if you have serious problems with the performance of your Elasticsearch cluster during the optimization process.
Index rotation
- Message count : Rotates the index after a specific number of messages have been written.
- Index size : Rotates the index after an approximate size on disk (before optimization) has been reached.
- Index time : Rotates the index after a specific time (e. g. 1 hour or 1 week).
Index retention
- Delete : Delete indices in Elasticsearch to minimize resource consumption.
- Close : Close indices in Elasticsearch to reduce resource consumption.
- Do nothing
- Archive : Commercial feature, see Archiving.
Maintenance
Keeping the index ranges in sync
Graylog will take care of calculating index ranges automatically as soon as a new index has been created.
In case the stored metadata about index time ranges has run out of sync, Graylog will notify you in the web interface.This can happen if an index was deleted manually or messages from already “closed” indices were removed.
The system will offer you to just re-generate all time range information. This may take a few seconds but is an easy task for Graylog.
You can easily re-build the information yourself after manually deleting indices or doing other changes that might cause synchronization problems:
$ curl -XPOST
This will trigger a system job:
INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Recalculating index ranges. INFO : org.graylog2.system.jobs.SystemJobManager - Submitted SystemJob <9b64a9d0-dcac-11e6-97c3-6c4008b8fc28> [org.graylog2.indexer.ranges.RebuildIndexRangesJob] INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Recalculating index ranges for index set Default index set (graylog2_*): 5 indices affected. INFO : org.graylog2.indexer.ranges.MongoIndexRangeService - Calculated range of [graylog_96] in [7ms]. INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Created ranges for index graylog_96: MongoIndexRange{id=null, indexName=graylog_96, begin=2017-01-17T11:49:02.529Z, end=2017-01-17T12:00:01.492Z, calculatedAt=2017-01-17T12:00:58.097Z, calculationDuration=7, streamIds=[000000000000000000000001]} [...] INFO : org.graylog2.indexer.ranges.RebuildIndexRangesJob - Done calculating index ranges for 5 indices. Took 44ms. INFO : org.graylog2.system.jobs.SystemJobManager - SystemJob <9b64a9d0-dcac-11e6-97c3-6c4008b8fc28> [org.graylog2.indexer.ranges.RebuildIndexRangesJob] finished in 46ms.
Manually rotating the active write index
Sometimes you might want to rotate the active write index manually and not wait until the configured rotation criterion for in the latest index has been met, for example if you’ve changed the index mapping or the number of shards per index.
You can do this either via an HTTP request against the REST API of the Graylog master node or via the web interface:
$ curl -XPOST
Triggering this job produces log output similar to the following lines:
INFO : org.graylog2.rest.resources.system.DeflectorResource - Cycling deflector for index set <58501f0b4a133077ecd134d9>. Reason: REST request. INFO : org.graylog2.indexer.MongoIndexSet - Cycling from <graylog_97> to <graylog_98>. INFO : org.graylog2.indexer.MongoIndexSet - Creating target index <graylog_98>. INFO : org.graylog2.indexer.indices.Indices - Created Graylog index template "graylog-internal" in Elasticsearch. INFO : org.graylog2.indexer.MongoIndexSet - Waiting for allocation of index <graylog_98>. INFO : org.graylog2.indexer.MongoIndexSet - Index <graylog_98> has been successfully allocated. INFO : org.graylog2.indexer.MongoIndexSet - Pointing index alias <graylog_deflector> to new index <graylog_98>. INFO : org.graylog2.system.jobs.SystemJobManager - Submitted SystemJob <024aac80-dcad_98>. INFO : org.graylog2.indexer.retention.strategies.AbstractIndexCountBasedRetentionStrategy - Number of indices (5) higher than limit (4). Running retention for 1 index. INFO : org.graylog2.indexer.retention.strategies.AbstractIndexCountBasedRetentionStrategy - Running retention strategy [org.graylog2.indexer.retention.strategies.DeletionRetentionStrategy] for index <graylog_94> INFO : org.graylog2.indexer.retention.strategies.DeletionRetentionStrategy - Finished index retention strategy [delete] for index <graylog_94> in 23ms. | https://docs.graylog.org/docs/index-model | 2022-08-07T19:05:49 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_set_overview.png',
'index_set_overview'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_set_details.png',
'index_set_details'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_model_write.png',
'index_model_write'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_model_read.png',
'index_model_read'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_set_create.png',
'index_set_create'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_set_create_rotation.png',
'index_set_create_rotation'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_set_create_retention.png',
'index_set_create_retention'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/index_set_maintenance.png',
'index_set_maintenance'], dtype=object) ] | docs.graylog.org |
Error rendering macro 'rw-search'
null
Page History
Versions Compared
Old Version 37
changes.mady.by.user Rasmus Blendal
Saved on
New Version 38
changes.mady.by.user Denitaa Jeganathan
Saved on
Key
- This line was added.
- This line was removed.
- Formatting was changed.
To create a form:
- Go to [Study Name] > Forms.
- Click + New Form.
- Select Form in the Form Type drop down (should be default)
- Enter the name of the form and specify the form details.
- Click Add Question in the bottom. A part where you can specify the question details will appearunfold.
-.
Can't find the answer you're looking for?
Don't hesitate to raise a support request via [email protected]. | https://docs.medei.co/pages/diffpages.action?originalId=43385165&pageId=48955527 | 2022-08-07T20:03:11 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.medei.co |
Model-driven app glossary of common terms
This article contains a glossary of terms for Power Apps model-driven apps.
Accessibility
Accessibility is a term that is used to refer to the extent to which people with disabilities can use digital products. In the case of model-driven apps, consideration has been paid to matters such as responsive design, how user navigate between fields, how the app behaves in high contrast mode, and how screen readers help users to understand the nature of the application.
Using screen readers within model-driven apps
Admin center
The Power Platform admin center is a unified portal for administrators to manage environments and settings for Power Apps, Power Automate, and Dynamics 365 apps. Power Platform admin center doesn't cover administration settings and features associated with Power BI.
Learn more about the Power Platform admin center
App designer
The tool that is used to create and edit model-driven apps. As the modern app designer experience matures, it will replace the classic experience.
Use it to configure the navigation site map, tables, forms, and views relevant to your app.
A preview of the new app designer experience
We can use the classic app designer when we build or edit our apps
App navigation experience
The way in which areas, groups, and subarea are presented in a model-driven app. It is also known as the site map
Application lifecycle management
The way in which we manage the lifecycle of an application from conception to end of life. From a technical perspective, much of application lifecycle management (ALM) is managed via solutions when delivering model-driven app products.
Overview of application lifecycle management with Microsoft Power Platform
Area
A part of the model-driven app navigation experience, apps can have multiple groups and groups can have multiple sub-areas. The sub-area contains the tables relevant to the application. For apps with more than one area, a switch control is displayed in the lower left navigation pane.
App navigation in model-driven apps
Attribute
An attribute is another name for a column and is a term commonly used by Power Apps developers. Each table in Power Apps corresponds to a database table and each table column in Power Apps corresponds to a column in the database table.
Business process flow
Logic built into a given table to ensure that users complete records by updating fields in the correct order.
While these are authored initially using the Power Automate experience, business process flows are experienced within model-driven app as a change in the user interface.
A business process flow is arranged into stages. Each stage defines the columns (fields) that must be completed typically before moving on to the next stage. For example, the default business process flow for the opportunity table has four stages: qualify > develop > propose > close. The current stage in a business process flow is indicated with a dot next to the stage in the sequence of stages from left to right in the flow.
Business process flows overview
Business rule
Business rules are server-side logic that is used with canvas or model-driven apps to set or clear values in one or more columns in a table. Business rules can also be used to validate stored data or display error messages. Model-driven apps can use business rules to show or hide columns, enable or disable columns, and create recommendations based on business intelligence.
Learn more about business rules
Business rules - Microsoft Learn content
Canvas app
An app which is generated using drag and drop controls configured using Power Fx. Canvas apps offer the designer significant control over the user experience and can be connected to a wide range of data sources and data services.
Canvas apps are arranged into screens and controls such as galleries, text boxes, and dropdowns, are placed onto the screens and configured so that they connect to the data sources and to each other correctly.
Whereas a model-driven app comes with many preconfigured features such as forms, views, and a user interface, many Canvas apps are authored from a blank canvas, or a template. There is often more work to be performed and more outright work using code.
Canvas apps are contained within environments and solutions in the same way as model-driven apps.
Find out more about canvas apps here.
Chart
A visual representation of a table of data. These can take the form of line, bar, pie, or donut chart.
Find out more about creating a system chart here.
Classic
The classic interface represents the method in which app makers make changes to features within their Microsoft Dataverse environment.
The classic interface has been replaced over time by the web-based method of app authoring known as the unified interface.
About Unified Interface for model-driven apps in Power Apps
Classic app designer
The modern app designer lets you create model-driven apps and create canvas apps using custom pages.
The modern app designer will soon be the default designer for model-driven apps. Currently, you can still create model-driven apps using the classic app designer.
Column
A column (formerly called a field), is a field within a Dataverse table (formerly called an entity). Columns are similar to fields in databases and have different data types such as text, number, date, as well as data types less familiar to databases such as phone, email, file, and image.
The column type defines the kind of data required by the column and also the controls, such as date picker or text box, that will be available when using the control.
Columns also appear when creating forms. Form tabs also have columns, and this defines where you can put sections. Additionally, form sections have columns, and these define where you can place table columns (form fields in this case).
How to create and edit columns
Add, configure, move, or delete columns on a form
Command bar
The area of a model-driven app that contains basic commands universally used by model-driven apps.
The command bar can be customized. More information: Customize the command bar using command designer (preview)
Component
Components are elements. Components are used when creating the elements that make up a model-driven app. Often these elements will relate to the method of creation of the tables that make up a model-driven app.
Components can be split into data (tables, relationships, columns) UI (site map,forms,views), logic (business process flows, business rules) and visualization (charts, dashboards, and Power BI Tiles).
Learn more about components
Connection
A model-driven app is only connected to the data tables that reside in the same environment. This connection can be considered native because it never has to be set up within the environment.
Connections exist within the environment to enable other elements of the Power Platform to operate correctly. Notably, Power Apps canvas apps and Power Automate flows have the ability to make use of multiple connections.
Control
Controls allow you to interact with information contained within records. They typically are visible on forms, where users update data using the control. Examples of controls are calendar, toggle, choices, slider, and editable grids. In some cases you might want to use different controls depending upon the device employed by the user.
Find out more about controls
Dashboard
A container for one or more charts relating to a table.
Find out more about dashboards here
A dashboard allows charts, Power BI reports, and views of tables to be presented to the app user.
Find out more about how to use Power BI within a model driven app
Data model
A collection of related tables. In the context of model-driven apps, these are held within the Dataverse database.
In a custom solution, the data model is often the set of related tables built with the purpose of delivering the overall business application.
Database
The collective term for all the tables in Dataverse.
Dataverse
Microsoft Dataverse is the collective term for the tables, workflows, business process flows, and related functionality that are provisioned within an environment when a database is created.
Model-driven apps require a Dataverse database.
A Dataverse database contains data structures most closely associated with databases in addition to being able to hold model-driven apps, canvas apps, and Power Automate flows.
Find out more about Dataverse here
Dependency
Dependencies are created when elements of components are reliant on each other for them to work. For example, if a column is used within a view then the view requires the column to exist for it to be able to function. There are many examples of dependencies throughout Dataverse. Another example is a model-driven app being dependent on a table if that table is used within the app.
Dependencies manifest themselves in numerous ways including when a model-driven app is validated. They also become apparent in the most problematic fashion when trying to delete an aspect of a table, form, view or dashboard. When this occurs, the dependencies can be viewed by selecting the item to be deleted, and then selecting "show dependencies" on the command bar.
Dynamics 365
Microsoft Dynamics 365 is a line of enterprise resource planning (ERP) and customer relationship management (CRM) software applications. Microsoft markets Dynamics 365 applications through a network of reselling partners who provide specialized services.
Learn more about Microsoft Dynamics 365
Entity
An entity is the classic way of describing a table. You'll see this terminology within the classic experiences and elsewhere on the internet.
Environment
An environment is a space to store, manage, and share your organization's business data, data structures, apps, chatbots, and flows.
You can package up the various elements as solutions, and these solutions can be exported from one environment to another.
An environment can only ever have one Dataverse database and all your model-driven apps in the environment use this database.
Often multiple environments are used to enable application lifecycle management. For example you might have development, test, and production environments.
Environments exist within a geographical region and can be a means of ensuring that the data physically stays in the correct geographical region.
Find out more about environments here
Flow
Cloud flows are functionality offered by Power Automate that allow automation of tasks to take place based upon triggering of conditions such as recurrence, adding or updating of records or simply selection of buttons by users. Flows can be run with or without the introduction of new parameters.
Form
Forms provide the user interface (UI) that people use to create, view, or edit table records. Use the form designer in Power Apps to create and edit forms.
There are four types of forms: main, quick create, quick view, and card.
More information:
- Form Types
- Opening the form designer
- Add a section to or remove a section from a form
- Add a tab to or remove a tab from a form
Form designer
The design experience for creating and editing forms.
Opening the form designer
Group
A part of the model-driven app navigation experience. Group names appear as a navigation element in an app with the subarea names (tables) within the group listed beneath it.
Legacy
This refers to features that have either been deprecated, or the way in which they are authored, has been moved to more modern experience, such as the web-based unified interface.
Lookup
A lookup is a field type that exists when two tables are related. Lookups can be seen in table views on the many side of a one-to-many relationship. They are generally populated using a form on the many side of the relationship.
Main form
Every table has at least one main form. The main form represents the primary method of interaction with a record. The main form is responsive to the device using the form and can contain controls that are optimized to the device whether it is phone, tablet, or web. Main forms are edited using the form designer.
Monitor
Also know as the app monitor. It lets you understand aspects of the performance of a model-driven app. App monitor can an also be used to monitor canvas apps.
Page
Modern apps have the concept of pages, which can be either model-driven apps or a canvas-based page using a custom pages. Custom pages allow flexible layout, low-code Power Fx functions, and Power Apps connector data.
It is a tool for enabling model-driven apps and canvas apps to exist together.
Power Automate
A Power Platform service that allows users to streamline repetitive tasks. Typically, this automation is performed using cloud flows.
Model-driven app business process flows that direct users to complete table records in a specific fashion, are authored within Power Automate.
Power Automate flows exist within an environment and can also exist within Power Apps solutions.
Learn more about Power Automate
Power BI
A data visualization tool that has the capacity to be embedded within model-driven apps or to live completely independently of them. Power BI can connect to a very wide range of data sources, of which Dataverse is just one.
Power BI Reports don't exist within Dataverse environments or inside solutions.
Publish
The process by which you make the latest iteration of the app available to users within an environment.
Publisher
Every solution has a publisher. You specify the publisher when you create a solution. The solution publisher indicates who developed the app, and will define the prefix, such as Contoso_MyNewTable, for all the solution assets.
Learn more about publishers
Record
A record contains one or more columns of information about a person, a place, or a thing. For example, a record might contain the name, the email address, and the phone number of a single customer. Other tools refer to a record as a "row" or an "item". Records exist within Dataverse tables.
Relationship
The way fields in different tables relate to each other. There are three types of relationship:
- One-to-many. For example, one author to many novels.
- Many-to-one. For example, many pages to one book.
- Many-to-many. For example, many books borrowed by many people.
Model-driven apps often contain tables with relationships between them. Where relationships exist, users navigate to the record within the related table. For example, when looking at a sales invoice record, you can open the related account record to investigate details for that account.
Responsive apps
An app that is responsive will render itself in a way that depends on the device that is accessing the app. This may even mean that there may even be a different control displayed, such as a date picker, depending on whether the user is running the app on a computer, tablet, or phone.
Additionally, tables and fields render themselves according to screen size of the device being used.
Section
Tabs within forms are arranged into sections. Sections can be arranged into one to four columns and they let you arrange the record metadata in a way that is most relevant to the current tab and the current section.
Learn more about working with sections
Security role
A security role defines what people can see and do with a record. This relates to create, read, write, delete, update, and append actions.
Security roles are created and users are put into security roles either as individual user names or by using active directory security groups.
You grant access to model-driven apps through security roles.
- Find out more about security roles
- General overview of security in Microsoft Dataverse
- Getting started with security roles using content from Microsoft Learn
A model-driven app is essentially a collection of tables, dashboards, views, and pages, and these are described via the site map. The site map defines the tables and pages that are included within a model-driven app and the navigation experience users will have when moving between them.
When configuring the navigation experience you're editing the areas, groups, and subarea navigation elements. Tables exist at the level of the subarea, and are arranged into groups. Groups are effectively collections of tables and pages and are visible in the navigation pane. Areas allow you to toggle between visible groups.
Both modern and classic methods of creating a model-driven app include site maps.However, with the modern app designer you can design the site map with a drag and drop experience whereas the classic site map designer doesn't support drag and drop.
To open the site map in the classic site map designer from the modern app building experience, select Switch to classic.
Find out more about app navigation here
Solution
A solution is a wrapper for a very wide range of components including tables, cloud flows, and security roles.
When you make a model-driven app, ensure that the assets associated with it are held inside a solution.
Solutions have two forms:
- Managed solutions generally permit only a small amount of customization or no customization at all.
- Unmanaged solutions give makers full control over the project that they are creating.
Unmanaged solutions are used by makers and developers for exporting projects as a managed solution for use in non-development environments, such as a production environment. This allows for a high level of control for application lifecycle management.
Solution explorer
The name given to the app designers make edits to solution. While it is a legacy experience, solution explorer currently offers additional functionality when editing solutions.
To access the modern solution interface follow these steps:
- Select an environment.
- On the left pane, select Solutions, and then open an unmanaged solution where you want to add a model-driven app. Create a solution if one doesn't already exist.
- Explore the components of the solution.
Find out more about solutions here
Subarea
A part of the model driven app navigation experience. Subareas (tables) and pages appear under the group that they're configured within in the app designer.
Subgrid
Subgrids are areas of main forms that display a list of records from a Dataverse table, while remaining on the form. Typically, a subgrid is used to display child records that relate to the parent record currently under review. For example, books written by an author.
While subgrids are displayed in a model-driven app, they are a property of the form.
Tab
Every form has at least one tab and these are relevant to how we present table record data. A form can have multiple tabs. This lets you, the maker, offer the user a range of ways of looking at the same record. This is often a better user experience, or a more logical way of presenting the data in the record.
From a site map perspective a tab is a "group" when using the site map designer versus a subarea for tables and an area to hold subareas.
Learn more about working with tabs
Table
A table is a method of storing data in columns (or fields) within Dataverse. Tables where formerly called entities.
Tables, in the context of model-driven apps, only exist within a Dataverse database.
A single row within a table is known as a record. For example, a single customer, and the columns describe metadata associated with the customer such as the name, telephone number, or credit limit.
Every model-driven app must contain at least one table. Much of the process of creating a model-driven app is selecting the tables most relevant to solving the business problem.
Tables have views, forms and business rules associated with them.
Additionally, tables also have charts as well as dashboards where charts are presented.
Tables can relate to other tables and these are defined via the relationships that have been set up between them.
Find out more about configuring tables here
Table designer
The design experience for creating and editing tables. This lets you create tables, columns, relationships, business rules, and views.
Create a custom table using the table designer
Unified Interface
The Unified Interface provides a consistent and accessible user experience across devices—whether on a desktop, laptop, tablet, or phone. The predecessor to the Unified Interface was known as the web interface.
Find out more about the unified interface here
Validate
The process by which an app maker confirms if the model-driven app has all the components required for it to function properly.
Learn how to validate an app
View
A tabular representation of records in a Dataverse table. Tables can have multiple views.
Views can be pre-filtered and it is possible to define the specific views that a model-driven app will make available to users.
Tables can have multiple views associated with them and you can define the table views relevant to a model-driven app at the time that you create them.
Find out more about views here
Workflow
A classic workflow is a series of functions or methods, called steps, that are performed sequentially and apply to data contained within tables. The workflow can change the processing direction by using conditionals, referred to as conditional branches.
In many cases classic workflows should be replaced by Power Automate flows.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/power-apps/maker/model-driven-apps/model-driven-app-glossary | 2022-08-07T19:47:15 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['../../developer/model-driven-apps/media/customization-account-grid-command-bar.png',
'Layout for a Unified Interface app.'], dtype=object) ] | docs.microsoft.com |
Introduction¶
Python library interfacing LED matrix displays with the MAX7219 driver (using SPI) and WS2812 & APA102 NeoPixels (inc Pimoroni Unicorn pHat/Hat and Unicorn Hat HD) on the Raspberry Pi and other Linux-based single board computers - it provides a Pillow-compatible drawing canvas, and other functionality to support:
multiple cascaded devices
LED matrix, seven-segment and NeoPixel variants
scrolling/panning capability,
terminal-style printing,
state management,
dithering to monochrome,
Python 3.6+ is supported
A LED matrix can be acquired for a few pounds from outlets like Banggood. Likewise 7-segment displays are available from Ali-Express or Ebay. | https://luma-led-matrix.readthedocs.io/en/latest/intro.html | 2022-08-07T19:53:49 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['https://raw.githubusercontent.com/rm-hull/luma.led_matrix/master/doc/images/devices.jpg',
'max7219 matrix'], dtype=object) ] | luma-led-matrix.readthedocs.io |
Sensor Motion¶
Python package for analyzing sensor-collected human motion data (e.g. physical activity levels, gait dynamics).
Dedicated accelerometer devices, such as those made by Actigraph, usually bundle software for the analysis of the sensor data. In my work I often collect sensor data from smartphones and have not been able to find any comparable analysis software.
This Python package allows the user to extract human motion data, such as gait/walking dynamics, directly from accelerometer signals. Additionally, the package allows for the calculation of physical activity (PA) or moderate-to-vigorous physical activity (MVPA) counts, similar to activity count data offered by companies like Actigraph.
Requirements¶
This package has the following dependencies, most of which are just Python packages:
- Python 3.x
- numpy
- Included with Anaconda. Otherwise, install using pip (
pip install numpy)
- scipy
- Included with Anaconda. Otherwise, install using pip (
pip install scipy)
- matplotlib
- Included with Anaconda. Otherwise, install using pip (
pip install matplotlib)
Usage¶
Here is brief example of extracting step-based metrics from raw vertical acceleration data:
Import the package:
import sensormotion as sm
If you have a vertical acceleration signal
x, and its corresponding
time signal
t, we can begin by filtering the signal using a low-pass
filter:
b, a = sm.signal.build_filter(frequency=10, sample_rate=100, filter_type='low', filter_order=4) x_filtered = sm.signal.filter_signal(b, a, signal=x)
Next, we can detect the peaks (or valleys) in the filtered signal, which gives us the time and value of each detection. Optionally, we can include a plot of the signal and detected peaks/valleys:
peak_times, peak_values = sm.peak.find_peaks(time=t, signal=x_filtered, peak_type='valley', min_val=0.6, min_dist=30, plot=True)
From the detected peaks, we can then calculate step metrics like cadence and step time:
cadence = sm.gait.cadence(time=t, peak_times=peak_times, time_units='ms') step_mean, step_sd, step_cov = sm.gait.step_time(peak_times=peak_times)
Physical activity counts and intensities can also be calculated from the acceleration data:
x_counts = sm.pa.convert_counts(x, time, integrate='simpson') y_counts = sm.pa.convert_counts(y, time, integrate='simpson') z_counts = sm.pa.convert_counts(z, time, integrate='simpson') vm = sm.signal.vector_magnitude(x_counts, y_counts, z_counts) categories, time_spent = sm.pa.cut_points(vm, set_name='butte_preschoolers', n_axis=3)
For a more in-depth tutorial, and more workflow examples, please take a look at the tutorial.
I would also recommend looking over the documentation to see other functionalities of the package.
Contribution¶
I work on this package in my spare time, on an “as needed” basis for my research projects. However, pull requests for bug fixes and new features are always welcome!
Please see the develop branch for the development version of the package, and check out the issues page for bug reports and feature requests.
Getting Help¶
You can find the full documentation for the package here
Python’s built-in help function will show documentation for any module
or function:
help(sm.gait.step_time)
You’re encouraged to post questions, bug reports, or feature requests as an issue
Alternatively, ask questions on Gitter | https://sensormotion.readthedocs.io/en/develop/ | 2022-08-07T18:23:31 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['_images/filter.png', '_images/filter.png'], dtype=object)
array(['_images/peak_detection.png', '_images/peak_detection.png'],
dtype=object)
array(['_images/pa_counts.png', '_images/pa_counts.png'], dtype=object)] | sensormotion.readthedocs.io |
Packaging Existing Binaries¶
There are specific scenarios in which it is necessary to create packages from existing binaries, for example from 3rd parties or binaries previously built by another process or team that are not using Conan. Under these circumstances building from sources is not what you want. You should package the local files in the following situations:
- When you cannot build the packages from sources (when only pre-built binaries are available).
- When you are developing your package locally and you want to export the built artifacts to the local cache. As you don’t want to rebuild again (clean copy) your artifacts, you don’t want to call conan create. This method will keep your build cache if you are using an IDE or calling locally to the conan build command.
Packaging Pre-built Binaries¶
Running the
build() method, when the files you want to package are local, results in no added value as the files
copied from the user folder cannot be reproduced. For this scenario, run conan export-pkg command directly.
A Conan recipe is still required, but is very simple and will only include the package meta information. A basic recipe can be created with the conan new command:
$ conan new Hello/0.1 --bare
This will create and store the following package recipe in the local cache:
class HelloConan(ConanFile): name = "Hello" version = "0.1" settings = "os", "compiler", "build_type", "arch" def package(self): self.copy("*") def package_info(self): self.cpp_info.libs = self.collect_libs()
The provided
package_info() method scans the package files to provide end-users with
the name of the libraries to link to. This method can be further customized to provide additional build
flags (typically dependent on the settings). The default
package_info() applies as follows: it
defines headers in the “include” folder, libraries in the “lib” folder, and binaries in the “bin” folder. A different
package layout can be defined in the
package_info() method.
This package recipe can be also extended to provide support for more configurations (for example,
adding options: shared/static, or using different settings), adding dependencies (
requires),
and more.
Based on the above, We can assume that our current directory contains a lib folder with a number binaries for this “hello” library libhello.a, compatible for example with Windows MinGW (gcc) version 4.9:
$ conan export-pkg . Hello/0.1@myuser/testing -s os=Windows -s compiler=gcc -s compiler.version=4.9 ...
Having a test_package folder is still highly recommended for testing the package locally before upload. As we don’t want to build the package from the sources, the flow would be:
$ conan new Hello/0.1 --bare --test # customize test_package project # customize package recipe if necessary $ cd my/path/to/binaries $ conan export-pkg PATH/TO/conanfile.py Hello/0.1@myuser/testing -s os=Windows -s compiler=gcc -s compiler.version=4.9 ... $ conan test PATH/TO/test_package/conanfile.py Hello/0.1@myuser/testing -s os=Windows -s compiler=gcc -s ...
The last two steps can be repeated for any number of configurations.
Downloading and Packaging Pre-built Binaries¶
In this scenario, creating a complete Conan recipe, with the detailed retrieval of the binaries could be the preferred method, because it is reproducible, and the original binaries might be traced. Follow our sample recipe for this purpose:
class HelloConan(ConanFile): name = "Hello" version = "0.1" settings = "os", "compiler", "build_type", "arch" def build(self): if self.settings.os == "Windows" and self.settings.compiler == "Visual Studio": url = ("https://<someurl>/downloads/hello_binary%s_%s.zip" % (str(self.settings.compiler.version), str(self.settings.build_type))) elif ...: url = ... else: raise Exception("Binary does not exist for these settings") tools.get(url) def package(self): self.copy("*") # assume package as-is, but you can also copy specific files or rearrange def package_info(self): # still very useful for package consumers self.cpp_info.libs = ["hello"]
Typically, pre-compiled binaries come for different configurations, so the only task that the
build() method has to implement is to map the
settings to the different URLs.
Note
- This is a standard Conan package even if the binaries are being retrieved from elsewhere. The recommended approach is to use conan create, and include a small consuming project in addition to the above recipe, to test locally and then proceed to upload the Conan package with the binaries to the Conan remote with conan upload.
- The same building policies apply. Having a recipe fails if no Conan packages are created, and the --build argument is not defined. A typical approach for this kind of packages could be to define a build_policy="missing", especially if the URLs are also under the team control. If they are external (on the internet), it could be better to create the packages and store them on your own Conan server, so that the builds do not rely on third party URL being available. | https://docs.conan.io/en/1.21/creating_packages/existing_binaries.html | 2022-08-07T20:13:56 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.conan.io |
- 25 May 2022
- 5 Minutes to read
-
- DarkLight
Widgets
- Updated on 25 May 2022
- 5 Minutes to read
-
- DarkLight
Graylog supports a wide variety of widgets which allow you to quickly visualize data from your logs. A widget is either a Message Table or an Aggregation. This section intends to give you some information to better understand each widget type and how they can help you see relevant details of the many logs you receive.
A widget can be freely placed inside a query. A widget can be edited or duplicated by clicking on the chevron at the top right corner of the widget.
Creating a widget
To add a widget to your search or dashboard:
- Click on Create in the sidebar.
- You may also directly click on the plus sign (+ ).
You can create an empty Aggregation or a predefined widget by selecting Message Table or Message Count.
Empty aggregation widget:
Aggregation
The goal of an aggregation is to reduce the number of data points in a meaningful way to get an answer from them. Data points can be numeric field types in a message (e.g. a
took_ms field which contains how long a page needed to be rendered). They can also be string values which may be used to group an aggregation (e.g an action field which contains the name of the controller action).
Configuring an aggregation
As described in the previous section clicking on -> will create an empty widget on the very top of the search page. Clicking on the top right side will open the widget edit modal.
GROUP BY : This option allows you to “group” your chart by rows and columns. When you create a new group using Group By, the values you select get rolled up into the result. This result can be presented in a variety of ways. You may present the data as a table, chart or colored visualization.
At a glance, if
timestamp is a field attributed to a row it will divide data points into intervals. Otherwise the aggregation will take up to 15 elements of the selected field by default and it will apply the selected METRICS function to the data points.
Example The
timestamp field is aggregated with
avg()on
took_ms. The column
action will give the average loading time for a page per action for every 5 minutes..
VISUALIZATION : In order to display the result of an aggregation it is often easier to compare lots of result values in a graphic. An
Area Chart,
Bar Chart,
Heatmap,
Data Table,
Line Chart,
Pie Chart,
Scatter Plot,
Single Numberor
World Map can be used for VISUALIZATION. A
World Mapneeds geographical points in the form of
latitude,longitude.
SORTING/DIRECTION : The order of result values can be configured here. SORTING defines which field the sorting should be done by and DIRECTION configures whether it will be
ascendingor
descending.
INTERPOLATION : Visualizations like the
Area Chartand
Line Chartsupport different interpolation types. Available interpolation types are
Linear,
Step-afterand
Spline.
EVENT ANNOTATIONS : All viualizations which can display a timeline (
Area Chart,
Bar chart,
Line Chart,
Scatter Plot) support event annotations. Each event will be displayed as an entry on the time axis.
Message Table
The Message Table displays the messages and their fields. The Message Table can be configured to show the message fields and the actual message. The actual message is rendered in blue font below the fields. Clicking on a message row opens the detailed view of a message with all its fields.
Value and Field Actions
Values and fields are visible in the Sidebar and in Data Tables and Detail Message Rows. When you click on a value or a field you will get a context menu. You can use this to execute different actions.
Field actions
Various Field actions are displayed based on field type and location whenever a field name (not its value) is clicked on.
Chart : This will generate a new Widget containing a line chart where the field's average value is displayed over time. This chart can be taken as a starting point for a more defined aggregation. This is only possible in fields that are numerical.
Show top values : This action will generate a new Widget containing a data table where the field values are listed in rows and the number of occurrences will be displayed next to it. This was formerly known as the “Quick Values” action.
Statistics : Here field values are given to various statistics functions depending on field in this table.
Remove from all tables : Remove the field from the list displayed fields in all tables.
Value actions
The value actions produce different results depending on the type of value and where the menu is opened. The following actions can be executed.
Insert into view : This action will open up a modal where a view can be selected. A selectable list of Parameters will appear in the selected view. After choosing a parameter a new browser tab which contains the view with the value used in the parameter will appear. This action is only available in Graylog Operations.
Exclude from results : Will add to the query to exclude all results where the field contains the value of the value action.
Add to query : Will add NOT field:value to the query to filter the results additionally for where the field has the value of the value action.
Use in new query : Will add field:value open a new view tab with a query string.
Show documents for value : This is available in Data Tables. It will display documents which were aggregated to display this value.
Create extractor : This provides a short cut to create an extractor for values of type string in Message Tables.
Highlight this value : This action will highlight this value for this field in all Message Tables and Data Tables.
Repositioning and Resizing
Widgets can be freely placed inside the search result grid. You can drag and drop them with the three lines to the left of the widget name or you can resize them by using the gray arrow in the bottom-right corner. To expand a widget to full grid width, click on the arrow in its top-right corner.
If you want to expand the view of aggregated data in your Log View widget, go to Focus on the Widget. | https://docs.graylog.org/docs/widgets | 2022-08-07T18:57:56 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/views_widget.png',
'views_widget'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/log_view_window.png',
'log_view_window'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/views_empty_aggregation_edit.png',
'views_empty_aggregation_edit'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/aggregation_view.png',
'aggregation_view'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/views_messages.png',
'views_messages'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/views_field_actions_v2.png',
'views_field_actions_v2'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/views_value_actions.png',
'views_value_actions'], dtype=object)
array(['https://cdn.document360.io/49d29856-3708-4e61-a1fc-cf1f90558543/Images/Documentation/widget_repositioning_resizing.png',
'widget_repositioning_resizing'], dtype=object) ] | docs.graylog.org |
Teams app planning checklist
An app's lifecycle extends from planning your app to eventually deploying it, and beyond. It takes more than knowing your user and requirements to plan your app. Depending on your app needs, you may also consider planning for future updates.
Let's take a practical look at planning for an app's lifecycle.
Relevant questions
Here's a checklist of questions to consider when you plan your app. Use it as a guideline to ensure that your plan covers the important details of app development.
Understand your user
Understand the problem
Understand the limitations of the app
Provide authentication
Plan onboarding experience
Personal scope apps
Shared scope apps
Choose build environment
Suggestion: Options that help select the correct environment based on app needs.
Plan for testing app
Suggestion: Options that help determine the best testing environment for the app.
Plan for app distribution
Suggestion: Options that help determine the best distribution model.
Plan for hosting your Teams.
Plan beyond app building
Decide what goes in Teams: Whether it's a new app or an existing one, check if you want the entire app within the Teams client. If you integrate only a portion of the app, focus on sharing, collaborating, initiating, and monitoring workflows.
Plan the onboarding experience: Craft your onboarding experience with your key users in mind. How you introduce a chat bot installed in a channel with a thousand people, is different when it's installed in a one-to-one chat.
Plan for the future: Identify new features the user will prefer in the current solution. Any new features may impact app design and architecture.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/design/planning-checklist | 2022-08-07T18:15:49 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['../../assets/images/teams-app-host.png',
'Illustration showing app hosting for Teams app'], dtype=object)] | docs.microsoft.com |
* @param {Boolean} [cancelOnDelay=true] By default, each call to {@link #delay} cancels any pending invocation and reschedules a new* invocation. Specifying this as `false` means that calls to {@link #delay} when an invocation is pending just update the call settings,* If the `cancelOnDelay` parameter was specified as `false` in the constructor, this does not cancel and* reschedule, but just updates the call settings, `newDelay`, `newFn`, `newScope` or `newArgs`, whichever are passed. | https://docs.sencha.com/extjs/5.1.0/api/src/DelayedTask.js.html | 2022-08-07T18:56:42 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.sencha.com |
get
Retrieves the historical TWAP for the specified pair - this is the global TWAP across all exchanges which supports this pair, including all cross rates pairs.
Default results are over a 1m tick / 24h lookback period.
Price is calculated as a time weighted moving average across all exchanges.
If the parameter
exchange is specified, the data returned is the TWAP for that pair on that exchange. | https://docs.amberdata.io/reference/spot-twap-pairs-historical | 2022-08-07T19:12:53 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.amberdata.io |
[.
API
The general goal of the crate is that, as much as possible, the vecs here
should be a "drop in" replacement for the standard library
Vec type. We
strive to provide all of the
Vec methods with the same names and
signatures. The "exception" is of course that the element type of each
method will have a
Default bound that's not part of the normal
Vec type.
The vecs here also have additional methods that aren't on the
Vec type. In
this case, the names tend to be fairly long so that they are unlikely to
clash with any future methods added to
Vec.
Stability
tinyvec is starting to get some real usage within the ecosystem! The more
popular you are, the less people want you breaking anything that they're
using.
- With the 0.4 release we had to make a small breaking change to how the vec creation macros work, because of an unfortunate problem with how
rustcwas parsing things under the old syntax.
If we don't have any more unexpected problems, I'd like to declare the crate to be 1.0 by the end of 2020. | https://docs.rs/tinyvec/0.4.1/tinyvec/ | 2022-08-07T19:15:10 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.rs |
Reset Cause:
RESET_BASE_DEF(EXTERNAL, 0x03U, "EXT")
RESET_EXT_DEF(EXTERNAL, UNKNOWN, 0x00U, "UNK")
RESET_EXT_DEF(EXTERNAL, PIN, 0x01U, "PIN")
results in enums which includes the entries:
RESET_EXTERNAL = 0x03
RESET_EXTERNAL_PIN = 0x0301
For a complete listing of all reset base and extended definitions, see reset-def.h for source code. | https://docs.silabs.com/gecko-platform/3.2/service/api/group-reset-def | 2022-08-07T19:18:55 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.silabs.com |
Simplification of hub-and-spoke topologies
Many large IP Security (IPsec) virtual private networks (VPNs) use a hub-and-spoke topology to reduce the number of connections required for full connectivity. But even a hub-and-spoke IPsec VPN network can be difficult to scale for any of the following reasons:
- Hub configuration can become exceedingly complex when there are many spoke devices because VPN endpoints are statically configured. This problem is exacerbated in networks when addressing is frequently changed.
- A full set of tunnels consumes a great many IP addresses because every set of tunnel endpoints requires a separate IP address space.
- The hub becomes a single point of failure for the network.
- The hub must process all network traffic and can become a processing bottleneck.
A dynamic multipoint VPN improves scaling for hub-and-spoke networks by allowing IPsec tunnels to be dynamically added as needed, without configuration. This greatly simplifies hub configuration and reduces the need for IP address space. In addition, after the hub-and-spoke network has been dynamically built out, network spokes can learn to communicate directly with each other thereby reducing the burden on the hub. | https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/security-and-vpn/dmvpn/dmvpn-overview/simplification-of-hub-and-spoke-topologies | 2022-08-07T18:45:48 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.vyatta.com |
define availabe only during build time.
That can be done through the web UI or via the command line, like so:
platform project:variable:set env:COMPOSER_AUTH '{"http-basic": {"my-private-repos.example.com": {"username": "your-username", "password": "your-password"}}}' --json --no-visible-runtime
The
env: prefix will make that variable appear as its own Unix environment variable available by Composer during the build process. The optional
--no-visible-runtime flag means the variable will only be defined during the build hook, which offers slightly better security.
Build your application with Composer
You simply need to enable the default Composer build mode in your
.platform.app.yaml:
build: flavor: "composer"
In that case, Composer will be able to authenticate and download dependencies from your authenticated repository.. | https://docs.platform.sh/tutorials/composer-auth.html | 2017-08-16T19:44:00 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.platform.sh |
Challenges Facing Consumers With Limited English Skills
In The Rapidly Changing
Telecommunications Marketplace
Prepared by:
Consumer Services and Information Division
Telecommunications Division
Consumer Protection and Safety Division
California Public Utilities Commission
October 5, 2006
Table of Contents
Table of Contents
Executive Summary
California Public Utilities Commission (Commission) Decision (D.) 06-03-013 ("Consumer Protection Initiative" or "CPI") orders Commission staff to perform a study of the special needs of and challenges faced by California telecommunications consumers with limited English proficiency (LEP). The decision contemplates that the report resulting from this study will serve as "both as a short-term action document with respect to potential new rules and education and enforcement programs, as well as a longer-term reference document"1.
In response to this mandate, Commission staff and a language access consultant assembled information on the language demographics of California, services currently available to LEP Californians through the Commission and telecommunications carriers, and the challenges faced by LEP telecommunications consumers. Sources used in the production of this report include census and other demographic data, records of past and current Commission activities, the Internet and other research into the language accessibility practices of state and federal government agencies, information received from telecommunications carriers, as well as comments and information provided by carriers, community based organizations (CBOs) and consumers groups both in writing and at a series of workshops and public meetings held for this purpose. Pursuant to requests for an extension of the original 180 day study deadline (September 8, 2006), Commission Executive Director Steve Larson granted additional time for parties to submit comments on the draft report, and extended the deadline for this staff report until October 5, 2006.
This document, which represents a report describing research and conclusions to date, includes some recommendations for immediate action and specifies further information for staff to gather in order to make a comprehensive proposal for commission and industry action to address the challenges and problems identified in the course of this study.
Staff recommends that the Commission's next steps on this issue include the development of a set of options for targeted Commission actions that take into account the costs, benefits, and feasibility of solutions to the documented challenges and problems facing LEP consumers. Staff contemplates that this effort will continue beyond the original 180 day deadline specified in the D. 06-03-013. As detailed in this report,.
Overview of Recommendations
The information on available multilingual services as well as the needs and concerns expressed by representatives of LEP communities revealed several issues and concerns. Based on this information, it appears that the Commission should take immediate actions to facilitate improved communications between carriers and CBOs to ensure that systematic problems facing the LEP communities are heard and resolved, and should consider making staff more available to consumers throughout the state to assist in filing informal and (when necessary) formal complaints with the Commission. In addition, the Commission should increase attention and resources available to its own bilingual services office to augment its ability to serve California consumers. The Commission should also broaden the efforts of the Public Advisor's Office already taken in the CPI initiative to add telecommunications education in languages such as Russian and Armenian, which have increasing populations in the state. Moreover, the Commission should develop and propose a set of targeted rules for telecommunications carriers for consideration in a formal Commission proceeding. This should not be a "one-size-fits all" proposal, but instead should take into account the varied circumstances (such as size, geographic and demographic characteristics of the population served, and services offered) of different telecommunications carriers and target rules to provide appropriate protection while allowing flexibility appropriate to these differences. Specific recommendations include:.
2. Reconcile the disparate language requirements in various Commission decisions and programs of target audiences.
4. Based on current demographic data, add to its list of languages appropriate for consumer education and public outreach in California languages with particularly high rates of linguistically isolated households or with growing or concentrated populations.
5. Improve CAB's tracking ability in the new CAB database scheduled to be on line in 2007 to capture the language in which complaints are filed, and whether the outcomes of complaints differ due to language barriers.
6. Send appropriate language-trained staff from the Commission's Consumer Affairs Branch , CBO representatives, and carrier staff are likely to be available to attend, e.g. weekday evenings. In addition to bill clinics, other activities could include dispute resolution and consumer education.
7. Set up procedures to rapidly refer cases of suspected fraud, marketing abuse, and other possible violations involving in-language marketing and customer service to the Commission's Utility Enforcement Branch and to its. Staff should monitor any collaborative process and corresponding results that carriers and/or CBOs initiate to develop a voluntary carrier code of conduct pertaining to in-language issues and challenges. The current CPI education process may serve as model for this effort.
3. Expand consumer education programs to address identified problems and concerns of LEP communities. Based on CBO input, this should include more in-language materials and materials developed specifically for the comprehension of different languages, cultural and educational groups. in-language issues as the nature and demographics of California evolve with respect to language and to ensure the Commission's efforts remain current.
2. Explore how in-language programs developed and implemented under D.06-03-013 may inform challenges in the other utility industries in California.
Research Findings
Multilingual Services at the Commission: (ULTS) programs. Other current activities that include multilingual requirements or educational components include the Commission's involvement with the California Utilities Diversity Council (CUDC), an organization made up of representatives of the utility industry, the community, and the Commission's Utility Supplier Diversity Program. CUDC recently proposed a set of language access principles for California utilities; if adopted by the Commission, these principles may assist the Commission and utilities in developing policies and constructive rules for improving service to LEP and linguistically isolated consumers. Past Commission activities that have addressed language-based issues include the Telecommunications Trust Fund (TET), the electricity restructuring education program, and the Telecommunications Consumer Protection Fund, which support education and outreach on various aspects of the telecommunications industry.
Education, Outreach, and Customer Service: The Commission operates in compliance with the Dymally-Alatorre Bilingual Services Act, is monitored by the State Personnel Board, and commits necessary resources to meet the needs of the public in accordance with legal mandates. An ever-increasing number of written consumer materials are available to customers in Spanish, Chinese, and Vietnamese languages, i.e., consumer handbooks, consumer advisory information, and customer complaint forms. Moreover,. In addition, the Commission offers differential pay in accordance with the State Personnel Board Rules and Bilingual Services Act, and has incorporated continuous language training courses into its training goals. The Commission's CAB staff can speak Spanish, Tagalog, Cantonese, and French. The staff also has access to the language line, which serves 150 languages and has been in use for over 10 years.
Enforcement: The Utility Enforcement Branch of the Consumer Protection and Safety Division (CPSD) has investigated possible violations of the state's Public Utilities Code (PU Code) and Commission rules in the telecommunications area and other industries. Some investigations of alleged slamming and cramming by specific telecommunications companies have involved many requires resources and activities that may not be required for cases in which most complainants are English proficient. CPSD is increasing its capacity to pursue enforcement actions through creation of the Telecommunications Consumer Fraud Unit, and hiring and training of Utility Enforcement Branch investigators. The Commission will monitor the success of these changes as they are implemented.
Carriers' Multilingual Efforts.: In order to gather information on carrier practices, Commission staff sent survey questions to all certificated telecommunications carriers in California (wireless and wireline) asking for information on their services for and interactions with LEP consumers. Approximately 100 telecommunications carriers out of approximately 1,300 responded to this request for information. Several Commission decisions (including D. 96-10-076 and the recent CPI decision D. 06-03-013) and PU Code Section 2890(b) require some carriers to provide limited information in languages other than English to LEP customers under certain circumstances, such as when they make sales in non-English languages. Other multilingual telecommunications services are initiated by the carriers themselves to better serve their customers or to attract new customers. In general, larger carriers and those serving more diverse areas offer more services in more languages than smaller carriers. Services that may be provided in languages other than English include marketing and outreach information (such as brochures on understanding your phone bill) and customer service; but carriers generally focus on providing information in the most common non-English languages, believing that this is more cost effective than attempting to provide information in less-common languages. Few carriers provide service contracts or key terms and conditions of service in-language.
Community Based Organizations' Concerns: Based on input received from consumer advocates in written comments and at the four public meetings held around the state to gather information for this report, there is a need for more in-language information and service. Issues discussed at these meetings also suggest a need for increased Commission enforcement of code and rule violations by carriers that target LEP populations, and increased oversight of dealers, agents and resellers that sell telecommunications products and services to LEP consumers under contract with telecommunications carriers. Concerns described by consumer advocates include, but are not limited to:
· A lack of availability of in-language contracts or written statements of key terms and conditions of service for customers initiating service with telecommunications carriers. Lack of these materials makes it more difficult for consumers to be sure that the service they have purchased is what was represented to them during an in-language sale, leaving a potential for fraud or marketing abuse.
· A lack of in-language billing, which similarly makes it difficult for consumers who purchase service based on an in-language interaction to understand their bills and be sure that the service meets their expectations.
· A relative lack (especially historically) of translated consumer education and public outreach (as opposed to marketing) materials from the Commission and carriers.
· A relative lack (especially historically) of high-quality in-language customer service by the Commission and carriers.
· A lack of education and outreach materials developed specifically to address the special situations and concerns of LEP consumers, including materials that are appropriate to the target community reading levels (which may include "low literacy" written materials or spoken-word outreach through radio or television for some language communities), are sensitive to cultural differences, and include information that addresses non-mainstream needs in obtaining in-language service and support.
· Unfair or fraudulent marketing tactics by wireless phone and prepaid phone card dealers or agents ("resellers") that target vulnerable LEP communities.
As discussed in this report's recommendations, some of these issues can be addressed fairly immediately through improvements in the development and translation of Commission consumer educational materials, while others require further study to develop appropriate solutions. A few issues, such as allegations of fraud aimed at LEP consumers, will require ongoing attention and will benefit from recent improvements in the Commission's ability to respond quickly through innovations such as the CPSD Utility Enforcement Branch's Fraud Hotline and the formation of the Telecom Fraud Unit.
Challenges Facing Consumers With Limited English Skills In The Rapidly Changing Telecommunications Marketplace
I) Introduction and Report Organization
Commission Decision 06-03-013 ("Consumer Protection Initiative" or "CPI") orders Commission staff to perform a study of the special needs of and challenges faced by California telecommunications consumers with limited proficiency in English (LEP consumers). Study goals specified in this decision include:
· Verifying the languages needed for consumer education materials and programs.
· Identifying and reviewing challenges facing LEP consumers.
· Developing strategies for communicating relevant information to LEP populations.
· Recommending rules or programs (if appropriate) to improve service to LEP consumers, and estimating the costs (and benefits) of these recommendations.
The decision contemplates this report serving "both as a short-term action document with respect to potential new rules and education and enforcement programs, but also as a longer-term reference document"2. The overall intention of the report will be to identify gaps in the consumer education of and services available to LEP consumers from the Commission and telecommunications companies and, to the extent possible, suggest ways of filling service and consumer education gaps. The CPI decision also notes that LEP customers may be targeted for fraudulent and deceptive communications in their own languages by unscrupulous persons or businesses, and asks that the study assess whether these in-language needs are sufficiently met by the Commission's current education and enforcement efforts.
To meet the study's goals, Commission staff and a language access consultant assembled information on the language demographics of California, services currently available to LEP Californians through the California Public Utilities Commission and telecommunications companies, and the challenges faced by LEP telecommunications consumers. Sources used in the production of this report include census and other demographic data, records of past and current Commission activities, Internet and other research into the language accessibility practices of State and Federal government agencies information, information received from telecommunications carriers, and comments and information provided by community based organizations (CBOs) and consumers groups both in writing and at a series of workshops and public meetings held for this purpose. In response to requests dated August 25, 2006, for an extension of the original 180 day study deadline (September 8, 2006) from stakeholders in this process, Commission Executive Director Steve Larson granted additional time for parties to submit comments on the draft report, and extended the deadline for this staff report until October 5, 2006.
This report does not attempt a cost-benefit analysis of the provision of LEP services. This is both because information on the full costs and benefits of the myriad approaches to providing in-language services was not readily available in the timeframe for this report, and because a more targeted analysis will be possible once the Commission and the staff better define a desired approach to improving language access. Efforts to define this approach and specific policy options will be included in a comprehensive proposal which will include a set of targeted rules to address the problems and challenges identified in this report, for consideration by the Commission The purpose of this proposal will be to focus comments and stakeholder proposals in the context of a future Order Instituting Rulemaking (OIR) to address persistent problems facing LEP customers which are unlikely to be solved through education alone.
Next steps in the Commission's focus on this issue.
Part II of this report contains background information on the linguistic demographics of the state of California, including the most commonly spoken languages in the state, and some trends in the growth of various languages in the state. This background section also includes a discussion of language access requirements and activities of other state agencies and an overview of similar requirements for federal government agencies; these requirements provide a context for examining the in-language activities of the Commission and telecommunications companies, and may provide models for additional future actions.
Part III of this report describes existing multilingual education efforts, as well as language-related enforcement activities, and availability and effectiveness of Commission services to LEP consumers.
Part IV focuses on the in-language education, outreach, marketing, and customer service activities of telecommunications companies; information in this section is based primarily on information provided by telecommunications service providers in workshops, comments, and responses to a staff request for information.
Part V of identifies challenges facing LEP consumers, including whether current Commission and carrier education, enforcement, and service actions meet existing language access needs. Staff identified barriers and concerns facing LEP consumers through written comments provided by community based organizations and at a series of public meetings organized in cooperation with and at the request of CBOs, especially Latino Issues Forum.
Part VI summarizes these challenges facing LEP communities identified throughout the report, and provides options to address these challenges (where possible) or to study them further. Parts VII and VIII present recommendations and conclusions, respectively.
II) Background
A. Ethnic and Linguistic Landscape of California
California has become the most ethnically, racially, and linguistically diverse state in the nation with growing immigrant and limited English proficient populations from all around the globe. This complex and richly diverse state represents a demographic transformation without historical precedent. The growth of the population as a whole has increased dramatically, and as recently as 1950, California was home to only 10 million people, or about one out of every 15 U.S. residents. By 1990, the state's population had tripled to almost 30 million. By the year 2000 Californians numbered over 34 million, and by 2004 the population exceeded 36.5 million, or approximately 1 in 8 U.S. residents. The U.S. Census Bureau projects that by 2025 California will be home to 50 million residents with Hispanics representing the largest single ethnic group. This phenomenon affects businesses, government agencies, educational institutions, and communities throughout the state.
When it comes to language diversity, California ranks at the top worldwide with Californians speaking between 179 and 220 languages, according to different popular sources and reports. The ever-changing face of the people who populate California and the languages they speak, including dialects, regionalisms and other variations create unique challenges for the delivery of every kind of service in languages other than English. Adding to the complexity of these challenges are issues of literacy levels and cultural aspects within and among different populations and their communities. In order to best identify appropriate languages for consumer education and for the development of effective strategies of communication, consideration of multiple aspects and data sources is necessary.
It is the intent of the section to provide a variety of demographic data, including trends and characteristics that are important to consider when determining the language access needs of Californians. This discussion includes but is not limited to the following topics: limited English proficiency; California's ethnic composition; foreign born, immigration and migration trends and numbers; top languages spoken by adult and school age populations; and linguistically isolated households. Some comparisons at the national level are made to serve as a point of reference. U.S. Census Bureau data for 2000 are used unless otherwise noted.
1. Limited English Proficiency
The term "limited English proficient" refers to a person who does not speak and/or read, write, or understand the English language sufficiently to access services to which he or she may be entitled. As of 2000, about 20 percent of California's population -- over 6 million residents experienced difficulty speaking English and it is estimated that those numbers have increased every year to date. Californians' language ability is measured in range from fully bilingual to partially bilingual ("do not speak English well" or "speak English fairly well") to monolingual ("do not speak English at all"). The U.S. Census data measure the levels of "spoken English" and other languages, and not literacy (the ability to read and/or write the English language). Inferences regarding literacy levels in any language made from census data are not necessarily reliable and it is important to note that there is no single definition or measure of literacy that can be applied to the entire adult population.
About 40% of Latinos and Asians overall are limited English proficient (LEP). Central Americans (mainly Guatemalans, Hondurans, and Salvadorans), and Southeast Asians (mainly Vietnamese, Thai, and Hmong) are among those who have the highest rates of limited English proficiency and reach nearly 50% LEP. In the countries from which these populations originate, English is not one of the primary languages whereas in the Philippines and regions of Mexico, English is taught in school and spoken to varying degrees (Source: "California Speaks", APALC).
A look at California's LEP population by racial and ethnic group and subgroup is provided below.
*Not all groups are shown and some may overlap. (Source: Census 2000 SF4 PCT42 Household Language by Linguistic Isolation).
Note that in the table of subgroups below, ethnic groups with relatively high percentages of limited English proficient (LEP) speakers also include those who are Taiwanese, Laotian, Korean, Cambodian, Chinese, Armenian, Iranian, Tongan, Japanese, and Samoan.
* Not all subgroups are shown and some may overlap. (Source: Census 2000 SF4 PCT42)
2. Ethnic Profile of California
California's ethnic profile is provided to give a general overview of the state's diversity, but does not indicate languages spoken by these groups. As evidenced by these numbers, California's Hispanic population is more than double the national average, and the numbers of Asians, who are the fastest growing population in the state, are nearly three times greater. The percentage of the state's total population for both Asian and Hispanic populations increased from 11.03 to 11.63 percent and 32.51 percent to 34.81 percent, respectively. Together in 2004, these two groups made up 46.44% of the state's total population, outnumbering Whites by 1.82% (Resource: California Department of Finance). At the writing of this report, new data from the U.S. Census indicate that the numbers of Hispanics are growing more rapidly in the Southern states than ever before, however California remains one of the states with the largest concentration of Hispanics.
Comments by the Communities for Telecommunications Rights (CTR) are included here as they highlight important trends and information about the Asian Pacific Islander population. "The Asian Pacific Islander ethnicities represent the most rapidly growing populations and are more linguistically isolated than Latinos...From 1990 to 2000, the Asian population grew as much as 52%, followed by Latinos, who grew 43%. This is compared to the state's total population growth of 14%. The Asian and Pacific Islander population is projected to more than double from 4 to 9 million people between 2000 and 2025."
Within the Hispanic and Asian countries and communities as well as within Middle Eastern, European and other groups, members represent a variety of countries of origin, cultural characteristics including religion, differences in languages and dialects, and other important and distinguishing characteristics. It is to be noted that the ethnic groups mentioned in this report represent those with the highest numbers in California, but do not represent all possible ethnic groups.
3. Foreign Born, Immigrant, and Migrant Trends
Foreign born and migrant trends in California offer different but relevant information when considering languages spoken by Californians. For this section of the report, the following definitions provided by the U.S. Census are used: Foreign born persons are those who are not U.S. citizens at birth who are counted by the census, and may be referred to as immigrants herein. Migrants are those who move into, out of, or within a given area.
(Source: The Foreign Born of California, Place of Origin, Region of Residence, Race, Time of Entry and Citizenship";)
A report from the Public Policy Institute of California, California Counts, provides relevant information about California's recent immigrants--or foreign born. Twenty six percent of all Californians are foreign born, with 8 percent (2.8 million) being recent immigrants who arrived between 1990 and 2000. Nearly half of California's new immigrants were born in Mexico and the next largest country of origin, the Philippines, represented about 7 percent of this group. The overwhelming majority comprising 88.5 percent were born in Latin America or Asia. In descending order, the top ten countries of origin for immigrants arriving between 1990 and 2000 are ranked as follows: Mexico (46.2 %); Philippines (6.8%); Vietnam (4.7%); China (3.8%); India (3.6%); El Salvador (3.4%); Korea (2.7%); Guatemala (2.4%); Taiwan (1.75); Japan (1.7%) and: Other (22.9%). Due to less time in California, recent immigrants have had less time and fewer opportunities to learn English. The languages most spoken are integrated in a later section on linguistically isolated households.
(Note: The net migrant figures above indicate the end mathematical difference between entering and parting migrants. This means that the total number of immigrants per any given year may be greater or less than the number indicated.)
Migrant populations in California are unique in their consumer, educational and other needs and characteristics and are a population that is highly challenging to measure and track. However, they represent a significant number of consumers of telecommunication services, and are therefore included in this section of the report.
There has been a steady increase in the number of migrants for the three ethnic groups represented here--Hispanics, Asians, and Whites. Hispanics have consistently outnumbered all other migrant groups, every year and on average, and make up 60 percent of migrants; they come mainly from Mexico, followed by Central Americans from mainly El Salvador, Guatemala, and Honduras. The next largest group, 29 percent, migrates from Asian countries including Taiwan, Vietnam, Korea, Thailand, China, Cambodia and the Philippines. The numbers of Indo-European and Middle Eastern migrants are lower but have consistently increased over the last decade.
4. Primary Languages Spoken at Home
California is home to more residents over the age of five that speak a language other than English at home than any other state in the nation. In 2000, nearly 40 percent of California residents spoke a language other than English at home indicating an increase from 31.5 percent in 1990. The number of residents that speak a language other than English at home increased from 8.6 million in 1990 to 12.4 million people in 2000, a 44 percent increase over the ten year period. Current trends in migration and ethnic composition indicate the trend has been consistent into 2006. Though ethnic composition and migration numbers do not correspond directly with language proficiency, they do provide a context for understanding the linguistic and cultural differences.
The population over 5 years of age in these tables and the information on English Learners in California schools in the table below do not reflect the exact demographics of those who utilize or need access to telecommunications service in California. These numbers do provide an indication of the growing number of people whose primary language is not English who may become consumers of these services in years to come.
(Source: California Department of Education)
5. Linguistically Isolated Households in California
"Linguistically isolated household" in the U.S. Census refers to a household in which no member 14 years or older speaks English "very well". This refers to spoken English and not to literacy, and is a strong predictor of the need for language assistance for adult members of the household.
One quarter of Asian and Latino households are linguistically isolated in comparison to 10 percent of all households in the state. While the younger, school-age populations are learning English, often their parents, guardians, and families do not learn English for a variety of reasons. Social, cultural, educational, generational, and economic factors impact the degree to which this mostly immigrant population learns English.
* Not all groups are shown. Groups may overlap. (Source: Census 2000 SF4 PCT42 (Household Language by Linguistic Isolation).
* Not all subgroups are shown. Subgroups may overlap. (Source: Census 2000 SF4 PCT42 Household Language by Linguistic Isolation).
Based on discussion at the public meetings held for this project, there may be a correlation in some populations between linguistic isolation and low literacy even in the primary language, though detailed information on literacy levels is not available to document this. If this is the case, it may be appropriate to target linguistically isolated populations using oral outreach such as radio, television, and other means, as described later in this report for reaching low literacy populations.
This data alone is does not clearly show which languages have the most population living in linguistically isolated households. The information is organized by subgroup, not by language, and shows the number of the total population in each group and the percent of that number that are linguistically isolated. Subgroups may not perfectly match language groups, since some groups may have more than one common language, or multiple groups on the list may speak variations or dialects of the same language. This makes it difficult to draw clear conclusions about which languages (other than Spanish and Chinese) have the greatest number of linguistically isolated households, and even more difficult to use this data by itself to determine the languages most in need of language access assistance. This data may be most useful when looked at along with data on LEP communities and trends of migration, to get an overall view of the languages spoken by households that may be more comfortable conversing in a language other than English.
B. Government Requirements and Best Practices
1. Dymally-Alatorre Bilingual Services Act of California
The Dymally-Alatorre Bilingual Services Act was enacted in 1973. In passing this Act, the Legislature "found and declared that the effective maintenance and development of a free and democratic society depends on the right and ability of its citizens and residents to communicate with their government and the right and ability of the government to communicate with them." The Act mandates state agencies to eliminate language barriers that preclude Californians, either because they do not speak or write English or because their primary language is other than English, from having equal access to public services to which they may be entitled. This Act mandates that State and local agencies directly involved in the furnishing of information or the rendering of services to the public must employ a sufficient number of qualified bilingual persons in public contact positions to ensure the provision of information and services to the public in the language of the non- or limited English proficient (LEP) people.
The Act further mandates that every State and local agency that serves a substantial number of non-English speaking people, and provides materials in English explaining services, shall also provide the same type of materials in any non-English language spoken by a substantial number of the public served by the agency. In 1977, the Legislature amended the Act to define "substantial" as five percent or more of the people served by any office or unit. When this threshold is met, departments are required to employ a sufficient number of qualified bilingual staff in public contact positions, translate documents providing information about services, rights and benefits, or identify other appropriate means for meeting the language need of LEP persons.
The Act also requires that each State and local agency conduct a biennial language survey to measure the level of public contact at each local office and facility; report the number of contacts received by language; identify staffing available to provide services; and submit their findings to the State Personnel Board (SPB) by March 31 of each even-numbered year. The results of the survey are compiled by the California State Personnel Board and reported to the Legislature. In addition, the SPB requires state agencies to develop corrective action plans to respond to deficiencies identified by the survey and provide other relevant information to the SPB to substantiate their efforts to ensure equal access to services. The results for each agency are posted on the SPB website, however there is a significant time lag before the public has access to this information.
2. Need for Greater Compliance
The findings of the State Personnel Board as reported to the State Legislature and as a result of the 2001-2002 survey which is the most current available to the public, indicated that state agencies understand and comply with aspects of the Act to varying degrees. Lack of compliance may be due to various factors; departments frequently cited the need for technical assistance, funding and resources for recruiting and retaining multilingual staff, resources for staff development, a centralized system for resources and information, qualified interpreters and translation services, and improved survey tools to assist in their compliance efforts. Other challenges to providing meaningful access to government services and complying with the Act include implementing an effective bilingual fluency testing program, including a central monitoring and enforcement system, and improving access to and knowledge of complaint procedures for the limited English proficient population.
In November, 1999, the California State Auditor's Bureau submitted a report to the Legislature titled "Dymally-Alatorre Bilingual Services Act: State and Local Governments Could Do More to Address Their Clients' Need for Bilingual Services." The report concluded that state agencies were not fully complying with the Act and that they could not ensure that they were providing equitable services to clients who required bilingual assistance. The State Personnel Board has worked to address deficiencies by updating and streamlining the biennial language survey methods, by providing more technical assistance and greater oversight of agencies, by forming an advisory group made up of state agency bilingual coordinators and by publishing survey results for all agencies on their website. The Commission's Bilingual Services Coordinator, described below, participates in the SPB advisory group. The next section of this report will include best practices in complying with the Bilingual Services Act in state government agencies.
C. Best Practices in California State Government Agencies
This section is intended to highlight some state agencies and a University of California medical center that make strong efforts to communicate with their limited-English speaking clients. This list of state agencies providing multilingual services is not comprehensive and is based on the most current information available to the public on the State Personnel Board website and in their publications. The discussion of UC Davis Medical Center, below, provides an initial review of LEP education and services available from health care organizations. As Latino Issues Forum notes in their comments on the staff's Study Plan from June 2006, "[h]ealth agencies have much experience in outreaching to LEP clients to inform them about vital information affecting their health," and could be part of a broader review of language access practices of government agencies. One good source for further information may be the California Healthcare Interpreting Association (). Staff would also welcome additional reports on agency practices from CBOs and others, as suggested in LIF's comments. Notable language access and information practices of the agencies profiled below include:
· Public distribution of accurate and culturally appropriate documents in commonly encountered languages, through various formats and media.
· Client access to high quality interpretation and translation services.
· Availability and identification of bilingual staff.
· Initial and continuing training of employees in responsibilities to LEP clients.
· Quality control and oversight of bilingual services.
1. Department of Motor Vehicles
The California Department of Motor Vehicles (DMV) has a statewide consumer base and offices throughout the state. The DMV provides printed materials such as the Driver's License Handbook in 33 languages. On their website, the Driver's License Handbook is available in six languages in addition to English (Spanish, Chinese, Korean, Vietnamese, Tagalog, and Russian).
The DMV contracts out for interpreter services. Certified interpreters are not required at hearings for infractions or medical problems, but DMV is required by Government Code 11435.05-.65 to use certified interpreters when there is an administrative hearing (e.g. appealing a DMV decision); a shortage of qualified interpreters in California can make compliance with this requirement challenging. Interpretation assistance is also provided if an LEP client needs instructions on taking the written portion of the exam.
To increase language access, notices of bilingual staff are posted in local offices and bilingual staff members wear badges indicating the languages in addition to English that they speak. The DMV telephone service centers throughout the state provide interpretation and translation services, and the DMV provides an interactive voice response system primarily in Spanish which refers callers to bilingual staff statewide.
2. Employment Development Department
One of the largest state departments, the state Employment Development Department (EDD) has employees located at hundreds of service locations throughout the state who provide services to millions of Californians each year, including assistance in job placement and referrals, unemployment insurance, disability insurance, employment and training, labor market information, payroll taxes for 17 million California workers.
According to the summary and analysis of the Employment Development Department's bilingual services by SPB, the department does a good job in administering its bilingual services program. The department receives millions of contacts with LEP customers each year, mainly in Spanish, Armenian, Cantonese and Vietnamese. EDD also has employees certified bilingual in 30 different non-English languages including American Sign Language (ASL).
EDD offers multilingual services in hundreds of locations throughout the state via printed forms and publications, telephone inquiries, and their website. Many of EDD's one-stop partnership offices (where clients can receive a variety of state services) provide multilingual services.
Throughout the department's Unemployment Insurance, Disability Insurance and Tax programs, telephone call centers perform initial intake and answer customer inquiries. The call center's toll free telephone number is available in several of the most commonly spoken languages in California. The EDD website contains a number of links to services and programs in Spanish such as Disability Insurance applications and Unemployment Insurance applications.
EDD also tracks individual customer language preferences and further data on the need for multilingual services. This data helps EDD identify additional strategies to increase access to programs and services. EDD is working with community partners to develop a language access complaint process and to train its employees to ensure they are aware of their responsibilities in providing bilingual services. EDD is developing a process to identify which documents should be translated into languages other than English (LOTEs) and is working to assure that certified interpreters provide services at administrative proceedings.
3. Franchise Tax Board
California's Franchise Tax Board (FTB) collects taxes on behalf of the state of California. FTB is committed to providing meaningful services to English and non-English speaking clients. Multilingual agents are available at the call center to handle over 15 different languages. FTB cannot contract with telephone interpretation services like Language Line3 due to confidentiality issues related to financial information and personal identification such as Social Security numbers, so callers speaking a language other than the ones provided are instructed to provide their own interpreters.
FTB Bilingual Services Program employees monitor calls handled by Spanish speaking operators for quality control, and operators receive periodic training in proper vocabulary, language usage, and telephone etiquette. FTB also has a Spanish language service line and web page to provide information and assistance on tax issues. The Spanish service line is equivalent to the English service line in all matters concerning tax assistance.
In addition, FTB utilizes volunteer groups from the Korean, Chinese, Vietnamese and Russian communities to assist LEP taxpayers with their returns.
4. California Department of Education
In addition to providing English language development and supplemental educational services to students learning English, the California Department of Education (CDE) is required by state and federal laws (see below) to provide information to parents of limited English proficient (LEP) students in the language they comprehend. Additional state legislation created and championed by the Asian Legislative Caucus has added timelines and additional requirements for providing information to parents of English learners as required by law.
As a result, in 2005 CDE developed and implemented a web-based resource, the Clearinghouse for Multilingual Documents (CMD) that provides information about public elementary and secondary education documents translated into non-English languages by California educational agencies. The CMD helps districts and county offices to locate useful translations of parental notification documents and reduce redundant translation efforts. In so doing, the CMD helps schools meet state and federal requirements for document translation and parental notification, including the requirements in California Education Code Section 48985, the No Child Left Behind Act, and legislation that originated within the Asian Legislative Caucus in 2004.
The California Education Code requires that "when 15% or more of the pupils enrolled in a public school that provides instruction in kindergarten or any grades one through twelve speak a single primary language other than English as determined from CDE census data, all notices, reports or records must be sent to the parent or guardian of any such pupil by the school or district, shall in addition to being written in English, be written in such primary language, and may be responded to in English or the primary language."4 The federal No Child Left Behind law also requires that information (such as academic assessments, reports, school improvement plans, documents related to individual student progress and programs, and state and federal plans and standards) be translated into the language that parents can comprehend.
5. UC Davis Medical Interpreting and Translating Center
Medical institutions that are operated by the state or that receive federal funding are required by law to provide information and services in the languages spoken by their customers. A private facility may choose not to serve this population in which case it does not have to provide services or materials in languages other than English.
The UC Davis Center for Interpreting and Translating offers medical interpreting to hospital clients in 18 languages. Trained medical interpreters know how to convey the meaning in dual languages using specialized terminology, colloquialisms and idioms. They guarantee in-depth understanding, confidentiality, and reliability in the following languages and dialects: Armenian, American Sign Language (ASL), Cambodian, Cantonese, French, Hindi, Hmong, Korean, Lao, Mandarin, Mien, Punjabi, Russian, Spanish, Thai, Ukrainian, Urdu, and Vietnamese.
The mission of the UC Davis Center for Interpreting and Translating is to provide clients with a full-range of language-related services of the highest quality and utility, in the most user-friendly manner and at the lowest cost consistent with good value. The UC Davis Medical Center is dedicated to enhancing access to health-care services for linguistically and culturally diverse patient population through professional medical interpretation, translation and cross-cultural communication. Multilingual kiosks are being installed throughout the campus and medical center. These kiosks will provide automated instructions guiding patients and their families and visitors to facilities and office locations in several commonly spoken languages at the touch of a fingertip.
D. Existing In-Language Mandates
1. Relevant Sections of the California State Public Utilities Code
A few mandates exist with respect to in-language issues. First, the California Public Utilities (PU) Code contains some references. Namely, PU Code §2890 (b) states the following regarding solicitation materials and orders for a product or service:
. [emphasis added] Written orders may not be used as entry forms for sweepstakes, contests, or any other program that offers prizes or gifts."
PU Code §2889.5 (a) (6) contains additional guidance:
..[emphasis added]
The Commission may wish to consider further investigation of compliance and enforcement of these code sections. In draft report comments, Staff received limited information from carriers and consumer groups on how carriers are currently complying with these code sections or specific suggestions for rules or enforcement mechanisms to ensure compliance with them. For example, while AT&T provided information on some materials and bills that it produces in languages other than English where there is a business supported justification5, it is unclear how these practices comport with PU Code §2889.5(a)(6) and 2890(b). Similarly, there was not much detailed information from other carriers on their in-language material and billing practices and how they relate to the above statutes. In examining the implementation of the above statutes, the Commission may also want to examine other items, such as the suggestion that carriers be afforded discretion as to which languages they provide materials in and the use of objective criteria for adding and deleting languages6.
Likewise, the Commission may seek to solicit more information from consumer groups. For instance, the Asian Law Caucus (ALC) cites preliminary pilot study results indicating that consumers with limited English proficiency (LEP) negotiate the price and terms of telecommunications service solely in other languages, but are given contract and other written documents only in English at the point of sale.7 While ALC calls for development of rules on distribution of in-language materials, it is unclear how the aforementioned PU Code sections relate to this recommendation. Also, comments by the Communities for Telecommunications Rights (CTR) include a recommendation that the Commission require that carriers provide a translation of the key rates, terms and conditions (KRTC) in the language that the telecommunications service was negotiated in by the carrier representative. CTR attaches a one-page KRTC template in English, Spanish, Vietnamese and Chinese as part of its proposal8. Similarly, the Watsonville Law Center and the Division of Ratepayer Advocates recommend providing in-language translations of contracts and/or in-language KRTC summaries to consumers when services are marketed in languages other than English9. The Commission may want to examine how such proposals relate to compliance with PU Code §2889.5(a)(6) and 2890(b). Moreover, in evaluating the use of these code sections and whether to adopt additional rules or enforcement mechanisms, the Commission may need to examine whether these statutes should be uniformly applicable to all types of telecommunications carriers. Furthermore, it may wish to re-examine and seek updates to party positions on such in-language rules explored earlier in this proceeding10.
2. Relevant Commission Orders
The Commission adopted a few in-language provisions in Commission decisions in the mid-1990s, however these requirements were later modified. Specifically, the Commission established certain in-language requirements when it opened the state's local telecommunications market to competition. In D.95-07-054, the Commission established interim rules for local exchange service competition in California. In that decision, it required that competitive local exchange carriers (CLCs) making sales in a language other than English provide the customer with a confirmation letter written in the language in which the sale was made describing the services ordered and itemizing all charges which will appear on the customer's bill11. Later, in D.95-12-056, the Commission expanded upon the CLC rules in D.95-07-054 and ordered that:
"CLCs shall inform each new customer, in writing and in the language in which the sale was made, of the availability, terms and statewide rates of Universal Lifeline Telephone Service and basic service. CLCs shall also provide bills, notices and access to bilingual customer service representatives in the languages in which prior sales were made."12
The Commission initially deferred consideration of such a requirement for ILECs to the Universal Service proceeding, R.95-01-020/I.95-01-021 and eventually declined to adopt it for ILECs.
In response to a petition to modify, the Commission modified the in-language requirements previously adopted for CLCs13 as part of the local competition docket. Therefore in D.96-10-076, the Commission modified the above requirement regarding confirmation letters, billing and notices. Instead, it required that ILECs and CLCs meet specified requirements when they sell their services in Spanish, Mandarin, Cantonese, Vietnamese, Korean, Japanese, or Tagalog14. Should the Commission take additional actions regarding in-language issues as part of the Consumer Protection Initiative, it may wish to evaluate how ILECs and CLCs are meeting the modified requirements and whether to extend them to wireless carriers.
Furthermore, any modifications or additions that the Commission makes to existing in-language requirements should consider the impact of its recent order regarding the Uniform Regulatory Framework (URF). In August 2006, the Commission adopted D.06-08-030, which granted ILECs and CLCs broad pricing freedoms concerning most telecommunications services, new telecommunications products, bundles of services, promotions, and contracts.
3. Other Government Requirements
The Dymally Alatorre Bilingual Services Act is the main state law applying language requirements to the California Public Utilities Commission, and P.U Code § 2890 (b) and § 2889.5 (a)(6) apply directly to telecommunications companies. In addition, there are other language requirements that apply to aspects of the telecommunications industry, as well as some that do not apply directly to the Commission or the industries it regulates, but may provide valuable models for serving LEP consumers. Several of these requirements are described in this section.
As noted in the comments provided by the Consumer Federation of California on the Commission's Study Plan issued in June 2006, there are California state laws that do not apply directly to the Commission nor telecommunications companies (nor to other regulated industries), but do address the need for specific language requirements to enable LEP consumers to access services of government agencies and private companies. Such laws include sections of the California Civil Code (1632 and 1689.7), Business and Professional Code (11245, 17538.9, 22442), and Insurance code (762).(CFC comments page 12) These sections mandate specific disclosures and actions related to contracts in various industries, when the contracts are negotiated or a sale takes place primarily in a language other than English. Some provisions also offer consumer protections in the event that the company responsibilities are not met. According to CFC, equal protection clauses in state and federal constitutions are also relevant, generally prohibiting discrimination against any class of individuals by governmental entities.
Public Utilities Code § 453 (b) prohibits public utilities (in this case, wireline carriers) to disadvantage customers on many bases, including national origin. This is similar to Title VI of the federal Civil Rights Act of 1964, which prohibits discrimination based on national origin. Federal Executive Order (E.O.) 13166 specifies that failing to provide services in a person's native language can constitute discrimination on the basis of national origin, prohibited by Title VI. The federal government has guidance for federal agencies to follow to ensure compliance with E.O. 13166 requiring language access for people with limited proficiency in English. These guidelines, discussed in detail on the Website, describe this Executive Order, which specifically requires "federal agencies to take reasonable steps to provide meaningful access for LEP people to federally conducted programs and activities (essentially, everything the federal government does)," and mandates that "every federal agency that provides financial assistance to non-federal entities must publish guidance on how those recipients can provide meaningful access to LEP persons and thus comply with Title VI and Title VI regulations."15 The requirements of Title VI also apply to state, local, and private entities that receive federal funding. It is not clear from research on this Web site whether or how the Federal government monitors compliance with Title VI, E.O. 13166, and related regulations, nor what penalties exist for entities that fail to take steps to provide the meaningful access required by these provisions.
Still, the guidance provided on the Web site contains information on best practices for agencies to follow to facilitate language accessibility, and lists numerous links to state and federal resources. Information on this Web site includes recommendations for creating a language assistance policy, and a report that attempts to assess the benefits and costs of compliance with the Executive Order's requirements, and acknowledges the difficulty of quantifying benefits and estimating costs. Specifically, in discussing benefits of providing language access services, an OMB report linked to this Web site states: "While it is not possible to estimate, in quantitative terms, the value of language-assistance services for either LEP individuals or society, we are able to discuss the benefits of the Executive Order qualitatively."16 Discussing costs, this same report states "Because sufficient information was not available on the cost of providing language-assistance services before and after issuance of the Executive Order, we were unable to evaluate the actual costs of implementing the Executive Order. Instead, this report uses assumptions about different types of language-assistance services that could be provided to the LEP population to assess costs."17
The federal LEP Web site also provides specific ideas for improving service to LEP persons in various types of work. A more detailed review of the information available on or through this site provides strategies for improving communication with LEP individuals and populations, and may assist in evaluating the effectiveness, costs, and benefits of various options. The approaches outlined on this site, which may be applicable to the Commission or telecommunications industries, include conducting an assessment of the needs for language access services (similar to this study) and responding to the identified needs by improving access. Specific strategies discussed on the LEP.gov Web site for improving language access include the provision of quality translated materials, quick access to interpreters, educational materials to inform LEP individuals of their rights to access government and government-supported services, and increasing resources to facilitate their access. The Commission, CBOs, and carriers may all find useful strategies and resources on this Web site to improve their service to LEP consumers. Documents and resources available on this Web site include:
· A "know your rights" brochure targeted at LEP individuals and communities
· A brochure explaining the responsibilities of federal agencies and federally assisted programs
· Guidelines for Choosing a language access provider, such as a translation or interpretation service
· A language assistance planning and self-assessment tool
· A document containing "Tips and Tools" for improving language access services
· Links to census information and language access resources
III) In-language Activities of the Commission Related to Telecommunications Service
One important aspect of providing access to telecommunications services for limited English proficient consumers is to ensure that accurate, useful, and understandable information on existing telecommunications services reaches consumers. The Commission's Consumer Protection Initiative requires the Commission to conduct a program of education and outreach to "inform consumers of the significant features of a service, technology, or a market that should affect their decision to purchase." The decision notes that "[c]onsumer education also can help consumers by informing them of the rights that they have under existing laws and regulations." (D. 06-03-013 at 118.) The CPI decision supports increased education because "education may offer a quicker and more robust way to protect consumers than the adoption of regulatory rules that constrain service offerings by imposing a one-size-fits-all model on a complex and fast-moving industry using many different business models.... An education program can be narrowly tailored to address specific problems encountered by identifiable groups of consumers"(D. 06-03-013, page 119). Based on this, information available to all consumers is intended to inform consumers about their rights and what they should know to obtain and maintain needed or desired services, and avoid discontinuation of service or other negative personal or financial consequences, such as harm to credit. This section of this report describes existing educational efforts by the Commission, including the multilingual CPI education initiative and other Commission-led efforts that target LEP consumers, as well as existing multilingual education programs of telecommunications companies, and identifies some related challenges facing LEP consumers.
A. Past and Current Commission Programs Involving Language Access Efforts programs. Other current activities that include multilingual requirements or educational components include the Commission's involvement with the California Utilities Diversity Council (CUDC), the multilingual outreach requirements for utilities offering the California Alternate Rates for Energy program, and the Commission's related Low-Income Needs Assessment. In addition, the Commission's Electric Education Trust and Telco Education Trust Programs, as well as other mandated Telco education programs have been designed and used to educate consumers. This report will focus on the CPI interim education initiative, the ULTS marketing efforts, and the CUDC activities, as the most recent, systematic, and well-developed examples of Commission multilingual activities. This report also outlines the non-English and multilingual services provided through the Commission's bilingual services office and staff throughout the organization.
1. CPI Education Program
The on-going consumer education portion of the Consumer Protection Initiative (CPI) is being implemented with a focus on educating the most vulnerable customers, including those with limited English proficiency. Commission Decision (D.) 06-03-013 ordered Commission Staff to lead the effort to design, implement, maintain and monitor a telecommunications consumer education program in coordination with representatives of consumer groups, community based organizations (CBOs), as well as wireline and wireless telecommunications carriers.18 The program has three prongs, including one that specifically focuses on protecting and educating consumers who communicate best in a language other than English19.
Work on CPI consumer education began quickly after D.06-03-013 was adopted. In late March 2006, the implementing commissioners and Commission Staff convened a workshop to outline the tasks of the first phase of the program which is to be performed using current Commission resources. As a result, two task forces (content and media outreach) consisting of carriers, community based organizations and consumer groups were created to collaboratively develop materials, design a website, and plan consumer education outreach. A second all-party workshop was held on April 28, 2006, to review the work completed by the task forces and to finalize the timeline for the June 29, 2006 launch of the program. In this first effort of the CPI, it became apparent that the complexities of designing and implementing a linguistically and culturally sensitive outreach and education effort were time and resource intensive.
On June 29, 2006, the Commission launched the first phase of this program, i.e. the telecommunications Consumer Education Initiative20 establishing interim consumer education measures. The centerpiece of the first phase is a new consumer-oriented website, CalPhoneInfo (), to inform consumers about their rights and what they should know to achieve and maintain the best telecommunications service to match their individual needs. It features electronic versions of brochures on issues such as understanding phone bills, slamming, cramming, buying wireless telephone service, and tips about phone service (e.g. choosing telecommunications companies and services, prepaid phone cards, and avoiding telephone fraud and misleading ads). The CalPhoneInfo website also includes other informational pieces, "Frequently Asked Questions", "Tips of the Day", "Hot Items", consumer resources, and information on how to file complaints. Website information will be updated as needed.
The brochures, Tips, and Frequently Asked Questions are available in English, Spanish and Chinese - the three most commonly used languages in California. In addition, the Commission is working to provide translations of the brochures in ten more languages which studies indicate are used by consumers that have limited English proficiency. Translations of the brochures are already available on CalPhoneInfo in three of those languages: Korean, Tagalog and Vietnamese.21 In the near future, the Commission plans to provide the same information on the website in the remaining seven languages, i.e. in Cambodian, Thai, Hmong, Russian, Armenian, Arabic, and Farsi. Moreover, the brochures on the website are available in large font and audio versions in English and Spanish to aid disabled consumers. These versions of the brochures will soon be available in Chinese.
Early indications show much interest in the Consumer Education Initiative program. As of August 1, 2006, the CalPhoneInfo website has received 24,606 "hits" or inquiries in a little over a month. Brochures and posters about the website and the assistance that the Commission offers to California consumers have been provided to carriers and CBOs, who are voluntarily providing outreach to their customers or community members in various ways (handing out brochures, billing messages, free text messages). Additionally, the Commission's own outreach efforts are already progressing, with our Consumer Affairs Branch and Public Advisor's Office providing educational materials to consumers as part of their usual contact with the public.
The second phase of the CPI Consumer Education Initiative is geared toward establishing a permanent consumer education program regarding telecommunications services. This phase of the education effort will build upon the work in the first phase and will include grassroots outreach (particularly for consumers who are disabled or have limited English proficiency) and a mass media campaign to reach consumers who may not have access to the website. Commission Staff has already issued a Request for Proposal (RFP) for a consultant to assist with the outreach component and is developing a RFP for consultant help in designing and implementing the media campaign. Additional brochures, website enhancements, and Commission-sponsored outreach events will also be developed as part of the initial and ongoing programs.
The Commission determined in D.06-03-013 that the education program should be regularly monitored and evaluated in order to develop reliable data on which to base changes to the education program as well as to support any necessary future rulemaking or enforcement action. In that decision, Commission Staff was directed to develop a collaborative forum to contemplate various monitoring and evaluation options and to create an education monitoring and evaluation program based on its review of different features. The monitoring and evaluation efforts will consist of five fundamental components: design, data collection, analysis, reporting, and evaluation critique. Staff will also provide Commissioners with annual education and evaluation reports.
Correspondingly, preliminary monitoring and evaluation work is already underway. Commission Staff will schedule time to discuss evaluation options in the current Consumer Education Initiative working group forum. Among other items, participants may discuss how to measure the effectiveness of the consumer education program in reaching California consumers who have limited English proficiency and/ or who have special needs, such as consumers with disabilities. Subject to budgeting approval for this year, Commission Staff is seeking one or more consultants to advise Staff on and/or complete some critical evaluation tasks.
2. ULTS Marketing Effort
The Commission's Universal Lifeline Telephone Service (ULTS or California Lifeline) program focuses on low-income California consumers, including those who have limited English proficiency or other specific language needs. California Lifeline was established in 198422 to comply with the Moore Universal Telephone Service Act (AB 1348, Chapter 1143, Statutes 1983)23 to provide discounted basic telephone service to low-income households and as a means to achieve universal service by providing affordable residential telephone service to low-income households. In D.94-09-065, the Commission adopted a goal that at least 95% of California households have telephone service irrespective of income-level, ethnicity, or language spoken in the households.24 This goal was reiterated and incorporated in the Adopted Universal Service Rules approved by the Commission in D.96-10-066, with a specific focus on improving the service subscribership of California customers, including those in low-income, disabled, non-white and non-English speaking households.25 The Commission currently has a rulemaking underway to consider programmatic changes to the California Lifeline program. 26
As part of administering California Lifeline, the Commission contracted with Richard Heath and Associates (RHA) to provide marketing and community outreach for the program and to maintain a call center to enroll qualified people. The marketing campaign includes focus on customers with special language needs. The 2004-2005 ULTS Marketing Program began in August 2004 with emphasis on multiple approaches to reach the program's target populations in English, Spanish, and Asian language-specific markets.27 As part of the marketing program, RHA contracted with community based organizations (CBOs) with direct experience serving target populations to conduct consumer education and foster transfers to the RHA call center. The marketing program also included development of advertising in English, Spanish and Asian languages28 and public relations activities including those involving non-profit organizations, adult education and ESL instructors, local businesses, and utilities. The second year of the ULTS Marketing Program for 2005-2006 is building on the infrastructure established the previous year. As part of that effort, RHA has executed agreements with 29 CBOs and has identified nine additional geographic areas as the immediate focus for increased outreach to target populations.29
RHA Call Center data also provides some clues on the language needs of the state's telecommunications consumers. The RHA Call Center provides in-language services to callers in English, Spanish, Cambodian, Cantonese, Hmong, Korean, Lao, Mandarin, Tagalog and Vietnamese. Between July 1, 2004 and June 30, 2005, the RHA call center logged a total of 24,455 calls in their online system and a total of 31,883 calls recorded by the phone system software.30 According to RHA the calls received during that time period were broken down by language and ethnicity as follows31:
3. The California Utilities Diversity Council: Purpose and Activities
The California Utilities Diversity Council (CUDC) was established in March, 2003 to be a resource to and to work collaboratively with the California Public Utilities Commission and regulated utility companies. The purpose of the CUDC is to promote and increase diversity within utilities' governance, customer service and marketing, employment, procurement, and philanthropy programs and practices.
Council members represent diverse business communities, consumer advocacy entities, multi-language interests, education, labor, service-disabled veterans, women's business groups, and the utility companies. Members meet monthly and committees in each of the areas described above determine goals and deliverables.
In 2005, in response to the exponentially growing numbers of limited English proficient (LEP) consumers and the growing demands and challenges of this population, CUDC through its Customer Service and Marketing Committee conducted a survey of language policies and practices within the CUDC utility companies and the Commission. While recognizing that diversity goes beyond languages and includes numerous other characteristics, the LEP population presents special challenges that affect every carrier and the Commission in terms of product development, service delivery, customer satisfaction, human resources, written and spoken communication, health and safety, and profitability.
The survey was conducted voluntarily with the CUDC utility company members and the Commission. It contained questions related to language demographics; customer service and satisfaction; communications strategies and outreach to ethnic communities; availability of translated materials and quality assurance; interpretive services and quality assurance; assessment of multilingual skills; compensation of multilingual staff; language and cultural awareness training; projections of future needs; and greatest challenges. The survey intentionally focused on best practices rather than on deficiencies. The results indicated trends, highlighted outstanding practices, revealed differences and similarities in practices, identified valuable resources, and indicated challenges and areas to be developed or strengthened. The following utility companies responded to the survey: AT&T (formerly SBC); Verizon; Pacific Gas & Electric Company; Southern California Edison; San Diego Gas and Electric Company and Southern California Gas Company; Southern California Water Co.; and San Jose Water Company. Results of the Commission survey are summarized below, and results of the utility company survey are summarized in the Carrier Multilingual Practices section, below.
a. CUDC Language Access Survey Results as Reported by the CPUC
In its response to the CUDC survey last year, the Commission indicated it serves the linguistically and culturally diverse residents of California; the results of the survey are summarized herein, and do not reflect additional, more recent activities in response to the CPI.
The Commission operates in general compliance with the Dymally-Alatorre Bilingual Services Act and is monitored by the State Personnel Board, and commits necessary resources to meet the needs of the public in accordance with legal mandates. An ever-increasing number of written consumer materials, i.e., consumer handbooks, consumer advisory information, and customer complaint forms are available to customers in Spanish, Chinese, and Vietnamese languages.. Similarly, the Commission can also provide assistive listening in several formats including real time captioning, electronic amplification and/or American Sign Language interpretation and Spanish Sign Language interpretation services through contracted vendors. The Commission has also acquired equipment to offer simultaneous on-site interpretation.
The Commission offers differential pay in accordance with the State Personnel Board Rules and Bilingual Services Act, and has incorporated continuous language training courses into its training goals. The Commission identified its greatest challenge as keeping pace with the needs of the public in order to provide useful, clear and accurate information.
b. CUDC Development and Approval of Language Access Principles
This year, the CUDC developed and approved a set of language access principles that are intended to offer consistency and flexibility for all California utilities in their ongoing efforts and challenges in serving their linguistically diverse customers. These principles will be presented to the CPUC for consideration of formal endorsement; if endorsed by the Commission, these principles may assist the Commission and utilities in developing and enhancing policies, tactics, quality indicators and benchmarks for improving service to LEP and linguistically isolated consumers. The CUDC acknowledges that many utility companies currently operate to varying degrees under some or all of these principles, and that these principles evolved out of the current best practices of its member companies. These principles are intended to accommodate and assist companies that differ in industry services and products, company size, available human and fiscal resources, levels of existing language access services and their respective customer language preferences and needs. CUDC encourages companies to determine their own plan of action and pace of implementation of any or all of these principles. Toward this end, the CUDC is seeking Commission support for the following six principles:
Principle #1
The Language of Business is the Language of the Customer
Principle #2
Emergencies and Public Safety Require Attention in All Languages
Principle #3
Recruit, Train, and Compensate for Multilingual Expertise
Principle #4
Measure and Monitor Multilingual Programs and Customer Satisfaction
Principle #5
Establish and Implement Quality Indicators for Multilingual Programs and Practices
Principle #6
Corporate Culture: Language Services and Expertise are Value Added
These principles, among other things, acknowledge of the importance of recognizing the customer's language, and the need for establishing quality indicators and monitoring customer satisfaction. If adopted by the Commission, these principles can help to inform future Commission policies for improving service to telecommunications consumers.
B. Commission Efforts to Increase Language Access to Agency Services
In addition to the document translations done as part of the CPI and the translations of general consumer information handouts, the Commission has also embarked upon translating key Commission reports, press releases, and decisions that might have an impact upon non-English speaking constituents. These translations are done on an "as requested" basis for Commissioners, our Executive Office, and Commission Divisions.
A major challenge that the Commission faces in all of its document/consumer materials efforts, is ensuring that the translations are correct and accurately reflect the, often technical, and frequently complex, messages we are portraying. The translation firms with which the Commission contracts do not have a good grasp of the technical and industry terms used in Commission documents. Many of the terms, such as "cramming and slamming," and other coined terminology, have no direct translations in any language.
To overcome this challenge, the Commission uses in-house staff, who are fluent in other languages, to review the translations for accuracy and correct use of terminology. However, the Bilingual Service Office, described in more detail below, is not always able to secure the necessary in-house resources, when they are needed. An example of this is when the Commission sent the "Energy Action Plan, II" out for translation into Chinese, with the intention that the final document would be presented to members of the Chinese government who set energy policy. Because it was a long and technical document, and there was only a short time for the review of the translated version, the Bilingual Services Office split the review of the document up among several of the Commission staff who review Chinese language documents. Only a few of these reviewers had a background in energy and were capable of correcting the technical portions of the translation, and it was difficult to secure these experts' time as they had critical deadlines to meet with their daily assignments.
The Bilingual Services office is developing a process for assigning and tracking the review of translated documents in this agency. In addition, the Commission is starting an on-going project of creating a "glossary," in multiple languages, of the technical terms used in Commission documents. These glossaries will be shared internally and with the translation firms with which the Commission contracts so that the translations they produce will be of a higher quality and accuracy.
1. The Commission Bilingual Services Office
The Commission's Bilingual Services Office (BSO), consisting of a Bilingual Services Coordinator (Coordinator) within the Public Advisor's Office, addresses the language services needs of the California public. The primary responsibilities of the BSO and its coordinator are to assist limited English proficiency (LEP) residents in their dealings with the Commission, and develop a systematic, organized, and effective way of complying with the Dymally-Alatorre Bilingual Services Act (BSA). As discussed above, the State Personnel Board (SPB) administers the Biennial Bilingual Survey to all state agencies to monitor compliance with the BSA; among other responsibilities, BSO coordinates the Commission's response to this survey. In addition, the Bilingual Services Office has looked at the scope of work and potential public contact positions in each Commission Division, and is establishing a plan for each division to ensure that all members of the public are treated with fairness and respect and can communicate in the language they choose.
The 2001-02 biennial bilingual survey showed many areas in which the Commission could improve its services to LEP Californians, including a deficit in the number of SPB-certified bilingual public contact staff and the lack of a complaint process for by which the public can language access concerns.32 The initial results of the 2003-04 survey, received recently by the Commission, showed improvement in several areas, reflecting the steps taken since 2002 to improve compliance with the Dymally-Alatorre Bilingual Services Act. Training instituted since 2002 includes a plan to ensure that all employees understand their responsibilities under the BSA, through showing the SPB training videos, Your Responsibility under the Dymally-Alatorre Act and How To Use The Language Line (to access 150+ language interpreters in less than a minute). In addition, the Commission now has a pool of 89 people proficient in 23 languages, 43 of whom are certified by SPB, and has established a toll-free language hotline that the public may call to lodge a complaint about language access. The Commission's Consumer Guide lists the language hotline number with an explanation that public members may call and report complaints regarding language assistance.
The SPB Bilingual Survey identifies language access needs by tallying the number of public contacts and identifying which languages exceed a 5% threshold. If this threshold is met for any given language, the Commission must ensure it has staff that speaks that language. According to the 2003-04 survey, this threshold was met for Spanish and Tagalog. Interpretation (verbal communication) is the service most used to assist consumers. The Commission staff assigned to the Consumer Affairs Branch can speak Spanish, Tagalog, Cantonese, and French, and have access to the Language Line, with interpreters in over 150 languages, which has been in use for over 10 years. As described above, the Commission creates and disseminates translated consumer materials to the public, and has contracted with companies that specialize in translation and interpretation services.
The results from the 2005-06 survey will serve as a vehicle to determine what areas are deficient in bilingual services and to develop and implement a plan that will greatly improve the provision of non-English language services to the general public of California.
Going beyond the basic requirements of the BSA, the Bilingual Services Coordinator is working with CBOs to develop a set of languages for providing culturally appropriate and translated materials for their specific communities. In addition, the Commission recently purchased equipment enabling it to provide simultaneous interpretation in public meetings held in its auditorium as well as ensuring that service is available in Commission co-sponsored public forums in other venues around the state.
The Commission has incorporated continuous language training courses into its training goals through an Employee Language Training Plan. The Commission is offering free Spanish language courses during the work day. The classes are being offered to staff in public contact positions, especially staff who must communicate with limited English proficient persons, e.g. administrative law judges, attorneys, consumer affairs representatives, and employees of the Consumer Protection and Safety Division and the Office of Ratepayer Advocates. As part of the Commission's five-year training plan, the language courses will be available for other staff including supervisors, managers, and directors.
2. Language Access to Commission Services
Two main units in the Commission have public contact responsibilities. The Commission's Consumer Affairs Branch (CAB) helps consumers resolve disputes with utilities, including complaints about billing and services, and assists consumers who have questions about their utility services. The Public Advisor's Office conducts outreach and assists consumers who wish to participate in formal Commission proceedings. The CPI Education Initiative summarized herein exemplifies the outreach functions of the Public Advisor's Office; in addition, the Public Advisor's Office provides translated materials, Commission forms, and Web page material, assists consumers in filing formal complaints, and organizes public participation hearings and other public meetings for the Commission.
CAB has bilingual staff that can take calls in several languages, including Spanish, Cantonese, and Tagalog, and pursuant to the CPI decision, is working on hiring more bilingual staff. CAB can take calls in additional languages with the assistance of the Language Line, an outside telephone service that supports over 150 languages. CAB's Interactive Voice Response (IVR) system provides callers with the option to be assisted in English, Spanish, and Chinese. The IVR is set up to route calls from Spanish and Chinese speakers to staff that are proficient in those languages. The IVR tracks the number of calls received, and in what languages they were received. Over the last six months, the large majority of non-English calls through the IVR have been in Spanish (an average of around 4,000 per month for the first several months of 2006), followed by Mandarin, Cantonese, Korean, and Vietnamese, typically with several dozen IVR calls per month.
The CAB database is also set up to track the language in which complaints are made. This allows review of complaints by language, for example, the number of complaints, the number and size of refunds of impounds, and the dispositions of complaints (for customer or utility) by language. A new CAB database, which has been approved and is expected to be in place in 2007, should provide improved information and better tracking of language trends.
Complaints about transportation companies (limousines and household goods movers, for example) come to the Transportation Enforcement Section of the Consumer Protection and Safety Division. Since this group also has public contact responsibilities, many of the responsibilities and lessons of CAB's bilingual experience may be relevant to the transportation complaint experience.
3. Past Commission-Ordered Programs that Included Language Access Components
Telecommunication Trust Fund (TET): TET was established in 1986, with $16.5 million in shareholders' fund assessed by the Commission against Pacific Bell (SBC) for marketing abuses. Its purpose is to reduce California consumers' vulnerability to unfair marketing practices through a better understanding of their service and equipment options in an increasingly competitive telecommunications environment.
TET also emphasizes efforts to educate the public regarding telecommunications policies and regulatory issues, as the industry grows increasingly competitive. Because so many of those affected by the former marketing abuses were limited English speakers, as well as low-income or inexperienced consumers, funding emphasis has been on programs serving these and other disadvantaged consumers. TET was a "grand experiment" in consumer education to protect all California consumers through empowerment: that by teaching them to make educated choices and understand new technologies will benefit consumer of every age, ethnic group and economic condition. Since 1986, TET has funded over 180 projects targeted at consumer telecommunication education and use of better technology towards better service. Its goal is to disburse $3 million per year to promote ratepayer education efforts over the next 5 years.
The Program supported consumer education projects in three general areas:
1. Projects that help consumers particularly (those who are vulnerable to abuse) to understand and use their telephone service and equipment options, and protect themselves by being better informed. Such groups might include recent immigrants and other limited English speakers; low-income, disabled, or rural consumers; consumers with very limited literacy skills; children; and others with educational needs as identified and justified by specific proposals,
2. projects that help California consumers understand how they will be affected by changing technology, services, and regulation of the telecommunications industry in the coming decade. Target groups might include those listed above, other residential customers and small business proprietors, and
3. special innovative projects that enhance Californians' understanding of the telecommunications system. Needs identified by applicant proposals must be capable of being addressed through educational information efforts.
TET utilized creative collaborators in its education campaign efforts. Grantees have delivered services throughout the State to urban and rural communities. They have spread the word in multiple languages: English, Spanish, Chinese, Japanese, Vietnamese, Khmer, Hmong, Lao, Cambodian, Tagalog, and Armenian. Grantees educated their constituents by holding workshops, home visits, classroom instruction, radio, television, satellite on-line, TDD, the CRS and set up a toll free telephone lines.
Electric Restructuring Education Program: D. 97-03-069 authorized the formation of a joint statewide customer education program (CEP) by PG&E, SDG&E and SCE to inform the public about the changes taking place in the electric industry and to provide consumers with information necessary to allow them to compare and select among products and services in the electric market. To begin the process of educating the public about the electric restructuring, the Commission authorized the establishment of the Electric Restructuring Education Group (EREG). This body would be made up of stakeholders and was established as a non-profit entity to provide oversight in the development and implementation of the CEP. EREG acknowledged the premise that stakeholder representation is one of the basic consumer education principles vital in the success of a public information program. The EREG was composed of representatives from IOUs, CPUC, ORA, consumer advocates, environmental entities, retailers and energy providers. The Commission charged the EREG with the responsibility to devise and implement the CEP in compliance with the CPUC Code Section 392, which meant informing customers of the changes in the electric industry and providing customers with the necessary information to help them make appropriate decisions regarding their electric service.
Electric Education Trust Fund: EET was to take over CEP efforts from the EREG after the implementation of direct access and continue to educate consumers about the changes in the electric market place in the restructured environment. The main focus of the EET program was to ensure that customers, especially people with limited English speaking language/or other disadvantages, have correct, reliable, and easily understood information to help them make informed choices when dealing with professional and sophisticated marketers. Electric Education Trust Fund (EET) education efforts were modeled after TET to ensure independent and multicultural education and advocacy to benefit residential and small business customers. As mentioned in the context of the EREG, this targeted program utilizing grants to CBOs was a small part of the larger outreach and education effort associated with California's electric restructuring. D.97-08-064 authorized $10 million for a grant program in which the CBOs receive grants to educate their constituents about the changes and choices in the electric industry - this funded a CBO electric education and outreach plan, which was adopted by D.98-12-085.
A total of 111 CBOs and 7 non-CBOs received grants. There were some agencies that did not use all their funds, did not complete their project or withdrew from the program. 105 CBOs and 6 non-CBOs completed their education programs. EET program was extended by SB 477 July 1, 1999 through December 31, 2001.
GeM, a CBO, administered the electric education and Commission outreach project and provided program monitoring and support services to ensure the success of the EET program. They also assisted CBOs with media training to affect successful media outreach and aided in the development of culturally responsive, language appropriate materials. They also developed a plan for the EET sub-committee group for the non-CBO outreach program to deliver services to areas underserved by other CBOs. The non- CBO program is comprised of government agencies designed to cover geographic or ethnic-specific areas with gaps in community-based organization participation.
The collaborative works of the Commission, EET Committee, CBOs and non-CBOs have been well served by the consumer education program. EET was successful in delivering information to the targeted population using the talents and resources of both CBOs and government agencies in the non-community based programs.
Caller ID Consumer Education Program: In Caller ID consumer education program, the Commission issued D. 92-06-065 that allowed the LECs to offer Caller-ID, under a condition that they conducted a comprehensive consumer education program to alert California consumers to the privacy implications of the service. The Commission also further required that the education campaign provide adequate information on the two forms of number blocking available to consumers: per blocking and per line blocking.33 The goal of the consumer education efforts is to ensure that all Californians are aware of the services and their implications that include understanding their options for maintaining their privacy as a calling party. The Commission mandated that the education campaign be "on-going for as long as the services are being offered".
The mandated customer education campaign conducted in 1996 was a massive undertaking, unprecedented in its scope and funded largely with ratepayers' money. The budgeted cost of the education program was $33 million. Pacific and other LECs ran radio and TV spots and placed full-page ads in major newspapers. They disseminated bill inserts and letters explaining blocking options and privacy implications. Pacific provided toll free numbers for customers to register their blocking choice in several languages.
Phase I of the community education program involved community based organizations educating hard-to-reach consumers. Pacific and GTE formed a partnership to carry out the community education program for Caller-ID Blocking and hired Richard Heath and Associates (RHA), a consulting firm to develop community outreach component of the education campaign. 34 RHA administered a competitive grant process to 188 CBOs and domestic violence shelters to reach out into their communities with this important consumer education program. Workshops were held throughout the State. Grantees were trained to reach their constituents with effective educational messages. Educational materials were developed in 24 languages, Braille and American Sign Language. RHA developed a training program approved by Pacific/GTE under the direction of the Commission. A training manual was developed that provided a recommended dialogue by grantees.
From March 1996 through February 1997, Pacific Bell/GTE Community Education Program for Caller ID Blocking educated over 6 million consumers. Public education was provided through workshops, videos, one-on-one counseling, classroom presentations, teleconference, fact sheets, informational mailers, radio talk shows, consumer affairs televisions shows, comic books, theater presentations, songs, community meetings, religious organizations, newsletters, community newspapers and hotlines. A total of $4,179,638 was spent in three areas: (1) grants to community organizations; (2)educational materials including video development, translations, printing and shipping of all materials; and (3) administration, training, meetings, travel and conference.
Grantees educated 5,944,033 hard-to-reach consumers and domestic violence victims. 165,003 community leaders were educated, who in turn educated an unknown number of their constituents. In all, 6,109,036 were educated at a cost of just over $1 per person.
Community Collaborative Agreement (CCA): CCA was created from the merger of Bell Atlantic and GTE California in D.00-03-021. As part of the merger approval, Bell Atlantic executed the CCA with community organizations representing diverse constituencies. The Commission established a Community Collaborative Fund (CCF) for $25 million over 10 years to increase access to telecommunication services for underserved communities in California. Underserved communities comprise the low-income, ethnic, minority, limited English speaking and disabled communities in various rural, urban, and inner-city regions.
To accomplish this goal, it provides the following: (1) Community Collaborative Fund - $2.5 million per year for 10 years, funded from the ratepayer allocation of merger benefits. Funds will be used to benefit under-served community access to telecommunications and information services, education, literacy, telemedicine, economic development and telecommunications advocacy; (2) universal service funds of $1.3 million per year for three years for GTE/Bell Atlantic to consider other under-served populations such as the disabled and Native Americans to ascertain what issues and policies, including a universal design policy and public-interest pay telephones in these under-served communities; (3) increased its community support for a minimum of 4 years, $1 million per year will be directed to grants to non-profit CBOs serving the underserved communities; (4) encourage and support their California employees to donate their time and knowledge to non-profit agencies that focus on literacy, education, and technology application programs; (5) maintain or improve the quality of telephone service in California, including the underserved communities, and (6) committed to continue to make diversity a critical component in the recruitment, hiring, career development and promotion of all people including minority, women and disabled employees at all levels to meet the diverse needs of their customers.
A total of 180 grants totaling $6.9 million were awarded since 2001. This program will expire in 2011.
Telecommunications Consumer Protection Fund (TCPF): The fund was created and authorized by the Commission (D.98-12-084) to finance a customer education program as a result of GTEC marketing practices. The fund was designed to provide consumer education about telecommunications matters to limited or non-English speaking communities. This education campaign targeted ethnic and local media to educate the consumers and direct them to local grantees if they need assistance. Grantees were given media training and briefing packets to work with their own local and ethnic media. The goal was to address telecommunications issues and build a statewide network, which can be used to distribute other types of communications information. This network would also link smaller grassroots organizations with larger, more established groups to share information and technical support beyond the grant period.
Program goals and priorities are to: (1) Support smaller, grassroots organizations, especially in rural, semi-rural or suburban communities; (2) utilize the local ethnic and community media to educate consumers about telecommunications marketing abuses and availability of grantees to assist consumers; (3) provide information, assistance and referral to individual consumers regarding telecommunications grievances or complaints; (4) teach or empower individual consumers to access consumer information on their own and advocate on their behalf; (5) develop the long-term capacity of grassroots, non-profit organizations to provide telecommunications consumer education and assistance to limited-English speaking communities; (6) reduce the duplication of effort in creating and distributing telecommunications consumer materials; (7) develop and support electronic networking and coordination between grassroots community groups and major consumer organizations that will last beyond the grants awarded; and (8) develop the capacity of grassroots organizations serving limited-English speaking communities to serve as representatives and advocates in statewide telecommunications protection, regulation and legislation.
Communities for Telecommunication Rights (CTR): Latino Issues Forum recently obtained a $38,000 grant (two cycles of one year grants) from the California Consumer Protection Fund to fund its education campaign on the telecommunications for a year. Asian Pacific American Legal Center of Southern California (APALC), LIF and Utility Consumers' Action Network, (UCAN) are the lead agencies that will facilitate grantees through the education, interviewing and complaint process. The CTR project creates data on the telecommunications issues that impact the Latinos and Asian communities and establish a precedent on consumer education program.
The lead agencies conducted their training on October 10, 2003 in which there was participation by 26 CBOs grantees. The education campaign addresses the needs of the non-English and limited English households and low - income families about their rights as consumers and how they can assert their rights and prevent and act against telecommunications fraud and other abusive practices targeting the Latinos and Asians that have language barriers in addressing their complaint. Their education program will focus on slamming, cramming, payphone, do not call, and do not disconnect service.
LIF is committed to use its website to post the new consumer protection rights and rules adopted by the Commission. Fact sheets and dispute resolution on the consumer protection initiative rules and rights will also be posted. LIF as lead agency to 24 CBOs and Greenlining will disseminate consumer education program notices and other related information "to get the word out" to the local communities and "hard to reach" consumers. Information on the above will be linked to the 24 CBOs' websites. LIF will replenish the CPI information on their website to educate CBOs and their constituents. CTR has been a major contributor in the implementation of CPI.
C. Commission Enforcement Efforts Related to Language
The Utility Enforcement Branch of the Consumer Protection and Safety Division (CPSD) investigates alleged or apparent violations of the Public Utilities Code, other state laws, and Commission regulations by telecommunications, energy, and water utilities, and other industries regulated by the Commission. These investigations typically involve consumer fraud, false or misleading advertising, bait-and-switch tactics, unfair and unlawful business practices, and unregistered operations. When sufficient evidence of violations is uncovered, the Enforcement Analysts of the Utility Enforcement Branch has a variety of administrative, criminal, and civil remedies to address these problems.
Administrative remedies may be appropriate (and possible) only where the individual or company suspected of being in violation has Commission operating authority, or has applied for it. A decision pending before the Commission may delegate authority to the Utility Enforcement Branch to issue citations, carrying fines of up to $20,000, which the carrier may choose to either pay, or deny and request a hearing before an Administrative Law Judge (ALJ). CPSD also asks the Commission to open a formal investigation (OIIs) or may protest a company's application. Unless CPSD.
Most of the provisions of the Public Utilities Code, as well as many laws in other California Codes relevant to persons and companies regulated by the Commission, carry criminal penalties. In addition to Public Utilities Code provisions such as unlawful marketing practices, operations without Commission authority, perjury before the Commission, and contempt of the Commission, the Utility Enforcement Branch's investigations commonly involve crimes contained in other California codes. When appropriate, the Utility Enforcement Branch files reports on such cases with local prosecutors or the California Attorney General's Office with a recommendation for criminal (felony or misdemeanor) or civil prosecution in the appropriate California courts.
In civil actions under the Business and Professions Code (Section 17200) and various provisions of the Public Utilities Code (e.g. Sections 2102 and 5259), the Utility Enforcement Branch may seek injunctive relief from the courts to enjoin individuals and companies from further violations of the law. Also, in addition to or in lieu of criminal prosecution, local prosecutors may elect to civilly prosecute violations of the Public Utilities Code and other statutes as unfair, unlawful business practices under Section 17200 of the Business and Professions Code.
The CPSD Utility Enforcement Branch has investigated possible violations of Public Utilities Codes and Commission rules, in the telecommunications area and other industries. Some investigations of alleged slamming and cramming by specific telecommunications companies have involved many Limited English Proficient require resources and activities that may not be required for cases in which most complainants are English proficient. In these cases, CPSD may utilize bilingual staff, and may be required to take additional time explaining the role of the Commission as a regulator, consumer rights, and more specifically the staff role in investigating complaints.
CBOs raised serious issues about possible fraudulent activities by wireless dealers and providers of prepaid phone cards using in-language materials to target specific communities. In addition to enforcement actions that Commission staff are currently undertaking, an effort is underway to develop a better working relationship between CBOs and the Commission's new Telecommunications Fraud Unit to improve investigation of such activities. Enforcement personnel should work swiftly with the CBOs and appropriate law enforcement authorities to deter such fraud. Progress in this area has been made. On September 26, 2006, CBOs met with CPSD to initiate a dialog and to apprise CPSD of the scope of CBO efforts to identify potentially fraudulent or abusive practices.
Provisions of Decision 06-03-013, the Consumer Protection Initiative, call for CPSD to enhance its ability to pursue enforcement actions, which it has done through the creation of a Telecommunications Fraud Unit that directly takes and investigates consumer reports of alleged fraud. CPSD is also working to establish ways to cooperate with local District Attorneys, state Attorneys General, other law enforcement agencies, the FCC, the FTC, and CBOs, to further improve enforcement activities.
IV) Carriers' Multilingual Practices
In order to gather information on carrier practices, Commission staff sent survey questions to all certificated and registered telecommunications carriers in California (wireless and wireline) asking for information on their services for and interactions with LEP consumers. Approximately 100 telecommunications companies responded to this request for information. Many companies stated that they were not able to provide information on LEP customers because they do not track such information or do not provide non-English services. Many companies of varying sizes and with varying business models were able to provide information on their multilingual marketing, education, and outreach services, and the language demographics of their customers. Because some carriers asked that their information be kept confidential, this report will summarize the approaches and types of information offered by the respondents without referring to particular companies by name. The Commission also received three sets of comments from telecommunications carriers or groups of carriers responding to the Commission's study plan, and these comments (see Appendix C) provide some insight into different possible approaches telecommunications companies may take to language access. Four carriers or groups of carriers (CTIA/Joint Wireless carriers, AT&T, Cox Communications, and the Small and Mid-size LECs) also commented on the draft staff report issued on August 21, 2006. Carriers submitting comments expressed their support for a voluntary and collaborative process for resolving the challenges faced by LEP consumers, primarily through cooperation between carriers, CBOs, and the Commission in resolving individual customer complaints. In addition, this study considers the results of the CUDC survey of utility companies mentioned above. Though respondents to this survey represent several different industries in addition to the telecommunications industry, the findings largely agree with information Commission staff received from telecommunications companies, and are relevant to this study because all companies are public utilities serving California residents.
A. CUDC Survey of Company Language Practices
All eight utility companies that responded to the CUDC survey (AT&T, formerly SBC; Verizon; Pacific Gas & Electric Company; Southern California Edison; San Diego Gas and Electric Company and Southern California Gas Company; Southern California Water Co.; and San Jose Water Company) are serving linguistically and culturally diverse customer bases and are projecting continued growth of these populations. The utility companies are committing human and fiscal resources to meet the demands, to provide better services, to grow profitability, and to remain competitive.
All companies provide some level of customer service in at least one language in addition to English, and are either expanding multilingual services or in the case of the smaller companies, considering it. There is awareness, action and appreciation for California's highly diverse population, and some strategic and effective programs in all of the large companies and in some of the smaller ones. One CUDC member expressed urgency in increased attention to recruiting and retention policies that create a management/employee population that reflects the community they serve.
Most companies regularly monitor customer service telephone calls for quality assurance. Several companies utilize third party vendors that provide telephone interpretation in many languages in order to serve all customers who speak languages other than English. All companies indicated that Spanish is spoken by the majority of their limited English proficient (LEP) customers, followed by several Asian languages including Mandarin Chinese, Vietnamese, Korean, Tagalog, and to a lesser degree Indo-European and other less commonly spoken languages. All companies provide some translated materials in Spanish; the larger companies provide numerous materials in the languages most commonly spoken in their service regions.
The majority of respondents provide diversity and cultural awareness training, some with comprehensive training that includes company leadership or diversity champions promoting the value of diversity. Programs include but are not limited to online training, internally produced videos, externally produced videos and other resources, trainer led sessions, advanced management training, and web-based resources. Three of the responding companies offer pay differential for multilingual employees.
Assessment of bilingual proficiency of employees varies from company to company. Methods include using professional language testing by telephone, and in-house role play and interviews. Two companies have no formal assessment. No utility company assesses its applicants or employees for literacy in reading and writing in languages other than English.
Top challenges noted by the majority of companies include a rapidly increasing multilingual and multicultural customer base, the costs of effective programs and services, cultural inclusion, cultural relevance and appropriateness of products and marketing strategies, and human resources staffing and scheduling.
Analysis of the CUDC survey results suggests areas that may warrant further consideration for some companies. These include pay differential for multilingual employees, effective strategies and resources for assessing bilingual proficiency, assessing biliteracy (the ability to read and write in a second language), offering language courses for employees and executives who communicate with LEP customers, and more in-depth cultural awareness that includes American Indian, and other ethnic/cultural groups.
B. Telecommunications Carriers' In-language Activities
Many telecommunications companies provide their own in-language marketing, outreach, and education for their customers and prospective customers. Based on the information received from carriers, many of these in-language practices are initiated by the companies to better serve their customers or to attract new customers. Other in-language activities are in compliance with state requirements, including past Commission orders.
1. Commission Requirements for In-language Carrier Outreach and Education
Several Commission decisions provide current requirements for in-language outreach and education by telecommunications companies. For example, Decision 96-10-076 requires Competitive Local Exchange Carriers (CLECs) and large Incumbent Local Exchange Carriers (ILECs) to provide specific information to customers in specific languages if they market services in those languages. The languages specified in this decision are limited to the seven languages most commonly spoken in the state at the time of the decision in 1996: Spanish, Mandarin, Cantonese, Vietnamese, Korean, Japanese, and Tagalog (D. 96-10-076).35 Carriers are subject to these requirements if they choose to market their services in these languages, and thus can avoid in-language obligations under this decision by not marketing in-language.36
Several other state laws and regulations, described above and enumerated in the carriers' comments on the LEP Draft Study Plan, also impose obligations on various providers of telecommunications services. A review of these requirements shows that do not apply equally to different types of telecommunications providers (e.g. wireless versus wireline companies, incumbent versus competitive local exchange carriers), which may provide different incentives to provide or not provide in-language outreach, thus affecting the information and services available to LEP populations. For example, at least one carrier stated in the June 26, 2006 workshop that it stopped marketing in some languages due to the fact that such marketing would invoke additional in-language obligations; this carrier noted that it has related unregulated businesses not subject to these requirements that market in additional languages. These and other effects, implications, and associated costs of these regulatory requirements and how it impacts carriers' decisions whether or not to market in language to diverse California LEP communities are an appropriate subject for further Commission study. If regulatory action seems warranted based on that study, staff may recommend a formal proceeding to develop a case record, facilitate discovery and information gathering on costs and benefits of such programs and initiatives, and allow for examinations and verification of information entered into the record.
2. Carrier-Initiated Marketing, Education, and Customer Service Efforts
Carriers responding to the Commission's information request in June and July 2006 described many measures that they take to communicate with their LEP customers. To briefly summarize, common practices include asking whether customers prefer to receive information in a language other than English at the time a customer opens an account, and tracking these language preferences in a carrier database or billing system to enable the carrier to send future information (ranging from written order confirmations to bills, new service offerings, and other information) in the customer's language of choice. Overall, larger carriers are more likely than smaller companies to serve larger linguistic groups (e.g. Spanish, Chinese) with in-house employees, and to use Language Line telephone interpretation services for others. Some larger carriers note that their in-language marketing, education, and services have grown slowly over the past two decades or more, as state demographics change and the companies attempt to identify and better serve LEP populations.
Three approaches to serving LEP populations seem to be common among telecommunications carriers that serve few customers in the state or operate in a limited geographic area: (1) provide no particular in-language marketing or services; (2) utilize limited marketing and offer some communications in the most commonly spoken non-English languages; or (3) specialize in multicultural, multiethnic, or LEP populations, offering marketing, information, and services in a variety of languages. These different approaches represent different marketing and customer service strategies, and which is chosen by a given carrier seems to depend on the carrier's business model, as well as the actual or perceived need for non-English services in the carrier's main geographic area and the actual or expected costs of serving LEP customers.
Several smaller carriers state that they do not provide non-English educational materials because they do not perceive a need for such services among their particular customer bases. This may be because the geographic service area in which the carrier operates is small and/or the population is overwhelmingly English-speaking. Alternatively, the carrier may not have investigated the linguistic demographics of the area and has not received requests for information in languages other than English. In such cases, carriers point to the costs of tracking language preferences and providing in-language materials and explain that doing so does not appear to be cost-effective because of the apparent lack of LEP customers served by the carrier. These claims are difficult to evaluate at this time because carriers have not provided data in support of their claims, but are explaining their business judgments to date.
Larger carriers similarly cite cost effectiveness to explain why they provide more information in English, Spanish, and other commonly spoken languages such as Mandarin and Cantonese, than in other languages. State demographics support the claims of many carriers that they have more Spanish-speaking customers than customers speaking any other single language except English, and so Spanish language communications are likely to be more cost-effective than materials in additional languages. Still, many carriers provide confirmation letters, ULTS information, and other educational and marketing materials in up to seven languages.
Larger carriers, and some smaller carriers that specialize in multiethnic customers, also provide in-house customer service in languages other than English, most commonly Spanish, but also several Asian languages, as well as Russian, Armenian, Arabic, and others, and at least one large carrier has several call centers dedicated to serving LEP Spanish-speaking populations. Some carriers have bilingual employees to provide non-English customer service, but have limited staff and hours in which bilingual services are available; callers who do not reach a bilingual staff person or call when bilingual services are unavailable may be asked to leave a message to receive a return call in their language of preference, rather than receiving immediate assistance. Many carriers utilize the Language Line translation services when dealing with customers that do not speak a language supported by carrier staff. Carriers that do not have bilingual staff or utilize the language line suggest that their LEP customers cope with the lack of in-language materials by providing their own translators or interpreters to assist them in shopping for telecommunications services and understanding written and oral communications from their service providers.
3. Carrier Quality Control and Oversight of Bilingual Activities
The level to which carriers engage in quality control over internal bilingual operations also varies throughout the industry. Some companies use internal staff to monitor the quality of in-language customer service, and some carriers contract with third parties to review in-language communications for accuracy and quality. Some of California's largest telecommunications carriers undertake customer surveys of their Spanish customers to ensure customer satisfaction with in-language services; few companies provide this level of quality control in Asian languages. Smaller carriers that rely on customers to provider their own interpreters or do no translate their materials into languages other than English do not generally have systems in place for monitoring the satisfaction of their LEP customers with in-language service, and do not have quality control monitoring for in-language services.
The Commission did not formally gather data on the manner, frequency, and effectiveness of telecommunications carrier oversight of third party dealers or agents (resellers of a carrier's wireless services, for example), despite the fact that some such dealers focus their advertising and marketing on LEP populations. When possible language-related issues with some dealers were raised by consumer groups in the June 26, 2006 workshop and the four July and August 2006 public meetings related to this project, carrier representatives in attendance were able to provide a brief response. In general, carriers reported that if they become aware of fraud or abusive marketing on the part of one or more dealers, agents or resellers under contract to market their products, the carriers will (and in the past have had occasion to) discontinue contracts with those dealers.
C. Carrier Comments on the August 2006 Draft Staff Report
On September 14, 2006, various carriers (CTIA/Joint Wireless carriers, AT&T, Cox Communications, and the Small and mid-size LECs) submitted a total of four sets of comments on the draft staff report issued on August 21, 2006. As mentioned above, the carriers' comments generally expressed a preference for a voluntary and collaborative process for resolving the challenges faced by LEP consumers, primarily through cooperation between carriers, CBOs, and the Commission in resolving individual customer complaints. Commenting carriers universally asserted that formal rules are neither necessary nor desirable, due to the complexity of the issues facing carriers in serving their LEP customers and the varying characteristics and business models of carriers. AT&T made a few new proposals for action to improve Commission language access, including that the Commission should appoint a "language Czar" in the Public Advisor's Office to oversee language access issues, and that the Commission should use objective, consistent, and transparent criteria for adding or deleting languages for future education efforts. Several carriers were supportive of some recommendations of the draft report, such as setting up formal agreements between CBOs and carriers to facilitate CBO advocacy and complaint resolution, and continuing to study LEP challenges and issues. Other recommendations garnered mixed reactions in these comments, and some expressed a desire for clarification of specific proposals from the draft staff report. Commission staff have considered these comments and have revised later sections of this report to address these issues.
V) Challenges and Needs of LEP Telecommunications Consumers
Based on input received from consumer advocates, both in written comments from several CBOs and at the workshops and public meetings held to gather information for this report, staff concludes that there is a need for more in-language information and service. Issues discussed at these meetings also suggest a need for increased and speedier Commission enforcement of fraudulent activities and other PU code and rule violations by unscrupulous persons or companies that target LEP populations, and increased attention on the issue of how to require carriers to have better oversight over dealers, agents or resellers that market telecommunications products and services under contract with telecommunications companies. Some of these issues are best resolved in a focused effort with a formal Commission proceeding, in which parties can develop a formal record and determine the need for, and where appropriate, the specific terms of rules to address ongoing and persistent problems facing LEP consumers.
A. Information Needs
CBO representatives suggested in the public meetings that information available to English-speaking customers, including service contracts, bills, or a confirming document outlining the rates and key terms and conditions of the customer agreement, should be translated into the languages other than English (LOTEs) in which the carrier markets or conducts sales. Such foreign language documents would allow LEP consumers to better understand the products that they purchase, and their rights and responsibilities as a customer. Having materials that clearly state the rates and key terms and conditions (for example, services provided, early termination fees, term length if any, and exceptions and limitations to service) could help to avoid or address many of the problems encountered by CBOs working with LEP populations, which they describe as "a disconnect" between what the consumer believed he or she was buying and what the dealer/agent/reseller or telecommunications provider believes was sold. CBOs state that in-language contracts or lists of key services and terms would allow customers and consumer advocates to have a reference document that records their service agreements and provide a reference to help answer future questions. A recurring point made at the public meetings is that regardless of the language in which a sale takes place, people do not always remember all they are told accurately, so it is wise to have key information in writing; of course, such information is only useful to a customer if he or she can understand it.
In addition to this need for the translation of documents provided in English to be translated into languages other than English, some participants in the public meetings stated that there is an unmet need for information that addresses the special situations of LEP consumers, is culturally appropriate, and is appropriate to the target audience's reading level. CBOs and some carriers suggested that merely translating "mainstream" information from English is inadequate to serve LEP populations, for several reasons.
First, LEP customers may need different or additional advice than English proficient consumers to assist them in shopping for telecommunications services. In written comments on the study plan, for example, Asian Law Caucus, points out that educational materials sometimes make recommendations that are not helpful to LEP customers (e.g., to read contracts, terms, and conditions, when those are not provided in a language the Asian customer can read), when different advice would be more relevant to these consumers (e.g., to bring a fluent English Speaker, preferably an adult, to interpret and translate when shopping for telephone services)37.
Second, some carrier and Commission information assumes a relatively high (high school or above) reading level, and some knowledge of existing telecommunications terminology and services; CBOs state that these assumptions may not be realistic for some language communities. Some language minorities in the state have average education levels much lower than the overall state average, and in these cases more graphics and fewer words may be more informative. For example, at the Stockton public meeting, it was stated that many migrant workers from Mexico typically have first to third grade educations in their native language. Similarly, the Hmong and Lao communities made similar points about low education levels for some of its population38. In the case of communities with low rates of literacy in both English and their primary language, alternative education methods such as oral radio and television PSA type announcements may be more effective.
Based on CBO comments and the demographics of the state, it may also be appropriate to have information available in additional languages. Rather than considering only the languages that are most commonly spoken in the state of California, or in which the Commission already receives large numbers of complaints, the Commission could consider other languages mentioned by CBOs. Materials in some additional languages that are not among those most commonly spoken statewide may be valuable if the language population has particularly high rates of linguistic isolation (meaning that speakers are less likely to have household members who can assist in English transactions), is growing quickly (e.g. Russian and Armenian39), or is common in a particular geographic area (Cambodian, Hmong, or other Southeast Asian communities in certain parts of the Central Valley).
CBO representatives also noted that despite the best efforts of the Commission and telecommunications companies, the in-language materials that currently exist do not always reach the appropriate customers in time to assist with critical decision-making. Some argued that many purchase their wireless services at "kiosk" type facilities at community gathering places, relying on oral representations of the salesperson. As a result, customers are not always aware of their rights, do not compare rate plans or coverage maps between carriers, and do not always have access to the information that they need to ensure that they purchase the best services for their particular situations. When marketing and sales take place in a language other than English but all written confirmation of the sale is provided in English, it may be difficult or impossible for an LEP consumer to verify that what was purchased matches what was represented by the salesperson or marketing materials. This situation may at best promote misunderstanding and at worst facilitate fraud or abusive marketing practices.
B. Customer Service Needs
In addition to an increase in education, CBOs suggest that consumers would benefit from changes to the customer service systems of the Commission and telecommunications service providers. As mentioned above, many companies offer few if any in-language customer service; this makes it difficult for LEP consumers to resolve billing questions, service complaints, and other issues directly with their service provider. Some companies that offer in-language customer service offer it only during limited hours or on a call-back basis; this may not adequately serve the needs of LEP consumers who have busy or inflexible work schedules or other personal, family, or community commitments. CBOs also pointed to cultural characteristics, such as a communities' inherent distrust of government, utilities or corporations or a reluctance to complain, which can also lead to difficulties in resolving complaints. Customer service procedures can be improved by making them more accessible to and tolerant of customers with cultural differences.
CBO representatives at two of the public meetings described that this difficulty is exacerbated by a "lack of continuity" in customer service: most companies do not allow customers to deal with a single customer service representative from the initial question or complaint until the issue is resolved. This increases the efficiency of service providers' call centers, but result in consumers (and CBOs attempting to assist them) having to repeat their questions or concerns during subsequent calls if the issue is not resolved during the first contact. While this issue may seem the same for English speakers as for LEP consumers, it can cause additional hardships for LEP people who need to provide their own interpreter or access the Language Line just to be understood. A further complication may occur if the interpreter (whether provided by the customer, the service provider, or the Language Line) is not familiar with technical terms used in the telecommunications industry.
CBOs also describe their complaint resolution function, assisting LEP and other customers in working with utilities and the Commission's Consumer Affairs Branch to address and solve customer complaints. CBOs associated with Communities for Telecommunications Rights (CTR), in particular, take an active role in working with customers, and tracking the complaints that they receive. CTR and other CBOs also note the difficulties that they experience in dealing with carriers, due to privacy concerns and the CBOs' lack of recognized standing with the carriers to advocate on behalf of specific consumers. In addition, several CBOs express concern in their comments on the draft staff report about the imminent loss of funding for CTR, and the negative effect that this is likely to have on customers who depend on CTR CBOs for this assistance. Both CBOs and carriers support the continuation of CTR funding by the Commission.
C. Enforcement Needs
In the workshops and public hearings, CBOs raised many concerns about fraudulent and abusive activities targeted at LEP telecommunications consumers. These issues include "bait and switch" sales tactics, the misrepresentation of terms of wireless phone contracts or pre-paid phone cards by carrier-authorized and unauthorized agents/dealers/resellers, and other possible scams that involve misleading advertising or bad faith on the part of a carrier or dealer/agent, reseller. One example given from the APALC involved an advertisement in a Chinese language newspaper for a very low monthly wireless phone rate that include discounts for a rebate available only if the customer stays on the service for a specific time period (e.g. two years) that then never materializes. Though it is true that many of the activities described by CBOs would constitute fraud, abusive marketing, or other statute or rule violations, regardless of the language in which they take place, there is a belief among consumer advocates that LEP populations are more vulnerable because of language barriers. Without a study, it is not possible to know whether such tactics are more common in ethnic media or in-language marketing than in English, but it is clear from the examples shown in our workshops and public meetings that some non-English speakers are susceptible to misunderstandings and unscrupulous practices. CBOs suggest that additional enforcement focused on abusive in-language marketing would be appropriate because LEP customers encountering these schemes may be less likely report or resolve their problems due to a of lack of information on their rights, to a lack of access to in-language customer service, or to cultural differences.
CBOs also recommend that the Commission adopt formal rules, particularly to require in-language disclosures (of contracts or key terms and conditions) when marketing and sales transactions take place in languages other than English. Such rules could empower LEP consumer by providing them with the information necessary for accurate understanding of the terms of any agreement.
D. Comments of CBOs and Consumer Advocacy Organizations on the August 2006 Draft Staff Report
On September 14, 2006, seven sets of comments were submitted by individuals, CBOs, carriers, and consumer advocacy organizations on the draft staff report issued on August 21, 200640. In addition, P-Core, another CBO, provided a background paper on the Filipino language and culture. Most of these comments expressed a strong preference for the initiation of a formal Commission proceeding to address the challenges faced by LEP consumers, and advocated for the adoption of rules to ensure that carriers provide in-language information such as contracts or key terms and conditions of service contracts to LEP, and take responsibility for the actions of dealers or agents that sell their services.
The comments from CTR contained several specific proposals for action to improve language access, including: 1) initiate of a formal proceeding to consider rules; 2) adopt rules that would require carriers that market and conduct sales in the five most commonly spoken languages in the state to provide a statement of key contract terms and conditions at the time of sale to customers purchasing service in those languages; 3) adopt rules mandating in-language billing by carriers for consumers who are marketed and sold services in these languages; 4) adopt rules clarifying carrier responsibility for third party dealers or agents selling their products or services; 5) use California Civil Code § 1632 as a model for language-related rules; 6) continue CTR funding for complaint resolution and outreach activities; 7) improve tracking of language-related complaints; 8) require carriers to track and report on language-related complaints. CTR also requests that the Commission clarify and explain Commission enforcement and complaint resolution by carriers, and take other actions to improve information and service available to LEP consumers from the Commission and carriers. Several CBOs state their belief that education of LEP consumers alone will not be adequate to overcome the problems and challenges faced by LEP consumers, and CTR in particular advocates for affording LEP consumers with the same protections already available to English-speaking consumers. Commission staff have considered these comments, and have revised later sections of this report with them in mind.
VI) Options for Consideration by the Commission
Section 14 of Decision 06-03-013 states that "in preparation for any regulatory action that may be directed by the study, [the Commission] will open a proceeding specifically designed to address in-language issues" (D.06-03-013, at 138). This staff study has revealed the depth and complexity of issues facing LEP consumers, as well as some general approaches for addressing these issues in the short and long term. Further time and information is required, however, to define specific options and analyze their costs, benefits, anticipated outcomes, and feasibility of options for addressing those challenges. This should begin before a formal proceeding is initiated, to ensure a focused and expeditious response to these problems; many actions can be accomplished quickly and without the need for a formal proceeding. There is also the possibility that the collaborative process that has been guiding CPI implementation may be able to yield voluntary solutions by the carriers in a manner that satisfies the Commission. Still, further recommends that to the extent possible, solutions that do not require formal Commission action, such as staff initiatives that may be undertaken at the direction of the Commission's executive director and voluntary industry actions, should not be delayed awaiting the results of any forthcoming proceeding.
A further information gathering process followed by a formal proceeding to determine the necessity for and, if appropriate, specifics of rules that will also address concerns of CBOs for additional time to provide information and do research that better describes and suggests ways to address the challenges facing of LEP consumers. Two parties in their comments on the staff study plan released in June 2006 requested that the due date for the staff report be delayed by two months to allow them to perform their own research and contribute additional information to this Commission effort. Though such an extension was not granted, staff proposes to continue its information gathering consistent with a goal of presenting the Commission with a set of specific policy options and recommendations in the near future. The following discussion is intended to outline both immediate actions and the possible scope of a formal proceeding, to be based around a staff proposal targeted for later this year.
A. Options for Improving Education
As proposed by the CBOs, the Commission should investigate the actual costs and benefits of providing service contracts, bills, and a confirming document or key rates, terms and conditions of service into the languages in which the telecommunications service provider conducts its sales. As suggested in the comments of CFC, the Commission (and carriers) should learn "the relative cost of providing essential information to a buyer in the language in which wireless service was sold, when compared to amounts spent for marketing telecommunications products in that language," (CFC study plan comments, at.15) in order to make an informed judgment about the impact on service availability and service quality of encouraging or requiring this practice. Carriers state that the cost of providing in-language services to LEP communities is not equivalent simply to the cost of translating one contract, or a set of terms and conditions; marketing and customer service require staff and technology that support in-language services. Commenters did not provide specific cost data for use in this analysis, however. It is clear from the information provided throughout this study that many of the larger carriers have already incorporated in-language marketing, customer service, and billing into their businesses. What is not clear is how much it would cost for carriers that have already developed infrastructure to support multi-lingual marketing, education and customer service to provide additional services such as billing or contracts to LEP consumers. It is also unclear what guidelines and criteria these carriers use to make decisions on which and how much in-language services to provide.
To the extent possible, the Commission and phone carriers should work to develop new in-language materials that focus on meeting the needs of LEP customers in light of the findings in this study. These materials could include suggestions appropriate to LEP populations, such as reminders to bring an English-proficient adult when shopping for telecommunications services, to ask relevant questions about rates and key terms and conditions, and to ask what in-language customer service is available through a particular provider before entering into a contract. The Commission and carriers should research cultural characteristics relevant to reaching different language populations to ensure that materials are sensitive to cultural differences that may affect the usefulness of the materials to their target populations. The Commission and carriers should bear in mind that such materials should be very simply worded and not rely on overly technical terms should the targeted community have lower literacy rates. The CBO Action Plan ordered in the CPI decision may provide another avenue for the Commission to work with CBOs and a possible structure for ongoing review and evaluation of the effectiveness of these materials. Materials can also be developed as part of future Commission and carrier education programs.
The Commission should also develop ways to ensure that in-language materials reach their intended audiences. Though the Commission already works closely with Community Based Organizations to distribute in-language and other consumer education materials, one finding of this study and of the public outreach done is that the current distribution is not meeting the needs of all LEP communities. Particular efforts should be made towards the linguistically isolated households in California. Options for addressing this issue include contracting with a consultant that specializes in hard-to-reach populations to learn how to distribute materials more effectively, and facilitating distribution by working with more local and regional grass-roots organizations, especially CBOs, that are known and trusted within their communities. Again, the CBO Action Plan may provide a venue for this and structure for review and evaluation of these efforts. Like the development of additional appropriate materials, improved distribution should take place as part of new and ongoing Commission and phone carrier efforts.
Based on the research already conducted, the Commission should also increase the internal resources available in its own bilingual services office. As the amount of bilingual materials grows, the bilingual services coordinator must ensure that increasing numbers of documents are properly translated, must be prepared to serve increasing numbers of LEP consumers, and must support more multilingual activities in an industry that is constantly evolving new products and services. Currently, only the Bilingual Services Coordinator has formal responsibility for these activities, and it can be difficult to get additional staff to work on language-related projects. Possible options for improving this situation include additional staff in the bilingual services office dedicated to improving language services, and better institutional support to make additional staff available on a project-by-project basis. This is a suggestion that can be implemented quickly via PUC Executive Director action, including CSID work to expand the Commission's Bilingual Services Office during the upcoming fiscal year.
B. Options for Improving Customer Service
There are many possible ways that the Commission and carriers could improve customer service to LEP consumers. Potential options include increasing the number of languages in which customer service representatives can work with consumers, either by increasing in-house bilingual staff or by contracting with outside companies (such as the Language Line) that provide high-quality interpretation services. In addition, both the Commission and many telecommunications providers can institute formal and systematic quality control for calls that take place in languages other than English. An expansion of telephone service hours during which bilingual services are available could also be helpful. The Commission already contemplated this through the expansion of CAB hours ordered in the CPI Decision. The Commission can encourage carriers to likewise extend hours in which bilingual customer support is available.
A key way of improving customer service to LEP communities came up consistently in the workshops and public meetings held to gather information for this study: better cooperation and communication between CBOs, telecommunications providers, and the Commission. Possible strategies for accomplishing this that were suggested during the course of this study include allowing CBOs to enter into formal relationships (similar to a power of attorney) with carriers that would enable them to advocate on behalf of consumers, and make it easier for carriers to share customer information with CBOs once a customer has given permission for the CBO to act on his or her behalf.
CBOs also lament their lack of resources available to enable them to work with phone consumers. Because CBO funding is often project-based, and specific funding is not available to assist with complaint resolution for LEP telecommunications consumers, it can be difficult for CBOs to dedicate time to these issues vis-à-vis other issues. Consistent funding that would specifically support these consumer education and complaint resolution activities would address this concern of CBOs.
The Commission should investigate the costs and benefits for consumers and telecommunications carriers of offering LEP information and services by companies that currently assume these services are not needed by their customers. While this assumption may be correct in some cases, it is not appropriate to assume that the lack of complaints about language access or requests for language services means that there is no demand for services among a carrier's customer base. As discussed in the public meetings on this topic, consumers who cannot communicate in English may be hesitant to ask for assistance or may be unable to communicate to a carrier when they have a problem. Carriers that do not currently make available professional interpretation services and depend on customers to provide interpreters from their friends and family could be more proactive in providing interpretation services, possibly through options such as the Language Line, to avoid the high costs of internal staffing and allow LEP customers better and more continuous access to customer service assistance.
In addition, carriers and the Commission should increase their use of culturally appropriate materials, and ensure sensitivity to cultural differences and issues that may influence the effectiveness of outreach and education materials. One way of accomplishing this is to work directly with smaller CBOs that are based in and representative of their communities. As several CBOs pointed out in the public meetings, it is difficult for many agencies to engage in activities beyond those for which they are funded, so the increasing CBO involvement in LEP outreach may require some funding for these organizations. As in the development and distribution of other translated materials, the CBO Action Plan could provide a framework for this effort.
The merits of and methods for increasing in-language customer services, encouraging cooperation among carriers and CBOs, providing funding for CBOs, and expansion other services should be examined in a future staff proposal that will define and evaluate specific options for improving in-language access. Such as proposal will also recommend appropriate procedures for acting on specific recommendations; possible approaches may range from staff implementation (for example, to improve educational materials) to a possible formal Commission proceeding (if new rules or specific regulatory actions are contemplated).
C. Strategies for Improving Enforcement
Carriers wish to expand in-language services only when business factors make such changes worthwhile for the carrier, using demand analyses and similar tools. Documented concerns about fraud, marketing abuse, a relative lack of quality customer service, and other issues raise the possibility that some carriers may need to offer information and services based on something other than market factors. A formal proceeding will allow the Commission to balance these positions and other factors such as carrier size, resources, and customer characteristics in developing rules or standards that protect consumers while allowing businesses the flexibility to determine the extent of their non-English marketing. Such rules could reduce the need for enforcement by empowering consumers (for example, through increased in-language information disclosures), and could assist the Commission in taking enforcement action against carriers that use tactics that are associated with fraud or marketing abuse or that allow third party dealers of their products and services to engage in such activities.
VII) Recommendations
The CPI Order envisioned an enumeration of recommendations regarding language challenges faces by California's telecommunications consumers:
[w]e intend for Commission staff to develop a report that verifies the languages identified for education elsewhere in this decision, reviews the challenges faced by those with limited English proficiency relating to communications services, and enumerates recommendations for effective programs and strategies for communicating relevant information in multiple languages.41
This report has identified those challenges to a great degree and some specific recommendations can be made. The following enumeration was developed to allow for immediate action by the Commission on some recommendations - and to allow for the consideration of other recommendations in short and long term action plans.
The use of short and long-term action plans will allow issues to be placed into the Commission's schedule based on importance of the issue, stage of development of the issues, and data resources for analysis of the issues. Staff recommends that challenges placed in the short and long-term action plans below can be addressed through a formal proceeding or by utilizing the collaborative processes developed in CPI implementation.42. To the extent possible, solutions that do not require formal Commission action, such as staff initiatives that may be undertaken at the direction of the Commission's executive director and voluntary industry actions, should not be delayed awaiting the results of any forthcoming proceeding.
Recommendations. This should not be a "one-size-fits all" proposal, but instead should take into account the different circumstances (such as size, geographic and demographic characteristics of the population served, and services offered) of different of telecommunications providers and target rules to provide appropriate protection while allowing flexibility appropriate to these differences.
2. Reconcile the language requirements in various Commission decisions, and also in its programs that have different language requirements (e.g. third grade reading level in the foreign language) of target audiences.
4. Based on current demographic data, add languages with particularly high rates of linguistically isolated households - and - languages with growing or concentrated populations (such as Russian and Armenian) to its list of languages appropriate for consumer education and public outreach.
5. Improve CAB's tracking ability in the new CAB database scheduled to be on line in 2007 to capture the language in which complaints are filed, and whether the outcomes of complaints differ due to language barriers.
6. Send appropriate language-trained staff from and carrier staff are likely to be available to attend, for example weekday evenings. Activities would include bill clinics, dispute resolution, and consumer education, for example.
7. Set up procedures to rapidly refer cases of suspected fraud, marketing abuse, and other possible violations involving in-language marketing and customer service to the Commission's Utility Enforcement Branch and. If carriers and/or CBOs initiate a collaborative process, similar to the current CPI process, to develop a voluntary code of conduct by the carriers pertaining to in-language issues and challenges, staff should monitor this process and its results.
3. Expand consumer education programs to address identified problems and concerns of LEP communities. This should include more in-language materials and materials developed specifically for the comprehension of different language, cultural and educational groups, based on input from CBOs. issues as the nature and demographics of California evolve with respect to language, to ensure the Commission's efforts remain current.
2. Explore how in-language programs developed and implemented under D.06-03-013 may inform challenges in the other utility industries in California.
VIII) Conclusion
The challenges and issues facing limited English proficient and non-English speaking telecommunications consumers are complex and varied. During the course of this study, staff gathered a great deal of information on language services offered by the Commission and carriers, the challenges faced by LEP consumers in obtaining and maintaining telecommunications services, and the roles played by Community Based Organizations in assisting LEP customers both before they receive service and when they encounter problems with their service.
This report includes the research, conclusions and recommendations that staff has made to date. This report also informs the next steps that the Commission should take and becomes a source document for scoping issues and challenges related to language. The Commission's study of these issues, which is continuing beyond the original 180 day deadline specified in the D. 06-03-013,. The goal of this proposal, targeted for release later this year, will be to provide a focus for the comments and counterproposals of stakeholders in a formal proceeding.
In the short term, the Commission should continue to provide education information in the languages noted in the CPI decision and should add materials available in additional languages as required. The Commission should also facilitate communication processes between CBOs, carriers, customers and the Commission, to ensure that complaints are addressed adequately and in a timely way, and that enforcement can be brought to bear when appropriate to protect LEP consumers. Again, staff recommends that to the extent possible, solutions that do not require formal Commission action, such as staff initiatives that may be undertaken at the direction of the Commission's executive director and voluntary industry actions, should not be delayed awaiting the results of any forthcoming proceeding.
1 D.06-03-013, p. 138.
2 D.06-03-013, p. 138
3 The language line is a telephone service that provides access to interpreters in over 150 languages. Language Line interpreters translate over the phone using a three-way call. Companies and government agencies may contract with the language line to make its services available to clients and consumers.
4 California Education Code § 48985
5 AT&T California's Comments on the Draft Report on the Challenges Facing Consumers with Limited English Skills in the Rapidly Changing Telecommunications Marketplace, September 14, 2006, pp. 9-11.
6 Id.
7 Asian Law Caucus' Comments on the CPUC's Staff Draft Report: Challenges Facing Consumers with Limited English Skills in the Rapidly Changing Telecommunications Marketplace, September 14, 2006, pp. 7-8.
8 The Communities for Telecom Rights' Recommendations and Comments on the Report on Language Issues for California Telecommunications Consumers: Before the Public Utilities Commission of the State of California, September 14, 2006, pp. 1-6 and attachments.
9 See Division of Ratepayer Advocates Comments on the Staff Draft Report: Facing Consumers with Limited English Skills in the Rapidly Changing Telecommunications Marketplace (Draft Report), September 14, 2006 and Watsonville Law Center Comments on Draft Report: Challenges Facing Consumers with Limited English Skills in the Rapidly Changing Telecommunications Marketplace, September 14, 2006.
10 Earlier in R.00-02-004, a proposal for a certain in-language rule was deferred. The proposed rule required service agreements, contracts, bills and notices to be available in each language employed by the carrier in solicitations directed at consumers (see R.00-02-004, Draft Decision mailed July 24, 2003). The Commission crafted the rule in light of PU Code §2890(b). However, carriers responded that the more in-language requirements that they faced, the more likely they (especially small carriers) were to pull back from directing information about their services and products at non-English speaking audiences. Other parties disagreed and suggested other possible solutions. Correspondingly, the Commission decided to defer the finalization of rules on this issue until a later time.
11 See D.95-07-054, Ordering Paragraph 1 and Appendix B: Rule 2. Appendix B established Consumer Protection and Consumer Information Rules for CLCs.
12 See D.95-12-056, Ordering Paragraph 64 and Appendix C. While D.95-12-056 first adopted the expanded in-language requirement, the Commission later adopted D.95-02-072, Appendix E which amended and replaced the earlier rules adopted in Appendix C of D.95-12-056.
13 On April 3, 1996, the California Telecommunications Coalition filed a petition to modify the CLC in-language requirements adopted through D.95-04-054, D.95-12-056 and D.96-02-072.
14 Appendix A of D.96-10-076:
"1. Incumbent LECs and CLCs that sell their services in any of the following seven languages- Spanish, Mandarin, Cantonese, Vietnamese, Korean, Japanese, or Tagalog- shall be required to do the following in those languages in which they sell their services:
A. Identify and store in a database the language preference ("language preference database") specified by their customers.
B. Send Commission-mandated notices, including the universal lifeline service notice with the rates, terms and conditions in language.
C. Upon initiation of local serve, send the confirmation letter to the customer in the preferred language, setting forth a brief description of the services ordered and itemizing all changes which will appear on the customer's bill.
D. Upon initiation of local service and annually thereafter, provide a bill insert to the customer in the preferred language that explains the customer's bill.
E. Provide a toll-free number for access to bilingual service representatives in the preferred languages in which the CLC sells its services.
2. Provide all residential customers with the Commission-mandated Universal Lifeline Telephone Service notice in the 7 languages identified above and include with the notice toll free telephone numbers for access to bilingual customer service representatives in the languages in which the CLC sells its services from those listed above.
3. All LECs and CLCs are encouraged to provide additional bilingual or in-language services to their customers."
15, Limited English Proficiency Resource Document: Tips and Tools from the Field, September 24, 2004, U.S. Department of Justice Office of Civil Rights
16 Report To Congress: Assessment of the Total Benefits and Costs of Implementing Executive Order No. 13166: Improving Access to Services for Persons with Limited English Proficiency, March 14, 2002, p. 16.
17 Ibid. p. 23.
18 The following groups have participated in developing the program along with Commission staff: Asian Law Caucus, Asian Pacific American Legal Center, Communities for Telecom Rights, Consumer Action, Greenlining Institute, Latino Issues Forum, The Utility Reform Network, AT&T California, CTIA- The Wireless Association, Comcast Phone of California, LLC, Cricket Communications, Inc., The California Association of Competitive Telecommunications Companies (CalTel), Cingular Wireless, Cox California Telecom LLC d/b/a Cox Communications, Sprint Nextel (i.e. Nextel of California, Inc., Sprint Telephony PCS, L.P., Sprint Spectrum L.P d/b/a Sprint PCS, Sprint Communications Company), Omnipoint Communications, Inc. d/b/a T-Mobile, Verizon California, Inc., Verizon Wireless, and the Small and Midsized Local Exchange Companies (i.e. Calaveras Telephone Company, Cal-Ore Telephone Company, Citizen's Telecommunications Company of California d/b/a Frontier Communications of California, Ducor Telephone Company, Foresthill Telephone Company, Global Valley Networks, Inc., Happy Valley Telephone Company, Hornitos Telephone Company, Kerman Telephone Company, Pinnacles Telephone Company, The Ponderosa Telephone Company, Sierra Telephone Company, Inc., SureWest Telephone, The Siskiyou Telephone Company, Volcano Telephone Company, and Winterhaven Telephone Company).
19 D.06-03-013 at p. 121: "The first prong is a broad-based information campaign that helps all consumers in the face of the complex and ever-changing array of telecommunications choices. The second prong consists of an education program designed to inform consumers of their rights. ... The third prong combines the first two prongs and focuses more on orienting those customers who are non-English or low-English proficiency speaking, seniors, disabled or low-income."
20 Ordering Paragraph 24 of D.06-03-013 directed Commission Staff to post to the Commission's website the consumer education material developed in the consumer education program within 120 days of the decision issuance. The program launch on June 29th fulfilled that directive.
21 As discussed in Section IV below, several CBOs have stated that the language in these current four brochures is complex and not easily understood by some LEP communities.
22
See Decision D.84-11-028.
23 The Moore Universal Telephone Service Act was codified at Public Utilities Code § 871 et seq.
24 D.94-09-065, page 6.
25 D.96-10-066, Appendix B. Rule 3.B.3 states:
"It is the objective of the Commission to improve the subscribership rate of basic service to all customer groups, including low income, disabled, non-white, and non-English speaking households, by means of the following mechanisms:
a. All incumbent local exchange carriers (ILECs) and competitive local exchange carriers (CLECs) shall be responsible for pursuing the objective of achieving a 95% subscribership rate among all customer groups, including low income, disabled, non-white, and non-English speaking households, in their service territories.
b. ILECs and CLECs shall have the flexibility to develop innovative strategies to contribute to the attainment of this objective.
c. In service territories where there is a substantial population of non-English speakers, a carrier's efforts to communicate with such customers in their native languages shall be a factor that the Commission considers in assessing each local carrier's contribution to pursuit of universal service targets."
26 Commission Rulemaking (R.). 06-05-028 initiates a comprehensive review of its Telecommunications Public Policy Programs - California Lifeline, Payphones Programs, Deaf and Disabled Telecommunications Program, and California Teleconnect Fund. It sets out to examine funding, accountability, fulfillment of statutory goals, and proposal to address identified deficiencies in these programs.
27 The 2004-2005 Lifeline Marketing Campaign Report. The target populations included the following language-specific markets: 1) English-speaking adults, inclusive of African Americans, Native Americans, Latinos, Asian Americans and Caucasians, 2) Spanish-speaking Latinos, 3) Asian-language-speaking adults, inclusive of Chinese, Koreans, Vietnamese, Filipinos, Hmong, Cambodians, and Laotians, and 4) "social service recipients". (see p. 21) In addition, community organizations, community partners and agency partners were selected and developed to reach low income households that included hard-to-reach ethnicities, non-English speaking populations, seniors, Native Americans, rural areas, and social service recipients. (see p. 31)
28 The 2004-2005 Lifeline Marketing Campaign Report, pp. 22-24. The Asian Languages were Mandarin/ Cantonese, Korean, Vietnamese, Tagalog/ Taglish, Hmong, Cambodian, and Laotian.
29 CPUC Contract 03PS5427, 2005-2006 ULTS Marketing Program: First Quarter Report, August 31, 2005-November 15, 2005, p. 1
30 The 2004-2005 Annual Report Providing a Summary of All ULTS Call Center Activities for the Period July 1, 2004 to June 30, 2005, p. 1.
31 Id, pp. 2-3.
32
2001-2002 Language Survey Departmental Summary and Analysis of California Public Utilities Commission, in Statewide Language Survey Volume 2: 2001-2002 Language Survey Data Tables and Departmental Summary and Analysis Reports, Section B, report 163, page 11.
33
California was the last state in the nation to implement Caller ID in July, 1996, but was the first to pass a law requiring blocking options to be provided for California consumers in 1989 (AB 1446, 1989 and PU code section 2893). The length of time between the 1989 legislation and 1996 Caller ID implementation provided the Commission to learn from other states' experience in implementing Caller ID. It also proved to be the impetus in California approach to consumer education.
34
Final Report: Pacific Bell/GET Community Education Program for Caller ID Blocking, Lynn Victor, Richard Heath & Assoc., May 20, 1997.
35
As pointed out in study plan comments from some carriers, these are not the seven languages mentioned for education in the CPI decision, which reflect the seven languages most commonly spoken in the state today: English, Spanish, Chinese, Vietnamese, Korean, Tagalog and Hmong. As discussed above, the current CPI initiative will translate materials into as many as 13 languages.
36 Comments of Small and Mid-size LECs, at 3.
37 ALC Study Plan Comments, p. 3.
38 Source: comment on Cambodian community, Fresno meeting, similar comments on Hmong, Lao and migrant worker communities at Stockton and other public meetings.
39 In its presentation at the June 26, 2006, workshop and elsewhere, AT&T identifies Russian and Armenian as the two languages for which it has the most demand for Language Line services, and two of several languages appropriate for additional outreach.
40 Comments were submitted by CTR, the Watsonville Law Center, Asian Law Caucus, Roy Segovia, Consumer Federation of California, Cox Communications, and the Small and mid-size LECs.
41 D.06-03-013, p. 138.
42 Collaborative processes in the CPI implementation have been developed during the 120-day education/outreach program, the 180-day in-language access study and enforcement workshops. | http://docs.cpuc.ca.gov/Published/Report/60608.htm | 2017-01-16T19:18:28 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.cpuc.ca.gov |
Mass elements are primitive parametric objects that have specific shapes, such as arch, box, cylinder, and gable. They function as the building blocks of conceptual design (also schematic design) in AutoCAD Architecture. You can create preliminary studies, or mass models, by grouping mass elements together in mass groups.
Mass Groups and Mass Models
When you create a mass group, you can combine the shapes of mass elements in the mass group by adding, subtracting, and intersecting them in a specific order. The resulting complex shape of the mass group forms your conceptual building design, or mass model. The mass model defines the basic structure and proportion of a building model.
As you continue developing your mass model, you can combine mass elements into mass groups and create complex building shapes by adding, subtracting, and intersecting the mass elements.
You can change mass elements in the mass group as necessary to reflect the building design. You can edit individual mass elements that are attached to a mass group to further refine the building model. You can also nest mass groups within other mass groups. For more information, see Creating a Mass Group.
You can use the Model Explorer to create your entire mass model, or you can create your mass model in the current drawing. You can also modify your mass model or change the relationships of mass elements in the Model Explorer. In addition to a graphics area that displays the mass model, the Model Explorer has a tree view in which you can drag and drop mass elements and mass groups to arrange and view the building blocks of your model in a hierarchical structure. For more information, see Using the Model Explorer to Create Mass Models.
When creating massing studies, you can create mass groups with any AutoCAD three-dimensional (3D) object, including AutoCAD ACIS solids. These can be combined with mass elements in the Model Explorer to allow more complex studies of potential designs. However, only objects that have volume affect the appearance of the mass group. For example, a polyline, even with thickness, does not contribute to the mass group.
Continuing Building Design from a Mass Model
The mass model that you create is a refinement of your original idea that you carry into the next phase of the project, where you can slice floorplates from the mass model. You can convert the floorplates to space boundaries to start space planning, or you can convert them to polylines and then to walls to begin your building design.
Display Configurations and Layouts for Mass Modeling
Drawings created from templates provided with AutoCAD Architecture contain display configurations and layouts that enable you to work effectively with mass elements and mass groups. For example, a layout has two viewports: one is assigned a display configuration that displays only mass groups and the other is assigned a display configuration that displays only mass elements.
Materials in Mass Elements
In AutoCAD Architecture, you can assign materials to a mass element. These materials are displayed in wireframe and working shade views, or when rendered. Materials have specific settings for individual components of a mass element, such as linework or surface hatches.
Mass elements with assigned materials in rendered view
AutoCAD Architecture provides predefined materials for all common design purposes. These materials contain settings for roof slabs. You can use these predefined materials, or modify them to your special designs. You can also create your own materials from scratch.
For more information, see Using Materials for Mass Elements and Mass Groups.
Other Uses of Mass Elements and Mass Groups
You can use mass elements and mass groups to create 3D body pieces of the building model. For example, you can apply them to walls as 3D body modifiers. | http://docs.autodesk.com/BLDSYS/2011/ENU/filesUsersGuide/WSad9b9e21c4998e1ddff10df9ede7a856-7ffc.htm | 2017-01-16T19:14:46 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.autodesk.com |
For example, if you want to create a slide from a K2 item, all you have to do is select that K2 item and Frontpage Slideshow will automatically populate all slide fields, which you are free of course to edit as you wish! JooStar has dropdown menu, 4 color styles (dark blue, green, orange, violet), 6 layout options. Oxygen is a good idea for your Joomla website.
Похожие записи: | http://wp-docs.ru/2016/09/21/11697-%D1%88%D0%B0%D0%B1%D0%BB%D0%BE%D0%BD-%D0%B4%D0%B6%D1%83%D0%BC%D0%BB%D0%B0-2-5-joostar | 2017-01-16T19:18:05 | CC-MAIN-2017-04 | 1484560279248.16 | [array(['http://www.joomlaworks.net/images/galleries/FPSS_3.0.0_screenshots/11-module-params.png',
None], dtype=object) ] | wp-docs.ru |
Replication¶
Because each replica in Swift functions independently, and clients generally require only a simple majority of nodes responding to consider an operation successful, transient failures like network partitions can quickly cause replicas to diverge.’t know what data exists elsewhere in the cluster that it should pull in. in the face of disk failures, though some replicas may not be in an immediately usable location. Note that the replicator doesn’t maintain desired levels of replication when other failures, such as entire node failures, occur because most failure are transient.
Replication is an area of active development, and likely rife with potential improvements to speed and correctness.
There are two major classes of replicator - the db replicator, which replicates accounts and containers, and the object replicator, which replicates object data.
DB Replication¶
The first step performed by db replication is a low-cost hash comparison to determine whether two replicas already match. Under normal operation, this check is able to verify that most databases in the system are already synchronized very quickly. If the hashes differ, the replicator brings the databases in sync by sharing records added since the last sync point.
This sync point is a high water mark noting the last record at which two databases were known to be in sync, and is stored in each database as a tuple of the remote database id and record id. Database ids are unique amongst all replicas of the database, and record ids are monotonically increasing integers. After all new records have been pushed to the remote database, the entire sync table of the local database is pushed, so the remote database can guarantee that it is in sync with everything with which the local database has previously synchronized.
If a replica is found to be missing entirely, the whole local database file is transmitted to the peer using rsync(1) and vested with a new unique id.
In practice, DB replication can process hundreds of databases per concurrency setting per second (up to the number of available CPUs or disks) and is bound by the number of DB transactions that must be performed.
Object Replication¶.
The object replication process reads in these hash files, calculating any invalidated hashes. It then.
Performance of object replication is generally bound by the number of uncached directories it has to traverse, usually as a result of invalidated suffix directory hashes. Using write volume and partition counts from our running systems, it was designed so that around 2% of the hash space on a normal node will be invalidated per day, which has experimentally given us acceptable replication speeds..
One of the first improvements planned is an “index.db” that will replace the hashes.pkl. This will allow quicker updates to that data as well as more streamlined queries. Quite likely we’ll implement a better scheme than the current one hashes.pkl uses (hash-trees, that sort of thing).
Another improvement planned all along the way is separating the local disk structure from the protocol path structure. This separation will allow ring resizing at some point, or at least ring-doubling.
Note that for objects being stored with an Erasure Code policy, the replicator daemon is not involved. Instead, the reconstructor is used by Erasure Code policies and is analogous to the replicator for Replication type policies. See Erasure Code Support for complete information on both Erasure Code support as well as the reconstructor.
Hashes.pkl¶
The hashes.pkl file is a key element for both replication and reconstruction (for Erasure Coding). Both daemons use this file to determine if any kind of action is required between nodes that are participating in the durability scheme. The file itself is a pickled dictionary with slightly different formats depending on whether the policy is Replication or Erasure Code. In either case, however, the same basic information is provided between the nodes. The dictionary contains a dictionary where the key is a suffix directory name and the value is the MD5 hash of the directory listing for that suffix. In this manner, the daemon can quickly identify differences between local and remote suffix directories on a per partition basis as the scope of any one hashes.pkl file is a partition directory.
For Erasure Code policies, there is a little more information required. An object’s hash directory may contain multiple fragments of a single object in the event that the node is acting as a handoff or perhaps if a rebalance is underway. Each fragment of an object is stored with a fragment index, so the hashes.pkl for an Erasure Code partition will still be a dictionary keyed on the suffix directory name, however, the value is another dictionary keyed on the fragment index with subsequent MD5 hashes for each one as values. Some files within an object hash directory don’t require a fragment index so None is used to represent those. Below are examples of what these dictionaries might look like.
Replication hashes.pkl:
{'a43': '72018c5fbfae934e1f56069ad4425627', 'b23': '12348c5fbfae934e1f56069ad4421234'}
Erasure Code hashes.pkl:
{'a43': {None: '72018c5fbfae934e1f56069ad4425627', 2: 'b6dd6db937cb8748f50a5b6e4bc3b808'}, 'b23': {None: '12348c5fbfae934e1f56069ad4421234', 1: '45676db937cb8748f50a5b6e4bc34567'}}
Dedicated replication network¶
Swift has support for using dedicated network for replication traffic. For more information see Overview of dedicated replication network. | http://docs.openstack.org/developer/swift/overview_replication.html | 2017-01-16T19:13:12 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.openstack.org |
Modifying Items and Attributes with Update Expressions
To delete an item from a table, use the
DeleteItem operation. You must
provide the key of the item you want to delete.
To update an existing item in a table, use the
UpdateItem
operation. You must provide the key of the item you want to update. You must also
provide an update expression, indicating the attributes you want to
modify and the values you want to assign to them. For more information, see Update Expressions.
The
DeleteItem and
UpdateItem operations support
conditional writes, where you provide a condition expression to
indicate the conditions that must be met in order for the operation to succeed. For more
information, see Conditional Write Operations.
If DynamoDB modifies an item successfully, it acknowledges this with an HTTP 200 status code (OK). No further data is returned in the reply; however, you can request that the item or its attributes are returned. You can request these as they appeared before or after an update. For more information, see Return Values.
Note
The examples in the following sections are based on the ProductCatalog item from Case Study: A ProductCatalog Item.
Update Expressions
An update expression specifies the attributes you want to modify, along with new values for those attributes. An update expression also specifies how to modify the attributes—for example, setting a scalar value, or deleting elements in a list or a map. It is a free-form string that can contain attribute names, document paths, operators and functions. It also contains keywords that indicate how to modify attributes.
The
PutItem,
UpdateItem and
DeleteItem operations require a primary key value, and will
only modify the item with that key. If you want to perform a conditional
update, you must provide an update expression
and a condition expression. The condition expression
specifies the condition(s) that must be met in order for the update to succeed. The
following is a syntax summary for update expressions:
update-expression ::= SET
set-action, ... | REMOVE
remove-action, ... | ADD
add-action, ... | DELETE
delete-action, ...
An update expression consists of sections. Each section begins with a
SET,
REMOVE,
ADD or
DELETE
keyword. You can include any of these sections in an update expression in any order.
However, each section keyword can appear only once. You can modify multiple
attributes at the same time. The following are some examples of update
expressions:
SET list[0] = :val1
REMOVE #m.nestedField1, #m.nestedField2
ADD aNumber :val2, anotherNumber :val3
DELETE aSet :val4
The following example shows a single update expression with multiple sections:
SET list[0] = :val1 REMOVE #m.nestedField1, #m.nestedField2 ADD aNumber :val2, anotherNumber :val3 DELETE aSet :val4
You can use any attribute name in an update.
To specify a literal value in an update expression, you use expression attribute values. For more information, see Expression Attribute Values.
SET
Use the
SET action in an update expression to add one or more attributes and
values to an item. If any of these attribute already exist, they are replaced by
the new values. However, note that you can also use
SET to add or
subtract from an attribute that is of type Number. To
SET multiple
attributes, separate them by commas.
In the following syntax summary:
The
pathelement is the document path to the item. For more information, see Document Paths.
An
operandelement can be either a document path to an item, or a function. For more information, see Functions for Updating Attributes.
set-action ::=
path= value value ::=
operand|
operand'+'
operand|
operand'-'
operandoperand ::=
path| function
The following are some examples of update expressions using the
SET action.
The following example updates the Brand and Price attributes. The expression attribute value
:bis a string and
:pis a number.
SET Brand = :b, Price = :p
The following example updates an attribute in the RelatedItems list. The expression attribute value
:riis a number.
SET RelatedItems[0] = :ri
The following example updates some nested map attributes. The expression attribute name
#pris ProductReviews; the attribute values
:r1and
:r2are strings.
SET #pr.FiveStar[0] = :r1, #pr.FiveStar[1] = :r2
Incrementing and Decrementing Numeric Attributes
You can add to or subtract from an existing numeric attribute. To do this, use
the
+ (plus) and
- (minus) operators.
The following example decreases the Price value of an item. The
expression attribute value
:p is a number.
SET Price = Price - :p
To increase the Price, use the
+ operator
instead.
Using SET with List Elements
When you use
SET to update a list element, the contents of that element are
replaced with the new data that you specify. If the element does not already
exist,
SET will append the new element at the end of the array.
If you add multiple elements in a single
SET operation, the
elements are sorted in order by element number. For example, consider the
following list:
MyNumbers: { ["Zero","One","Two","Three","Four"] }
The list contains elements
[0],
[1],
[2],
[3],
[4]. Now, let's use the
SET action to add two new elements:
set MyNumbers[8]="Eight", MyNumbers[10] = "Ten"
The list now contains elements
[0],
[1],
[2],
[3],
[4],
[5],
[6],
with the following data at each element:
MyNumbers: { ["Zero","One","Two","Three","Four","Eight","Ten"] }
Note
The new elements are added to the end of the list and will be assigned the next available element numbers.
Functions for Updating Attributes
The
SET action supports the following functions:
if_not_exists (– If the item does not contain an attribute at the specified
path,
operand)
path, then
if_not_existsevaluates to
operand; otherwise, it evaluates to
path. You can use this function to avoid overwriting an attribute already present in the item.
list_append (– This function evaluates to a list with a new element added to it. The new element must be contained in a list, for example to add
operand,
operand)
2to a list, the operand would be
[2]. You can append the new element to the start or the end of the list by reversing the order of the operands.
Important
These function names are case-sensitive.
The following are some examples of using the
SET action with these
functions.
If the attribute already exists, the following example does nothing; otherwise it sets the attribute to a default value.
SET Price = if_not_exists(Price, 100)
The following example adds a new element to the FiveStar review list. The expression attribute name
#pris ProductReviews; the attribute value
:ris a one-element list. If the list previously had two elements,
[0]and
[1], then the new element will be
[2].
SET #pr.FiveStar = list_append(#pr.FiveStar, :r)
The following example adds another element to the FiveStar review list, but this time the element will be appended to the start of the list at
[0]. All of the other elements in the list will be shifted by one.
SET #pr.FiveStar = list_append(:r, #pr.FiveStar)
REMOVE
Use the
REMOVE action in an update expression to remove one or more attributes
from an item. To perform multiple
REMOVE operations, separate them
by commas.
The following is a syntax summary for
REMOVE in an update expression. The only
operand is the document path for the attribute you want to remove:
remove-action ::=
path
The following is an example of an update expression using the
REMOVE action.
Several attributes are removed from the item:
REMOVE Title, RelatedItems[2], Pictures.RearView
Using REMOVE with List Elements
When you remove an existing list element, the remaining elements are shifted. For example, consider the following list:
MyNumbers: { ["Zero","One","Two","Three","Four"] }
The list contains elements
[0],
[1],
[2],
[3], and
[4]. Now, let's use the
REMOVE action to remove two of the elements:
REMOVE MyNumbers[1], MyNumbers[3]
The remaining elements are shifted to the right, resulting in a list with elements
[0],
[1], and
[2], with the
following data at each element:
MyNumbers: { ["Zero","Two","Four"] }
Note
If you use
REMOVE to delete a nonexistent item past the last element of the
list, nothing happens: There is no data to be deleted. For example, the
following expression has no effect on the
MyNumbers
list:
REMOVE MyNumbers[11]
ADD
Important
The
ADD action only supports Number and set data types. In general, we
recommend using
SET rather than
ADD.
Use the
ADD action in an update expression to do either of the
following:
If the attribute does not already exist, add the new attribute and its value(s) to the item.
If the attribute already exists, then the behavior of
ADDdepends on the attribute's data type:
If the attribute is a number, and the value you are adding is also a number, then the value is mathematically added to the existing attribute. (If the value is a negative number, then it is subtracted from the existing attribute.)
If the attribute is a set, and the value you are adding is also a set, then the value is appended to the existing set.
To perform multiple
ADD operations, separate them by commas.
In the following syntax summary:
The
pathelement is the document path to an attribute. The attribute must be either a Number or a set data type.
The
valueelement is a number that you want to add to the attribute (for Number data types), or a set to append to the attribute (for set types).
add-action ::=
path
value
The following are some examples of update expressions using the
add action.
The following example increments a number. The expression attribute value
:nis a number, and this value will be added to Price.
ADD Price :n
The following example adds one or more values to the Color set. The expression attribute value
:cis a string set.
ADD Color :c
DELETE
Important
The
DELETE action only supports set data types.
Use the
DELETE action in an update expression to delete an element from a set.
To perform multiple
DELETE operations, separate them by
commas.
In the following syntax summary:
The
pathelement is the document path to an attribute. The attribute must be a set data type.
The
valueelement is the element(s) in the set that you want to delete.
delete-action ::=
path
value
The following example deletes an element from the Color set using the DELETE action. The
expression attribute value
:c is a string set.
DELETE Color :c
Conditional Write Operations
To perform a conditional delete, use a
DeleteItem operation with a
condition expression. The condition expression must evaluate to true in order for
the operation to succeed; otherwise, the operation fails.
Suppose that you want to delete an item, but only if there are no related items. You can use the following expression to do this:
Condition expression:
attribute_not_exists(RelatedItems)
To perform a conditional update, use an
UpdateItem operation with an
update expression and a condition expression. The condition
expression must evaluate to true in order for the operation to succeed; otherwise,
the operation fails.
Suppose that you want to increase the price of an item by a certain amount, defined
as
:amt, but only if the result does not exceed a maximum price.
You can do this by calculating the highest current price that would permit the increase,
subtracting the increase
:amt from the maximum. Define the result as
:limit, and then use the following conditional expression:
Condition expression:
Price <= :limit)
Update expression:
SET Price = Price + :amt
Now suppose you want to set a front view picture for an item, but only if that item doesn't already have such a picture—you want to avoid overwriting any existing element. You can use the following expressions to do this:
Update expression:
SET Pictures.FrontView = :myURL
(Assume that
:myURLis the location of a picture of the item, such as.)
Condition expression:
attribute_not_exists(Pictures.FrontView)
Return Values
When you perform a
DeleteItem or
UpdateItem
operation, DynamoDB can optionally return some or all of the item in the response. To
do this, you set the
ReturnValues parameter. The default
value for
ReturnValues is
NONE, so no data will
be returned. You can change this behavior as described below.
Deleting an Item
In a
DeleteItem operation, you can set
ReturnValues to
ALL_OLD. Doing this will
cause DynamoDB to return the entire item, as it appeared before the delete
operation occurred.
Updating an Item
In an
UpdateItem operation, you can set
ReturnValues to one of the following:
ALL_OLD– The entire item is returned, as it appeared before the update occurred.
ALL_NEW– The entire item is returned, as it appears after the update.
UPDATED_OLD– Only the value(s) that you updated are returned, as they appear before the update occurred.
UPDATED_NEW– Only the value(s) that you updated are returned, as they appear after the update. | http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.Modifying.html | 2017-01-16T19:21:29 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.aws.amazon.com |
Loretta M. Lynch is the Assigned Commissioner and Charlotte F. TerKeurst is the assigned ALJ in this proceeding.
Findings of Fact
1. The draft EIR described the route of the Collocation Alternative and identified and discussed its possible environmental impacts at length. Parties were able to, and did, submit extensive and substantive comments on the Collocation Alternative.
2. The route options for the Collocation Alternative added in the FEIR do not constitute significant new information for which recirculation is required.
3. The project alternatives considered in the FEIR constitute a reasonable range of feasible alternatives, as required by the CEQA Guidelines.
4. It is reasonable to use PG&E's March 2003 load forecast in assessing need for the Jefferson-Martin project.
5. The Jefferson-Martin project is needed in order to allow PG&E to continue to reliably meet electric demand in the San Francisco Peninsula Area beginning in 2007, when demand is anticipated to be 1978 MW in the San Francisco Peninsula Area.
6. The Jefferson-Martin project has diversification, economic, and environmental benefits that warrant its construction before 2007.
7. The environmentally superior alternative for the Jefferson-Martin project based on the FEIR consists of Route Option 1B in the southern segment, with one of three crossings of the Crystal Springs Dam, in conjunction with either the Proposed Project's northern underground segment modified to include Route Option 4B rather than Route Option 4A or the Collocation Alternative.
8. It is reasonable to modify PG&E's preliminary EMF management plan for the Jefferson-Martin project, as described in Section VI.C.
9. For the southern portion of the Jefferson-Martin project, the hybrid alternative using Route Option 1B between the Jefferson substation and a new transition tower replacing tower 11/70 west of Trousdale Drive, and PG&E's proposed overhead route north of the transition tower provides the best balance among competing considerations. In particular, it will minimize visual and biological impacts south of the transition tower, avoid impacts on Edgewood Park and the Pulgas Ridge Natural Preserve, avoid Route Option 1B's effects on residences and businesses along Trousdale Drive and El Camino Real and seismic concerns in that area, and eliminate most EMF concerns regarding the southern segment.
10. It is reasonable to allow PG&E to determine which of five options for crossing Crystal Springs Dam to utilize, based on the timing of project construction and the preferences of the SFPUC, the County of San Mateo, and the USFWS.
11. The environmentally superior route consisting of Route Option 1B in the southern segment in conjunction with the Proposed Project's northern underground segment modified to include Route Option 4B rather than Route Option 4A poses less harm to the environment than do the other routes proposed by PG&E and other parties to this proceeding.
12. The Proposed Project's northern underground segment is preferable to the Collocation Alternative because of the risks associated with the Collocation Alternative's construction through contaminated areas and along the Bay and the loss of diversification due to its collocation with the existing underground 230 kV line.
13. Route Option 4B is preferable to Route Option 4A because it will avoid construction impacts to residences along Hoffman and Orange Streets.
14. The approved route consisting of the Trousdale Drive hybrid alternative using Route Option 1B, with five options for crossing Crystal Springs Dam, and PG&E's Proposed Project in the southern segment in conjunction with the Proposed Project's underground northern segment with Route Option 4B reflects community values more accurately than does the environmentally superior route.
15. We are not obligated to choose the least costly route if that route causes greater environmental harm than more costly routes or if some other route most closely reflects the prevalent community values.
16. The Commission has reviewed and considered the information in the FEIR before approving the project.
17. The FEIR identifies significant environmental effects of the route we approve that can be mitigated or avoided to the extent that they become not significant. The FEIR describes measures that will reduce or avoid such effects.
18. The environmental mitigation measures identified in the FEIR, with modifications in Appendix A, are feasible and will avoid significant environmental impacts. The environmental mitigation measures applicable to the approved transmission line route are in Appendix B.
19. As lead agency under CEQA, the Commission is required to monitor the implementation of mitigation measures adopted for this project to ensure full compliance with the provisions of the monitoring program.
20. The Mitigation Monitoring, Compliance, and Reporting Plan in Section G of the FEIR conforms to the recommendations of the FEIR for measures required to mitigate or avoid environmental effects of the project that can be reduced or avoided.
21. The Commission will develop a detailed implementation plan for the Mitigation Monitoring, Compliance, and Reporting Plan.
22. The FEIR identifies no significant environmental impact of the approved route that cannot be mitigated or avoided.
23. We have considered and approve of the discussion in the FEIR covering parks and recreation, cultural and historic resources, environmental impacts generally, and the public comment and response section.
24. The maximum reasonable and prudent cost for the approved project is $206,988,000.
25. The five photographs included in the comments of 280 Citizens on the proposed decision are not needed for our consideration of 280 Citizens's comments.
26. It is reasonable to not require that a Supplemental FEIR be prepared for the San Bruno Mountain and El Camino Real alternative route segments, as described in Section V.B.4.
Conclusions of Law
1. The Commission has jurisdiction over the proposed project pursuant to Pub. Util. Code § 1001 et seq.
2. Recirculation of the FEIR is not required by CEQA because no "significant new information" is contained in the FEIR, as that term is used in CEQA.
3. The motion by the City of South San Francisco and CBE-101 requesting recirculation of the FEIR should be denied.
4. Because the FEIR considered a reasonable range of feasible alternatives, it is not necessary to amend the FEIR as Daly City suggests or to recirculate the FEIR for comments on Daly City's suggested alternative.
5. PG&E's preliminary EMF management plan for the Jefferson-Martin project should be modified as described in Section VI.C.
6. The Commission has authority to specify a "maximum cost determined to be reasonable and prudent" for the Jefferson-Martin project pursuant to Pub. Util. Code § 1005.5.
7. The Commission should approve a maximum reasonable and prudent cost of $206,988,000 for this project.
8. This Commission's determination regarding the maximum reasonable and prudent cost pursuant to § 1005.5 has bearing on the amount of cost recovery PG&E may seek from the FERC.
9. The Commission retains authority to approve PG&E's EMF mitigation plan to ensure that it does not create other adverse environmental impacts.
10. Commission approval of PG&E's application, as modified herein, is in the public interest.
11. EMF mitigation measures, as described in Section VI.C and Section XII, should be adopted and made conditions of project approval.
12. The Jefferson-Martin 230 kV Transmission Line Project Addendum to Final Environmental Impact Report attached as Appendix A should be approved.
13. Project approval should be conditioned upon construction according to the following route:
Beginning at the Jefferson substation, the project should USFWS; and transitioning to an overhead configuration at a new transition structure sited at the location of existing tower 11/70;
From the new transition structure, the transmission line should should be constructed in an underground configuration along Glenview Drive to its intersection with San Bruno Avenue where it should travel east down San Bruno Avenue; and
From San Bruno Avenue, the line should be constructed consistent with PG&E's proposed underground route in the northern segment, modified to include Route Option 4B rather than Route Option 4A, to the Martin substation.
14. Project approval should be conditioned upon use of Mitigation Measure T-9a at the discretion of the City of San Bruno.
15. Project approval should be conditioned upon the completion of the mitigation measures in Appendix B. The mitigation measures are feasible and will minimize or avoid significant environmental impacts. Those mitigation measures should be adopted and made conditions of project approval.
16. Any disputes between PG&E and local governments regarding land use matters should be submitted to the Commission for resolution as provided in Section XIV of G.O. 131-D.
17. After considering and weighing the values of the community, benefits to parks and recreational areas, the impacts on cultural and historic resources, and the environmental impacts caused by the project, we conclude that the CPCN for the Jefferson-Martin project as described in this decision should be approved.
18. Based on the completed record before us, we conclude that other alternatives identified in the FEIR are infeasible, pose more significant environmental impacts, or are less consistent with community values than the route we select in this decision.
19. Pub. Util. Code § 625(a)(l)(A) does not apply to this project. However, PG&E must provide notice pursuant to § 625(a)(l)(B) if and when it pursues installation of facilities for purposes of providing competitive services.
20. The motion of 280 Citizens to reopen the record for receipt of five photographs included in its comments on the proposed decision should be denied
21. The Petition to Intervene of the San Bruno Mountain Coalition should be denied because a Supplemental FEIR for the San Bruno Mountain route alternative will not be prepared.
22. This order should be effective today so that PG&E may proceed expeditiously with construction of the authorized project. | http://docs.cpuc.ca.gov/published/FINAL_DECISION/39122-14.htm | 2017-01-16T19:12:58 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.cpuc.ca.gov |
Button
In the Layer Translator, you specify the layers in the current drawing that you want to translate, and the layers to translate them to.
Specifies the layers to be translated in the current drawing. You can specify layers by selecting layers in the Translate From list or by supplying a selection filter.
The color of the icon preceding the layer name indicates whether or not the layer is referenced in the drawing. A dark icon indicates that the layer is referenced; a white icon indicates the layer is unreferenced. Unreferenced layers can be deleted from the drawing by right-clicking in the Translate From list and choosing Purge Layers.
Specifies layers to be selected in the Translate From list, using a naming pattern that can include wild-cards. For a list of valid wild-cards, see the table in Filter and Sort the List of Layers in the User's Guide. The layers identified by the selection filter are selected in addition to any layers previously selected.
Lists the layers you can translate the current drawing's layers to.
Loads layers in the Translate To list using a drawing, drawing template, or standards file that you specify. If the specified file contains saved layer mappings, those mappings are applied to the layers in the Translate From list and are displayed in Layer Translation Mappings.
You can load layers from more than one file. If you load a file that contains layers of the same name as layers already loaded, the original layers are retained and the duplicate layers are ignored. Similarly, if you load a file containing mappings that duplicate mappings already loaded, the original mappings are retained and the duplicates are ignored.
Layer Translation Mappings
Lists each layer to be translated and the properties to which the layer will be converted. You can select layers in this list and edit their properties using Edit.
Opens the Edit Layer dialog box, where you can edit the selected translation mapping. You can change the layer's linetype, color, and lineweight. If all drawings involved in translation use plot styles, you can also change the plot style for the mapping.
Saves the current layer translation mappings to a file for later use.
Layer mappings are saved in the DWG or DWS file format. You can replace an existing file or create a new file. The Layer Translator creates the referenced layers in the file and stores the layer mappings in each layer. All linetypes used by those layers are also copied into the file.
Opens the Settings dialog box, where you can customize the process of layer translation.
Starts layer translation of the layers you have mapped.
If you have not saved the current layer translation mappings, you are prompted to save the mappings before translation begins. | http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-4a75.htm | 2017-01-16T19:13:56 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.autodesk.com |
Chained SMS
Chained SMS is a CiviCRM extension that enables you to carry out automated conversations via SMS.
Conversations start with an outbound text from CiviCRM. Subsequent texts can then be sent dependent on the reply received to the first text. These couplets of outbound and inbound texts can be combined into longer chains (hence the name). Conversations can branch with different pathways based on the answers to previous questions.
Examples
Here's a simple example:
CiviCRM: Hello, do you plan to vote for Alice as the next prime minister? [Please answer 'yes', 'no', or 'maybe']
Contact: yes
CiviCRM: Thanks for letting us know!
Here's a more involved example where the second text we send depends on the answer to the first text.
CiviCRM: Hello, are you interested in distributing placards to support Bob in the upcoming election? [Please answer 'yes' or 'no']
// If they answer 'yes'
Contact: yes
CiviCRM: Great, we can send you up to 20 placards. How many would you like? (please answer with a number)
Contact: 3
CiviCRM: OK - we'll get them sent to you as soon as possible. We may be in contact again to clarify your address and other details.
// OR if they answer 'no'
CiviCRM: OK. Thanks anyway!
Getting started
Chained SMS is easy to use. To get started, you'll need to install the extension, create some message chains, and test them out. Once you are happy that the conversation is proceeding as desired, you can start the chain by sending the first message in the chain to either a contact or a group of contacts. | https://docs.civicrm.org/chained-sms/en/latest/ | 2017-03-23T08:12:59 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.civicrm.org |
Creating an app to extend scaffold¶
Although you’ve installed it, scaffold won’t do much by itself. Think of it as a kind of abstract application, akin to the notion of an abstract class in python. In other words, scaffold is meant to be extended by an application that you create. We’ll call this the concrete app from here on out.
This is not to say scaffold doesn’t have a lot going on under the hood; like any Django app, scaffold has views, models, templates and media files. However, any one of these elements can–and should be–extended or overridden as needed. Let’s walk through the steps we’ll need to get a basic concrete app working using scaffold.
A typical use case for scaffolding is creating a tree of sections and subsections for a web site. Let’s say we’re putting together a simple news site, which will have sections for news, weather, entertainment and shopping. Some of these sections–entertainment–will have sub-sections (say, movies, theater, music, and art). Content creators will be able to create articles which can be attached to any one of these sections and subsections. All in all, a simple, common task for a web developer and one that scaffold can help with.
1. Create a new application.¶
Let’s start by creating an application for handling sections in the site. We’ll even call the application “sections”:
python manage.py startapp sections
2. Create a model which extends scaffold¶
We decide that a section should have a title, a description (which we’ll use in meta tags for SEO purposes), and a photo. We’ll start by creating a model in the models.py file that extends the
scaffold.models.BaseSection model.
Here’s some of what’s in that
BaseSection model:
class BaseSection(MP_Node): slug = models.SlugField(_("Slug"), help_text=_("Used to construct URL")) title = models.CharField(_("Title"), max_length=255) order = models.IntegerField(_("Order of section"), blank=True, default=0)
Notice that the model only defines 3 fields. Let’s ignore “order” for the moment; scaffold assumes that anything that extends
BaseSection will have at least a slug (for constructing the url of the section) and a title.
Now we can create a model which adds the fields we need. In the
models.py for your new app, add the following:
from scaffold.models import BaseSection class Section(BaseSection): description = models.TextField("Description", help_text="For SEO.") photo = models.ImageField("Photo", upload_to="section_images")
...and that’s it, we’re done. BaseSection provides a number of powerful methods that we’ll get into later.
3. Setup your URL Configuration¶
Change the default urls.py file for your Django project to the following:"), )
We’ve done a couple things here. First, we’ve enabled the admin app by uncommenting the lines which turn on autodiscover and route
/admin/ urls to the admin app. That takes care of the admin interface and allows us to manage a sections/subsections tree in the admin (Scaffold provides a number of admin views to manage your models, but these are all handled in a special
ModelAdmin class called
SectionAdmin and do not need to be specially referenced in your URL conf.)
But how will we actually view a section or subsection on the website? The second url pattern handles this:
url(r'^(?P<section_path>.+)/$', 'scaffold.views.section', name="section")
This line works for a very specific, but common URL addressing schema: Top level sections will have root-level slugs in the url. Our site has an “Entertainment” section with the slug
entertainment. The URL will therefore be. There is also a subsection of entertainment, called “Dining Out” with the slug
dining. It’s URL would be.
Like almost everything about scaffold, you are not required to use this pattern. You can write your own url conf, or completely override the
scaffold.views.section view if you like.
Note
The positioning of the url patterns here is very deliberate. The regular expression ‘^(?P<section_path>.+)/$’ is rather greedy and will match anything, therefore we put it last.
4. Register your Section model in the admin site¶
Create an admin.py file in your concrete application and register your new
Section model there:
from django.contrib import admin from models import Section from scaffold.admin import SectionAdmin admin.site.register(Section, SectionAdmin)
You’ll notice that we’re registering our concrete model with the admin site using the
SectionAdmin class in django-scaffold. This step is crucial if you want scaffold to work properly in the admin interface. The standard
admin.ModelAdmin class does not provide the special properties and views needed to manage scaffold’s concrete models.
5. Add the necessary project settings¶
All that’s left to do is add a single setting to your Django project. In your settings.py file, place the following:
SCAFFOLD_EXTENDING_APP_NAME = 'sections'
Note: this example assumes your concrete app is called sections. Use whatever you’ve named your app as the SCAFFOLD_EXTENDING_APP_NAME setting.
6. Make the the scaffold media available.¶
Django-scaffold has a number of CSS, JavaScript and image files which it uses in the admin interface. These are stored in media/scaffold in the scaffold application directory. You can copy the
scaffold folder from the scaffold media directory to your own project’s media directory, but it’s best to simply create a symlink instead. (Make sure, if you’re using apache to server this, you have the
Options FollowSymLinks directive in place.)
At this point, you should be able to start up your Django project, browse to the admin interface and start creating sections. | http://django-scaffold.readthedocs.io/en/latest/extending.html | 2017-03-23T08:09:34 | CC-MAIN-2017-13 | 1490218186841.66 | [] | django-scaffold.readthedocs.io |
a9s PostgreSQL for PCF Release Notes. The entire featureset of a9s PostgreSQL will be added to a9s PostgreSQL for PCF in subsequent releases.
Features included in this release:
- On-demand Service Instance Provisioning
- Service Instance Isolation
- High Availability
v1.0.0
Release Date: March 14, 2017
- Service Instance Capacity Upgrade
- Logging and Monitoring
- Removed stemcell from tile to the a9s Bosh for PCF tile
- Remove restriction to have three AZs configured in the a9s Bosh for PCF
v1.0.0+ (Upcoming)
- On-demand Encrypted Remote Connectivity | https://docs.pivotal.io/partners/a9s-postgresql/release-notes.html | 2017-03-23T08:07:12 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.pivotal.io |
Framework¶
- class
cppmicroservices::
Framework¶
A Framework is itself a bundle and is known as the “System Bundle”. The System Bundle differs from other bundles in the following ways:
- The system bundle is always assigned a bundle identifier of zero (0).
- The system bundle
GetLocationmethod returns the string: “System Bundle”.
- The system bundle’s life cycle cannot be managed like normal bundles. Its life cycle methods behave as follows:
Framework instances are created using a FrameworkFactory. The methods of this class can be used to manage and control the created framework instance.
- Remark
- This class is thread-safe.
- See
- FrameworkFactory::NewFramework(const std::map<std::string, Any>& configuration)
Inherits from cppmicroservices::Bundle
Public Functions
Framework(Bundle b)¶
Convert a
Bundlerepresenting the system bundle to a
Frameworkinstance.
- Parameters
b: The system bundle
- Exceptions
std::logic_error: If the bundle is not the system bundle.
- void
Init()¶
Initialize this Framework.
After calling this method, this Framework has:
- Generated a new framework UUID.
- Moved to the STATE_STARTING state.
- A valid Bundle Context.
- Event handling enabled.
- Reified Bundle objects for all installed bundles.
- Registered any framework services.
This Framework will not actually be started until Start is called.
This method does nothing if called when this Framework is in the STATE_STARTING, STATE_ACTIVE or STATE_STOPPING states.
- FrameworkEvent
WaitForStop(const std::chrono::milliseconds &timeout)¶
Wait until this Framework has completely stopped.
The
Stopmethod on a Framework performs an asynchronous stop of the Framework if it was built with threading support.
This method can be used to wait until the asynchronous stop of this Framework has completed. This method will only wait if called when this Framework is in the STATE_STARTING, STATE_ACTIVE, or STATE_STOPPING states. Otherwise it will return immediately.
A Framework Event is returned to indicate why this Framework has stopped.
- Return
- A Framework Event indicating the reason this method returned. The following
FrameworkEventtypes may be returned by this method.
FRAMEWORK_STOPPED - This Framework has been stopped.
FRAMEWORK_ERROR - The Framework encountered an error while shutting down or an error has occurred which forced the framework to shutdown.
FRAMEWORK_WAIT_TIMEDOUT - This method has timed out and returned before this Framework has stopped.
- Parameters
- | http://docs.cppmicroservices.org/en/latest/framework/doc/api/main/Framework.html | 2017-03-23T08:18:32 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.cppmicroservices.org |
Advanced User's Guide
From BaseX Documentation
This page is one of the Main Sections of the documentation. It contains details on the BaseX storage and the Server architecture, and presents some more GUI features.
- Storage
- Configuration: BaseX start files and directories
- Backups: Backup and restore databases
- Catalog Resolver Information on entity resolving
- Storage Layout: How data is stored in the database files
- Use Cases
- Statistics: Exemplary statistics on databases created with BaseX
- Twitter: Storing live tweets in BaseX
- Server and Query Architecture
- User Management: User management in the client/server environment
- Transaction Management: Insight into the BaseX transaction management
- Logging: Description of the server logs | http://docs.basex.org/wiki/Advanced_User%27s_Guide | 2017-03-23T08:09:11 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.basex.org |
New in version 2.1.
- name: run show version on remote devices eos_command: commands: show version - name: run show version and check to see if output contains Arista eos_command: commands: show version wait_for: result[0] contains Arista - name: run multiple commands on remote nodes eos_command: commands: - show version - show interfaces - name: run multiple commands and evaluate the output eos_command: commands: - show version - show interfaces wait_for: - result[0] contains Arista - result[1] contains Loopback0 - name: run commands and specify the output format eos_command: commands: - command: show version output: json. | http://docs.ansible.com/ansible/eos_command_module.html | 2017-03-23T08:21:44 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.ansible.com |
Capabilities in Technical Preview 1602 for System Center Configuration Manager
Applies to: System Center Configuration Manager (Technical Preview)
This article introduces the features that are available in the Technical Preview for System Center Configuration Manager, version 16.
Improvements to mobile device management
iOS Activation Lock
System Center, Intune can retrieve the Activation Lock bypass code and directly issue it to the device.
For details, see Help protect iOS devices with Activation Lock bypass for Configuration Manager
Improvements to Software Center in version 1602
Refresh PC machine and user policy from Software Center
A new option, Sync Policy has been added to the Options > Computer Maintenance page of Software Center that causes the PC to refresh it’s Configuration Manager machine and user policy.
Improvements to Windows 10 Servicing
In the 1602 Technical Preview we have added the following improvements for Windows 10 Servicing:
New filter options for Servicing Plans. You can now filter for Language, Required, and Title. Only upgrades that meet the specified criteria will be added to the associated deployment.
When you select the Upgrades classification for software updates synchronization, a warning dialog is displayed to let you know that WSUS hotfix 3095113 is required to successfully synchronize software updates and for the Windows 10 Servicing to work properly. From the dialog, you can go to the knowledge base article for the hotfix.
Available Windows 10 upgrades now only display in the Windows 10 Servicing \ All Windows 10 Updates node of the Configuration Manager console. These updates no longer display in the Software Updates \ All Software Updates node.
End-users that start a Windows 10 Upgrade package will be prompted with a dialog that lets them know they will be upgrading their operating system. | https://docs.microsoft.com/en-us/sccm/core/get-started/capabilities-in-technical-preview-1602 | 2017-03-23T08:36:02 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.microsoft.com |
WombatOAM reference
Logs
The logs are in
rel/wombat/wombat/log.
debug.log contains all log messages;
wombat.log contains only error, warning and info messages. These log files can
be backed up using the
scripts/wombat-debug.sh script.
WombatOAM data
All data (the list of nodes, node families, collected metrics, etc.) is
stored in the
rel/wombat/wombat/data/ directory. If you want a fresh start,
delete this directory. The disk usage information of these data files can be
collected using the
scripts/wombat-debug.sh script.
Managing older Erlang nodes
The WombatOAM agents module are compiled with an older Erlang/OTP version (R14B04) as WombatOAM itself. This way WombatOAM can manage nodes that run R14B04 or newer Erlang/OTP.
Multiple WombatOAM instances
You can run multiple WombatOAM instances on the same machine, but you need to
modify the
wombat.config files to make sure that different instances use
different ports and different directories for storing data.
Gauges
If you would like to see graphical gauges showing the current value of a few
numeric metrics, the metrics need to be specified in the
wombat.config file.
You can find a sample configuration in
sys.config. | https://docs.pivotal.io/partners/wombat/reference.html | 2017-03-23T08:06:53 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.pivotal.io |
Capabilities in Technical Preview 1603 for System Center Configuration Manager
Applies to: System Center Configuration Manager (Technical Preview)
This article introduces the features that are available in the Technical Preview for System Center Configuration Manager, version 1603. You can install this version to update and add new capabilities to your Configuration Manager technical preview site. Alternately, when you use System Center Technical Preview 5, this version installs as a baseline version of the System Center Configuration Manager Technical Preview. for this Technical Preview:
This release includes updates for previously released features but does not introduce new features. Therefore, the Features page of the Update Wizard will be empty if you have previously upgraded to 1602 and enabled all of the features included in 1602.
After your site server updates to the Technical Preview 1603, clients are unable to use any remote control features until they also update to version 1603.
The following are new features you can try out with this version.
Improvements to Software Center
New tiled view for apps
End users can now choose between a list of apps, or a tiled view of apps in the Applications tab of Software Center.
Select multiple updates in Software Center
In the Updates tab of Software Center, you can now select multiple updates, or select Update All to begin installing multiple updates simultaneously.
Improvements to remote control
Limit shared clipboard access in a remote control session.
This adds a layer of protection for the end user as previously, if the viewer was granted full control of the end user’s computer, they would be able to use the shared clipboard to transfer files from the session to their local computer in a way that was entirely transparent to the end user.
Customize the RamDisk TFTP block size and window size on PXE-enabled distribution points
In the 1603 Technical Preview, you can customize the RamDisk TFTP block size and window size for PXE-enabled distribution points. If you have customized your network, it could cause the boot image download to fail with a time-out error because the block or window size is too large. The RamDisk TFTP block size and window size customization allow you to optimize TFTP traffic when using PXE to meet your specific network requirements.
You will need to test the customized settings in your environment to determine what is most efficient..
Try it out!
Try to complete the following tasks and then use the feedback information near the top of this topic to let us know how it worked:
I can customize the RamDisk TFTP window size on the PXE-enabled distribution point.
I can customize the RamDisk TFTP block size on the PXE-enabled distribution point.
To modify the RamDisk TFTP window size
Add the following registry key on PXE-enabled distribution points to customize the RamDisk TFTP window size:
Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\DP
Name: RamDiskTFTPWindowSize
Type: REG_DWORD
Value: <customized window size>
The default value is 1 (1 data block fills the window)
To modify the RamDisk TFTP block size
Add the following registry key on PXE-enabled distribution points to customize the RamDisk TFTP window size:
Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\DP
Name: RamDiskTFTPBlockSize
Type: REG_DWORD
Value: <customized block size>
The default value is 4096 (4k). | https://docs.microsoft.com/en-us/sccm/core/get-started/capabilities-in-technical-preview-1603 | 2017-03-23T08:36:25 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.microsoft.com |
Remove syntonisation will continue Working. Just the checkout page it self will be separated.
If you want to take the checkout process out of the members profile go to WC4BP General Settings and check Turn off "Cart" tab.
If you want to add the cart back to the Profil and only want the checkout to be separated, you can integrate the cart page or create a new page and add the ShortCode [woocommerce_cart] for the cart display.
Add your cart page in the WC4BP Plugin Settings under "Integrate Pages". This will bring back the Cart to the Profile.
Was this a solution to your problem? If not get in contact and let us know. | http://docs.themekraft.com/article/292-remove-checkout-from-member-profile | 2017-03-23T08:10:47 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.themekraft.com |
New in version 0.5.
Moses is Free software for machine translation (MT). Moses is a statistical machine translation system that allows you to automatically train translation models for any language pair. All you need is a collection of translated texts (parallel corpus).
The Virtaal plugin provides the Moses machine translation output as suggestions. Read on the Moses website how to get a Moses server running, and configure the settings in your tm.ini file.
Remember that the suggestions from the Moses plugin are unreviewed machine-generated translations, that could be wrong, inaccurate, or flawed in some other way. It is meant as a way to help you increase your productivity, not to substitute the expertise of a human translator. | http://docs.translatehouse.org/projects/virtaal/en/latest/moses.html?id=virtaal/moses | 2017-03-23T08:15:49 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.translatehouse.org |
This post outlines the wiring steps for connecting 2-wire PT100 probes to the Wattics Octopus. You can connect the 2-wire PT100 temperature probes to the Octopus Gateway or I/O Extension units in any available analog input.
Hardware Connection – Octopus Gateway unit
Hardware Connection – Octopus I/O Extension unit
Note: Non supported PT100 probes will require the development of a new Octopus PT100 software driver. This steps involves you sending us a sample PT100 probe for calibration. You may contact us at [email protected] for any | http://docs.wattics.com/2016/04/08/connecting-pt100-probes-to-the-wattics-octopus/ | 2017-03-23T08:10:18 | CC-MAIN-2017-13 | 1490218186841.66 | [array(['/wp-content/uploads/2016/04/PT100-Gateway.jpg', None],
dtype=object)
array(['/wp-content/uploads/2016/04/PT100-IO.jpg', None], dtype=object)] | docs.wattics.com |
Using the a9s PostgreSQL for PCF
- Use a9s PostgreSQL for PCF with an App
- Delete a a9s PostgreSQL for PCF Service Instance
This topic describes how to use a9s PostgreSQL for Pivotal Cloud Foundry (PCF) after it has been successfully installed. For more information, see Installing and Configuring PostgreSQL for PCF.
Use a9s PostgreSQL for PCF with an App
To use the a9s PostgreSQL for PCF with with an app, follow the procedures in this section to create a service instance and bind the service instance to your app. For more information about managing service instances, see Managing Service Instances with the cf CLI.
View the Service
After the tile is installed,
a9s-postgresql and its service
plans appear in your PCF marketplace. Run
cf marketplace to
see the service listing:
$ cf marketplace Getting services from marketplace in org test / space test as admin... OK service plans description a9s-postgresql postgresql-single-small, postgresql-cluster-small, postgresql-single-big, postgresql-cluster-big This is the anynines PostgreSQL 9.4 service.
See the next section for instructions on creating PostgreSQL service instances based on the plans listed in the
cf marketplace output.
Create a Service Instance
You can provision a database with the
cf create-service.
The following example creates a
postgresql-single-small service that provisions a single VM PostgreSQL server. In contrast, the
cluster service plans provision PostgreSQL clusters consisting of 3 virtual machines.
$ the Service Instance to an App
After you create your database, run
cf bind-service to bind the service to your app:
$ cf bind-service a9s-postgresql-app my-postgresql-service
Restage or Restart Your App
To enable your app to access the service instance, run
cf restage or
cf restart to restage or restart your app.
Obtain Service Instance Access Credentials
After you bind your service instance to your app, you can find the credentials of your PostgreSQL database in the environment variables of the app.
Run
cf env APP-NAME to display environment variables. The credentials are listed under
the VCAP_SERVICES key.
$ cf env a9s-postgresql-app Getting env variables for app a9s-postgresql-app in org test / space test as admin... OK System-Provided: { "VCAP_SERVICES": { "a9s-postgresql": [ { "credentials": { "host": with a PostgreSQL client to connect to the database.
Delete a a9s PostgreSQL for PCF Service Instance
Note: Before deleting a service instance, you must back up data stored in your database. This operation cannot be undone and all the data will be lost when the service is deleted.
Follow the instructions below to unbind your service instance from all apps and delete it.
List Available Services
Run
cf service to list your available services.
$ cf service
Run
cf delete-service to delete the service.
$ cf delete-service my-postgresql-service
It may take up to several minutes to delete the service. Deleting a service deprovisions the corresponding infrastructure resources. Run the
cf services command to check the deletion status. | https://docs.pivotal.io/partners/a9s-postgresql/using.html | 2017-03-23T08:05:18 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.pivotal.io |
The scaffold API¶
Model methods¶
The following methods are provided by
scaffold.models.BaseSection:
- class
scaffold.models.
BaseSection(*args, **kwargs)¶
An abstract model of a section or subsection. This class provides a base level of functionality that should serve as scaffold for a custom section object.
get_associated_content(only=[], sort_key=None)¶
This method returns an aggregation of all content that’s associated with a section, including subsections, and other objects related via any type of foreign key. To restrict the types of objetcs that are returned from foreign-key relationships, the only argument takes a list of items with the signature:
{app name}.{model name}
For example, if you wanted to retrieve a list of all subsections and associated articles only, you could do the following:
section = Section.objects.all()[0] section.get_associated_content(only=['articles.article'])
Furthermore, if all objects have a commone sort key, you can specify that with the sort_key parameter. So, since sections have an ‘order’ field, if articles had that field as well, you could do the following:
section = Section.objects.all()[0] section.get_associated_content( only=['articles.article'], sort_key='order' )
...and the list returned would be sorted by the ‘order’ field.
get_first_populated_field(field_name)¶
Returns the first non-empty instance of the given field in the sections tree. Will crawl from leaf to root, returning None if no non-empty field is encountered.
A method to access content associated with a section via a foreign-key relationship of any type. This includes content that’s attached via a simple foreign key relationship, and content that’s attached via a generic foreign key (for example, through a subclass of the SectionItem model).
This method returns a list of tuples:
(object, app name, model name, relationship_type)
To sort associated content, pass a list of sort fields in via the sort_fields argument. For example, let’s say we have two types of content we know could be attached to a section: articles and profiles Articles should be sorted by their ‘headline’ field, while profiles should be sorted by their ‘title’ field. We would call our method thusly:
section = Section.objects.all()[0] section.get_related_content(sort_fields=['title', 'headline'])
This will create a common sort key on all assciated objects based on the first of these fields that are present on the object, then sort the entire set based on that sort key. (NB: This key is temporary and is removed from the items before they are returned.)
If ‘infer_sort’ is True, this will override the sort_fields options and select each content type’s sort field based on the first item in the ‘ordering’ property of it’s Meta class. Obviously, infer_sort will only work if the types of fields that are being compared are the same.
Middleware¶
Use the middleware if you need access to the section outside the view context.
- class
scaffold.middleware.
SectionsMiddleware¶
Middleware that stores the current section (if any) in the thread of the currently executing request
scaffold.middleware.
get_current_section()¶
Convenience function to get the current section from the thread of the currently executing request, assuming there is one. If not, returns None. NB: Make sure that the SectionsMiddleware is enabled before calling this function. If it is not enabled, this function will raise a MiddlewareNotUsed exception.
scaffold.middleware.
lookup_section(lookup_from)¶
NB: lookup_from may either be an HTTP request, or a string representing an integer. | http://django-scaffold.readthedocs.io/en/latest/api.html | 2017-03-23T08:09:42 | CC-MAIN-2017-13 | 1490218186841.66 | [] | django-scaffold.readthedocs.io |
Use a lookup
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Use a lookup
Using a lookup is transparent. You can use the created field just like any other field.
Use a lookup from the search bar
activity now appears as a field in your results and you can search on it, display it in results, and do everything you can do with any other field.
1. To search only for events that include the ModifyAccount activity, run the following search:
2. You can also select the activity field in the field menu by clicking on the histogram and then clicking Select/show in results:
3. Click on the event table icon to change the view:
4. Run a search to see a table of all activities by name:
index="test" activity=*
Use a lookup in a saved search
Now use your generated
activity field in a report. The Monitor transaction performance walkthrough showed a report, Average Duration by Activity Code. The report contained the following search:
eventtype="CONTENT_EVENTS" | transaction accountNumber subscriberID maxspan=1m maxpause=30s | timechart span=1m avg(duration) by activityCode
You want to reuse that report, but you want to modify it to show the activity by name, not just by code. You also want to set this search to run every 5 minutes, so that you can display it in a dashboard later. You can edit the search or make a copy of it and edit the copy. To edit a copy of a saved search:
1. From the Search app in Splunk Web, navigate to Manager.
2. Click Searches and Reports.
3. Locate Average Duration by Activity Code and click Clone.
4. Enter Average Duration by Activity for the Name.
5. Change the search to:
eventtype="CONTENT_EVENTS" | transaction accountNumber subscriberID maxspan=1m maxpause=30s | timechart avg(duration) by activity
6. Change the Start time to -5m and make sure the Finish time is set to -20m. Again, this gives time for the events to be finalized in the index before you run the search.
7. For this search, Schedule this search is already selected. Select Basic for the schedule type and select Every 5 minutes.
8. You do not need an alert for this search, so reset Alert Conditions to choose.
9. Click Save.
This documentation applies to the following versions of Splunk: 4.1 , 4.1.1 , 4.1.2 , 4.1.3 , 4.1.4 , 4.1.5 , 4.1.6 , 4.1.7 , 4.1.8 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/4.1.4/AppManagement/Usealookup | 2012-05-27T07:50:51 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Understand transactions in Splunk
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
Understand transactions in Splunk
For many application management use cases, including fault detection and monitoring, you frequently want to tie multiple events together into a single transaction. In Splunk, a transaction is any sequence of information exchange and related work that you want to treat as a unit.
The events in your logs often contain overlapping information that you can use to tie events together. For example, web logs often contain a session ID field that appears in more than one event. By tying these events together, you can get information about an entire session and how long it took. For troubleshooting, you can find sessions that did not complete or that exceeded some threshold. You can also use this information to find out how users are interacting with your application or your site and how long it takes them to accomplish a task. Splunk's
transaction command can be used to tie together events based on a timeframe and one or more common values. This can be used to measure duration, whether or not a transaction completed, and more. These associations can be built across tiers and using multiple keys.
See About transactions and Search for transactions in the Knowledge Manager manual for more information about transactions and the
transaction command.
This walkthrough shows how to use Splunk's
transaction command to find web transactions that exceed a specified duration. It also gives some examples of how to construct transactions that cross tiers.
Other uses of transactions
- For infrastructure logs, you can use an IP address to track the network behavior of a host through the router logs to look for network-layer abnormalities.
This documentation applies to the following versions of Splunk: 4.1 , 4.1.1 , 4.1.2 , 4.1.3 , 4.1.4 , 4.1.5 , 4.1.6 , 4.1.7 , 4.1.8 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/4.1.4/AppManagement/UnderstandtransactionsinSplunk | 2012-05-27T07:50:48 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
delete
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
delete
Synopsis
Performs a deletion from the index.
Syntax
delete
Description
Piping a search to the delete operator marks all the events returned by that search so that they are never returned by any future search. No user (even with admin permissions) will be able to see this data using Splunk.. | http://docs.splunk.com/Documentation/Splunk/4.0.6/SearchReference/Delete | 2012-05-27T08:34:40 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Upgrade to 4.3 on UNIX
Upgrade to 4.3 on UNIX
This topic describes the procedure for upgrading your Splunk instance from version 4.0.x or later to 4.3.
Before you upgrade
Make sure you've read this information before proceeding, as well as the following:.
- Note: AIX tar will fail to correctly overwrite files when run as a user other than root. Use GNU tar (
gtar) to avoid this problem.
- If you are using a package manager, such as RPM, type
rpm -U [--prefix <existing Splunk location>] splunk_package_name.rpm
- If you are using a .dmg file (on Mac OS X), double-click it and follow the instructions. Be sure specify.3 , 4.3.1 , 4.3.2 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/latest/Installation/Upgradeto4.3onUNIX | 2012-05-27T10:45:42 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Upgrade to 4.3 on Windows
Contents
Upgrade to 4.3 on Windows
This topic describes the procedure for upgrading your Windows Splunk instance from version 4.0.x or later to 4".
Note: When you upgrade to Splunk 4.3 on Windows, the installer will overwrite]
Migrate searches for local performance monitoring metrics in the Windows app
The Windows app currently does not make use of the Windows performance monitor collection features available in Splunk 4.3. While the app does work, and is supported, by default it will continue to gather local performance metrics using WMI-based inputs.
If you're using the Windows app,.
This documentation applies to the following versions of Splunk: 4.3 , 4.3.1 , 4.3.2 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/latest/Installation/Upgradeto4.3onWindows | 2012-05-27T10:45:45 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Unified Origin - LIVE¶
The Live Media and Metadata Ingest Protocol outlines how the encoder uses HTTP POST to stream the live event to an origin. See LIVE Ingest for an overview. | http://docs.unified-streaming.com/documentation/live/index.html | 2019-10-13T23:45:06 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.unified-streaming.com |
Adaptive Bitrate (ABR) Streaming¶
Table of Contents
The streaming module can also act as a Publishing Point.
A Publishing Point is simply a URL that accepts input streams from one or more software/hardware encoders.
The encoder should follow Live Media and Metadata Ingest Protocol to send the audio/video fragments to the webserver. See the Supported Encoders section in the factsheet and LIVE Ingest for an overview.
Attention
Apache should be used for Live streaming (ingest and egress). Live Media and Metadata Ingest Protocol.
--dvr_window_length¶
Length of DVR moving window (default 30 seconds).
Attention
The dvr_window_length must be shorter than the archive_length.
--archive_length¶
The length of archive to be kept (in seconds).
Attention
The archive_length must be longer than the dvr_window_length.
--archive_segment_length¶
If specified, the live presentation is archived in segments of the specified length (default to 0 seconds, meaning no segmentation takes place).
--archiving¶
When archive_segment_length is set, setting this variable to 1 keeps the archives stored on disk. (defaults to 0, not archiving, so only the last two segments are kept on disk).
-
--restart_on_encoder_reconnect¶
Used when creating the server manifest for the publishing point so when an encoder stops it can start again and publish to the same publishing point (provided the stream layout is the same and the next timestamps are higher).
The encoder needs to be configured to use Coordinated Universal Time (UTC) as the time it uses, please refer to the Encoder Settings section or the encoder manual on how to configure this.
--time_shift¶
The time shift offset (in seconds). Defaults to 0. no full archive is being kept (--archiving=0), only two segments of 60 seconds are stored on disk, and the DVR window available is 30 seconds:
#!/bin/bash mp4split -o \ --archive_segment_length=60 \ --dvr_window_length=30 \ --archiving=0 Starting with Live section for a full example.
Event ID¶
To make re-using an existing publishing point possible, a unique ID must be specified for each Live presentation. This 'EventID' allows for the restart of a publishing point that is in a stopped state, which is impossible otherwise.
To add an EventID to a Live presentation, an encoder should specify it in the URL of the publishing point to which it POSTs the livestream.">
When using an EventID, Unified Origin will archive the media files of the session associated with the ID in a subdirectory, of which the name is equal to the ID. Given the example above, the following subdirectory would be added to the directory of the publishing point after starting the new event:
2013-01-01-10_15_25/
As conflicting names of archive directories makes restarting a publishing point impossible, a unique EventID must be used for each Live session. The best method to do this is to use a date and timestamp as an EventID.
Because a new subdirectory is created for each EventID, a new encoding session with a unique EventID will not remove files from a previous session. The files that are associated with older EventIDs can be used for other purposes or they can be removed. The latter can be done by setting up a simple script that removes the files after a restart or after a certain period.
Note
When a publishing point is re-used with a new EventID, the server manifest will be associated with the new instead of the old event. Thus, from then on, all requested client manifests will be associated with the new event. However, without any additional changes, playout of the old event will still be possible, as the media segments will remain available (e.g., through requests based on a client manifest that was cached before the new event was published). | http://docs.unified-streaming.com/documentation/live/streaming.html | 2019-10-13T22:23:45 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.unified-streaming.com |
Ge
Before MySQL 5.6.1, spatial extensions only support bounding box operations (what MySQL calls minimum bounding rectangles, or MBR). Specifically, MySQL did not conform to the OGC standard. Django supports spatial functions operating on real geometries available in modern MySQL versions. However, the spatial functions are not as rich as other backends like PostGIS.
Support for spatial functions operating on real geometries was added.]. Essentially, if the input is
not a
GEOSGeometry object, the geometry field will attempt to create a
GEOSGeometry instance from the input.
For more information creating
GEOSGeometry
objects, refer to the GEOS tutorial.)}]}, ... )
GeoDjango’s lookup types may be used with any manager method like
filter(),
exclude(), etc. However, the lookup types unique to
GeoDjango are only available on spatial fields.
Filters on ‘normal’ fields (e.g.
CharField)
may be chained with those on geographic fields. Geographic lookups accept
geometry and raster input on both sides and input types can be mixed freely.
The general structure of geographic lookups is described below. A complete reference can be found in the spatial lookup reference.
Geographic queries with geometries take the following general form (assuming
the
Zipcode model used in the GeoDjango Model API):
>>> qs = Zipcode.objects.filter(<field>__<lookup_type>=<parameter>) >>> qs = Zipcode.objects.exclude(...)
For example:
>>> qs = Zipcode.objects.filter(poly__contains=pnt) >>> qs = Elevation.objects.filter(poly__contains=rst)
In this case,
poly is the geographic field,
contains
is the spatial lookup type,
pnt is the parameter (which may be a
GEOSGeometry object or a string of
GeoJSON , WKT, or HEXEWKB), and
rst is a
GDALRaster object.
The raster lookup syntax is similar to the syntax for geometries. The only
difference is that a band index can be specified as additional input. If no band
index is specified, the first band is used by default (index
0). In that
case the syntax is identical to the syntax for geometry lookups.
To specify the band index, an additional parameter can be specified on both sides of the lookup. On the left hand side, the double underscore syntax is used to pass a band index. On the right hand side, a tuple of the raster and band index can be specified.
This results in the following general form for lookups involving rasters
(assuming the
Elevation model used in the GeoDjango Model API):
>>> qs = Elevation.objects.filter(<field>__<lookup_type>=<parameter>) >>> qs = Elevation.objects.filter(<field>__<band_index>__<lookup_type>=<parameter>) >>> qs = Elevation.objects.filter(<field>__<lookup_type>=(<raster_input, <band_index>)
For example:
>>> qs = Elevation.objects.filter(rast__contains=geom) >>> qs = Elevation.objects.filter(rast__contains=rst) >>> qs = Elevation.objects.filter(rast__1__contains=geom) >>> qs = Elevation.objects.filter(rast__contains=(rst, 1)) >>> qs = Elevation.objects.filter(rast__1__contains=(rst, 1))
On the left hand side of the example,
rast is the geographic raster field
and
contains is the spatial lookup type. On the right
hand side,
geom is a geometry input and
rst is a
GDALRaster object. The band index defaults to
0 in the first two queries and is set to
1 on the others.
While all spatial lookups can be used with raster objects on both sides, not all underlying operators natively accept raster input. For cases where the operator expects geometry input, the raster is automatically converted to a geometry. It’s important to keep this in mind when interpreting the lookup results.
The type of raster support is listed for all lookups in the compatibility table. Lookups involving rasters are currently only available for the PostGIS backend..
Availability: PostGIS, Oracle, SpatiaLite, PGRaster (Native)
The following distance lookups are available:
distance_lt
distance_lte
distance_gt
distance_gte
dwithin
Distance lookups take a tuple parameter comprising:
Distanceobject containing the distance.)))
Raster queries work the same way by replacing the geometry field
point with
a raster field, or the
pnt object with a raster object, or both. To specify
the band index of a raster input on the right hand side, a 3-tuple can be
passed to the lookup as follows:
>>> qs = SouthTexasCity.objects.filter(point__distance_gte=(rst, 2, D(km=7)))
Where the band with index 2 (the third band) of the raster
rst would be
used for the lookup.
The following table provides a summary of what spatial lookups are available
for each spatial database backend. The PostGIS Raster (PGRaster) lookups are
divided into the three categories described in the raster lookup details: native support
N, bilateral native support
B,
and geometry conversion support
C.
The following table provides a summary of what geography-specific database functions are available on each spatial backend. | https://django.readthedocs.io/en/latest/ref/contrib/gis/db-api.html | 2019-10-13T22:50:33 | CC-MAIN-2019-43 | 1570986648343.8 | [] | django.readthedocs.io |
Installing Non-Prehung Storm Doors: Part 1 Whats in the box?
Your Package Should Contain the Following Components:
- Door slab with bottom expander attached to the bottom of the slab
- Installation z-bars
- screw cap covers
- Vinyl sweeps and hardware kit will ship separately from door slab
Tools & Materials You Will Need:
- Measuring Tape
- Level
- Pliers
- Power Drill
- Stiff Utility Knife
- Soft Mallet
- Hammer
- Hacksaw
- Drill Bits - 3/32", 5/16"
- Square
- Pencil
- Phillips and Flathead Screwdrivers
Installation Screw Packs:
- 1 pk #8 x 1/2" Phillips washer head screws
- For hinge size z-bar mounting onto door slab on both leaf & piano
- 1 pk #7x 1" Phillips panhead screws
- For hinge side of leaf z-bar mounted onto jamb
- 1 pk #6 x 1" Phillips panhead screws
- For exterior mounting of z-bar frame for both leaf and piano hinge z-bars
- 1 pk #8 x 1-1/2" Phillips truss head screws
- For hinge side of piano z-bar mounted onto jamb
- 12 #6 1/2" Phillips panhead screws - color matched
- 10 for astragal
- 2 for bottom expanders
Remember: Always use the appropriate personal protective equipment. | https://docs.grandbanksbp.com/article/83-installing-non-prehung-storm-doors-part-1-whats-in-the-box | 2019-10-13T22:29:56 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55d1ec04e4b089486cadc212/file-dYoTQUxNjV.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55d1ec93e4b01fdb81eb3faf/file-TqonqrGgPK.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55d1ed11e4b089486cadc21b/file-jraYNlZRYw.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55d1ed47e4b089486cadc21e/file-idLzvAV34U.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/550c6910e4b061031401efcb/images/55d1ed82e4b01fdb81eb3fb8/file-Frk1HfKoVz.png',
None], dtype=object) ] | docs.grandbanksbp.com |
OEMName
OEMName specifies the name of the Original Equipment Manufacturer (OEM) for the group or groups of app tiles that you pin to the Start screen.
This value will be appended with the word "apps", or the local equivalent, on the Start screen.
Values
Valid Configuration Passes
specialize
Parent Hierarchy
Microsoft-Windows-Shell-Setup | OEMName
Applies To
For the list of the supported Windows editions and architectures that this component supports, see Microsoft-Windows-Shell-Setup.
XML Example
The following XML output shows how to set the OEM name. On the Start screen, the group of Start Tiles created by the OEM appears with the heading: "Fabrikam apps".
<OEMName>Fabrikam</OEMName>
Related topics
Microsoft-Windows-Shell-Setup | https://docs.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-shell-setup-oemname | 2019-10-13T22:33:23 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.microsoft.com |
Amethyst AUV Essential parts
While Amethyst is universal platform which can be extensively customised, there are some accessories which are virtually integrated in to the design by their exact proportions. Its brushless motor, servo, o-rings and 26650 Li-Ion batteries. These four exact parts you should buy to start your build upon open source files. Special part is acrylic tube, wich can be from other materials or even 3D printed, but exact dimensions of tube are must be respected.
List updated: 1. 6. 2019
Basic setup essentials
Brushless Motor
Amethyst AUV Platform now using these cheapest Chinese motors. There are plenty of producers out there and unfortunately not all of the motors are then identical - in therms of diameters and quality of wiring. Here is one of proven sellers at AliExpress providing these motors with good price and quality guaranteed. link
Servo
This is most common and most cheapest servo you can find. We use them for dive planes and rudder. Their quality and power of 9g is just ok.
26650 Li-Ion Batteries
Amethyst has two battery packs wit 3 battery cells each. You need etleast 6 of this type then.
Type of battery is essential, capacity and brand is up to your preferences.
70x3 O-Rings
Used to water tight dry compartment. For one acrylic tube 4 o-rings are needed.
Use internet to find your local vendor
60x2 O-Rings
Used to water tight battery packs. For one pack one o-ring is needed. Amethyst has at least two packs so two o-rings are minimum.
Use internet to find your local vendor
Acrylic or aluminium tube
Dimension of 200x90x(84inner) mm. Acrylic is beter for use with internal camera. Aluminium will allow greater depths.
Use internet to find your local vendor | https://docs.beobachtung3d.com/amethyst-auv-essential-parts.html | 2019-10-13T23:50:58 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.beobachtung3d.com |
App Subscription
- How do I cancel my White Christmas subscription?
- How do I install White Christmas on my store?
- White Christmas terms and conditions
- Can I get a reduced app fee?
- Will I be double-charged after reinstalling?
- I think I've been charged during the trial period
- How does White Christmas work on Shopify Plus stores?
- Can I get a trial extension?
- Is White Christmas partner-friendly?
- How can I check my subscription details?
- I have closed my store. How do I cancel my subscription?
- After paying monthly subscription re-install is asking for payment again
- Does White Christmas work on my Shopify plan? | https://docs.codeblackbelt.com/category/413-app-subscription | 2019-10-13T23:21:58 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.codeblackbelt.com |
{{$root.getMsg("downLoadHelpAsPdf")}} {{helpModel.downloadHelpPdfDataStatus}}
i-net Designer
i-net Clear Reports is a reporting solution which allows generation of reports based on dynamic data. To create a report you have to design the report template and specify where the data is coming from.
The following sections provide some details on the different elements that can be used in i-net Clear Reports and their common properties. If you are looking for more hands-on information you should take a look at the "Getting Started Guide".
What to know about i-net Designer
With i-net Designer you will get one tool to access, calculate and analyze your enterprise data and create challenging and feature-rich reports in a professional way. Benefit from following i-net Designer features:
- Data access to every data source accessible by JDBC or ODBC
- Report Wizard to assist you while creating report templates
- Simple and intuitive report design by dragging and dropping data, design and representation elements
- Many different data, design and representation elements such as charts, cross tabs, text elements, sub reports, special fields, images or lines and boxes
- Powerful but simple formula language for creating custom formulas and data transforms with more than 400 predefined functions, expressions, constants and control structures
- Flexible, dynamic, and interactive reports by use of parameter fields and setting report options by formulas
- Report and data arrangement by several footers, headers or grouping sections
- Export to several formats, such as *.pdf, *.html, *.xls or ...
- Reading capability for reports from Crystal Reports
- 100% Java product, free information delivery to all platforms
Designing reports
Designing a report with i-net Designer is a simple three step procedure:
- Choose and configure your data source (if needed).
- If needed, create custom elements such as SQL queries, formula fields or parameter fields
- Drag and Drop elements from i-net Designer field browser and arrange them as you like it in i-net Designer design area.
Result can be reviewed in an i-net Designer preview and exported to one of the export formats.
Installation requirements
The one and only requirement to run i-net Designer is a Java Runtime Environment 7 (or higher).
i-net Designer features
The following sections give you a short outline of some of the most important features of i-net Designer.
First, the above image 1 shows the graphical user interface of i-net Designer. In the frame on the right side you can see the report design area. Most report elements are taken from the field browser on the left side and are arranged in the design area by dragging and dropping them there. Further functionality is available from the menu or the tool bar.
Database Wizard
As soon as you have selected your report data source i-net Designer shows the Database Wizard to set up tables and table relations. As following image 2 shows, simply drag and drop field relations and set the relation type.
Page layout / sections
Every report may be divided into several sections. At the very least, each report has a report header and footer, a detail section, and a page header and footer. But this can be enhanced. As following image 3 shows, you can insert your own sections and customize every section for its own by using i-net Designer section properties.
If you have groups in your report (see section Grouping), there are group headers and footers as well.
Fields
Data fields are an essential part of every report to represent your enterprise data. In i-net Designer those fields are taken by dragging and dropping them from the field browser (see following image 4) and arranging them in the design area.
For every field there are a lot of options to set, such as font, border, color or format options. The following field types are available.
Database Fields
A database field represents data from a table column of your configured data source. Let’s assume there is a table with products and prices in your data source and you would like to have a report with products and prices. In this case, you’d configure this data source and drag the database fields to your report detail section.
Formula Fields
By using formula fields you can create your individual data calculations and transformations. To create these fields, i-net Designer provides a formula editor with a powerful formula language. There are more than 400 predefined functions, expressions, constants or control structures to make your custom formula field possible.
The above image 5 shows a very simple sample. Using the DateDiff operation from a database field "Orders.OrderDate" the difference is calculated.
Parameter Fields
A parameter field allows interaction and gives the user more flexible and customized reports. By defining a parameter field for a report every time a user requests or refreshes this report the parameter will be prompted.
So if you designed a report about last year’s enterprise sales you could create a parameter field for the year and show sales only for the user prompted year. Another example is to suppress specific report elements corresponding to a given user prompt.
Special Fields
To add common information to your report, i-net Designer provides special fields. Examples are print date, page numbers, report author or report creation date. Currently there are 24 special fields available in i-net Designer.
Summary Fields
A summary field is a field that returns a calculated value from the records either of a group (for grouping see chapter "Grouping") or of all report records. For example, if you have a list of your enterprise product sales, a summary field could calculate the total sales. Simply calculating a sum is one of the simplest summary operations, however. Currently there are 19 operations available such as average, median, covariance, nth largest or sample standard deviation.
SQL-Expression Fields
An SQL Expression field is a type of database field. The difference is that the SQL-Expression field is a link to a native SQL expression, not to a single table column. So, use for an SQL-Expression purpose would be applying non-standard functions of your specific database management system such as generating sequences. Another purpose would be for speeding up database queries. Using the SQL-Expressions editor, as seen in the following image 8, for creating SQL-Expression fields is quite simple.
Group Name Fields
A group name field is a design element that returns the name of a group (for grouping see chapter "Grouping").
Elements
Representing data in fields is important but professional and effective reports need more than this. Therefore i-net Designer additionally provides the following powerful design and representation elements.
Cross Calculated Tables (Cross Tabs)
Using a cross tab is an effective way to represent and calculate complex data coherences. Each dimension of a crosstab can be determined by one or more variables resulting in one compound variable. For each pair of compound variables you can calculate a summary value which is printed at the corresponding location.
Charts
For graphical representation of your enterprise data, i-net Designer offers 46 different chart types. Every chart type may be customized by a multitude of settings. Example chart types are bar, line, area, 3d surface, radar or bubble.
Sub-Reports
A sub-report is a completely separate report as part of the main report. This is very helpful for creating reports with an 1:m correlation. The only difference to a normal report is the missing page header and footer. A sub-report may be marked to be rendered on demand, so that it is not really a part of the resulting report but a separate report reachable by a link.
Pictures, Text Fields, Lines and Boxes
Other graphical elements for customizing your report are pictures, text field, lines and boxes. Simply arrange them per drag and drop in your report.
Grouping
Another function to structure your data representation is grouping. Grouping means aggregating elements by a certain criterion. In the report you will get a number of bundled elements in a certain sort order. A simple example may be grouping enterprise customers by country, as showed in the following image 11.
But it is also possible to create a group hierarchy like grouping sales by countries and customers. If a group hierarchy exists in your report, so-called "drill down" operations are possible. That means suppressing information from lower group hierarchy levels and drilling down to this information (making it visible) only when needed. Referring to the example grouping sales by countries and customers, customer sales would be suppressed by default, but a report user could drill down to these sales if wanted.
Customizing i-net Designer Settings per Formula
Formulas can not only be used for calculating or transforming your enterprise data but also for customizing nearly every setting for the various i-net Designer fields and elements.
The example in the following image 12 shows how simple it is to change font color of a field depending on a field value.
| https://docs.inetsoftware.de/reporting/help/CC | 2019-10-13T22:27:33 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/graphical-user-interface.31a4378f52ec761df85c493b9b54c3e1.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/database-wizard.c44f49e1ec874b02bdfd3e7c2f7b3ceb.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/section-properties.c3cde94b73b83a4a12c44221c93d1042.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/fields-browser.1d8e7a8af04cb1c712d4c9fe542fcb1c.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/formula-editor.56c61c27575ae3a696096fa07ac4ded8.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/report-parameter-prompt.ff9aae8757e61b462c0ada003407626c.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/special-fields.f464ace1c4848dc63ebe7b4f7ccff6df.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/sql-expression-editor.56435f7e53e0946cf9463225f4a81c0a.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/cross-calculated-table.80a2d113c32e6ec76f18c4c34c09f506.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/chart-properties.57cbc6905cab34d014bc2478e2d57c33.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/picture.653cddc774c2b5fff9c292bf79909340.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/grouping.5e7925fd9fe09fd4edb03879b8449c64.jpeg',
None], dtype=object)
array(['com/inet/remote/designer/help/designer/designer/en/../images/overview/property-formula-1.952714e9f7565dbaaaba825c22a311c7.jpeg',
None], dtype=object) ] | docs.inetsoftware.de |
Styles Styles Styles Interface
Definition
public interface class Styles : System::Collections::IEnumerable
[System.Runtime.InteropServices.Guid("00020853-0000-0000-C000-000000000046")] [System.Runtime.InteropServices.InterfaceType(2)] public interface Styles : System.Collections.IEnumerable
Public Interface Styles Implements IEnumerable
- Attributes
-
- Implements
-
Remarks).
Use the Styles property to return the Styles collection.
Use the Add(String, Object) method to create a new style and add it to the collection.
Use Styles(
index), where
index is the style index number or name, to return a single Style object from the workbook Styles collection. | https://docs.microsoft.com/en-us/dotnet/api/microsoft.office.interop.excel.styles?view=excel-pia | 2019-10-14T00:06:55 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.microsoft.com |
Web
Part
Web Manager. On Selected Web Part Changed(WebPartEventArgs) Part
Web Manager. On Selected Web Part Changed(WebPartEventArgs) Part
Web Manager. On Selected Web Part Changed(WebPartEventArgs) Part
Method
Manager. On Selected Web Part Changed(WebPartEventArgs)
Definition
Raises the SelectedWebPartChanged event, which occurs after a WebPart control has either been newly selected or had its selection cleared.
protected: virtual void OnSelectedWebPartChanged(System::Web::UI::WebControls::WebParts::WebPartEventArgs ^ e);
protected virtual void OnSelectedWebPartChanged (System.Web.UI.WebControls.WebParts.WebPartEventArgs e);
abstract member OnSelectedWebPartChanged : System.Web.UI.WebControls.WebParts.WebPartEventArgs -> unit override this.OnSelectedWebPartChanged : System.Web.UI.WebControls.WebParts.WebPartEventArgs -> unit
Protected Overridable Sub OnSelectedWebPartChanged (e As WebPartEventArgs)
Parameters
A WebPartEventArgs that contains the event data.
Remarks
The OnSelectedWebPartChanged method raises the SelectedWebPartChanged event, which is typically a point in time where a developer might want to change the appearance of the user interface (UI). For example, when a new WebPart control is selected, the Web Parts control set changes the rendering of the newly selected control. After a control's selection is cleared, the rendering is returned to normal.
After a user selects a particular WebPart control for editing, the OnSelectedWebPartChanged method is called. When the user finishes editing the control and closes it, with the result that the control's selection is cleared, the OnSelectedWebPartChanged method is called again.
Notes to Inheritors
There are several options related to the SelectedWebPartChanged event, to allow developers to customize the rendering that occurs after the selected control has changed. In declarative code, within the
<asp:webpartmanager> element you could set the
OnSelectedWebPartChanged attribute, and assign to it the name of a custom method. In the custom method, you could modify the rendering of the selected controls when the event occurs. Another option is to inherit from the WebPartManager class and override the method. A third option is to customize the rendering at the zone level; for example, you can inherit from the EditorZoneBase class, and override its OnSelectedWebPartChanged(Object, WebPartEventArgs) method, to customize the rendering of controls selected and cleared during the editing process. | https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.webparts.webpartmanager.onselectedwebpartchanged?view=netframework-4.8 | 2019-10-13T23:01:21 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.microsoft.com |
Get educationClass
Retrieve a class from the system. A class is a universal group with a special property that indicates to the system that the group is a class. Group members represent the students; group admins represent the teachers in the class. If you're using the delegated token, the user will only see classes in which they are members.
Permissions
One of the following permissions is required to call this API. To learn more, including how to choose permissions, see Permissions.
HTTP request
GET /education/classes/{id}
Optional query parameters
This method supports the OData Query Parameters to help customize the response.
Request headers
Request body
Do not supply a request body for this method.
Response
If successful, this method returns a
200 OK response code and an educationClass object in the response body.
Example
Request
Here is an example of the request.
GET{class-id}
Response
The following is an example of the response.
Note: The response object shown here might be shortened for readability. All the properties will be returned from an actual call.
HTTP/1.1 200 OK Content-type: application/json Content-length: 224 { "id": "11023", "description": "English Level 2", "classCode": "11023", "createdBy": { "user": { "displayName": "Susana Rocha", "id": "14012", } }, "displayName": "English - Language 2", "externalId": "301", "externalName": "English Level 1", "externalSource": "School of Fine Art", "mailNickname": "fineartschool.net " }
Feedback | https://docs.microsoft.com/en-us/graph/api/educationclass-get?view=graph-rest-1.0 | 2019-10-13T23:24:32 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.microsoft.com |
File-backed memory¶
Important
As of the 18.0.0 Rocky release, the functionality described below is only supported by the libvirt/KVM driver.
The file-backed memory feature in Openstack allows a Nova node to serve guest memory from a file backing store. This mechanism uses the libvirt file memory source, causing guest instance memory to be allocated as files within the libvirt memory backing directory.
Since instance performance will be related to the speed of the backing store, this feature works best when used with very fast block devices or virtual file systems - such as flash or RAM devices.
When configured,
nova-compute will report the capacity configured for
file-backed memory to placement in place of the total system memory capacity.
This allows the node to run more instances than would normally fit
within system memory.
When available in libvirt and qemu, instance memory will be discarded by qemu
at shutdown by calling
madvise(MADV_REMOVE), to avoid flushing any dirty
memory to the backing store on exit.
To enable file-backed memory, follow the steps below:
Important
It is not possible to live migrate from a node running a version of OpenStack that does not support file-backed memory to a node with file backed memory enabled. It is recommended that all Nova compute nodes are upgraded to Rocky before enabling file-backed memory.
Prerequisites and Limitations¶
- Libvirt
File-backed memory requires libvirt version 4.0.0 or newer. Discard capability requires libvirt version 4.4.0 or newer.
- Qemu
File-backed memory requires qemu version 2.6.0 or newer.Discard capability requires qemu version 2.10.0 or newer.
- Memory overcommit
File-backed memory is not compatible with memory overcommit.
ram_allocation_ratiomust be set to
1.0in
nova.conf, and the host must not be added to a host aggregate with
ram_allocation_ratioset to anything but
1.0.
- Huge pages
File-backed memory is not compatible with huge pages. Instances with huge pages configured will not start on a host with file-backed memory enabled. It is recommended to use host aggregates to ensure instances configured for huge pages are not placed on hosts with file-backed memory configured.
Handling these limitations could be optimized with a scheduler filter in the future.
Configure the backing store¶
Note
/dev/sdb and the
ext4 filesystem are used here as an example. This
will differ between environments.
Note
/var/lib/libvirt/qemu/ram is the default location. The value can be
set via
memory_backing_dir in
/etc/libvirt/qemu.conf, and the
mountpoint must match the value configured there.
By default, Libvirt with qemu/KVM allocates memory within
/var/lib/libvirt/qemu/ram/. To utilize this, you need to have the backing
store mounted at (or above) this location.
Create a filesystem on the backing device
# mkfs.ext4 /dev/sdb
Mount the backing device
Add the backing device to
/etc/fstabfor automatic mounting to
/var/lib/libvirt/qemu/ram
Mount the device
# mount /dev/sdb /var/lib/libvirt/qemu/ram
Configure Nova Compute for file-backed memory¶
Enable File-backed memory in
nova-compute
Configure Nova to utilize file-backed memory with the capacity of the backing store in MiB. 1048576 MiB (1 TiB) is used in this example.
Edit
/etc/nova/nova.conf
[libvirt] file_backed_memory=1048576
Restart the
nova-computeservice | https://docs.openstack.org/nova/latest/admin/file-backed-memory.html | 2019-10-13T23:35:07 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.openstack.org |
Announcing the Flexify SuperFeed (Version 2)
After 4 years on the Shopify App Store we will launch a new version of Flexify in October 2019. We're so excited about it that we're calling it the SuperFeed.
This new version offers a better user interface, making advanced features available to all users. No coding skills required! It also uses new technology to generate your feeds faster & with less errors. This new tech lays the foundation for many new exciting features we have planned for the next year.
What does this mean for you? #
IF YOU ARE NEW TO FLEXIFY: Once the SuperFeed goes live all new users will be using it by default.
IF YOU ARE ON THE FREE PLAN TODAY: Nothing changes. Your old feed still works. But you can opt-in to receive all the benefits of the new version. If you opt-in your limit will be changed from 1000 ITEMS to 1000 PRODUCTS. You can go back to the old feed at any time.
IF YOU ARE ON A PREMIUM PLAN TODAY: Nothing changes. Your old feed still works. But you can opt-in to receive all the benefits of the new version. If you opt-in to use the SuperFeed you will receive the benefits of our MEDIUM plan ($49) for the price of the SMALL plan ($29). You can go back to the old feed at any time.
What is new? #
The Flexify SuperFeed admin interface has been improved to make it easier to set advanced features. The following features are now available to you even if you don't have coding experience:
Switching from Item Count to Product Count #
The single most confusing concept our support team keeps explaining is that Facebook & Shopify think about products in fundamentally different ways:
- On Shopify each product can have multiple variants (e.g. colors, sizes, etc ...)
- Facebook only understands individual products.
This means that if e.g. you have 10 products with 5 variants per products you will end up with 50 products in your feed.
To add to the confusion there is no way for us to predict how many items will be in your feed before we generate it. And on top of that we were using this confusing & unpredictable metric to bill our customers.
In the old version of Flexify we were billing based on the number of items in your feed, an unpredictable and confusing metric. That was a mistake.
We are fixing this mistake by switching to billing by product count.
What are the new prices?
We now have the following self-service plans available:
- FREE: 1000 products in feed, no premium features
- SMALL $29: 2000 products in feed, all premium features
- MEDIUM $49: 5000 products in feed, all premium features
- LARGE (contact us): >5000 products in feed, all premium features
Because in our experience larger amounts of products require a more hands-on approach we have higher tiers available if you contact us.
- CUSTOM: if you have more than 5000 products, contact us for pricing
We handle millions of products every day, just reach out to us & we'll set you up! | https://docs.flexify.net/help/facebook-product-feed/announcing-the-superfeed | 2019-10-13T23:36:14 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.flexify.net |
Trace
Context
Trace Event Args Context
Trace Event Args Context
Trace Event Args Context
Class
Event Args
Definition
Provides a collection of trace records to any method that handles the TraceFinished event. This class cannot be inherited.
public ref class TraceContextEventArgs sealed : EventArgs
public sealed class TraceContextEventArgs : EventArgs
type TraceContextEventArgs = class inherit EventArgs
Public NotInheritable Class TraceContextEventArgs Inherits EventArgs
- Inheritance
-
Examples>
<%@ Page ' </script>
Remarks. | https://docs.microsoft.com/en-us/dotnet/api/system.web.tracecontexteventargs?view=netframework-4.8 | 2019-10-13T23:55:27 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.microsoft.com |
Manage settings
You can modify the enhanced security settings for end users at any time depending on your requirement.
On the Get Started page, click Configure Content Access.
On the Manage tab, in the Content Access Configuration page, click Edit.
Click the trash can for the category or the website that you want to delete.
Click Add to block, allow, or redirect to a secure browser a website category or website.
Click Save for the changes to take effect. | https://docs.citrix.com/en-us/citrix-access-control/manage-settings.html | 2019-10-14T00:25:07 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.citrix.com |
Optimizing performance of MVC sites
This page contains recommendations that can help you optimize the performance of your MVC site.
Optimizing on-line marketing performance
For high-traffic websites that use the Kentico EMS on-line marketing functionality, we strongly recommend following the Best practices for EMS performance in MVC in addition to the recommendations listed below.
Transferring data between storage spaces and the application can be one of the main performance bottlenecks. When retrieving content from the Kentico database (or other external sources), load only the data that you require in your views or other related code. In most cases, you do not need all data columns available in the source. The less data you retrieve, the faster your pages will be.
When loading data using the Kentico DocumentQuery or ObjectQuery API (for example via generated providers or in custom repositories and services), you can limit which data columns are loaded through the Columns method – specify the names of the required columns as an array of strings.
// Gets article data using a generated provider IEnumerable<Article> articles = ArticleProvider.GetArticles() .Columns("NodeID", "NodeAlias", "NodeSiteID", "ArticleTitle", "ArticleText") // Limits the retrieved data columns .OnSite("MySite") .Culture("en-US") .Path("/Articles/", PathTypeEnum.Children) .ToList();
Tip: In custom repositories or services, you may often need to wrap and extend DocumentQuery or ObjectQuery calls from other classes or providers. In these scenarios, use the AddColumns method instead of Columns – this adds to the list of retrieved columns without overriding any columns specified by the previous call.
Caching data and page output
Whenever possible, use caching for retrieved data and the output of controller actions:
- See Caching on MVC sites for more information.
- To avoid displaying of outdated content, set up cache dependencies.
- Use cache keys containing variables to cache different versions of dynamic content (for example different page output for each user).
Enabling IIS content compression
IIS content compression allows the system to lower the volume of transferred data by compressing the resources. There are two types of compression available in the IIS:
- Dynamic compression – compression of dynamically generated responses
- Static compression – compression of static content (images, document and other files on the file system)
To enable IIS compression in your MVC project:
- Install the required compression modules.
- Add a urlCompression element into the projects Web.config file and specify the following settings:
- doDynamicCompression – enables or disables the dynamic compression of content. The default value is true.
- doStaticCompression – enables or disables the static compression of content. The default value is true.
dynamicCompressionBeforeCache incompatible with Kentico environment
Using dynamicCompressionBeforeCache attribute of the urlCompression element is not possible in the Kentico environment.
Kentico is using a custom HTTP module, which modifies the HTML output with output filters (resolves relative links, adds anti-forgery tokens). When using the dynamicCompressionBeforeCache setting, HTML output is compressed before any output filters are applied and this results in invalid HTML output.
Scaling out MVC sites
If your site's performance is not satisfactory after you have taken all possible steps to optimize the website's code, you can consider scaling your hosting environment to multiple web farm servers.
With the MVC development model, your MVC application and Kentico application should already be configured to run as servers in an automatic web farm (see Starting with MVC development). The web farm ensures that the MVC application invalidates cache according to content or setting changes made in the Kentico application and vice versa.
Licensing of the web farm servers works automatically for basic scenarios – see Kentico licensing for MVC applications for details.
If you wish to use a web farm to scale the site's performance, you can add further instances of the MVC application. We recommend using the following process:
- Develop and test the site in a web farm with two servers (one MVC application, one Kentico application).
- Deploy any number of additional instances of the same MVC application. Each instance must connect to the same Kentico database.
The automatic web farm mode automatically registers the new instances as web farm servers and ensures correct synchronization (among all instances of the MVC application and the Kentico application). One way to create a scalable website is to deploy your instance to cloud hosting.
If you need to scale the performance of the Kentico administration interface used to manage the site content and settings, you can also run multiple instances of the Kentico application in the web farm. In this scenario, you need to use one of the servers as the "primary" Kentico instance (for example for holding files shared by the entire web farm, such as locally stored search indexes).
Note: If you scale out to have more than the two basic web farm servers per MVC site (live site + administration), you need to have a license that supports the additional number of web farm servers.
Was this page helpful? | https://docs.kentico.com/k12/configuring-kentico/optimizing-performance-of-mvc-sites | 2019-10-13T23:54:31 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.kentico.com |
Update plannerAssignedToTaskBoardTaskFormat
Update the properties of plannerAssignedToTaskBoardTaskFormat object.
Permissions
One of the following permissions is required to call this API. To learn more, including how to choose permissions, see Permissions.
HTTP request
PATCH /planner/tasks/{id}/assignedToTaskBoardFormat
Optional updated plannerAssignedT.
Example
Request
Here is an example of the request.
PATCH{task-id}/assignedToTaskBoardFormat Content-type: application/json Content-length: 96 If-Match: W/"JzEtVGFzayAgQEBAQEBAQEBAQEBAQEBAWCc=" { "orderHintsByAssignee": { "aaa27244-1db4-476a-a5cb-004607466324": "8566473P 957764Jk!" } }
Response" }
Feedback | https://docs.microsoft.com/en-us/graph/api/plannerassignedtotaskboardtaskformat-update?view=graph-rest-1.0 | 2019-10-13T23:43:48 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.microsoft.com |
Series Types
Chart series have two primary functions: (1) they contain a collection of specific data points representing the actual data, and (2) they visualize the data using an internal predefined model determined by the series type. RadChartView supports a number of series types – Bar, Line (Spline), Area (Spline Area), Scatter, Stock, Pie, Donut, Polar, Radar – and each of them can be used only with a certain area type - Categorical, Pie or Polar. The following schema shows the set of series supported by each area type:
- Cartesian Area: Supports Bar, Line, Area, Ohlc, Candlestick and Scatter series. This area sets a standard Cartesian coordinate system where the position of each point on the plane is identified through a pair of values. Series supported in RadChartView’s Cartesian area can be classified in two groups – scatter and categorical. While the former positions its point using two numerical values and, therefore, requires two numerical axes, the latter uses one numerical and one categorical value to plot its data points. The Categorical series you will be able to use with RadChartView are Bar, Line (Spline), Area (Spline Area), Ohlc, Candlestick. Currently the control supports only Scatter point series.
The CartesianSeries also defines the ClipLabels property which is by default set to false. This property determines whether the series labels will be clipped according to the viewport`s width and height.
Pie Area: Supports Pie, Donut series. Unlike all other areas, Pie area does not use axes. It displays each data point as slices with arc size directly proportional to the magnitude of the raw data point’s value. The supported series types are Pie and Donut.
Polar Area: Supports Polar (Point, Line, Area) and Radar (Point, Line, Area) series. This area setups a polar coordinate system, where each value is positioned using a value-angle couple. Additionally, the Polar area renders Radar series, which splits the polar area into equal-size category sectors.
Funnel Area: Supports FunnelSeries. Funnel area doesn't use an axis as well. It displays a single series of data in progressively decreasing or increasing proportions, organized in segments, where each segment represents the value for the particular item from the series. The items' values can also influence the height and the shape of the corresponding segments.
Each series type contains a DataPoints collection that contains specific data points. For example, Bar, Line and Area series work with CategoricalDataPoints. Scatter and Pie series, however, operate only with ScatterDataPoints and PieDataPoints respectively. Each series type visualizes the data in the best way to present the information stored in its data points. The screenshots below illustrate how each series type is rendered:
Figure 1: Series Types
Chart series support both bound and unbound mode. All series contain the following two binding properties - DataSource and ValueMember. Once a DataSource is assigned, the ValueMember property is used to resolve the property of the data records visualized by the data points. Different series types introduce additional data binding properties, related to the specific of the contained data. These are CategoricalMember, AngleMember, XValueMember, YValueMember. In unbound mode, categorical series can be populated with data manually using the DataPoints collection.
A common scenario for RadChartView is to contain several series instances, which could be of different types. For example, you can easily combine derivatives of CategoricalSeries class.
The chart series also have a mechanism for combining data points that reside in different series but have the same category. This mechanism is controlled via the CombineMode property. The combine mode can be None, Cluster and Stack.
None: The series will be plotted independently of each other.
Cluster: The data points will be in the same category huddled close together.
Stack: Plots the points on top of each other.
Stack100: Presents the values of one series as a percentage of the other series. | https://docs.telerik.com/devtools/winforms/controls/chartview/series-types/series-types | 2019-10-13T22:14:32 | CC-MAIN-2019-43 | 1570986648343.8 | [array(['images/chartview-series-types-overview001.png',
'chartview series types overview 001'], dtype=object)
array(['images/chartview-series-types-overview002.png',
'chartview series types overview 002'], dtype=object)] | docs.telerik.com |
The Material palette contains a library of materials to choose from. Each material contains modifiers which cause it to interact with light in a unique way. Unlike some other palettes, materials are not added to, or removed from this palette. They can be replaced with materials loaded from disk files.
The large material thumbnail acts as a picker — click within this window and drag to the canvas to select the material at that point. In addition, ZBrush remembers all materials used in a document; they’re saved with the document, whether customized or not. Editing or loading a material here also changes any painted elements on the canvas which use the corresponding material.
Load
The Load Material button replaces the selected material with a saved one.
Save
The Save Material button saves the selected material to a disk file..
Show Used
The Show Used button examines all materials used in the document, and displays their corresponding icons in this palette.
CopyMat / Paste Mat
Allows you to copy one material and paste it in to replace another. You can do this if you want to change the replaced material wherever that material is used in the scene. It’s also useful for getting a copy of a starting material, so you can modify the copy but not affect the original.
Material palette sub-palettes
Reference Guide > Material | http://docs.pixologic.com/reference-guide/material/ | 2019-10-13T23:00:37 | CC-MAIN-2019-43 | 1570986648343.8 | [] | docs.pixologic.com |
When you click the Create Group button (under the FlowJo tab or in the static toolbar) you are presented with the dialog window below.
This window also appears when you double-click on a group in order to edit the criteria for membership.
Overview:
The basic process for creating a group is to:
- Give the Group a name, color, and text style.
- Choose whether or not to make the Group “Live” (FlowJo will examine all samples subsequently added to the Workspace and add them to a group if they match that group’s criteria) and/or “Synchronized” (FlowJo will automatically apply changes — namely gate adjustments — made in one of the samples of the group to all other samples within that group).
- Assign the “Role” of a group. The choices available are “Test ”, “Replicate”, “Compensation”, “Baseline”, “Gating Level”, “Controls”, and “Instrumentation”. This will help the user know the purpose of creating a particular group.
- Select the sample inclusion criteria for groups. This could be a specific staining protocol introduced during acquisition or a keyword combination and could include reference samples in another group. Note: You don’t have to utilize keywords. You can also make “manual” groups in which you just manually drag and drop samples into the group within the workspace.
After creating a new group, it will appear within the group panel of the workspace.
If you assigned a set of keyword restrictions to the group, the samples that match the keyword restrictions will be present in the group automatically. You can also manually drag any samples into any group you’ve created. They will automatically become members of the group irrespective of the criteria you have established for the group. You may also remove samples from a group by selecting and deleting (delete or backspace key on your keyboard) them (they will not be deleted from the workspace if they are members of the All Samples group – if you want to remove a sample entirely from the workspace, you have to delete it from the All Samples group).
You can reopen this dialog and change group attributes such as the name, style, and color as well as automatic sample selection criteria by double-clicking the group name in the groups panel of the Workspace Window. When selecting a group in the “group” panel note that it will appear in the “group analysis” panel, where you can easily work on it.
For more details, check out Group Hints.
Further Details:
Select a Group’s Color and Text Style: The color and style apply not only to the nodes of the group in the group list panel, but also to any node belonging to a sample (like gates) which was added to the group AND is still identical to the group’s version of the gate. This is how you can tell if a gate has been modified from the group’s version – it will not appear in the same color/style as the groups, but rather in plain, black text. Therefore, you should avoid having group nodes shown in plain, black text, as you would be unable to distinguish between gates that belong to the sample and gates which belong to the group.
Live Group: If the Live Group checkbox is selected, FlowJo will examine any new data files that you add to the workspace to see if they fit the criteria for this group. If they do, then they are automatically added to the group, and any group based analyses are applied to them as well. If you are constructing template work spaces for future use, be sure that this check box is selected.
Synchronize Gates: If you choose Synchronized, FlowJo will automatically update the gates for all the samples in the group as soon as one gate is adjusted in one of the samples of the group. This option saves time by applying newly adjusted gates automatically to all the samples in the corresponding group, skipping the step of dragging the newly adjusted gate into the group name. Please note, however, that it does not allow for sample variation of gates. All the gates will be the same as the last adjusted gate on any sample in the group.
Stain Protocol List: FlowJo examines all data files in your workspace to build a list of the different staining panels. These are shown in the box in the middle. You can select one or more of these to state that only samples stained with the particular combination of reagents should be added to the group.
Create Multiple Groups: If you select more than one stain(s) as a criteria for the creation of a group, the Multiple button will become available and selecting it will make a separate group for each selected reagent panel.
More Choices/Fewer Choices: This allows you to manage the keywords which denote groups. You can specify one or more search criteria based on FCS keyword values. Select an FCS keyword, a comparison, and the value to compare against; FlowJo will add samples to the group only if they meet the search criteria.
Create a Subgroup: You can specify that a group is composed of only samples from another group (or to use all samples in another group), essentially providing you with the ability to make subgroups. Samples must fit all criteria specified in this window in order to be added to the group. | https://docs.flowjo.com/flowjo/workspaces-and-samples/ws-groups/ws-groupdialog/ | 2022-09-24T22:01:29 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['https://docs.flowjo.com/wp-content/uploads/sites/6/2013/03/Screenshot_102115_062422_PM.jpg',
'Screenshot_102115_062422_PM'], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/sites/6/2013/03/Screenshot_102115_063150_PM.jpg',
'Screenshot_102115_063150_PM'], dtype=object) ] | docs.flowjo.com |
Newton Series (3.3.0 - 4.2.x) Release Notes¶
4.2.2¶
Bug Fixes¶
Loopback BMC addresses (useful e.g. with virtualbmc) are no longer used for lookup.
4.2.1¶
Bug Fixes¶
LLC hook now formats the chassis id and port id MAC addresses into Unix format as expected by ironic.
LLC hook ensures that correct port information is passed to patch_port function
LLC hook no longer assumes all inspected ports are added to ironic
4.2.0¶
New Features¶
Adds new processing hook pci_devices for setting node capabilities based on PCI devices present on a node and rules in the [pci_devices] aliases configuration option. Requires “pci-devices” collector to be enabled in IPA.
Bug Fixes¶
Use only single quotes for strings inside SQL statements. Fixes a crash when PostgreSQL is used as a database backend.
Set the node to the error state when it failed get data from swift.
4.1.0¶
New Features¶
Added GenericLocalLinkConnectionHook processing plugin to process LLDP data returned during inspection and set port ID and switch ID in an Ironic node’s port local link connection information using that data.
Add configuration option processing.power_off defaulting to True, which allows to leave nodes powered on after introspection.
Bug Fixes¶
Fix setting non string ‘value’ field for rule’s actions. As non string value is obviously not a formatted value, add the check to avoid AttributeError exception.
4.0.0¶
Prelude¶
Starting with this release only ironic-python-agent (IPA) is supported as an introspection ramdisk.
New Features¶
Added a new “capabilities” processing hook detecting the CPU and boot mode capabilities (the latter disabled by default).
File name for stored ramdisk logs can now be customized via “ramdisk_logs_filename_format” option.
Upgrade Notes¶
The default file name for stored ramdisk logs was change to contain only node UUID (if known) and the current date time. A proper “.tar.gz” extension is now appended.
API “POST /v1/rules” returns 201 response code instead of 200 on creating success. API version was bumped to 1.6. API less than 1.6 continues to return 200.
Default API version was changed from minimum to maximum which Inspector can support.
Support for the old bash-based ramdisk was removed. Please switch to IPA before upgrading.
Removed the deprecated “root_device_hint” alias for the “raid_device” hook.
Bug Fixes¶
Fixed “/v1/continue” to return HTTP 500 on unexpected exceptions, not HTTP 400.
Fix response return code for rule creating endpoint, it returns 201 now instead of 200 on success.
The “size” root device hint is now always converted to an integer for consistency with IPA.
3.3.0¶
New Features¶
Ironic-Inspector is now using keystoneauth and proper auth_plugins instead of keystoneclient for communicating with Ironic and Swift. It allows to finely tune authentification for each service independently. For each service, the keystone session is created and reused, minimizing the number of authentification requests to Keystone.
Add support for using Ironic node names in API instead of UUIDs. Note that using node names in the introspection status API will require a call to Ironic to be made by the service.
Database migrations downgrade was removed. More info about database migration/rollback could be found here
Introduced API “POST /v1/introspection/UUID/data/unprocessed” for reapplying the introspection over stored data.
Upgrade Notes¶
Operators are advised to specify a proper keystoneauth plugin and its appropriate settings in [ironic] and [swift] config sections. Backward compatibility with previous authentification options is included. Using authentification informaiton for Ironic and Swift from [keystone_authtoken] config section is no longer supported.
Handling ramdisk logs was moved out of the “ramdisk_error” plugin, so disabling it will no longer disable handling ramdisk logs. As before, you can set “ramdisk_logs_dir” option to an empty value (the default) to disable storing ramdisk logs.
Deprecation Notes¶
Most of current authentification options for either Ironic or Swift are deprecated and will be removed in a future release. Please configure the keystoneauth auth plugin authentification instead..
Fixed the “is-empty” condition to return True on missing values.
The lookup procedure now uses all valid MAC’s, not only the MAC(s) that will be used for creating port(s).
The “enroll” node_not_found_hook now uses all valid MAC’s to check node existence, not only the MAC(s) that will be used for creating port(s).
The ramdisk logs are now stored on all preprocessing errors, not only ones reported by the ramdisk itself. This required moving the ramdisk logs handling from the “ramdisk_error” plugin to the generic processing code. | https://docs.openstack.org/releasenotes/ironic-inspector/newton.html | 2022-09-24T23:09:39 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.openstack.org |
Aggregations#
Alongside the search functionality, OpenSearch® offers a powerful analytics engine able to perform summary calculations of your data, and extract statistics and metrics very rapidly. Results of these “aggregations” can be then visualised with OpenSearch Dashboards.
Aggregations can be divided into three groups:
metric aggregation performs simple calculations on values extracted from the fields of the documents, for example finding minimum or maximum value, calculating average or collecting statistics about field values.
bucket aggregation distributes documents over a set of buckets based on provided criteria. For example, based on predefined ranges of values or based on how often a value is encountered in a field. Bucket aggregation is also used to create histograms.
pipeline aggregations combine several aggregations in a way that allows using a result of one aggregation as an intermediate step to create a refined output. With pipeline aggregations you can build moving averages, cumulative sums and perform a variety of other mathematical calculations over the data in your documents. | https://docs.aiven.io/docs/products/opensearch/concepts/aggregations.html | 2022-09-24T23:46:35 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.aiven.io |
Get Arbor¶
To get started quickly with Arbor using its Python API on your personal machine, we advise that you install Arbor’s Python package. If you wish to use the C++ API, you can use the Spack package, or build Arbor from source. Note that you can also build the Python bindings using these methods. | https://docs.arbor-sim.org/en/latest/install/index.html | 2022-09-24T21:46:12 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.arbor-sim.org |
Configuring Okta and Jamf Pro for Mobile Device Trust
Updated: 20 May 2022
Product: Jamf Pro
Configuring Okta and Jamf Pro for Mobile Device Trust involves the following steps:
Enable Mobile Device Trust in Okta for Jamf Pro
Configure managed app settings in Jamf Pro for Okta Mobile
General Requirements
User-initiated enrollment enabled for iOS devices in Jamf Pro
Okta Device Trust enabled on the Okta instance
Apps utilizing SAML or WS-FED
In addition, apps must be configured to only allow access with Device Trust. This requires removing display of the app from Okta Mobile.
Step 1: Enable Mobile Device Trust in Okta for Jamf Pro
- Enable Device Trust on the Okta instance.
- Configure Mobile Device Trust in Okta. For more information about configuring Mobile Device Trust in Okta, see the following Okta product documentation:
- When configuring settings on the Enable Mobile Device Trust dialog box, do the following:
- In Trust is established by, select Other.
- In the Enrollment link field, enter your Jamf Pro enrollment URL. The enrollment URL is the full URL for the Jamf Pro server followed by "/enroll".Example: (hosted on Jamf Cloud) (hosted on-premise)
Step 2: Configure managed app settings in Jamf Pro for Okta Mobile
- Log in to Jamf Pro.
- Click Devices at the top of the page.
- Click Mobile Device Apps.
- Add a new App Store app for Okta Mobile or edit the existing app if already added to Jamf Pro. For more information, see Apps Purchased in Volume in the Jamf Pro Documentation.
- On the General pane, ensure that the Make App managed when possible checkbox is selected, and then select the Make app managed if currently installed as unmanaged checkbox.
- Click the App Configuration tab.
- Copy the following key/string combination and paste it in the Preferences field, replacing Okta generated token goes here with the Secret Key Value that was generated when setting up Okta Device Trust:
<dict> <key>managementHint</key> <string>Okta generated token goes here</string> </dict>
- Use the Scope, Self Service, and VPP panes to configure app distribution settings as needed. For more information, see Apps Purchased in Volume in the Jamf Pro Documentation.
- Click Save. | https://docs.jamf.com/technical-articles/Configuring_Okta_and_Jamf_Pro_for_Mobile_Device_Trust.html | 2022-09-24T23:14:03 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.jamf.com |
Service Brokers provides resources for building service brokers and routing services.
Service Broker Resources
The Custom Services Overview topic gives a high-level description of how service brokers work in Pivotal Cloud Foundry (PCF).
The Service Broker API topic gives a more detailed explanation of PCF service brokers, and provides a full specification for the endpoints, requests, responses, and status codes that a service broker must support.
The Example Service Brokers topic offers example brokers written in Ruby, Java, and Go.
The Supporting Multiple Cloud Foundry Instances topic has information about registering a service broker with multiple Cloud Foundry instances.
Route Services Resources
Route Services explains how route services work, and what are the different architectures for using them in a Cloud Foundry PCF Apps Manager UI but not the plain text output of
cf marketplace. | https://docs.pivotal.io/tiledev/2-3/service-brokers.html | 2022-09-24T22:11:41 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.pivotal.io |
25.6.3 Truenames.
function file-truename filename.
If the target of a symbolic links has remote file name syntax,
file-truename returns it quoted. See Functions that Expand Filenames.
function file-chase-links filename \&optional limit"
function file-equal-p file1 file2.
function file-name-case-insensitive-p filename
Sometimes file names or their parts need to be compared as strings, in which case it’s important to know whether the underlying filesystem is case-insensitive. This function returns
t if file
filename is on a case-insensitive filesystem. It always returns
t on MS-DOS and MS-Windows. On Cygwin and macOS, filesystems may or may not be case-insensitive, and the function tries to determine case-sensitivity by a runtime test. If the test is inconclusive, the function returns
t on Cygwin and
nil on macOS.
Currently this function always returns
nil on platforms other than MS-DOS, MS-Windows, Cygwin, and macOS. It does not detect case-insensitivity of mounted filesystems, such as Samba shares or NFS-mounted Windows volumes. On remote hosts, it assumes
t for the ‘
smb’ method. For all other connection methods, runtime tests are performed.
function file-in-directory-p file dir.
function vc-responsible-backend file
This function determines the responsible VC backend of the given
file. For example, if
emacs.c is a file tracked by Git,
(vc-responsible-backend "emacs.c") returns ‘
Git’. Note that if
file is a symbolic link,
vc-responsible-backend will not resolve it—the backend of the symbolic link file itself is reported. To get the backend VC of the file to which
file refers, wrap
file with a symbolic link resolving function such as
file-chase-links:
(vc-responsible-backend (file-chase-links "emacs.c")) | https://emacsdocs.org/docs/elisp/Truenames | 2022-09-24T23:35:31 | CC-MAIN-2022-40 | 1664030333541.98 | [] | emacsdocs.org |
Welcome to the documentation of compute_hyperv¶
Starting with Folsom, Hyper-V can be used as a compute node within OpenStack deployments.
This documentation contains information on how to setup and configure Hyper-V hosts as OpenStack compute nodes, more specifically:
- Supported OS versions
- Requirements and host configurations
- How to install the necessary OpenStack services
nova-computeconfiguration options
- Troubleshooting and debugging tips & tricks
For release notes, please check out the following page.
Contents:
- compute-hyperv
- Contributing
- Installation guide
- Troubleshooting guide
- Configuration
- Usage guide | https://compute-hyperv.readthedocs.io/en/latest/index.html | 2022-09-24T23:00:08 | CC-MAIN-2022-40 | 1664030333541.98 | [] | compute-hyperv.readthedocs.io |
Interactive view
A form opened in interactive mode is a graphical component with a certain design in which the user can trigger various events and thereby navigate through system objects, view and change property values, execute actions, and so on. Developers can also use an additional set of operators with this view, making it possible to manage the open form.
Object views
In the interactive view, object groups can be displayed in a table. The rows in the table are object collections, and the columns are properties. The records displayed in the table and their order are determined by the current filters and orders.
Current values of objects can change either as a result of an action created using the special search operator (
SEEK), or as a result of a change to the current row, if an object group is displayed in a table.
When an object group is displayed in a table, the number of rows (object collections) displayed can either be determined automatically based on the height of the visible part of the table, or specified by the developer explicitly when creating the form.
Object trees
The platform also allows to display multiple object groups in one table simultaneously. This happens similarly to the object group hierarchy in a static view, i.e. if we have two groups
A and
B then, in the "joined" table, the first object collection from
A is displayed first, then all object collections from
B (as filtered), then a second object collection from
A, then again all the object collections from
B and so on. In this case, it is highly desirable that the filters for
B used all objects from
A, since otherwise combining these groups into a single tree doesn't make sense. Initially, when a form is opened in the table, only objects of the topmost object group are displayed, but at the same time, a special column is created on the left of the table, using which the user can open nodes on his own and thus view only objects of interest in the lower object groups. Another function of this created column is to demonstrate the nesting of nodes by tabulating the elements inside this column (this allows the user to better understand what level of the hierarchy he is currently at).
Object trees also can be used to display hierarchical data (such as classifiers). In this case, the descendants of the object collection of a group in the tree can be not only object collections of lower groups but also object collections of the same group (such an object group shall be called hierarchical). To determine these child object collections in a hierarchical object group, it is necessary to define an additional filter for it – which, unlike regular filters, can refer not only to the values of the filtered object collections but also to the values of the "upper in the tree" object collection (the same approach is used in the recursion operator). It is highly desirable that the hierarchical filter uses all the values of the upper object collections, since otherwise, as with filters between different groups of objects, creating such a tree doesn't make sense. Initially, it is assumed that all values of the "upper in the tree" object collection are
NULL.
In the current platform implementation, hierarchical groups allow only trees to be displayed (not directed graphs). Accordingly, it is allowed to use only values of the upper object collections and properties that take lower (filtered) values of objects as input for a hierarchical filter (so that it is guaranteed that the same tree node cannot be reached in different ways)
The properties of different object groups in the tree are arranged in columns under each other, that is, the first column displays the first properties of each object group, the second column displays the second ones, and so on. The total number of tree columns is determined by the last group of objects on the tree (all "extra" properties of the upper groups are simply ignored).
Property views
Any property or action can be displayed on a form in one of the following views:
- Panel (
PANEL): a separate component that displays a property caption and this property value for the current values of the form objects.
- Toolbar (
TOOLBAR): similar to a panel, but this component has a different default location (immediately below the table), and if the table to which a toolbar belongs is hidden then the toolbar is hidden with it.
- table column (
GRID): a separate column in the table that displays the property values for all object collections (rows) in the table.
For each object group, you can specify which default view the properties of this group will be displayed in (by default, this view is a table column). If the property has no parameters (that is, it does not have a display group), it is displayed in a panel. Actions are always displayed in a panel by default.
For the remainder of the section, the behavior of properties and actions is exactly the same and so we will use only the term property (behavior is absolutely identical for actions).
If necessary, the developer can explicitly specify which view a property should use.
If at any point there are no properties displayed in the table for the object group, the table is automatically hidden.
By default, the caption of each property on the form is the title of the property itself. If necessary, the developer can specify a different caption, or, if you need even more flexibility, use a property as a caption. This caption property can receive upper objects of the displayed property as input. It is also worth noting that if groups-in-columns are defined for the property, then it is desirable to have different captions for the created columns (in order to distinguish them somehow): in this case, it is recommended to use a property that receives all (!) objects of the defined group-in-columns as input.
In addition to the captions, you can define colors (both the background color and the text color) for each property view on a form, as well as a condition that needs to be met for the property to be displayed. Like the caption, each of these parameters is defined using some property.
Filter group
In order to provide the user with an interface for choosing filters to apply, they can be combined into filter groups. For each of these groups, a special component will be created on the form: the user can use it to select one filter from the group as the current active filter. If several filters in one group are applied to different object groups, then the component will be displayed for the last of them.
The developer can specify a name for each filter group which can be used to access it in the future (for example, in form design).
Custom filters/orders
The user can change existing orders or add their own, as well as add their own filters using the corresponding interfaces:
- Orders – by double-clicking on the column heading.
- Filters – by using the corresponding button under the table for each object group. By default, the filter is set to the active property in the table, and filters it for equality to the entered value (for all types except case-insensitive string types, where the filter is set to include the entered string). If necessary, the developer can specify the default filtering type explicitly by using the corresponding option.
Default objects selection
In the interactive form view, object group filters can change as a result of various user actions (for example, changing the upper objects of these filters, selecting filters in the filter group, etc.), after which the current objects may no longer meet the conditions of the new filters. Also, when a form is opened, some objects may not be passed or may be passed equal to
NULL. In both of these cases, it is necessary to change the current objects, to some current default objects. The platform provides several options for selecting new current objects:
- First (
FIRST) - the first object collection (in accordance with the current order)
LAST) – last object collection.
PREV) – the previous object collection (or as close to it as possible).
- Undefined (
NULL) –
NULLvalues collection.
If none of these options is explicitly specified, the platform will try to determine whether the permanent filters in the group of objects are a) mutually exclusive for different values of the upper objects (if any), and/or b) the filter selects a very small percentage of the total number of objects of the specified classes. In both of these cases, it makes no sense to search for the previous object and, by default, the first object is selected (
FIRST); in all other cases, the previous object (
PREV).
It is worth noting that the selection of objects by default is pretty the same as the object search operation, where the search objects are:
- for type
PREV
- on opening a form: either the passed objects, or, if there are none, the last used objects for the form object class.
- in other cases: the previous current object values
- for other types
- on opening the form - passed objects
- in other cases – an empty object collection
Search direction is determined by the object's default type (
PREV here is equivalent to
FIRST).
Object operators
When adding properties to a form, you can use a predefined set of operators that implement the most common scenarios for working with objects instead of using specific properties (thus avoiding the need to create and name these properties outside the form each time):
- Object value (
VALUE) – for a form object of built-in class , a special property with one argument will be added which displays the current object value and allows the user to change it. For custom classes, a property will be added which displays the object ID in the database; when you try to change it, it shows a dialog with a list of objects of that class. The selected value will be used as the current value of the object on the form.
- Create object (
NEW) – adds an action without arguments, which creates an object of the class of the passed form object (or the class explicitly specified by the developer), after which it automatically makes this object current. If the class has descendants, the user will be shown a dialog where he can select specific child class. If any filters are applied to the form object, for which the object is created, the system will try to change the newly created object's properties so that it meets these filter conditions (as a rule, for created objects, a default value of the class of each filter's value is written to that filter)
- Edit object (
EDIT) – adds an action with one argument, which calls the
System.formEditaction (which, in turn, open the default edit form for the edited object class).
- Create and edit an object (
NEWEDIT) – adds an action without arguments which creates an object of the form object class, calls the edit object action (
EDIT), and if the input is not canceled, sets the added object as current.
- Delete object (
DELETE) – adds an action with one argument which deletes the current object.
You can also specify options for the last four operators (ignored for all other actions):
- New Session (
NEWSESSION) – in this case, the action added to the form will be executed in a new session. When opening forms in a new session, it is important to remember that changes made in the current session (form) will not be visible. Thus, this mechanism is only recommended if the form is opened from a form in which the user cannot change anything, or if the properties and actions of the two forms do not intersect in any way. Note that when the operator is used to create a new object (
NEW) in a new session, the object is not only created but also edited (
NEWEDIT) (otherwise, the session would immediately close and your changes would be lost).
- Nested Session (
NESTEDSESSION) – the action will be executed in a new nested session. As with a new session,
NEWis replaced by
NEWEDIT.
Selection/editing forms
For each form, you can specify that it is the default form for viewing/editing objects of a given class. In this case, this form will be opened when you call actions created using the operators for object operations (create/edit an object). The same form will be opened when the corresponding form selection option is used in the form opening operator.
If list/edit form is not defined for a class, the platform will create one automatically. This form will consist of one object of the class, along with all properties matching the class and belonging to the
System.base property group. Also, actions of creating, editing and deleting an object in a new session will be automatically added to the form, along with the object value property if there are no properties from the
System.id property group corresponding to the class of the object (that is, no "ID" of the object has been added to the form).
Session owner
Since a form is opened by default in the current session, it may not always be safe to apply/cancel changes to this session: for example, the changes made in other forms may accidentally be applied. To avoid such situations, the platform has the concept of a session owner – a form which is responsible for managing the life cycle of the session (for example, applying / canceling changes). By default, it is considered that a form is the session owner if the session did not have any other owner when the form was opened.
To implement the mechanism for working with session owners the platform uses a numerical local property called
System.sessionOwners. Accordingly, this property is incremented by
1 when you open a form and decremented by
1 when you close it. Thus, it shows the nesting depth of the "form opening stack", and is
NULL if the session has no owner and not
NULL otherwise.
If necessary, the developer can explicitly specify when opening a form that this form is the owner of the session that it uses.
Session ownership only affects the display / behavior of system actions for managing the life cycle of a form / session. When using the remaining actions, it is recommended that the developer should consider the risk of applying the "wrong" changes by himself (and, for example, use the mentioned above
System.sessionOwners property).
System actions for form/session lifecycle management
The following system actions are automatically added to any form (their names are specified in brackets):
- Refresh (
System.formRefresh) - updates the current state of the form, re-reading all the information from the database.
- Save (
System.formApply) - saves the changes made on the form to the database.
- Cancel (
System.formCancel) - cancels all changes made on the form.
- OK (
System.formOk) – closes the current form and, if the form is the session owner, applies the changes to the database.
System.formClose) - closes the current form and does nothing with the changes.
- Drop (
System.formDrop) – closes the current form and returns
NULLas the selected object.
By default, these system actions have the following visibility conditions:
If necessary, all these actions can be shown/hidden by removing the corresponding components from the form design and/or using the corresponding options in the open form operator.
Additional features
You can specify an image file which will be displayed as the form's icon.
Also, if necessary, you can enable automatic update mode for a form: the
System.formRefresh action will then be executed for the form at a specified interval.
Language
All of the above options, as well as defining the form structure, can be done using the
FORM statement.
Open form
To display the form in the interactive view, the corresponding open form operator is used in interactive view.
Examples
date = DATA DATE (Order);
FORM showForm
OBJECTS dateFrom = DATE, dateTo = DATE PANEL
PROPERTIES VALUE(dateFrom), VALUE(dateTo)
OBJECTS o = Order
FILTERS date(o) >= dateFrom, date(o) <= dateTo
;
testShow () {
SHOW showForm OBJECTS dateFrom = 2010_01_01, dateTo = 2010_12_31;
NEWSESSION {
NEW s = Sku {
SHOW sku OBJECTS s = s FLOAT;
}
}
}
FORM selectSku
OBJECTS s = Sku
PROPERTIES(s) id
;
testDialog {
DIALOG selectSku OBJECTS s INPUT DO {
MESSAGE 'Selected sku : ' + id(s);
}
}
sku = DATA Sku (OrderDetail);
idSku (OrderDetail d) = id(sku(d));
changeSku (OrderDetail d) {
DIALOG selectSku OBJECTS s = sku(d) CHANGE;
//equivalent to the first option
DIALOG selectSku OBJECTS s = sku(d) INPUT NULL CONSTRAINTFILTER DO {
sku(d) <- s;
}
} | https://docs.lsfusion.org/next/Interactive_view/ | 2022-09-24T22:51:04 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.lsfusion.org |
Release Notes 1.1.1.0
These release notes provide general information and describe known issues for NGINX Service Mesh version 1.1.0, in the following categories:
- NGINX Service Mesh Version 1.1.0
Updates
NGINX Service Mesh 1.1.0 includes the following updates:
Improvements
- Helm Support for install and removal
- Air-gap installation support for private environments
- In-place upgrades for non-disruptive version updates (control plane, data plane)
- Update to NGINX Plus R24 P1 sidecar images
- Update to SPIRE 0.12.3 images
Bug fixes
- Better error handling on mesh startup
- Fixed issue where re-roll instructions and service details were incorrectly flagging NGINX Plus Ingress Controller as including a sidecar
- Enhanced error notification when installing in existing namespace
Resolved Issues
This release includes fixes for the following issues. You can search by the issue ID to locate the details for an issue.
Kubernetes reports warnings on versions >=1.19 (22721)
Deploying NGINX Service Mesh to an existing namespace fails and returns an inaccurate error (245 ...
NGINX Service Mesh DNS Suffix support (21951):
NGINX Service Mesh only supports the
cluster.local DNS suffix. Services such as Grafana and Prometheus will not work in clusters with a custom DNS suffix.
Workaround:
Ensure your cluster is setup with the default
cluster.local DNS suffix..
Use of an invalid container image does not report an immediate error (24899):
If you pass an invalid value for
--registry-server and/or
--image-tag (for example an unreachable host, an invalid or non-existent path-component or an invalid or non-existent tag), the
nginx-meshctl command will only notify of an error when it verifies the installation. The verification stage of deployment may take over 2 minutes before running.
An image name constructed from
--registry-server and
--image-tag, when invalid, will only notify of an error once the
nginx-meshctl command begins verifying the deployment. The following message will be displayed after a few minutes of running:
All resources created. Testing the connection to the Service Mesh API Server... Connection to NGINX Service Mesh API Server failed. Check the logs of the nginx-mesh-api container in namespace nginx-mesh for more details. Error: failed to connect to Mesh API Server, ensure you are authorized and can access services in your Kubernetes cluster
Running
kubectl -n nginx-mesh get pods will show containers in an
ErrImagePull or
ImagePullBackOff status.
For example:
NAME READY STATUS RESTARTS AGE grafana-5647fdf464-hx9s4 1/1 Running 0 64s jaeger-6fcf7cd97b-cgrt9 1/1 Running 0 64s nats-server-6bc4f9bbc8-jxzct 0/2 Init:ImagePullBackOff 0 2m9s nginx-mesh-api-84898cbc67-tdwdw 0/1 ImagePullBackOff 0 68s nginx-mesh-metrics-55fd89954c-mbb25 0/1 ErrImagePull 0 66s prometheus-8d5fb5879-fgdbh 1/1 Running 0 65s spire-agent-47t2w 1/1 Running 1 2m49s spire-agent-8pnch 1/1 Running 1 2m49s spire-agent-qtntx 1/1 Running 0 2m49s spire-server-0 2/2 Running 0 2m50s
Workaround:
You must correct your
--registry-server and/or
--image-tag arguments to be valid values.
In a non-air gapped deployment, be sure to use
docker-registry.nginx.com/nsm and a valid version tag appropriate to your requirements. See for more details.
In an air gapped deployment, be sure to use the correct private registry domain and path for your environment and the correct tag used when loading images.
Deployments enter a
CrashLoopBackoff status after removing NGINX Service Mesh (25421):
If a traffic policy (RateLimit, TrafficSplit, and so on) is still applied when removing NGINX Service Mesh, the sidecar container will crash causing the pod to enter a
CrashLoopBackoff state.
Workaround:
Remove all NGINX Service Mesh traffic policies before removing the mesh. Alternatively, you can re-roll all deployments after removing the mesh which will resolve the
CrashLoopBackoff state. | https://docs.nginx.com/nginx-service-mesh/releases/release-notes-1.1.0/ | 2022-09-24T23:17:49 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.nginx.com |
2D Arrays
Introduction to 2D Arrays
As we have noted previously, an array is a group of data consisting of the same type. This means that we can have an array of primitive data types (such as integers):
[1, 2, 3, 4, 5]
["hello", "world", "how", "are" "you"]
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Additionally, we can have 2D arrays which are not rectangular in shape. These are called jagged arrays:
[['a', 'b', 'c', 'd'], ['e', 'f'], ['g', 'h', 'i', 'j'], ['k']]).
Declaration, Initialization, and Assignment
When;
[]) of
intarrays (.Jere os am example of initializing an empty 2D array with 3 rows and 5 columns: This results in a matrix which looks like this: intializer list for a regular array would be:
char[] charArray = {'a', 'b', 'c', 'd'};:
The previous method also applies to assigning a new 2D array to an existing 2D array stored in a variable.
Accessing Elements in a 2D Array
For a normal array, all we need is to provide an index (starting at
0) which represents the position of the element we want to access. Let's look at an example!
Given an array of five strings:
String[] words = {"cat", "dog", "apple", "bear", "eagle"};
0, the last element of the array minus one (in this case,
4), and any of the elements in between. We provide the index of the element we want to access inside a set of brackets. Let's see those examples in code: Now for 2D arrays, the syntax is slightly different. This is because instead of only providing a single index, we provide two indices. Take a look at this example: There are two ways of thinking when accessing a specific element in a 2D array:
The first value represents and row and the second value represents a column in the matrix.
The first value represents which subarray to access from the main array and the second value represents which element of the subarray is accessed.
The above example of the 2D array called
data can be visualized like so. The indices are labeled outside the matrix:
Using this knowledge, we now know that the result of
int stored = data[0][2]; would store the integer
6. This is because the value of
6 is located on the first row (index
0) and the third column (index
2). Here is a template which cal be used for accessing elements in 2D arrays:
datatype variableName = existing2DArray[row][column];
ArrayIndexOutOfBoundsExceptionerror will be given by the application.
Modifying Elements in a 2D Array
Now let's review how to modify elements in a normal array.
For a one dimensional array, you provide the index of the element which you want to modify within a set of brackets next to the variable name and set it equal to an acceptable value:
storedArray[5] = 10;
For 2D arrays, the format is similar, but we will provide the outer array index in the first set of brackets and the subarray index in the second set of brackets. We can also think of it as providing the row in the first set of brakcets if we were to visualize the 2D array as a rectangular matrix:
twoDArray[1][3] = 150;
Let's say we wanted to replace four values from a new 2D array called
intTwoD. Look at this example code to see how to pick individual elements and assign new values to them.
Here is a before and after image showing when the 2D array was first initialized compared to when the four elements were accessed and modified:
Review of Nested Loops
Nested loops consist of two or more loops placed within each other. We will be looking at one loop nested within another for 2D traversal.
The way it works is that, for every iteration of the outer loop, the inner loop finishes all of its iterations.
Here is an example using for loops:The output of the above nested loop looks like so:
The outer index is: 0 The inner index is: 0 The inner index is: 1 The inner index is: 2 The inner index is: 3 The outer index is: 1 The inner index is: 0 The inner index is: 1 The inner index is: 2 The inner index is: 3 The outer index is: 2 The inner index is: 0 The inner index is: 1 The inner index is: 2 The inner index is: 3
This is an important concept for 2D array traversal, because for every row in a two dimensional matrix, we want to iterate through every column.
Nested loops can consist of any type of loop and with any combination of loops. Let's take a look at a few more interesting examples.
Here is an example of nested while loops:We can even have some interesting combinations. Here is an enhanced for loop inside of a while loop: The output of the above example creates a multiplication table:
0 0 0 0 0 1 2 3 4 5 2 4 6 8 10 3 6 9 12 15 4 8 12 16 20 5 10 15 20 25 6 12 18 24 30
Traversing 2D Arrays: Introduction'}};
Let’s see what happens when we access elements of the outer arrayThis would output the following:
[a, b, c] [d, e, f] [g, h, i] [j, k, l]
Let’s take a look at an example which produces the same output, but can handle any sized 2D array.Here is the output:
[a, b, c] [d, e, f] [g, h, i] [j, k, l]
[][],.
Let's look at an example:You can think of the variable
aas being the outer loop index, and the variable
bas being the inner loop index.
This gives the following.
Traversing 2D Arrays: Practice with Loops
In enhanced for loops, each element is iterated through until the end of the array. When we think about the structure of 2D arrays in Java (arrays of array objects) then we know that the outer enhanced for loop elements are going to be arrays.
Let's take a look at an example:
Given this 2D array of character data:
char[][] charData = {{'a', 'b', 'c', 'd', 'e', 'f'},{'g', 'h', 'i', 'j', 'k', 'l'}};
for( datatype elementName : arrayName){. Since 2D arrays in Java are arrays of arrays, each element in the outer enhanced for loop is an entire row of the 2D array. The nested enhanced for loop is then used to iterate through each element in the extracted row. Here is the output of the above code:
a b c d e f g h i j k l
a b c d e f g h i j k l
Traversing 2D Arrays: Row-Major Order
Row-major order for 2D arrays refers to a traversal path which moves horizontally through each row starting at the first row and ending with the last.
Although we have already looked at how 2D array objects are stored in Java, this ordering system conceptualizes the 2D array into a rectangular matrix and starts the traversal at the top left element and ends at the bottom right element.
Here is a diagram which shows the path through the 2D array:
This path is created by the way we set up our nested loops. In the previous exercise, we looked at how we can traverse the 2D array by having nested loops in a variety of formats, but if we want to control the indices, we typically use standard for loops.
Let’s take a closer look at the structure of the nested for loops when traversing a 2D array:
Given this 2D array of strings describing the element positions:Lets keep track of the total number of iterations as we traverse the 2D array: This would produce the following output:
Step: 0, Element: [0][0] Step: 1, Element: [0][1] Step: 2, Element: [0][2] Step: 3, Element: [1][0] Step: 4, Element: [1][1] Step: 5, Element: [1][2] Step: 6, Element: [2][0] Step: 7, Element: [2][1] Step: 8, Element: [2][2] Step: 9, Element: [3][0] Step: 10, Element: [3][1] Step: 11, Element: [3][2]
This is because in our for loop, we are using the number of rows as the termination condition within the outer for loop header
a < matrix.length;. Additionally, we are using the number of columns
b < matrix[a].length as the termination condition for our inner loop. Logically we are saying: “For every row in our matrix, iterate through every single column before moving to the next row”. This is why our above example is traversing the 2D array using row-major order.
Here is a diagram showing which loop accesses which part of the 2D array for row-major order:
Why Use Row-Major Order?
Row-major order is important when we need to process data in our 2D array by row. You can be provided data in a variety of formats and you may need to perform calculations of rows of data at a time instead of individual elements. Let's take one of our previous checkpoint exercises as an example. You were asked to calculate the sum of the entire 2D array of integers by traversing and accessing each element. Now, if we wanted to calculate the sum of each row, or take the average of each row, we can use row-major order to access the data in the order that we need. Let's look at an example!
Given a 6X3 2D array of doubles:
Calculate the sum of each row using row-major order:
The output for the above code is:
Row: 0, Sum: 1.62 Row: 1, Sum: 2.16 Row: 2, Sum: 1.04 Row: 3, Sum: 1.15 Row: 4, Sum: 2.04 Row: 5, Sum: 1.06
Traversing 2D Arrays: Column-Major Order
Column-major order for 2D arrays refers to a traversal path which moves vertically down each column starting at the first column and ending with the last.
This ordering system also conceptualizes the 2D array into a rectangular matrix and starts the traversal at the top left element and ends at the bottom right element. Column-major order has the same starting and finishing point as row-major order, but it’s traversal is completely different
Here is a diagram which shows the path through the 2D array:
In order to perform column-major traversal, we need to set up our nested loops in a different way. We need to change the outer loop from depending on the number of rows, to depending on the number of columns. Likewise we need the inner loop to depend on the number of rows in its termination condition.
Let's look at our example 2D array from the last exercise and see what needs to be changed.
Given this 2D array of strings describing the element positions:Let's keep track of the total number of iterations as we traverse the 2D array. We also need to change the termination condition (middle section) within the outer and inner for loop. Here is the output for the above code:
Step: 0, Element: [0][0] Step: 1, Element: [1][0] Step: 2, Element: [2][0] Step: 3, Element: [3][0] Step: 4, Element: [0][1] Step: 5, Element: [1][1] Step: 6, Element: [2][1] Step: 7, Element: [3][1] Step: 8, Element: [0][2] Step: 9, Element: [1][2] Step: 10, Element: [2][2] Step: 11, Element: [3][2]
matrixis different from the way we accessed them when using row-major order. Let’s remember that the way we get the number of columns is by using
matrix[0].lengthand the way we get the number of rows is by using
matrix.length. Because of these changes to our for loops, our iterator
anow iterates through every column while our iterator
biterates through every row. Since our iterators now represent the opposite values, whenever we access an element from our 2D array, we need to keep in mind what indices we are passing to our accessor. Remember the format we use for accessing the elements
matrix[row][column]? Since
anow iterates through our column indices, we place it in the right set of brackets, and the
bis now placed in the left set of brackets.
Here is a diagram showing which loop accesses which part of the 2D array for column-major order:
Why Use Column-Major Order?
Column major order is important because there are a lot of cases when you need to process data vertically. Let’s say that we have a chart of information which includes temperature data about each day. The top of each column is labeled with a day, and each row represents an hour. In order to find the average temperature per day, we would need to traverse the data vertically since each column represents a day. As mentioned in the last exercise, data can be provided in many different formats and shapes and you will need to know how to traverse it accordingly.
Let’s look at our sum example from the last exercise, but now using column-major order.
Given a 6X3 2D array of doubles:Calculate the sum of each column using column-major order: The output of the above code is:
Column: 0, Sum: 2.83 Column: 1, Sum: 4.31 Column: 2, Sum: 1.93
Combining Traversal and Conditional Logic
When working with 2D arrays, it is important to be able to combine traversal logic with conditional logic in order to effectively navigate and process the data. Here are a few ways in how conditional logic can affect 2D array traversal:
Skipping or selecting certain rows and columns
Modifying elements only if they meet certain conditions
Complex calculations using the 2D array data
Formatting the 2D array
Avoiding exceptions / smart processing
Let’s go over a few examples which use these ideas:
First, let’s think about a situation where you have some string data inside a 2D array. We have an application which allows users to input events on a calendar. This is represented by a 5x7 2D array of strings. Due to the fact that the number of days in each month is slightly different and that there are less than 35 days in a month, we know that some of our elements are going to be empty. We want our application to do a few things:
Detect which days of which weeks have something planned and alert us about the event.
Count the number of events for each week
Count the number of events for each day
Here is a visualization of what our calendar data looks like after a user has entered in some event information:
There are a few things to note:
Row-major or column-major order can be used to access the individual events
Row-major order must be used to count the number of events per week since each row represents a week
Let’s take care of the first 2 requirements in one set of nested row-major loopsThe code above produces the following output:
Week: 1, Day: 1, Event: volunteer Week: 1, Day: 2, Event: delivery Week: 1, Day: 5, Event: doctor Week: 1, Day: 7, Event: soccer Total number of events for week 1: 4 Week: 2, Day: 2, Event: exam 1 Week: 2, Day: 4, Event: mechanic Week: 2, Day: 7, Event: soccer Total number of events for week 2: 3 Week: 3, Day: 1, Event: volunteer Week: 3, Day: 2, Event: off work Week: 3, Day: 4, Event: birthday Week: 3, Day: 6, Event: concert Total number of events for week 3: 4 Week: 4, Day: 2, Event: exam 2 Week: 4, Day: 5, Event: doctor Week: 4, Day: 7, Event: soccer Total number of events for week 4: 3 Week: 5, Day: 1, Event: visit family Total number of events for week 5: 1
The output is:
Number of events on Sundays: 3 Number of events on Mondays: 4 Number of events on Tuesdays: 0 Number of events on Wednesdays: 2 Number of events on Thursdays: 2 Number of events on Fridays: 1 Number of events on Saturdays: 3
Additionally, we can use conditional logic to skip portions of the 2D array. For example, let’s say we wanted to print the events for weekdays only and skip the weekends.
We could use a conditional statement such as
if(j!=0 && j!=6) in order to skip Sunday (
0) and Saturday (
6).
These modifications to our 2D array traversal are very common when processing data in applications. We need to know which cells to look at (skipping column titles for example), which cells to ignore (empty data, invalid data, outliers, etc.), and which cells to convert (converting string input from a file to numbers).
2D Array Review
Let’s review the concepts we have learned throughout these notes.
Arrays are objects in Java, we can have arrays of objects, therefore we can also have arrays of arrays. This is the way 2D arrays are structured in Java.
We can declare and initialize 2D arrays in a few different ways depending on the situation"}};
Before... [[championship, QUANTITY, month], [EMPLOYEE, queen, understanding], [method, writer, MOVIE]] After... [[CHAMPIONSHIP, quantity, MONTH], [employee, QUEEN, UNDERSTANDING], [METHOD, WRITER, movie]] | https://docs.nicklyss.com/java-2d-arrays/ | 2022-09-24T23:16:41 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['https://nicklyss.com/media/uploads/2021/05/Calendar-data.png',
'Calendar Data'], dtype=object) ] | docs.nicklyss.com |
.
Releases the specified address range that you provisioned for use with your Amazon Web Services.
See also: AWS API Documentation
deprovision-byoip-cidr --cidr address range, in CIDR notation. The prefix must be the same prefix that you specified when you provisioned the address range.
- an IP address range from use
The following example removes the specified address range from use with AWS.
aws ec2 deprovision-byoip-cidr \ --cidr 203.0.113.25/24
Output:
{ "ByoipCidr": { "Cidr": "203.0.113.25/24", "State": "pending-deprovision" } }. | https://docs.aws.amazon.com/zh_cn/cli/latest/reference/ec2/deprovision-byoip-cidr.html | 2022-09-24T23:22:34 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.aws.amazon.com |
# Setting up
This article covers the first steps to get started. MacOS and Linux users can dive right in. If you have Windows, please follow our Windows development cookbook article first.
# System requirements
See the links above to install each. To check that these are installed in your environment, try the following commands:
node -v && npm -v # This will display your Node.js version and npm version (installed with Node), # if installed. mongod --version # This will display your MongoDB version, if installed. which convert && which identify # This will display the location of the ImageMagick utilities, if installed.
NOTE
ImageMagick is optional, but recommended. It provides the
convert and
identify command line tools, which Apostrophe uses to scale and crop images quickly. If you do not install it Apostrophe can still handle image uploads, though more slowly.
# The Apostrophe CLI tool
There is an official CLI (opens new window) (Command Line Interface) for quickly setting up starter code for your Apostrophe project. Once in a project it can also help add new module code with a single command so you can focus on the unique parts rather than copying or remembering boilerplate. Keep an eye out for updates once it is installed since it will continue to evolve to help with additional tasks.
Install the CLI tool:
npm install -g @apostrophecms/cli # Or `yarn global add @apostrophecms/cli`, if you prefer. We'll stick to npm commands.
Once installed you have access to the
apos command. Simply use that command, or
apos --help, to see a list of additional commands anytime.
The CLI is not required to work with Apostrophe. It primarily makes developing with Apostrophe faster and takes care of the more repetitive tasks during development.
# Creating a project
Before creating a project, make sure you start MongoDB (opens new window) locally following their instructions. MongoDB can be configured to run all the time or started as needed, but it must be up and running to provide a place for ApostropheCMS to store its information.
The easiest way to get started with Apostrophe is to use the official starter project. If you have the CLI installed, go into your normal projects directory and use the command:
apos create apos-app
The CLI will take care of installing dependencies and walk you through creating the first user. You can then skip down to the "Finishing touches" section. If you don't want to use the CLI, or if you want to see other things it does for you, continue on.
To get started quickly without the CLI, clone the starter repository:
git clone apos-app
If you want to change the project directory name, please do so. We will continue referring to
apos-app.
Open the
app.js file in the root project directory. Find the
shortName setting and change it to match your project (only letters, digits, hyphens and/or underscores). This will be used as the name of your database.
// app.js require('apostrophe')({ shortName: 'apos-app', // 👈 modules: { // ...
Excellent! Back in your terminal we'll install dependencies:
npm install
Before starting up you'll need to create an admin-level user so that you can log in. After running the following command, Apostrophe will ask you to enter a password for this user.
node app @apostrophecms/user:add my-user admin # Replace `my-user` with the name you want for your first user.
# Finishing touches
You should also update the session secret for Express.js (opens new window) to a unique, random string. The starter project has a placeholder for this option already. If you do not update this, you will see a warning each time the app starts up.
// modules/@apostrophecms/express/index.js module.exports = { options: { session: { // If this still says `undefined`, set a real secret! secret: undefined } } };
# Starting up the website
Start the site with
npm run dev. The app will then watch for changes in client-side code, rebuilds it, then refresh the browser when it detects any. You can log in with the username and password you created at (opens new window).
TIP
If you are starting the site in a production environment or do not want the process to watch for changes, start the site with
node app.js.
# Next steps
Now that Apostrophe is installed, you're ready to start building. Check out the essentials guide to learn about essential features or read more about Apostrophe's inner workings in the technical overview. | https://v3.docs.apostrophecms.org/guide/setting-up.html | 2022-09-24T23:37:33 | CC-MAIN-2022-40 | 1664030333541.98 | [] | v3.docs.apostrophecms.org |
Roadmap
Roadmap
Q4 2021
- Launchpad initial design
- Current launchpad review
- Market research
- Foundation of Boba Brewery
- Team creation
- Finish strategic investing
- Cooperation with advisors and partners
- Product kick-off
Q1 2022
- Private sale
- Launch Boba Brewery Launchpad
- Public sale
- Community building
Q2 2022
- Complete Boba Brewery platform launch
- Continue launch selected projects on Boba & other Chains.
Q3 2022
- Ecosystem building
- Launch the Boba Brewery Incubator | https://docs.bobabrewery.com/roadmap.html | 2022-09-24T21:43:46 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.bobabrewery.com |
Introduction
Cluster plots. Please view our video for cluster explorer on our website.
Setting up the Cluster Explorer Plugin
There are detailed instructions on how to set up plugins in our online documentation. Cluster Explorer can be set up by placing the Cluster Explorer JAR file into a “plugins” folder.
Using Cluster Explorer
The Cluster Explorer plugin is used after performing clustering and dimensionality reduction with our other FlowJo plugins or tools. Typically, manual gating is first performed on the sample and then clustering is performed on a manually gated population of interest. This clustering is usually followed by dimensionality reduction to help visualize the populations within the sample of interest. Cluster Explorer can then be run on this sample in order to visualize the populations generated by the clustering. Select a population that has had clustering performed on it. Then select Cluster Explorer from the plugins dropdown list.
After selecting Cluster Explorer from the plugin list, a window will appear asking the user to select additional inputs. The window is divided into three sections. At the top the user should select the clustering parameters they want to view and compare. Multiple clustering parameters can be selected but only one can be visualized at a time. The middle section allows the user to pick parameters to visualize. Usually it is most appropriate to choose the compensated parameters. The bottom section allows the user to choose any dimensionality reduction x and y parameters to display the clusters on. Multiple sets of dimensionality reduction parameters can be selected.
Once the parameters are selected, Cluster Explorer will generate a heatmap, profile graph, barchart and the overlaid dimensionality reduction plots for the cluster populations.
The barchart simply shows the number of events in each cluster.
The profile chart shows the relative intensity of each cluster for each parameter. Each colored line represents a cluster and each point shows the relative intensity for that cluster for a given parameter listed on the x axis. Clicking and dragging on the profile plot will create gates around clusters selecting clusters for display. Multiple gates can be created using boolean logic to highlight multiple clusters.
When the profile plot is selected, there is an option under the “Clusters” tab at the top of the screen to remove outliers. These will remove cells from the plots whose signal fall outside of certain percentile thresholds. Under the “Help” tab there is an about which gives additional information.
The heatmap shows the relative intensity of each parameter for a given cluster. Additional populations can be added to the heatmap by selecting the “Add nodes as columns option”. This allows the visualization of multiple clustering runs. There are also options for saving the image of the heatmap.
If dimensionality reduction parameters are chosen, plots will also be displayed of the clusters overlaid on those parameters. Clicking on the axis of these plots will give options for changing the axis. The resolution of these plots can be changed by going to the “Options” tab at the top of the computer and choosing “Adjust Map Resolution”.
All the plots are interactive and if a population is selected in one plot, that same population will be highlighted in the other plots. Double clicking on the Profile plot will reset which clusters are selected.
The control panel provides options for the various pieces of cluster explorer.
The “Cluster set” allows users to choose which clusters they want to display and allows users to switch between types of clustering. The compare button will open a new window that allows for the comparison between different cluster sets. The comparison will also try to match clusters that seem to contain similar populations between the two cluster sets.
The “Profile Plot Gates” options has options for the boolean logic used to select points with the profile gates. Choosing “and” will cause only the populations that are within all of the profile plot gates to become highlighted. Choosing “Not” will highlight all the populations not present on the profile plot gates. Selecting “Clear Gates” will remove any gates drawn on the profile plot.
The “Profile Points” section of the control panel have options for the statistics being reported in the plots. If a plot window is closed, it can be reopened by selecting it from the “Plots” section of the control panel.
The “Animate Selected Clusters” button can be used to make the plots go through a sequence highlighting
all the individual clusters 1 by 1.
Compatibility
Cluster Explorer is compatible with the following FlowJo plugins and tools
Clustering
Phenograph
FlowSOM
Xshift
Dimensionality Reduction
tSNE
UMAP
TriMap
Leave us your feedback
Please write to [email protected] with any questions! | https://docs.flowjo.com/flowjo/plugins-2/plugin-demonstration-videos/cluster-explorer/ | 2022-09-24T23:43:00 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-3-1024x580.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-4-851x1024.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-5-654x1024.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-6-1024x603.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-7.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-8-1024x581.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-9-1024x567.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-10-1024x415.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-11-1024x338.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-12-1024x613.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-13.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-14-1024x400.png',
None], dtype=object)
array(['https://docs.flowjo.com/wp-content/uploads/2021/11/image-15-1024x606.png',
None], dtype=object) ] | docs.flowjo.com |
重要
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
Spring AOP instrumentation
This version instruments any call that passes through an AOP pointcut that you have declared in your Spring application. This gives your additional insight into the call time of key Spring beans.
Performance Improvements
This version contains optimizations that reduce agent overhead.
Fixes: Hibernate improvements
In this version, we provide more consistent detail into Hibernate calls across supported version of Hibernate (3.3 - 4.2).
Fix: Removed need for WebSphere SSL work-around
Previous versions sometimes required a work-around when using WebSphere. This version removes the need for a work-around.
NOTE: Requires Java SE 6 or 7
Java Agent 3.0 requires Java SE 6 or 7. At signup or on the release notes page, you have the option to download a version of the agent that works with Java SE 5. | https://docs.newrelic.com/jp/docs/release-notes/agent-release-notes/java-release-notes/java-agent-300 | 2022-09-24T23:22:56 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.newrelic.com |
Configure Vault
Follow the steps documented below to help establish trust between your centralized HashiCorp Vault server and your controller Managed clusters
Important
The kubernetes auth method is used to establish trust between a cluster and the Vault Server
Step 1: Create Secret Store¶
- Login into the Web Console as a Project Admin.
- Click on Integrations > Secret Stores
- Click on "New Secret Store"
- Provide a Name , select "Vault" from the drop down for Provider
- Click "CREATE" button to create this secret store
Step 2: Edit Secret Store¶
In the Edit Secret Store page:
- Provide the Vault Host (FQDN or IP address of the Vault Server)
- Users are allowed to skip the certificate verification by setting true or false from the drop-down. Set true to skip Vault CA Certificate Verification (or) false to provide the CA Certificate for verification, only if the certificate is self-signed
- Provide the CA Cert (base64) when skip verification is set to false and only if certificate is self-signed
- (Optional) Click on Add Clusters to add additional managed Kubernetes cluster to use this secret store
- Click Save Changes to save the secret store settings
Important
It will take ~30 seconds for the Vault integration configuration to be deployed to the managed Kubernetes clusters
Once the Vault integration configuration is deployed to the clusters, copy the related Vault settings for each of the clusters and complete the configuration in Vault (See Step 3)
- Kubernetes Host
- Token Reviewer JWT
- Kubernetes CA Certificate
Step 3: Create Auth Path For Clusters¶
In order for Vault to grant access to the clusters to retrieve secrets, the Kubernetes Auth Method for each of the clusters will need to be created in Vault from the information retrieved in Step 2 above.
This step completes the establishment of "Trust" between the clusters and the central Vault Server. The settings in Vault can be updated using the Vault CLI, UI or API.
Important
The steps described below are typically performed by a Vault Admin.
Follow these steps to perform the update using the Vault CLI.
Enable Auth Method for Path¶
Enable Kubernetes auth method for the auth path configured in Web console:
vault auth enable -path=<auth_path> kubernetes
Example of successful command output:
% vault auth enable -path=stage-gke-cluster kubernetes Success! Enabled kubernetes auth method at: stage-gke-cluster/
Set Auth Method¶
Set the auth method with the information we received for the cluster from Step 2
vault write auth/<auth_path>/config \ kubernetes_host=<from Rafay's Kubernetes Host value> \ token_reviewer_jwt=<from Rafay's Token Reviewer JWT value> \ kubernetes_ca_cert=<from Rafay's Kubernetes CA Cert value>
Example of successful command output:
% vault write auth/stage-gke-cluster/config \ [email protected]_reviewer_jwt \ kubernetes_host= \ [email protected] Success! Data written to: auth/stage-gke-cluster/config
Create Role for Auth Path¶
Create a Role to the authpath so that applications can retrieve secrets from Vault
vault write auth/<auth_path>/role/<role_name> \ bound_service_account_names=<service accounts> \ bound_service_account_namespaces=<namespaces> \ policies=<access_policy> \ ttl=<token TTL>
Example of successful command output:
% vault write auth/stage-gke-cluster/role/demo \ bound_service_account_names=vault-auth-demo,app-developer-sa \ bound_service_account_namespaces=staging,qa \ policies=default \ ttl=1h Success! Data written to: auth/stage-gke-cluster/role/demo
Create KV Secret Engine¶
Create a KV secret engine if it has not been already created by the Vault admin. Both KV v1 and v2 are supported. Customers are strongly recommended to use v2 since it has additional functionality such as versioning etc.
KV v1
vault secrets enable -path=<secret_path> kv
Example of successful command output:
% vault secrets enable -path=app-secrets-v1 kv Success! Enabled the kv secrets engine at: app-secrets-v1/
KV v2
vault secrets enable -path=<secret_path> kv-v2
Example of successful command output:
% vault secrets enable -path=app-secrets-v2 kv-v2 Success! Enabled the kv-v2 secrets engine at: app-secrets-v2/
Create Secrets in KV Secret Path¶
Create the application secrets in KV secret path
vault kv put <secret_path>/<secret_name> key1=value1 key2=value2
Example of successful command output:
vault kv put app-secrets-v1/elasticsearch es_user=elastic es_password=mypassword Success! Data written to: app-secrets-v1/elasticsearch
Let's check the secrets
vault kv get app-secrets-v1/elasticsearch ======= Data ======= Key Value --- ----- es_password mypassword es_user elastic | https://docs.rafay.co/integrations/secrets/enable_vault/ | 2022-09-24T23:52:47 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['../img/vault/rafay_add_vault_kv.png', 'Create Vault'],
dtype=object)
array(['../img/vault/rafay_set_vaultkv.png', 'Create Vault'], dtype=object)
array(['../img/vault/vault_configuring.png', 'Create Vault'], dtype=object)
array(['../img/vault/rafay_vaultconfig.png', 'Create Vault'], dtype=object)] | docs.rafay.co |
AWS Fargate Serverless Agents
Introduction
Check the Overview for an explanation of when and why to use serverless agents in “container-as-a-service” cloud environments.
Architecture
The Sysdig serverless agent provides runtime detection through policy enforcement with Falco. At this time, the serverless agent is available for AWS Fargate on ECS. It is comprised of an orchestrator agent and (potentially multiple) workload agents.
The Sysdig serverless orchestrator agent is a collection point installed on each VPC to collect data from the serverless workload agent(s) and to forward them to the Sysdig backend. It also syncs the Falco runtime policies and rules to the workload agent(s) from the Sysdig backend.
The Sysdig serverless workload agent is installed in each task and requires network access to communicate with the orchestrator agent.
Note that the workload agent is designed to secure your workload. However, at deployment, the default setting prioritizes availability over security, using setting that allow your workload to start even if policies are not in place. If you prefer to prioritize security over availability, you can change these settings <>.
Prerequisites
Before starting the installation, ensure that you have the following:
On AWS Side
- A custom Terraform/CloudFormation template containing the Fargate task definitions that you want to instrument through the Sysdig Serverless Agent
- Two VPC subnets in different availability zones that can connect with the internet via a NAT gateway or an internet gateway
On Sysdig Side
Sysdig Secure up and running
The endpoint of the Sysdig Collector for your region
From the Sysdig Secure UI, retrieve the following:
Access Key to install the agent and push the data to the Sysdig platform
Secure API Token to configure the Sysdig provider.
Known Limitations
Sysdig instruments a target workload by patching its task definition to run the original workload below Sysdig instrumentation. To patch the original task definition, Sysdig instrumentation pulls and analyzes the workload image to get the original entry point and the command, along with other information.
Pulling workload images from Public vs Private registries
If you retrieve your workload image from a private registry, you must explicitly define the entry point and the command in the container definition. If you don’t specify them, the Sysdig instrumentation might not be able to collect such information, and the instrumentation might fail.
Instead, if you pull the workload image from a public registry no additional operations are required.
Referencing a parameterized Image in CloudFormation Workload Template
When instrumenting a workload over CloudFormation you must define the Image inline in your TaskDefinition.
Using a parameterized image in the TaskDefinition instead might prevent the Sysdig instrumentation from retrieving the workload image configuration. That could lead to incorrect workload instrumentation.
The example below shows a valid TaskDefinition.
Resources: TaskDefinition: Type: AWS::ECS::TaskDefinition Properties: ContainerDefinitions: - Name: !Ref MyContainerName Image: "MyContainerImage" # Inline Image ...
Install Options
The Sysdig serverless agent can be deployed automatically via Terraform or CloudFormation. Alternatively, you can also use the manual process to complete the instrumentation tasks.
Sysdig recommends using the Terraform deployment method to instrument your Fargate workloads.
Terraform
This option presumes you use Terraform to deploy your workload.
You can deploy the orchestrator agent and install the workload agent by using an automated process which will instrument all your task definitions.
For details, see the following Sysdig Terraform registries:
Deployment Steps
Install the Orchestrator Agent
Set up the AWS Terraform provider:
provider "aws" { region = var.region }
Configure the Sysdig orchestrator module and deploy it:
module "fargate-orchestrator-agent" { source = "sysdiglabs/fargate-orchestrator-agent/aws" version = "0.1.1" vpc_id = var.vpc_id subnets = [var.subnet_a_id, var.subnet_b_id] access_key = var.access_key collector_host = var.collector_host collector_port = var.collector_port name = "sysdig-orchestrator" agent_image = "quay.io/sysdig/orchestrator-agent:latest" # True if the VPC uses an InternetGateway, false otherwise assign_public_ip = true tags = { description = "Sysdig Serverless Agent Orchestrator" } }
Call this module for each VPC that needs instrumentation.
Install the Instrumented Workload
Set up the Sysdig Terraform provider:
terraform { required_providers { sysdig = {= 0.5.39" } } } provider "sysdig" { sysdig_secure_api_token = var.secure_api_token }
Pass the orchestrator host, port, and container definitions of your workload to the
sysdig_fargate_workload_agentdata source:
data "sysdig_fargate_workload_agent" "instrumented" { container_definitions = jsonencode([...]) sysdig_access_key = var.access_key workload_agent_image = "quay.io/sysdig/workload-agent:latest" orchestrator_host = module.sysdig_orchestrator_agent.orchestrator_host orchestrator_port = module.sysdig_orchestrator_agent.orchestrator_port }
Note: Ensure that the input container definitions must be in JSON format.
Include the instrumented JSON in your Fargate task definition and deploy your instrumented workload:
resource "aws_ecs_task_definition" "fargate_task" { ... network_mode = "awsvpc" requires_compatibilities = ["FARGATE"] container_definitions = "${data.sysdig_fargate_workload_agent.instrumented.output_container_definitions}" }
The Sysdig instrumentation will go over the original task definition to instrument it. The process includes replacing the original entry point and command of the containers.
For the images pulled from private registries, explicitly provide the
Entrypointand
Commandin the related container definition, otherwise, the instrumentation will not be completed.
(Latest) CloudFormation
This option presumes you use a CFT to deploy your workload.
As of Serverless Agent 3.0, a YAML provided by Sysdig helps you deploy the orchestrator agent and the instrumentation service in a single step. Then, you will install the workload agent using an automated process which will instrument all the Fargate task definitions in your CFT.
For the Orchestrator Agent and Instrumention Service, Sysdig provides the serverless-instumentation.yaml to use as a CloudFormation Template which you can deploy through the AWS console. You need one Orchestrator deployment per VPC in your environment that your organization wants to secure.
For the Workload Agents, you need one Workload Agent per Fargate task definition. For example, if you have ten services and ten task definitions, each needs to be instrumented.
Deployment Steps
Deploy the Sysdig Instrumentation and Orchestration Stack
Deploy the
serverless-instrumentation.yaml for each desired VPC using CloudFormation:
Log in to the AWS Console, select CloudFormation, Create a stack with new resources, and specify the
serverless-instrumentation.yamlas the Template source.
Specify the stack details to deploy the Orchestrator Agent on the same VPC where your service is running. For standard deployments, provide the parameters highlighted in the figure below.
Click Next, complete the stack creation, and wait for the deployment to complete (usually a few minutes).
Deploy the Workload Agent
Edit Your CFT
Once the Sysdig Instrumentation and Orchestration stack is deployed, the Outputs tab provides the transformation instruction as shown in the figure below. Note that the value here is dependent on what you set it to during the deployment of the Sysdig Instrumentation and Orchestration Stack.
Copy and paste the value of the transformation instruction to the root level of your applications CFT. For Example:
Transform: ["SysdigMacro"]
The Sysdig instrumentation will go over the original task definition to instrument it.
The instrumentation process includes replacing the original entry point and command of the container. If you are using an image from a public registry, it can determine these values from the image. If you are using an image from a private registry then you must explicitly provide the
Entrypoint and
Command in the related container definition; otherwise, the instrumentation will not be completed.
Deploy Your CFT
All the new deployments of your CFT will be instrumented.
When instrumentation is complete, Fargate events should be visible in the Sysdig Secure Events feed.
Upgrade from a Prior Version
Up through version 2.3.0, the installation process deployed two stacks, as described in (Legacy) CloudFormation:
- Orchestration stack, deployed via YAML
- Instrumentation stack, deployed via command-line installer.
Instead, from version 3.0.0, you will deploy the “Instrumentation & Orchestration” stack only, using the (Latest) CloudFormation installation option.
To upgrade to version 3.0.0:
Deploy the new Instrumentation and Orchestration stack and the Workload Agents, as described in (Latest) CloudFormation. When deploying the Instrumentation and Orchestration stack, assign a unique name to your macro, for example,
Transform: MyV2SysdigMacro. You now have two versions of the serverless agent components. When you are ready to switch from the earlier version, proceed with the next step.
Stop all the running tasks and use CloudFormation to delete the earlier stacks.
Clean up the earlier macro using the installer:
./installer-linux-amd64 cfn-macro delete MyEarlierSysdigMacro
Redeploy the workload stack with the updated CFT (
Transform: MyV2SysdigMacro).
(Legacy) CloudFormation
Note: This option has been deprecated.
This option presumes you use a CFT to deploy your workload.
Up to Serverless Agent 2.3.0, a YAML and an installer provided by Sysdig lets you deploy the Orchestrator Agent and the instrumentation service, respectively. Then, you will install the Workload Agent using an automated process which will instrument all the Fargate task definitions in your CFT.
The following components of the serverless agent are installed separately.
For the Orchestrator Agent, Sysdig provides the orchestrator-agent.yaml to use as a CloudFormation Template which you can deploy through the AWS Console. You need one orchestrator deployment per VPC in your environment that your organization wants to secure.
For the instrumentation service, Sysdig provides the installer to run to deploy the instrumentation service that will automatically instrument your task definitions.
For the Workload Agents, you need one Workload Agent per Fargate task definition. For example, if you have ten services and ten task definitions, each needs to be instrumented.
Additional Prerequisites
In addition to the prerequisites defined above, you will need the following on the AWS side:
- AWS CLI configured and permissions to create and use an S3 bucket.
Deployment Steps
Install the Orchestrator Agent
Get the CFT orchestrator-agent.yaml to deploy the orchestrator agent.
Deploy the orchestrator agent for each desired VPC, using CloudFormation. The steps below are an outline of the important Sysdig-related parts.
Log in to the AWS Console, select CloudFormation and Create Stack with new resources and specify the
orchestrator-agent.yamlas the Template source.
Specify the stack details to deploy the Orchestrator Agent on the same VPC where your service is running.
Stack name: self-defined
Sysdig Settings
Sysdig Access Key: Use the agent key of your Sysdig platform.
Sysdig Collector Host:
collector.sysdigcloud.com(default); region-dependent in Sysdig SaaS; custom in Sysdig on-prem.
Sysdig Collector Port:
6443(default), or could be custom for on-prem installations.
Network Settings
VPC Id: Choose your VPC.
VPC Gateway: Choose the type of Gateway in your VPC to balance the load of the orchestrator service.
Subnet A & B: These depend on the VPC you choose; select from the drop-down menu.
Advanced Settings
Sysdig Agent Tags: Enter a comma-separated list of tags (eg.
role:webserver,location:europe) Note: tags will also be created automatically from your infrastructure’s metadata, including AWS, Docker, etc.
Sysdig Orchestrator Agent Image:
quay.io/orchestrator-agent:latest(default)
Check Collector SSL Certificate: Default:
true.
Falsemeans no validation will be done on the SSL certificate received from the collector, used for dev purposes only.
Sysdig Orchestrator Agent Port:
6667(default). Port where the orchestrator service will be listening for instrumented task connections.
Click Next to start the deployment, and wait for the deployment to complete (usually a few minutes).
From the Outputs tab, take note of the
OrchestratorHostand
OrchestratorPortvalues.
Install the Workload Agents
Install the Instrumentation Service
Prerequisite: Have the orchestrator agent deployed in the appropriate VPC and have the Orchestrator Host and Port information handy.
Download the appropriate installer for your OS.
These set up Kilt, an open-source library mechanism for injection into Fargate containers.
Create a macro for the serverless worker agents, using the installer. Any service tagged with this macro will have the serverless worker agent(s) added and Fargate data will be collected.
Log in to the AWS CLI.
Create a CFN macro that applies instrumentation. You will need the outputs from previous task. Example:
./installer-linux-amd64 cfn-macro install -r us-east-1 MySysdigMacro $OrchestratorHost $OrchestratorPort
Edit Your CFT
Once the instrumentation service is deployed, you can use the transformation instruction to instrument your workloads. Copy and paste the transformation instruction at the root level of your CFT. All new deployments of that template will be instrumented.
The Sysdig instrumentation will go over the original task definition to instrument it. The instrumentation process includes replacing the original entry point and command of the containers.
For images pulled from private registries, explicitly provide the
Entrypoint and
Command in the related container definition, otherwise, the instrumentation will not be completed.
Deploy Your CFT
All new deployments of your CFT will be instrumented.
When instrumentation is complete, Fargate events should be visible in the Sysdig Secure Events feed.
Upgrade from a Prior Version
In most cases, it is advised to upgrade directly to 3.0.0 +, as described in (Latest) CloudFormation. These instructions are kept for special cases.
To upgrade to version 2.3.0, follow the instructions in (Legacy) CloudFormation:
Install the Orchestrator Agent, note that the
OrchestratorHostand
OrchestratorPortvalues will be unique.
Install the instrumentation service, note that you have to assign a unique name to your macro, for example,
Transform: MyV2SysdigMacro. At this point you have two versions of the Serverless agent components. When you are ready to switch from the earlier version, proceed with the next step.
Stop all running tasks and use CloudFormation to delete the earlier stack. Redeploy the new stack with the updated CFT.
Clean up the earlier macro using the installer:
./installer-linux-amd64 cfn-macro delete MyEarlierSysdigMacro
Redeploy the workload stack with the updated CFT (
Transform: MyV2SysdigMacro).
Manual Instrumentation
In some cases, you may prefer not to use the Terraform or CloudFormation installers and instead to use one of the following:
- Manually Instrument a Task Definition
- Instrument a Container Image (rare).
Manually Instrument a Task Definition
Install the orchestrator agent via Terraform or CloudFormation, as described above.
Take note of the
OrchestratorHost and
OrchestratorPort values. Such parameters will be passed to the workload via the environment variables
SYSDIG_ORCHESTRATORand
SYSDIG_ORCHESTRATOR_PORT respectively.
Then, instrument the task definition to deploy the workload agent manually as follows:
Add a new container to your existing task definition:
Use
sysdigInstrumentationas a name for the container.
Obtain the image from
quay.io/sysdig/workload-agent:latest.
The entrypoint and command can be left empty.
Edit the containers you want to instrument.
Mount volumes from
SysdigInstrumentation.
Add the
SYS_PTRACEcapability to your container. See AWS Documentation for detail if needed.
Prepend
/opt/draios/bin/instrumentto the entrypoint of your container. So, if your original entrypoint is
["my", "original", "entrypoint"], it becomes
["/opt/draios/bin/instrument", "my", "original", "entrypoint"].
Terraform Example
Task definition before" : ["/bin/ping", "google.com"], } ]) }
Task definition after" : ["/opt/draios/bin/instrument", "/bin/ping", "google.com"], "linuxParameters": { "capabilities": { "add": [ "SYS_PTRACE" ], }, }, "environment": [ { "name": "SYSDIG_ORCHESTRATOR", "value": "<host orchestrator output, region dependent>" }, { "name": "SYSDIG_ORCHESTRATOR_PORT", "value": "6667" }, { "name": "SYSDIG_ACCESS_KEY", "value": "" }, { "name": "SYSDIG_COLLECTOR", "value": "" }, { "name": "SYSDIG_COLLECTOR_PORT", "value": "" }, { "name": "SYSDIG_LOGGING", "value": "" }, ], "volumesFrom": [ { "sourceContainer": "SysdigInstrumentation", "readOnly": true } ], }, { "name" : "SysdigInstrumentation", "image" : "quay.io/sysdig/workload-agent:latest", } ]) }
CloudFormation Example
Task definition before: - /bin/ping - google.com Tags: - Key: application Value: TestApp
Task definition after: ##### BEGIN patch entrypoint for manual instrumentation - /opt/draios/bin/instrument ##### END patch entrypoint for manual instrumentation - /bin/ping - google.com ##### BEGIN add properties for manual instrumentation LinuxParameters: Capabilities: Add: - SYS_PTRACE VolumesFrom: - SourceContainer: SysdigInstrumentation ReadOnly: true Environment: - Name: SYSDIG_ORCHESTRATOR Value: "<host orchestrator output, region dependent>" - Name: SYSDIG_ORCHESTRATOR_PORT Value: "6667" - Name: SYSDIG_ACCESS_KEY Value: "" - Name: SYSDIG_COLLECTOR Value: "" - Name: SYSDIG_COLLECTOR_PORT Value: "" - Name: SYSDIG_LOGGING Value: "" - Name: SysdigInstrumentation Image: !Ref WorkloadAgentImage ##### END add properties for manual instrumentation Tags: - Key: application Value: TestApp
Instrument a Container Image
Alternatively, you can include the Workload Agent in your container at build time. To do so, update your dockerfile to copy the required files:
ARG sysdig_agent_version=latest FROM quay.io/sysdig/workload-agent:$sysdig_agent_version AS workload-agent FROM my_original_base COPY --from=workload-agent /opt/draios /opt/draios
Prepend the
/opt/draios/bin/instrument command to the entrypoint of your container:
ENTRYPOINT ["/opt/draios/bin/instrument", "my", "original", "entrypoint"]
Advanced Configurations
Configuring Workload Starting Policy
To customize the Sysdig instrumentation workload starting policy see Configure Workload Starting Policy.
Configuring Instrumentation Logging
To customize the Sysdig instrumentation logging see Manage Serverless Agent Logs.
Enable Proxy
To configure the Orchestrator/Workload agents to use a proxy see Enable HTTP proxy for agents.
The configuration can be provided through the environment variables that follow:
ADDITIONAL_CONFfor the Orchestrator Agent.
SYSDIG_EXTRA_CONFfor the Workload Agent.
Both of the environment variables expect a valid YAML or JSON.
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve.
Last modified September 23, 2022 | https://docs.sysdig.com/en/docs/installation/serverless-agents/aws-fargate-serverless-agents/ | 2022-09-24T22:49:17 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['/image/serverless_architecture.jpg', None], dtype=object)
array(['/image/s3-stack-output.png', None], dtype=object)] | docs.sysdig.com |
.
System requirements for the host operation system, including its web server and database and how they should be configured prior to installation.
The installation chapter provides detailed instructions about how to install TYPO3, it also contains information about how to deploy TYPO3 to a production environment.
Troubleshoot common issues that can occur during installation. The Troubleshooting chapter covers both TYPO3 CMS and the host environment including the web server, database and PHP.
Learn how to create users and configure their backend privileges.
Discover how third-party extensions are installed and managed using Composer.
The Introduction Package is a great place to start if you are looking to test drive TYPO3 and see a prebuilt site that contains a range of example page templates and content.
Next Steps provides an overview of tasks that can be carried out once TYPO3 is installed, such as creating templates and adding content.
- Version
11.5
- Language
- Author
TYPO3 contributors
- License
This document is published under the Open Publication License.
- Rendered
2022-09-23 11:43 | https://docs.typo3.org/m/typo3/tutorial-getting-started/11.5/en-us/ | 2022-09-24T22:22:15 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.typo3.org |
The Rasa Core dialogue engine¶
Note
This is the documentation for version 0.12.4 of Rasa Core. Make sure you select the appropriate version of the documentation for your local installation!
What am I looking at?
Rasa is a framework for building conversational software: Messenger/Slack bots, Alexa skills, etc. We'll abbreviate this as a bot in this documentation.
What's cool about it?
Rather than a bunch of
if/else statements, the logic of your bot is
based on a machine learning model trained on example conversations.! You can build a full example without installing anything, just go to the quickstart! | https://legacy-docs.rasa.com/docs/nlu/0.12.4/ | 2022-09-24T23:21:00 | CC-MAIN-2022-40 | 1664030333541.98 | [] | legacy-docs.rasa.com |
Basially, the publication page is created to represent a
.bib file. But you can use the publication page as needed.
using zzo-bibtex-parer
I made a library to parse the
.bibfile to
mdfile. Follow the steps below to install the library.
Install pip(The Python Package Installer) first.
Install
zzo-bibtex-paresrby typing this in your terminal.
pip install zzo-bibtex-parser==1.0.8
Prepare your .bib file to the root folder.
Assuming your file name is bibtex.bib, from the root of your project, type this.
zzo --path bibtex.bib
Make a publication menu. In your menu.toml file
... [[main]] identifier = "publication" name = "Publications" url = "publication" weight = 6
Set new param to your
params.tomlfile.
... pubPaginate = 20
Now, you can see the publications page when you start the hugo
If the year of the post is the curreny year, the post set the
pinnedparam to
trueby default.
manually
If you don’t have
.bib file and just take advantage of the publication page, you can manually make the pages.
Make a publication folder in content folder.
root/content/publication
Make a subfolder and
_index.mdfile. This subfolder will be a section of the publication page. I’ll make an
articlefolder as an example.
root/content/publication/article/_index.md
Make a folder in the article folder we made above and make
index.mdfile in that folder.
root/content/publication/article/american_libraries_magazine/index.md
In the above
yamlfile, the
ENTRYTYPEparam should be the same with the section name(In our case,
article).
Now, you get the idea. Make any section you want and there, make any publication like above.
You can pin the publication by setting parameter
pinnedto
true. The pinned publication will show up in the overview page. | https://zzo-docs.vercel.app/zzo/pages/publication/ | 2022-09-24T23:07:01 | CC-MAIN-2022-40 | 1664030333541.98 | [] | zzo-docs.vercel.app |
Available Functions
Across all widgets and subordinated dashboards of the application scenarios
"Design & Document"
"Control & Release"
"Read & Explore" (ArchiMate Library only)
"Govern & Manage" (Standard Library only)
the following functions are available:
Open/Edit Elements
Open a model in the graphical editor by clicking the hyperlinked name.
Open the Notebook of an object by clicking the hyperlinked name.
Open a task by clicking it.
Show or hide an element in a chart by clicking its name.
Confirm the data actuality of an object via the icon.
Open an Insights dashboard by clicking the icon.
Hover over a list item to reveal icons for editing that item (e.g. to open an Insights dashboard).
The time period after which an object is marked as 'yellow' or 'red' if its data actuality was not confirmed by the responsible depends on the configuration of your ADOIT installation.
Create Reports
Create a Demands List Report via the icon.
Create an Interface Report via the icon.
Create a Model Report via the icon.
General Tools
Export the content of a widget as a PDF file or Excel file (XLSX format) via the icon.
Refresh a widget via the icon.
Open a "Management Dashboard" which contains detailed information regarding your models via the icon.
Interactive Pie Charts
Select the corresponding items in the appropriate list by clicking a segment.
Open a separate page which lists the corresponding items by double-clicking a segment.
Filter Data in Columns
Filter data in columns to show the elements you want and hide the rest. Once you have filtered elements in a column, you can either apply additional filters, or clear all filters to redisplay all elements.
Apply Filter
To apply a filter to a column:
The button is activated in the header of any column by mouseover. Click this button to open a drop-down menu (1).
Point to Filters (2), and then use the provided filter (3). The type of filter available depends on the column’s contents:
Filter by text: In the Enter Text box, type the text you want to search for. This type of filter is available in columns with text content like Name or Description.
Filter by values: Select the values you want to show from the list. This type of filter is available in columns like Type or State (that is, columns with content from a predefined range of values).
Filter by date: Point to one of the operators Before, After or On, and then choose a date. This type of filter is available in columns like Date.
Filter by number: Select one of the operators Equals (
=), Greater (
>) or Smaller (
<), and then enter a number. This type of filter is available in columns like Total Value.
When a filter is applied, the button will appear in the column header. It replaces the button.
Remove or Reapply a Filter
To remove (or reapply) a filter from a column:
- Click the button in the header of any column, and then click Filters. This command is a toggle. You click it to remove the filter (check mark will be cleared), and you click it again to reapply the filter (the check mark will be selected).
Remove all Filters in a Widget
To remove all filters in a widget:
- Click the button in the header of any column, and then click Clear all filters.
Filter Objects by Responsibility (All/My)
In many widgets, you can switch between displaying all or my relevant objects. Depending on the widget, the displayed objects may include Application Components, System software etc.).
In order to filter objects by responsibility:
Click the widget title (button) to open a drop-down menu (1).
Select either the menu entry All or My (2).
All lists all relevant objects in the database.
My lists all relevant objects:
ArchiMate Application Library: For which you are assigned as responsible, accountable, consulted or informed business actor (object attributes in the Notebook chapter "Organisation").
ADOIT Standard Application Library: For which you are assigned as Responsible Person or Responsible Person (Deputy) (object attributes in the Notebook chapter "Organisation"). | https://docs.boc-group.com/adoit/en/docs/15.1/user_manual/das-700000/ | 2022-09-24T23:47:38 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['/adoit/en/assets/images/d3316f6b59d89da4c0e7ad2d96d8fa0ac0533259-c44b443fe20d1d6c80b760e9dd238068.png',
'Filter Data in Columns - Apply Filter'], dtype=object) ] | docs.boc-group.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.