content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
- Security >
- Authentication >
- Enterprise Authentication Mechanisms >
- Kerberos Authentication >
- Troubleshoot Kerberos Authentication
Troubleshoot Kerberos Authentication¶
On this page
Kerberos Configuration Debugging Strategies¶
If you have difficulty starting or authenticating against
mongod or
mongos with Kerberos:
Ensure that you are running MongoDB Enterprise, not MongoDB Community Edition. Kerberos authentication is a MongoDB Enterprise feature and will not work with MongoDB Community Edition binaries.
To verify MongoDB Enterprise binaries, pass the
--versioncommand line option to the
mongodor
mongos:
In the output from this command, look for the string
modules: subscriptionor
modules: enterpriseto confirm your system has MongoDB Enterprise.
Ensure that the canonical system hostname of the
mongodor
mongosinstance is a resolvable, fully qualified domain name.
On Linux, you can verify the system hostname resolution with the
hostname -fcommand at the system prompt.
On Linux, ensure that the primary component of the service principal name (SPN) of the SPN is
mongodb. If the primary component of the SPN is not
mongodb, you must specify the primary component using
--setParameter saslServiceName.
On Linux, ensure that the instance component of the service principal name (SPN) in the keytab file matches the canonical system hostname of the
mongodor
mongosinstance. If the
mongodor
mongosinstance’s system hostname is not in the keytab file, authentication will fail with a
GSSAPI error acquiring credentials.error message.
If the hostname of your
mongodor
mongosinstance as returned by
hostname -fis not fully qualified, use
--setParameter saslHostNameto set the instance’s fully qualified domain name when starting your
mongodor
mongos.
Ensure that each host that runs a
mongodor
mongosinstance has
Aand
PTRDNS records to provide both forward and reverse DNS lookup. The
Arecord should map to the
mongodor
mongos’s FQDN.
Ensure that clocks on the servers hosting your MongoDB instances and Kerberos infrastructure are within the maximum time skew: 5 minutes by default. Time differences greater than the maximum time skew prevent successful authentication.
Kerberos Trace Logging on Linux¶
MIT Kerberos provides the
KRB5_TRACE environment variable for
trace logging output. If you are having persistent problems with
MIT Kerberos on Linux, you can set
KRB5_TRACE when starting your
mongod,
mongos, or
mongo instances to
produce verbose logging.
For example, the following command starts a standalone
mongod
whose keytab file is at the default
/etc/krb5.keytab path
and sets
KRB5_TRACE to write to
/logs/mongodb-kerberos.log:
Common Error Messages¶
In some situations, MongoDB will return error messages from the GSSAPI interface if there is a problem with the Kerberos service. Some common error messages are:
GSSAPI error in client while negotiating security context.
This error occurs on the client and reflects insufficient credentials or a malicious attempt to authenticate.
If you receive this error, ensure that you are using the correct credentials and the correct fully qualified domain name when connecting to the host.
GSSAPI error acquiring credentials.
- This error occurs during the start of the
mongodor
mongosand reflects improper configuration of the system hostname or a missing or incorrectly configured keytab file. | https://docs.mongodb.com/master/tutorial/troubleshoot-kerberos/ | 2019-08-17T14:01:28 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.mongodb.com |
Contents London release notes Previous Topic Next Topic Dashboards release notes Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Dashboards release notes ServiceNow® Dashboards product enhancements and updates in the London release. London upgrade information Responsive dashboards are enabled by default on new instances. On upgrading instances, responsive canvas must be enabled by an administrator. New in the London release. Dashboards upgrade informationDashboards upgrade information for the London release. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/london-release-notes/page/release-notes/performance-analytics-reporting/par-dashboards-rn.html | 2019-08-17T13:20:52 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.servicenow.com |
Difference between revisions of "Git"
From CSLabsWiki
Latest revision as of 18:59, 24 October 2018
Two articles on this wiki may be referred to as "git"
- Git (VM), a retired VM that ran Gitorious;
- Git (Server), a hardware server running GoGs.
If you followed an internal link here, please update it to point to the correct article! | http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=Git&diff=prev&oldid=8705 | 2019-08-17T12:37:57 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.cslabs.clarkson.edu |
UpdateClusterVersion
Updates.
Request Syntax
POST /clusters/
name/updates HTTP/1.1 Content-type: application/json { "clientRequestToken": "
string", "version": "
string" }
URI Request Parameters
The request requires the following URI parameters.
Request Body
The request accepts the following data in JSON format.
- clientRequestToken
Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
Type: String
Required: No
- version
The desired Kubernetes version following a successful update.
Type: String
Required: Yes updates the
devel cluster to Kubernetes
version 1.11.
Sample Request
POST /clusters/devel/updates2834Z Authorization: AUTHPARAMS { "version": "1.11", "clientRequestToken": "b07dab93-51bc-4094-8372-96f3ccf888ff" }
Sample Response
HTTP/1.1 200 OK Date: Thu, 29 Nov 2018 17:28:35 GMT Content-Type: application/json Content-Length: 228 x-amzn-RequestId: 33000f0c-f3fc-11e8-9ddb-9bc150e1f1e4 x-amz-apigw-id: RIo2bEs8vHcFXoA= X-Amzn-Trace-Id: Root=1-5c0021c2-e5132580188eafa8600f2fb0: | https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateClusterVersion.html | 2019-08-17T13:32:30 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.aws.amazon.com |
administrativeUnit resource type
Important
APIs under the
/beta version in Microsoft Graph are subject to change. Use of these APIs in production applications is not supported.
An administrative unit provides a conceptual container for User and Group directory objects. Using administrative units, a company administrator can now delegate administrative responsibilities to manage the users and groups contained within or scoped to an administrative unit to a regional or departmental administrator.
Let's look at an example. Imagine that Contoso Corp is made up of two divisions - a West Coast Division and an East Coast Division. Directory roles at Contoso are scoped to the entire tenant. Lee, a Contoso company administrator, wants to delegate administrative responsibilities, but scope them to the West Coast Division or the East Coast division. Lee can create a West Coast admistrative unit and place all West Coast users into this administrative unit. Similarly, Lee can create an East Coast adminstrative unit. Now Lee, can start delegating administrative responsibilities to others, but scoped to the new administrative units he's created. Lee places Jennifer in a helpdesk administrator role scoped to the West Coast administrative unit. This allows Jennifer to reset any user's password, but only if those users are in the West Coast administrative unit. Similarly, Lee places Dave in a user account administrator role scoped to the East Coast administrative unit. This allows Dave to update users, assign licenses and reset any user's password, but only if those users are in the East Coast administrative unit. For a video overview, please see Introduction to Azure Active Directory Administrative Units.
This resource lets you add your own data to custom properties using extensions.
This topic provides descriptions of the declared properties and navigation properties exposed by the administrativeUnit entity, as well as the operations and functions that can be called on the administrativeUnits resource.
Methods
Properties
Relationships
JSON representation
Here is a JSON representation of the resource.
{ "description": "string", "displayName": "string", "id": "string (identifier)", "visibility": "string" }
See also
Feedback | https://docs.microsoft.com/en-us/graph/api/resources/administrativeunit?view=graph-rest-beta | 2019-08-17T14:29:56 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
Contents Security Operations Previous Topic Next Topic Content packs for Security Incident Response Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Content packs for Security Incident Response Content packs contain preconfigured best practice dashboards. The dashboards present important metrics for analyzing your Security Incident Response process, such as new security incidents or the average age of open security incidents. content packsContent packs and in-form analyticsPerformance Analytics On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-security-management/page/use/dashboards/application-content-packs/security-incident-content-pack.html | 2019-08-17T13:14:02 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.servicenow.com |
FBUPKEEP is a Python library that provides a task executor engine and a command-line tool to execute tasks. Its primary purpose is to run maintenance tasks for Firebird ® servers and databases, but could be easily extended to run other tasks of various type.
Built-in tasks:
FBUPKEEP is designed to run on Python 3.5+, and uses FDB Firebird driver.
pip install fbupkeep | https://fbupkeep.readthedocs.io/en/latest/ | 2019-08-17T12:51:06 | CC-MAIN-2019-35 | 1566027313259.30 | [] | fbupkeep.readthedocs.io |
numpy.ma.MaskedArray.cumprod¶
- MaskedArray.cumprod(axis=None, dtype=None, out=None)[source]¶
Return the cumulative product of the array elements over the given axis.
Masked values are set to 1 internally during the computation. However, their position is saved, and the result will be masked at the same locations.
Refer to numpy.cumprod for full documentation.
See also
- ndarray.cumprod
- corresponding function for ndarrays
- numpy.cumprod
- equivalent function
Notes
The mask is lost if out is not a valid MaskedArray !
Arithmetic is modular when using integer types, and no error is raised on overflow. | https://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.ma.MaskedArray.cumprod.html | 2017-05-22T21:25:28 | CC-MAIN-2017-22 | 1495463607120.76 | [] | docs.scipy.org |
Toggle navigation
Documentation Home
Online Store
Support
All Documentation for Dynamic Shortcodes keyword
+ Filter by product
Wishlist Smart Shortcodes
Wishlist Pay Per Post Shortcodes
Wishlist Drip Dynamic Shortcodes
Wishlist Smart Shortcodes
Wishlist Pay Per Post Shortcodes
Adding the Sidebar Widget
Adding the shortcodes to Posts and Pages
Complete Activation Process of Wishlist Pay Per Post Shortcodes in 5 Steps
Introduction to Wishlist Pay Per Post Shortcodes
Why is Wishlist Pay Per Post Shortcodes necessary for the pay-per-post flow?
Does Wishlist Pay Per Post Shortcodes work with free pay-per-post registrations?
Wishlist Drip Dynamic Shortcodes
Adding the shortcodes to Posts and Pages
Introduction to Wishlist Drip Dynamic Shortcodes | http://docs.happyplugins.com/doc/keyword/dynamic-shortcodes | 2017-03-23T08:11:24 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.happyplugins.com |
Response Time¶
The Scrapy Cluster Response time is dependent on a number of factors:
- How often the Kafka Monitor polls for new messages
- How often any one spider polls redis for new requests
- How many spiders are polling
- How fast the spider can fetch the request
- How fast your item pipeline can get the response into Kafka
With the Kafka Monitor constantly monitoring the incoming topic, there is very little latency for getting a request into the system. The bottleneck occurs mainly in the core Scrapy crawler code.
The more crawlers you have running and spread across the cluster, the lower the average response time will be for a crawler to receive a request. For example if a single spider goes idle and then polls every 5 seconds, you would expect a your maximum response time to be 5 seconds, the minimum response time to be 0 seconds, but on average your response time should be 2.5 seconds for one spider. As you increase the number of spiders in the system the likelihood that one spider is polling also increases, and the cluster performance will go up.
The next bottleneck in response time is how quickly the request can be conducted by Scrapy, which depends on the speed of the internet connection(s) you are running the Scrapy Cluster behind. This is out of control of the Scrapy Cluster itself, but relies heavily on your ISP or other network configuration.
Once your spider has processed the response and yields an item, your item pipeline before the response gets to Kafka may slow your item down. If you are doing complex processing here, it is recommended you move it out of Scrapy and into a larger architecture.
Overall, your cluster response time for brand new crawls on a domain not yet seen is a lot slower than a domain that is already in the crawl backlog. The more Crawlers you have running, the bigger throughput you will be able to achieve. | http://scrapy-cluster.readthedocs.io/en/dev/topics/advanced/responsetime.html | 2017-03-23T08:07:13 | CC-MAIN-2017-13 | 1490218186841.66 | [] | scrapy-cluster.readthedocs.io |
Deploy MongoDB Worker Roles in Azure¶
The MongoDB Worker Role is currently a preview release. Please provide feedback at mongodb-dev, mongodb-user, or IRC #mongodb.
The MongoDB Worker Role project allows you to deploy and run a MongoDB replica set on Windows Azure. Replica set members are run as Azure worker role instances. MongoDB data files are stored in an Azure page blob mounted as a cloud drive. One can use any MongoDB driver to connect to the MongoDB server instance. The MongoDB .Net driver is included as part of the package.
Get the Package¶
The MongoDB Azure Worker Role is delivered as a Visual Studio 2010 solution with associated source files. You can access the package at GitHub:
<>
It is recommended using the latest tagged version.
Alternatively, you can clone the repository run the following commands from a git bash shell:
cd <parentdirectory> git config --global core.autocrlf true git clone [email protected]:mongodb/mongo-azure.git cd mongo-azure
Components¶
Once you have unzipped the package or cloned the repository, you will see the following directories:
- Setup: Contains a file called solutionsetup.cmd. This is used to setup the solution the first time you use it.
- src: Contains all the project’s source code.
- src/SampleApplications: Contains sample applications that you can use to demo MongoDB on Azure. See the listing for more info.
- lib: Library files. Includes the MongoDB .NET driver
- Tools: Contains miscellaneous tools for the project.
Initial Setup¶
We assume you’re running Windows x64 and Visual Studio. If not, install those first; Visual Studio 2010 or Visual Web Developer 2010 should work.
- Install Windows Azure SDK v1.7.
- Enable IIS on your local machine. This can be done by going to the “Turn Windows features on or off” control panel, under “Programs”. Check “Internet Information Services” and also check ASP.NET under World Wide Web Services|Application Development Features.
- Clone the project.
- Before opening either solution file, run solutionsetup.cmd [version] from the Setup directory.
- cd Setup
- If version is not specified the default version configured in solutionsetup.ps1 is installed
- Alternatively you can specify the version of MongoDB to installing e.g. solutionsetup.cmd 2.4.4
- Open the solution you want, set the “MongoDB.WindowsAzure.(Sample.)Deploy” project as the StartUp Project, and run it!
The setup script does the following:
- Creates the cloud configs for the 2 solutions
- Downloads the MongoDB binaries to lib\MongoDBBinaries.
Note
The setup script downloads the 64-bit version of MongoDB by default. If you are developing with 32-bit Windows, you must download the latest 32-bit MongoDB binaries and place them in lib\MongoDBBinaries yourself. Do this after running solutionsetup.cmd so it won’t overwrite your work.
The prerequisites can be found in the Github readme.
Once these are installed, you can open either solution MongoDB.WindowsAzure.sln for just the replica set and the monitoring application; MongoDB.WindowsAzure.Sample.sln for the replica set, monitoring application and a sample IIS app, MvcMovie, to test it.
Deploy and Run¶
Run Locally on Compute/Storage Emulator¶
The following instructions are for running the sample application.
To start, you can test out your setup locally on your development machine. The default configuration has 3 replica set members running on ports 27017, 27018 and 27019 with a replica set name of ‘rs’.
In Visual Studio, run the solution using F5 or Debug->Start Debugging. This will start up the replica set, the monitoring application and the MvcMovie sample application (if you are in the sample solution).
You can verify this by using the monitoring application or MvcMovie sample application in the browser or by running mongo.exe against the running instances.
Deploy to Azure¶
Once you have the application running locally, you can deploy the sample app solution to Windows Azure. You cannot execute locally (on the compute emulator) with data stored in Blob store. This is due to the use of Windows Azure Drive which requires both compute and storage are in the same location.
- For detailed configuration options, see Configure Worker Roles in Azure.
- For step-by-step deployment instructions, see Deploy MongoDB Worker Roles in Azure.
Additional Information¶
The MongoDB Worker Role runs mongod.exe with the following options:
--dbpath --port --logpath --journal --nohttpinterface --logappend --replSet
MongoDB creates the following containers and blobs on Azure storage:
- Mongo Data Blob Container Name - mongoddatadrive(replica set name)
- Mongo Data Blob Name - mongoddblob(instance id).vhd
FAQ/Troubleshooting¶
Can I run mongo.exe to connect?
- Yes if you set up remote desktop. Then you can connect to the any of the worker role instances and run e:approotMongoDBBinariesbinmongo.exe.
Role instances do not start on deploy to Azure
- Check if the storage URLs have been specified correctly.
Occasional socket exception using the .Net driver on Azure
This is usually due to the fact that the Azure load balancer has terminated an inactive connection. This can be overcome by setting the max idle connection time on the driver.
MongoDefaults.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);
My MongoDB instances are running fine on the worker roles. The included manager app shows the instances are working fine but my client app cannot connect.
Ensure that the Instance Maintainer exe is deployed as part of the client role. You also need to change the Service Definition for the client role to have the InstanceMaintainer started on instance start.
Refer to images below:
Instance Maintainer deployed as part of role:
Instance Maintainer start defined in service definition:
Known issues/Where do I file bugs?¶
<> | http://docs.mongodb.org/ecosystem/tutorial/deploy-mongodb-worker-roles-in-azure/?showComments=true&showCommentArea=true | 2014-04-16T08:16:05 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.mongodb.org |
A.
When a registry is set up (or created) by a Configurator, the registry will be decorated with an instance named introspector implementing the pyramid.interfaces.IIntrospector interface. See also pyramid.config.Configurator.introspector`.
When a registry is created “by hand”, however, this attribute will not exist until set up by a configurator.
This attribute is often accessed as request.registry.introspector in a typical Pyramid application.
This attribute is new as of Pyramid 1.3.
The default implementation of the interface pyramid.interfaces.IIntrospectable used by framework exenders. An instance of this class is is created when pyramid.config.Configurator.introspectable is called.
This class is new as of Pyramid 1.3. | http://docs.pylonsproject.org/projects/pyramid/en/1.3-branch/api/registry.html | 2014-04-16T07:43:38 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.pylonsproject.org |
public interface FieldAccessorFactory<E>
Factory interface for a single field / field accessor. Provides means to check if a certain field is eligible for this factory and also a factory method to create the field accessor.
boolean accept(Field f)
f- field to check
FieldAccessor<E> forField(Field f)
f- the field to create an accessor for | http://docs.spring.io/spring-data/data-graph/docs/1.0.0.M4/api/org/springframework/data/graph/neo4j/fieldaccess/FieldAccessorFactory.html | 2014-04-16T09:09:55 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.spring.io |
The. For more information about the Ganglia open-source project, go to.
To add Ganglia to a cluster using the console
Sign in to the AWS Management Console and open the Amazon Elastic MapReduce console at.
Click Create Cluster.
Under the Additional Applications list, choose Ganglia and click Configure and add.
Proceed to create the cluster as described in Plan an Amazon EMR Cluster.
To add a Ganglia bootstrap action using the CLI
When you create a new cluster using the CLI, specify the Ganglia bootstrap action by adding the following parameter to your cluster call:
--bootstrap-action s3://elasticmapreduce/bootstrap-actions/install-ganglia
The following command illustrates the use of the
bootstrap-action parameter when starting a new cluster.
In this example, you start the Word Count sample cluster provided by Amazon EMR and launch three instances.
In the directory where you installed the Amazon EMR CLI, run the following from the command line. For more information, see the Command Line Interface Reference for Amazon EMR.
Note
The Hadoop streaming syntax is different between Hadoop 1.x and Hadoop 2.x.
For Hadoop 2.x, use the following command:
Linux, UNIX, and Mac OS X users:
.
Windows users:
ruby
For Hadoop 1.x, use the following command:
Linux, UNIX, and Mac OS X users:
.
Windows users:
ruby Web Interfaces Hosted on the Master Node.
To view the Ganglia web interface
Use SSH to tunnel into the master node and create a secure connection. For information about how to create an SSH tunnel to the master node, see Open an SSH Tunnel to the Master Node.
Install a web browser with a proxy tool, such as the FoxyProxy plug-in for Firefox, to create a SOCKS proxy for domains of the type *ec2*.amazonaws.com*. For more information, see Configure FoxyProxy to View Websites Hosted on the Master Node.
With the proxy set and the SSH connection open, you can view the Ganglia UI by opening a
browser window with
http://
master-public-dns-name/ganglia/, where
master-public-dns-name is the public DNS address
of the master server in the Amazon EMR cluster. For information about how to locate
the public DNS name of a master node, see To locate the public DNS name of the master node using the Amazon EMR console.
When you open the Ganglia web reports in a browser, you see an overview of the cluster’s performance, with graphs detailing the load, memory usage, CPU utilization, and network traffic of the cluster. Below the cluster statistics are graphs for each individual server in the cluster. In the preceding cluster creation example, we launched three instances, so in the following reports there are three instance charts showing the cluster data.
The default graph for the node instances is Load, but you can use the Metric drop-down list to change the statistic displayed in the node-instance graphs.
You can drill down into the full set of statistics for a given instance by selecting the node from the drop-down list or by clicking the corresponding node-instance chart.
This opens the Host Overview for the node.
If you scroll down, you can view charts of the full range of statistics collected on the instance.
Ganglia reports Hadoop metrics for each node instance. The various types of metrics are prefixed by category: distributed file system (dfs.*), Java virtual machine (jvm.*), MapReduce (mapred.*), and remote procedure calls (rpc.*). You can view a complete list of these metrics by clicking the Gmetrics link, on the Host Overview page. | http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_Ganglia.html | 2014-04-16T07:23:34 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.aws.amazon.com |
This analyzer is recommended to launch analysis on Maven projects.
Prerequisite
You must have previously installed and configured Maven.
Analyze a Maven Project
Analyzing a maven project consists of running a maven goal in the directory where the pom.xml sits. If possible, an install goal should be performed prior to the sonar one.
Recommended way
skipTests=true not to run unit tests twice: during the install goal and again during the sonar goal. You can also deactivate the integration tests execution. Please refer to the Maven documentation.
Alternative way
When the above configuration is not possible, you can run an analysis in one command, but unit tests will run twice: once in the install goal and once in. | http://docs.codehaus.org/pages/viewpage.action?pageId=229738165 | 2014-04-16T08:12:30 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.codehaus.org |
.
Dependencies
You will need the following dependencies to build your report plugin:
AbstractMavenReportRenderer
For a relatively straightforward report, you can take advantage of AbstractMavenReportRenderer.
This class will handle the basic negotiations with the Doxia sink, setting up the head, title, and body. You implement a renderBody method to fill in the middle. it provides utilities for sections and tables.
Doxia Sink
You also need to use the Doxia Sink API to have complete decoration (ie. menus). That is quite straightforward. You simply import
org.apache.maven.doxia.sink.Sink and get an instance by simply calling the class method
getSink() (you don't even have to implement it). Then you can do things like that:
to get
<td>some text</td>.
Here is another complete example:
As one can easily see, the Sink API reproduces the major structural elements of HTML (and most other text markup languages). Start tag is denoted by
xxxx() method and end of tag by
xxxx_() method. You can do pretty much anything you could do with (x)HTML as there is even a
rawText() method that outputs exactly what you give it.
Note that the
text() method takes care of escaping characters.
Caveat: the sectionning is strict which means that section level 2 must be nested in section 1 etc.
Note: To find out more about the possible markup (that is the available methods) you need to read the sources at there is no documentation on at the moment.
More than one report from one plugin
If you want to have more than one goal in | http://docs.codehaus.org/pages/viewpage.action?pageId=64454 | 2014-04-16T08:28:56 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.codehaus.org |
Install it with easy_install
$ sudo easy_install -U luban
or,
get luban from the Python Package Index, expand it and run
$ sudo python setup.py install
Or if you need more details:
Luban is installable on unix/linux platforms, as well as Mac OS X operating systems. Luban has not been tested in windows system.
Please make sure your system has the following software installed:
-
Download the appropriate egg for your version of Python (e.g. setuptools-0.6c11-py2.6.egg). Do NOT rename it.
-
Run it as if it were a shell script, e.g.$ sh setuptools-0.6c11-py2.6.egg
Setuptools will install itself using the matching version of Python (e.g. python2.6), and will place the easy_install executable in the default location for installing Python scripts (as determined by the standard distutils configuration files, or by the Python installation). | http://docs.danse.us/pyre/luban/sphinx/Installation.html | 2014-04-16T08:35:35 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.danse.us |
Source code: Lib/rlcompleter.py
The.
Completer objects have the following method:
Return the stateth completion for text.
If called for text that doesn’t include a period character ('.'), it will complete from names currently defined in __main__, builtins. Any exception raised during the evaluation of the expression is caught, silenced and None is returned. | https://docs.python.org/3.2/library/rlcompleter.html | 2014-04-16T07:16:58 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.python.org |
Radial & Angular Distribution Function¶
Radial and angular distribution function (RDF & ADF) generators have been
implemented in the
MultiMolecule class.
The radial distribution function, or pair correlation function, describes how
the particale density in a system varies as a function of distance from a
reference particle. The herein implemented function is designed for
constructing RDFs between all possible (user-defined) atom-pairs.
Given a trajectory,
mol, stored as a
MultiMolecule instance, the RDF
can be calculated with the following
command:
rdf = mol.init_rdf(atom_subset=None, low_mem=False).
The resulting
rdf is a Pandas dataframe, an object which is effectively a
hybrid between a dictionary and a NumPy array.
A slower, but more memory efficient, method of RDF construction can be enabled
with
low_mem=True, causing the script to only store the distance matrix
of a single molecule in memory at once. If
low_mem=False, all distance
matrices are stored in memory simultaneously, speeding up the calculation
but also introducing an additional linear scaling of memory with respect to
the number of molecules.
Note: Due to larger size of angle matrices it is recommended to use
low_mem=False when generating ADFs.
Below is an example RDF and ADF of a CdSe quantum dot pacified with formate ligands. The RDF is printed for all possible combinations of cadmium, selenium and oxygen (Cd_Cd, Cd_Se, Cd_O, Se_Se, Se_O and O_O).
>>> from FOX import MultiMolecule, example_xyz >>> mol = MultiMolecule.from_xyz(example_xyz) # Default weight: np.exp(-r) >>> rdf = mol.init_rdf(atom_subset=('Cd', 'Se', 'O')) >>> adf = mol.init_adf(r_max=8, weight=None, atom_subset=('Cd', 'Se')) >>> adf_weighted = mol.init_adf(r_max=8, atom_subset=('Cd', 'Se')) >>> rdf.plot(title='RDF') >>> adf.plot(title='ADF') >>> adf_weighted.plot(title='Distance-weighted ADF')
API¶
MultiMolecule.
init_rdf(mol_subset=None, atom_subset=None, dr=0.05, r_max=12.0, mem_level=2)[source]
Initialize the calculation of radial distribution functions (RDFs).
RDFs are calculated for all possible atom-pairs in atom_subset and returned as a dataframe.
MultiMolecule.
init_adf(mol_subset=None, atom_subset=None, r_max=8.0, weight=<function neg_exp>)[source]). | https://auto-fox.readthedocs.io/en/latest/1_rdf.html | 2020-07-02T09:08:19 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['_images/1_rdf-1_00.png', '_images/1_rdf-1_00.png'], dtype=object)
array(['_images/1_rdf-1_01.png', '_images/1_rdf-1_01.png'], dtype=object)
array(['_images/1_rdf-1_02.png', '_images/1_rdf-1_02.png'], dtype=object)] | auto-fox.readthedocs.io |
Playstation DualShock Controller
Playstation DualShock controllers can be used as input devices for TouchDesigner.
PS3 DualShock and Sixaxis controllers[edit]
What you'll need for a Playstation3 controller:
- a PS3 DualShock3 or Sixaxis controller
- a USB mini-B cable
- PS3-Windows driver
- PS3sixaxis.tox Component
The PS3-Windows driver must be selected in the Joystick CHOP's Joystick Source parameter. No motion controls supported at this time.
PS2 DualShock controllers[edit]
What you'll need for a Playstation2 controller:
- a PS2 Dualshock or Dualshock2 controller
- a PS2-USB controller adaptor
- PS2dualshock.tox Component
The PS2-USB adaptor must be selected in the Joystick CHOP's Joystick Source parameter.
TIP: If the PS2 controller is connected to the computer after TouchDesigner has started or after the PS2Dualshock.tox component was added to the network, then the joystick adaptor must be reslected from the Joystick Source menu. If you are using a different PS2-USB adaptor, it will also need to be reselected from the Joystick Source menu.
The Network Explained[edit]
The image below shows the network of these components. TouchDesigner creates data channels from the controllers through the Joystick CHOP.
The math1 CHOP simply inverts one of the analog sticks for consistency. The rename1 CHOP renames all the channels created by the Joystick CHOP into something more meaningful. You can edit the rename parameters here if you would like to change the naming of any controls (the analog stick and directional pad channels are verbose for clarity, shorter names could be used).
The math2 CHOP adjusts the range of the analog sticks and directional pad. It is re-mapped such that neutral is 0. Horizontally, left is 0 to -1, right is 0 to 1. Vertically, down is 0 to -1, up is 0 to 1.
The constant1 and replace1 CHOPs are added so that if the joystick is not present at startup, the channels still will be present in your CHOP network using default values. If you rename your channels in the rename1 CHOP, be sure to update constant1 with your new channel names.
TIP: Make sure the Analog button on the controller is on, the red LED should be on. If not, the analog sticks will not work.
An Operator Family which operate on Channels (a series of numbers) which are used for animation, audio, mathematics, simulation, logic, UI construction, and many other applications. | https://docs.derivative.ca/Playstation_DualShock_Controller | 2020-07-02T09:21:35 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.derivative.ca |
Posts related to XQuery and its Type System
The following postings are discussing aspects of XQuery and its Type System:
- Discussion of the problems of the syntax and semantics of the XQuery SequenceType in the Nov 2003 XQuery Last Call document.
- Series on the XQuery and XPath 2.0 Type System
More will follow which I will add to this post, so please check back frequently. | https://docs.microsoft.com/en-us/archive/blogs/mrys/posts-related-to-xquery-and-its-type-system | 2020-07-02T09:08:22 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
In SuiteCRM Cases are used to record interactions with Customers when they ask for help or advice, for example in a Sales or Support function. A Case can be created, updated when a User is working on it, assigned to a colleague and closed when resolved. At each stage of the Case the User can track and update the incoming and outgoing conversation thread so a clear record of what has occurred is registered in the CRM. Cases can be related to individual records such as Accounts, Contacts and Bugs.
You can access the Cases actions from the Cases module menu drop down or via the Sidebar. The Cases actions are as follows:
Create Case – A new form is opened in Edit View to allow you to create a new Account record.
View Cases – Redirects you to the List View for the Cases module. This allows you to search and list Case records.
Import Cases – Redirects you will be taken to the Import Wizard for the Cases module. For more information, see Importing Records.
To view the full list of fields available when creating a Case, See Cases Field List.
Advanced functionality for Cases can be found in the Cases with Portal section of this User Guide.
To sort records on the Cases List View, click any column title which is sortable. This will sort the column either ascending or descending.
To search for a Case, see the Search section of this user guide.
To update some or all the Cases on the List View, use the Mass Update panel as described in the Mass Updating Records section of this user guide.
To duplicate a Case, you can click the Duplicate button on the Detail View and then save the duplicate record.
To merge duplicate Cases, select the records from the Cases List View, click the Merge link in the Actions drop-down list, and progress through the merge process. For more information on Merging Duplicates, see the Merging Records section of this user guide.
To delete one or multiple Cases, you can select multiple records from the List View and click delete. You can also delete a Case from the Detail View by clicking the Delete button. For a more detailed guide on deleting records, see the Deleting Records section of this user guide.
To view the details of a Cases, click the Case Subject in the List View. This will open the record in Detail View.
To edit the Case details, click Edit icon within the List View or click the edit button on the Detail View, make the necessary changes, and click Save.
For a detailed guide on importing and exporting Cases, see the Importing Records and Exporting Records sections of this user guide.
To track all changes to audited fields, in the Case record, you can click the View Change Log button on the Case’s Detail View or Edit View.
Content is available under GNU Free Documentation License 1.3 or later unless otherwise noted. | https://docs.suitecrm.com/user/core-modules/cases/ | 2020-07-02T09:36:06 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.suitecrm.com |
Images
The Spreadsheet supports the placing of images in its sheets.
To load an image to a sheet, use any of the following approaches:
- Use the Insert Image tool available on the Spreadsheet toolbar.
- Use the initial configuration of the Spreadsheet to load and point to the widget.
- Use the
sheet.addImage()API method.
Using the Insert Image Tool
You can upload and insert a custom image in the Spreadsheet with the
Insert Image tool.
Then, in the popup window, you can select or drag in a file from the file system.
Configuring the Spreadsheet to Initially Display an Image
- To properly configure the Spreadsheet to display an image on one its sheets, add a definition for the image to the Spreadsheet
imagesfield. In the
imagesobject each image should be specified with unique key (property name) holding as value the image URL. The image URLs can be either data URLs, in which case the images are fully contained in the definition, or can be external URLs.
Reference that image and place it accordingly using the
drawingsarray of the respective sheet.
The drawing definition has to contain:
* A pointer to the cell that will hold the top-left corner of the image: `TopLeftCell`. * X and Y offset of the top-left corner: `OffsetX` and `OffsetY`. * Dimensions of the rendered image: `Width` and `Height`. * A pointer to the image key that is used in the `Images` configuration of the Spreadsheet: `Image`.
The following example demonstrates how to configure the Spreadsheet to display an image with top-left corner placed in the
J6 cell.
@(Html.Kendo().Spreadsheet() .Name("spreadsheet") .Images(new { testImage = "/images/image1.png" }) .Sheets(sheets => { sheets.Add() .Name("Sheet1") .Drawings(dr => { dr.Add() .TopLeftCell("J6") .OffsetX(30) .OffsetY(10) .Width(50) .Height(50) .Image("testImage"); }) .Columns(columns => { columns.Add().Width(115); }) .Rows(rows => { rows.Add().Height(25).Cells(cells => { cells.Add() .Value("ID") .TextAlign(SpreadsheetTextAlign.Center); }); }); }) )
Using the addImage() Method
The Spreadsheet Sheet API exposes a method that allows you to programmatically add an image to the Spreadsheet and place it on a sheet.
- Create a new
kendo.spreadsheet.Drawingobject. The configuration of the
Drawingobject is the same as the one described in the example from the previous section.
Pass the
Drawingto the
sheet.addDrawing().
When you use the export functionality of the Spreadsheet together with images, note the following:
- Images are supported only for client-side import and export. When you engage server-side import or export, no images will be loaded or exported.
- To properly export any image to PDF by using the default Spreadsheet functionality, at least one cell with data has to be present on the sheet which contains that image.
@(Html.Kendo().Spreadsheet() .Name("spreadsheet") .Sheets(sheets => { sheets.Add() .Name("Sheet1") .Columns(columns => { columns.Add().Width(115); }) .Rows(rows => { rows.Add().Height(25).Cells(cells => { cells.Add() .Value("ID") .TextAlign(SpreadsheetTextAlign.Center); }); }); }) ) <script> $(document).ready(function () { var spreadsheet = $("#spreadsheet").data("kendoSpreadsheet"); var sheet = spreadsheet.activeSheet(); var drawing = kendo.spreadsheet.Drawing.fromJSON({ topLeftCell: "J6", offsetX: 30, offsetY: 10, width: 50, height: 50, image: spreadsheet.addImage("/images/chrome.gif") }); sheet.addDrawing(drawing); }) </script> | https://docs.telerik.com/aspnet-core/html-helpers/data-management/spreadsheet/images | 2020-07-02T10:10:23 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['images/spreadsheet-insert-image-tool.png',
'Spreadsheet Insert Image tool'], dtype=object)
array(['images/spreadsheet-insert-image-pop-up.png',
'Spreadsheet Insert Image pop-up'], dtype=object)] | docs.telerik.com |
Presentations & Workshop materials¶
We occasionally run workshops to teach how to use eHive, at the EMBL-EBI and in other institutes too. If you are interested in hosting one, please contact the Ensembl Helpdesk.
March 2017, Roslin Institute¶
In 2017 we ran two workshops consecutively: at the NCHC, Hsinchu, Taiwan, and at the Roslin Institute, Edinburgh, United Kingdom. Here are the materials we used at the latter (based on feedback from the first course). The workshop was composed of four parts:
Introduction to eHive
This part gives an overview of eHive, and explains the basic concepts of workflow management.
Initialising and running eHive pipelines
This part shows how to run already-existing pipelines.
Pipeline configuration
This part is about writing new pipelines using available components.
Writing your own Runnable modules
This part is about writing new components (Runnables) to add to pipelines. | https://ensembl-hive.readthedocs.io/en/version-2.5/appendix/presentations.html | 2020-07-02T08:43:25 | CC-MAIN-2020-29 | 1593655878639.9 | [] | ensembl-hive.readthedocs.io |
General Information
Quality Assurance and Productivity
Desktop
Frameworks and Libraries
Web
Controls and Extensions
Maintenance Mode
Enterprise and Analytic Tools
End-User Documentation
EFObjectSpace Class
An Object Space which is used for data manipulation via the Entity Framework.
Namespace: DevExpress.ExpressApp.EF
Assembly: DevExpress.ExpressApp.EF.v20.1.dll
Declaration
Remarks
When an XAF application uses the Entity Framework as an ORM layer, an Object Space of the EFObjectSpace class is created. This class is a wrapper over the ObjectContext, which is a container for in-memory objects in the Entity Framework. To access the Object Context used by the current Object Space, use the EFObjectSpace.ObjectContext property. The Object Space uses its Object Context to create, manage and delete persistent objects.
The Object Context's ObjectStateManager tracks every change to every simple property of a persistent object. All the changes are automatically saved to the database by making a single call of the ObjectContext.SaveChanges method, which the Object Space performs using the BaseObjectSpace.CommitChanges method. For details, refer to the Entity Framework documentation. To be sure that changes made to a persistent object's reference properties are tracked, call the BaseObjectSpace.SetModified method after you change such properties in code.
EFObjectSpace works with collections of the DevExpress.ExpressApp.EF.EFCollection type when it is required to load all required objects to a client at once, and with the DevExpress.ExpressApp.EF.EFServerCollection collection type when loading objects in small portions on demand.
When you need to create a new Object Space, use the XafApplication.CreateObjectSpace method. It will create an Object Space of the EFObjectSpace type, if the default Object Space Provider is of the EFObjectSpaceProvider type (see the CreateDefaultObjectSpaceProvider method implementation of your WinApplication descendant in your application).
To learn more about Object Spaces, refer to the BaseObjectSpace class description. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.EF.EFObjectSpace | 2020-07-02T09:32:33 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.devexpress.com |
Product License Activation
Activating you purchased product license key is necessary for enabeling the automatic updates via WP dashboard.
To activate the license key click the Settings link under the plugin name or the link to the plugin settings page under the Plugins menu item (for example, Divi DotNav, Divi ScrollTop, etc.).
Both of these two links point to the same settings page with the license activation form.
To activate the license key first copy it from the Purchase Receipt that has been sent to your email address immediately after completing the purchase or log in to your Divicio.us account and go to Purchase History -> View Licenses.
Then click the small key icon and copy the license key.
After copying it go back to the license activation form, paste the key into the license key field, click the Save Changes button and finally click the Activate License button.
The green active word indicates that the license has been successfully activated.
| https://docs.divicio.us/article/44-product-license-activation | 2020-07-02T08:40:14 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5d4df8790428631e94f9391d/images/5d5075b90428631e94f93d10/file-sADfe7KZn5.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5d4df8790428631e94f9391d/images/5d507d812c7d3a68825e810a/file-nKon7Tdnxd.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5d4df8790428631e94f9391d/images/5d507f942c7d3a68825e8114/file-xhpTLiz0rG.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5d4df8790428631e94f9391d/images/5d5083be0428631e94f93d41/file-QiH6IlzZOG.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5d4df8790428631e94f9391d/images/5d5085ce0428631e94f93d46/file-I0ZKXQLqys.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5d4df8790428631e94f9391d/images/5d5088a60428631e94f93d55/file-zHDurbpl7f.jpg',
None], dtype=object) ] | docs.divicio.us |
Before upgrading to v6, consider the below breaking changes.
As for any dependency upgrade, it's very important to test this upgrade in your testing environments. Not doing so could result in your admin panel being unusable.
To upgrade to v6, simply run:
npm install forest-express-sequelize@latest
npm install forest-express-mongoose@latest
In case of a regression introduced in Production after the upgrade, a rollback to your previous liana is the fastest way to restore your admin panel.
The liana initialization now returns a promise. This solves an issue wherein exposed lianas were not yet initialized and thus returning 404s.
You must update the following 2 files:
middlewares/forestadmin.js (lines 6-7)// BEFOREmodule.exports = function (app) {app.use(Liana.init({// AFTERmodule.exports = async function (app) {app.use(await Liana.init({
app.js (line 56)// BEFOREresolve: Module => new Module(app),// AFTERresolve: Module => Module(app),
This version also introduces the new Select all behavior. Once you've updated your bulk Smart Actions according to the below changes, you'll be able to choose between selecting all the records or only those displayed on the current page.
/routes/companies.js//BEFORErouter.post('/actions/mark-as-live', permissionMiddlewareCreator.smartAction(), (req, res) => {let companyId = req.body.data.attributes.ids[0];return companies.update({ status: 'live' }, { where: { id: companyId }}).then(() => {res.send({ success: 'Company is now live!' });});});// AFTERimport { RecordsGetter } from "forest-express-sequelize";...router.post('/actions/mark-as-live', permissionMiddlewareCreator.smartAction(), (req, res) => {return new RecordsGetter(companies).getIdsFromRequest(req).then((companyIds) => {return companies.update({ status: 'live' }, { where: { id: companyIds }}).then(() => {res.send({ success: 'Company is now live!' });});});});
If you altered the default DELETE behavior by overriding or extending it, you'll have to do so as well with the new BULK DELETE route.
This release note covers only the major changes. To learn more, please refer to the changelogs in our different repositories: | https://docs.forestadmin.com/documentation/how-tos/maintain/upgrade-to-v6 | 2020-07-02T08:32:08 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.forestadmin.com |
This example shows you how to display a different response following a smart action based on the number of records selected:
if the action is done on one company, the response is
"Company is now live!"
if the action is done on several company, the response is
"Companies are now live!"
An admin backend running on forest-express-sequelize
This directory contains the
companies.js file where the model is declared.
/models/companies.jsmodule.exports = (sequelize, DataTypes) => {const { Sequelize } = sequelize;const Companies = sequelize.define('companies', {name: {type: DataTypes.STRING,},status: {type: DataTypes.STRING,}}, {tableName: 'companies',timestamps: false,schema: process.env.DATABASE_SCHEMA,});return Companies;};
This directory contains the
companies.js file where the Smart Action
Mark as Liveis declared.
const { collection } = require('forest-express-sequelize');collection('companies', {actions: [{name: 'Mark as Live',type: 'bulk',}],});
This directory contains the
companies.js file where the implementation of the route is handled. The
POST /forest/actions/mark-as-live API call is triggered when you click on the Smart Action in the Forest UI.
//...const { RecordsGetter } = require('forest-express-sequelize');router.post('/actions/mark-as-live', (req, res) => {return new RecordsGetter(companies).getIdsFromRequest(req).then((companyIds) => {const companiesCount = companyIds.length;return companies.update({ status: 'live' }, { where: { id: companyIds }}).then(() => {const message = companiesCount > 1 ? 'Companies are now live!' : 'Company is now live!';res.send({ success: message });});});});//... | https://docs.forestadmin.com/woodshop/how-tos/display-a-customized-response | 2020-07-02T09:25:54 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.forestadmin.com |
Serialised format information
Information about the format and serialisation options are included for decoders.
property kind
The
kind property specifies the overall format of the serialised object. The value for the current version of the serialisation format is
"haplo:object:0".
property sources
Optional Sources may be used to include additional information in the serialised format. The
sources property is an array of source names used to generate the serialised object.
When using a serialised object, always test that the
sources contain the expected names. | https://docs.haplo.org/dev/serialisation/format | 2020-07-02T08:31:05 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.haplo.org |
Microsoft Dynamics GP and Kerberos: Do You Need It?
If you have deployed the Microsoft SQL Server Reporting Services reports for Microsoft Dynamics GP 2010, you may have run across a situation where the reports would only render and return data from certain servers/workstations. If so, you are likely running into a situation where Kerberos Authentication is required.
Often times the need for Kerberos authentication will be highlighted by the follow errors in the report viewer web parts in Business Portal 5.1 where your SSRS reports are supposed to be displayed:
“The Request Failed with HTTP Status 401: Unauthorized”
An example of this error is found in the screenshot below:
If you have experienced this error or you just want to verify if you need to configure Kerberos Authentication in your environment you can review the following How To document:
Customers
Partners
You can match your environment to the examples given in this document to determine if you need to implement Kerberos Authentication or if there is a potential workaround. The article also provides an example of how you would implement Kerberos Authentication that you can follow in your own environment.
If you need in-depth assistance with Kerberos configuration you can contact SQL Server support at.
Enjoy!
Lucas M | https://docs.microsoft.com/en-us/archive/blogs/dynamicsgp/microsoft-dynamics-gp-and-kerberos-do-you-need-it | 2020-07-02T10:55:37 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.microsoft.com |
7.7
Rebellion
Rebellion is a collection of core Racket libraries that includes a stream processing system built on transducers and reducers, new kinds of collections such as multisets and multidicts, a suite of libraries for defining new struct-based types including record types and enum types, and much more. The goal of Rebellion is to make high quality standard libraries accessible to all Racketeers regardless of what #lang they’re using. | https://docs.racket-lang.org/rebellion/index.html | 2020-07-02T08:53:47 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.racket-lang.org |
SuiteCRM allows users to log in using your Username and Password, provided to you by the System Administrator.
Before logging into SuiteCRM, you can select the language you wish to use. There are many default languages for SuiteCRM and there are also additional language packs available for other languages around the world.
Once you have chosen your language and have entered your user credentials, you will be able to click Log in to access the CRM.
If you forget your CRM password and cannot access your CRM user account, you can use the 'Forgotten Password' feature to re-send your password to the email address associated to your user account. Clicking the 'Forgot Password?' link on the login form will display the forgotten password form.
In this chapter we have demonstrated how to access SuiteCRM using the login form. We have also established how to use the forgotten password functionality to retrieve a users password in the event of the password being lost or forgotten.
In the next chapter we will cover the User Wizard, which allows you to set your preferences when using SuiteCRM.
Content is available under GNU Free Documentation License 1.3 or later unless otherwise noted. | https://docs.suitecrm.com/user/introduction/getting-started/ | 2020-07-02T09:47:27 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.suitecrm.com |
Config logs capture events based on your configuration changes, such as changes to SSID settings, radio settings, or network updates.
Use this screen to view config logs. you can specify date/time, severity, select one or multiple event types, and enter the operator name to display the log messages related to it.
Click Analyze > Event Log > Config Log to access this screen. | https://docs.engenius.ai/engenius-cloud/analytics/config-logs | 2020-02-17T00:15:16 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.engenius.ai |
In the following example, cdrecord command will be used:
1. First see if you have a cd/dvd writer:
florian@florian:~$ [color=darkgreen]cdrecord -scanbus[/color] scsibus1: 1,0,0 100) 'HL-DT-ST' 'DVDRAM GT50N ' 'LT20' Removable CD-ROM 1,1,0 101) * 1,2,0 102) * 1,3,0 103) * 1,4,0 104) * 1,5,0 105) * 1,6,0 106) * 1,7,0 107) *
2. Then burn the cd/dvd with the following command:
florian@florian:~$ [color=darkgreen]cdrecord -v -dao dev=1,0,0 Desktop/AIX\ ISO/aix53000_64_os_1201_us_1.gbl.cd1_iplno.iso[/color] TOC Type: 1 = CD-ROM wodim: Operation not permitted. Warning: Cannot raise RLIMIT_MEMLOCK limits.scsidev: '1,0,0' scsibus: 1 version: 1.1.9 SCSI buffer size: 64512 Device type : Removable CD-ROM Version : 5 Response Format: 2 Capabilities : Vendor_info : 'HL-DT-ST' Identification : 'DVDRAM GT50N ' Revision : 'LT20' Device seems to be: Generic mmc2 DVD-R/DVD-RW. Current: 0x0009 (CD-R) Profile: 0x0012 (DVD-RAM)11 (DVD-R sequential recording) Profile: 0x0010 (DVD-ROM) Profile: 0x000A (CD-RW) Profile: 0x0009 (CD-R) (current) : 327680 = 320 KB Beginning DMA speed test. Set CDR_NODMATEST environment variable if device communication breaks or freezes immediately after that. FIFO size : 4194304 = 4096 KB Track 01: data 630 MB Total size: 724 MB (71:46.10) = 322958 sectors Lout start: 724 MB (71:48/08) = 322958: 36891 Speed set to 4234 KB/s Starting to write CD/DVD at speed 24: 630 of 630 MB written (fifo 100%) [buf 76%] 24.8x. Track 01: Total bytes read/written: 661417984/661417984 (322958 sectors). Writing time: 277.643s Average write speed 15.5x. Min drive buffer fill was 68% Fixating... Fixating time: 3.946s wodim: fifo had 10418 puts and 10418 gets. wodim: fifo was 0 times empty and 9716 times full, min fill was 85%. | http://docs.gz.ro/node/160 | 2020-02-17T00:56:04 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['http://docs.gz.ro/sites/default/files/styles/thumbnail/public/pictures/picture-1-1324065756.jpg?itok=rS4jtWxd',
"root's picture root's picture"], dtype=object) ] | docs.gz.ro |
Heads up: Part 3 of our Webcast series taking place this morning!
I know this is short notice, but my only excuse is that this has been a long weekend. Just wanted to give everybody a heads up that Part 3 of our Webcast is coming up this morning at 10 AM PST. So if you haven't already done so, please register for it at:.
Today, Maarten Struys and I will teach how to get even more out of your Windows Mobile Device. Maarten will show you how to use Pocket Outlook data inside your own application and how to make phone calls from within a managed application. I will give you guys a demo on how to implement Location-based services on your device. So stay tuned. Hope to see you soon! | https://docs.microsoft.com/en-us/archive/blogs/croman/heads-up-part-3-of-our-webcast-series-taking-place-this-morning | 2020-02-17T02:28:18 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
Installing Cluster Continuous Replication.
Note
We recommend that you complete each phase before you start the next phase. After you complete all phases, we recommend that you verify the CCR solution before putting it into production..
Network Formation and Configuration.
Network Best Practices for Clustered Mailbox Servers a Microsoft Exchange Server 2007 CCR solution, configure the public and private networks by following the steps that are described in How to Configure Network Connections for Cluster Continuous Replication.
Forming the Failover 2003 Failover Cluster for Cluster Continuous Replication. This procedure includes graphical user interface and command-line interface instructions for forming the failover cluster, adding the second node to the failover cluster, and configuring the cluster to use a Majority Node Set (MNS) quorum.
Note
CCR on Windows Server 2003 requires a quorum model called the MNS quorum with file share witness. This quorum model is available in Windows Server 2003 Service Pack 2 (SP2), which is required for Exchange 2007 Service Pack 1 (SP1). To use the MNS quorum with file share witness with the release to manufacturing (RTM) version of Exchange 2007 and Windows Server 2003 SP1, you must install a hotfix on each node prior to deploying CCR. The hotfix is described in Knowledge Base article 921181, An update is available that adds a file share witness feature and a configurable cluster heartbeats feature to Microsoft Windows Server 2003 Service Pack 1-based server clusters. For detailed steps about how to install the hotfix, see How to Install the Majority Node Set File Share Witness Feature.
Post-Installation Configuration of the Failover Cluster.
Configuring the Cluster Networks.
Configuring Tolerance Settings for Missed Cluster Heartbeats both nodes to account for ten missed heartbeats. This setting level corresponds to approximately 12 seconds.
For detailed steps about how to configure Cluster service tolerance for missed heartbeats, see How to Configure Tolerance Settings for Missed Cluster Heartbeats.
Configuring the File Share Witness.
Clustered Mailbox Server Installation and Configuration.
Note
If you are installing the active node on a computer running Windows Server 2003 that is not located in the same Active Directory site as the domain controller assigned the primary domain controller (PDC) role, you must first create a computer account with the intended name for the CMS. The computer account must be enabled, and the computer object must be available in the local Active Directory site. If a computer account for the CMS does not exist and the PDC is not in the local Active Directory site, Setup will not continue..
Post-Setup Tasks.
Tuning Failover Control Settings. When configuring the system for Good Availability or Best Availability, do not use spaces. For example, use GoodAvailability and BestAvailability..
Important
When the value for ForcedDatabaseMountAfter is reached, the database will be mounted regardless of whether the storage group copy is 1 log behind, 10 logs behind, or 1,000 logs behind, which could result in significant data loss. For this reason, this parameter should not be used if service level agreements (SLAs) guarantee a maximum on the amount of data loss that can be incurred.
For more information about tuning failover, see How to Tune Failover and Mount Settings for Cluster Continuous Replication.
Tuning the Transport Dumpster CMSs in a CCR environment and all LCR-enabled storage groups in the Active Directory site containing the Hub Transport server.
For detailed steps about how to enable and configure the transport dumpster, see How to Configure the Transport Dumpster.
Verifying the CCR Solution Get-StorageGroupCopyStatus and Get-ClusteredMailboxServerStatus cmdlets:.
Enabling Multiple Networks for Continuous Replication Activity
In the RTM version of Exchange 2007, all log file copying and seeding occurs over the public network. In Exchange 2007 SP1, any redundant the Enable-ContinuousReplicationHostName cmdlet. 2003.
Note
In addition to the host name, IP address, and cluster group that is created on the failover cluster, each time you run the Enable-ContinuousReplicationHostName cmdlet, you are also creating a computer account in the Active Directory domain that contains the CMS. By default, in Windows Server 2003, the maximum number of computer accounts that can be added by a user who has not been delegated domain administrator privileges and has not been granted the Create Computer Objects and Delete Computer Objects access control entries (ACEs) is 10. An Exchange administrator who frequently runs the Enable-ContinuousReplicationHostName and Disable-ContinuousReplicationHostName cmdlets and does not have domain administrator privileges or the aforementioned ACEs could reach the 10 account limit quickly. There are available workarounds for this issue, which are documented in Knowledge Base article 307532, How to troubleshoot the Cluster service account when it modifies computer objects. Additional information can be found in Knowledge Base article 251335, Domain Users Cannot Join Workstation or Server to a Domain.. | https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007/aa997144%28v%3Dexchg.80%29 | 2020-02-17T00:37:23 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
Difference between revisions of "Getting Involved"
From Omni
Latest revision as of 19:16, 13 January 2016
Intro
The core of Omni is the community, and we recognize that everyone has a role to play.[1]
So we have a number of ways for you to get involved:
Gerrit – view our project at the code level with the ability to see what’s merged, what’s open and what’s in review.
Forum – participate in general discussion, Q&A, and Features Development on XDA. Also find device-specific builds in their relevant forums on XDA.
IRC – get involved with Omni in realtime on IRC Freenode:
- General discussion in #omnirom
- Developer discussion in #omni
Jira – issue, feature and project tracker. | https://docs.omnirom.org/index.php?title=Getting_Involved&diff=prev&oldid=4135&printable=yes | 2020-02-17T01:23:33 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.omnirom.org |
Real Geeks integrates with Zillow using their Tech Connect program so leads generated from Zillow or Trulia can be sent automatically to the Lead Manager.
To start receiving leads from Zillow you will need to copy a code generated by Real Geeks and paste that code in Zillow
Follow the steps below:
There are two options when it comes to how leads will be assigned:
Using this option leads will come from Zillow unassigned and will use the configured Round Robin in Real Geeks.
With the Real Geeks code in hang from Step 1 you will add a new Partner in Zillow.
First visit:
Zillow is sending Trulia leads using the Zillow platform called Tech Connect Program, which is what we use in this integration.
By setting up this integration you will receive both Zillow and Trulia leads. | https://docs.realgeeks.com/zillow_trulia | 2020-02-17T00:49:46 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.realgeeks.com |
: " << weight(shape); std::cout << ", span: " <<. | http://docs.seqan.de/seqan/2.0.2/specialization_GenericCyclicShape.html | 2020-02-17T01:45:17 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.seqan.de |
Overview
If you are a Salesforce Admin, you can add a link to your Salesforce Account, Opportunity, and Task page that maps directly to the record in Chorus. This will help your team easily access the call recording right from their Salesforce view.
Going forward Chorus will automatically add a link to the Salesforce Page after your meeting.
Note: At least 1 Chorus user needs Salesforce API access (default for Salesforce Enterprise).
Steps to set-up
1. In Salesforce: Go to Setup -> Build -> Account / Opportunity / Task -> Buttons, Links, and Actions
2. Create a Button or Link
3. Configure the Button as follows:
Field Label: Chorus
Field Name: ChorusLink
Display Type: Detail Page Link
Button or Link URL:
- For Account Link:{!Account.Id}
- For Opportunity Link:{!Opportunity.Id}
- For Task Link:{!Task.Id}
4. Update the Page Layout to activate the Link
DONE!
Please sign in to leave a comment. | https://docs.chorus.ai/hc/en-us/articles/115009169308-Show-a-link-to-Chorus-on-your-Salesforce-Account-Opportunity-Task-record | 2020-02-17T00:11:54 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['/hc/article_attachments/115017052968/SL_1.png', 'SL_1.png'],
dtype=object)
array(['/hc/article_attachments/115017053128/SL_2.png', 'SL_2.png'],
dtype=object)
array(['/hc/article_attachments/115001904614/mceclip0.png', None],
dtype=object) ] | docs.chorus.ai |
The definitions in this section are specifically in reference to LORIOT Network Server terminology.
Included is the most frequent and useful terms:
General Terminology
"Network Server" is all the components (network server, web server, database server etc.) required to run a LoRaWAN network.
"Gateway" is an antennas that receive broadcasts from "Devices", is connected to the LORIOT Network Server via the Internet and also send data back to "Devices".
"Device" is an object with computing power (sensors/actuators etc.) that can connect to a network and has the ability to transmit data.
"Uplink" a message from a "Device" to an "Application".
"Downlink" a message from an "Application" to a "Device"
"Network" is a collection of LoRaWAN gateways, and provide an efficient environment to monitor and manage gateways.
"Application" is a LORIOT software to register, manage and organize devices, plus configure the output destination for the device data
"Application Output" is used to define where a LORIOT "Application" should route your data, via protocols such as HTTPS/MQTT or full integrations with external IoT platforms like Azure IoT, Google IoT etc.
"Organization" is an isolated environment which contains "Users", "Admins" and their "Resources", no account contained in one organisation can access/read/write a "Resource" in a different Organization.
"Resource" is a "Gateway" or "Device"
Basic LoRaWAN Terminology
"Frequency Bands" is an interval in the frequency domain, delimited by a lower frequency and an upper frequency. The term refers to LoRaWAN frequency bands and is typically set by "Regional Parameters".
"Regional Parameters" LoRaWAN has official regional specifications, called "Regional Parameters", these can be found in the LoRaAlliance Resource Hub - Specification section.
"Duty Cycle" regional frequency regulation that imposes specific duty-cycles (active signal time) on devices for each sub-band.
"LoRa Modulation" a way of manipulating a radio wave to encode information using a chirped, multi-symbol format. LoRa is a spread spectrum modulation technique.
"Data Rate" LoRaWAN uses a different configuration of frequencies, spreading factors and bandwidths to determine the "Data Rate", more details can be found in the Specification section
Account Profiles
"Server Operator" is the top account on a private network server with all available functionality on the Network Server.
"Organization Admin" is an account within an Organization which enables access to the Organisation Admin user interface.
"User" is a standard account with access to the user interface.
"Role" defines the capability of a user to make changes within their account and read/edit any "Resources".
Community Server Account Tiers
"Community Account" is a FREE Community Public Server account with connectivity included for 1 "Gateway" and 10 "Devices".
"Professional Account" is a commercial account with advanced features enabled such as REST API and SSH tunnel access. | https://docs.loriot.io/display/LNS/Definitions | 2020-02-17T02:02:56 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.loriot.io |
Policies for allowing guest and external B2B access
This article describes how to adjust the recommended common identity and device access policies to allow B2B account access (guest and external users). This guidance builds on the Common identity and device access policies.
These recommendations are designed to apply to the baseline tier of protection. However, you can adjust the recommendations based on the granularity of your needs for sensitive and highly regulated protection.
Providing a path for B2B users to authenticate with your Azure AD tenant doesn't give these users access to your entire environment. B2B users only have access to resources that are shared with them (such as files) within the services granted in the conditional access policies.
Updating the common policies to allow and protect guest and external access
The following diagram illustrates the common identity and device access policies and indicates (with a pencil icon) which policies to add or update to protect guest and external access.
The following table lists the policies you either need to update or create new. The common policies link to the associated configuration instructions in the Common identity and device access policies article.
To include or exclude guests and external users in conditional access rules, click the include or exclude tab and check All guests and external users.
More information
Guests vs. external users
In Azure AD, guest and external users are the same. The user type for both of these is Guest. Guest users are B2B users.
Microsoft Teams differentiates between guest users and external users within the app, but these are both B2B users when authenticating. For more information about Teams guest and external users, see Enabling guest and external access for Teams.
Require MFA always for guest and external users
This rule prompts guests to register for MFA in your tenant, regardless of whether they're registered for MFA in their home tenant. When accessing resources in your tenant, guests and external users are required to use MFA for every request.
Excluding guest and external users from risk-based MFA
While organizations can enforce risk-based policies for B2B users using Identity Protection, there are limitations in the implementation of Identity Protection for B2B collaboration users in a resource directory due to their identity existing in their home directory. Due to these limitations, Microsoft recommends you exclude guest users from risk-based MFA policies and require these users to always use MFA.
For more information, see Limitations of Identity Protection for B2B collaboration users.
Excluding guest and external users from device management
Only one organization can manage a device. If you don't exclude guest and external users from policies that require device compliance, these policies will block these users.
Next steps
Learn how to enable Teams conditional access
Feedback | https://docs.microsoft.com/en-us/microsoft-365/enterprise/identity-access-policies-guest-access | 2020-02-17T00:44:30 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['../media/identity-access-ruleset-guest.png',
'Summary of policy updates for protecting guest access'],
dtype=object)
array(['../media/identity-access-exclude-guests-ui.png',
'screen capture of controls for excluding guests'], dtype=object)] | docs.microsoft.com |
FoundriesFactory Documentation¶
The Linux microPlatform is a minimal, secure, continuously updated software platform. It’s built and maintained using the FoundriesFactory product. Foundries.io maintains a community focused Factory that users can try out for free on our supported devices, as well as private, customer-owned Factories.
- The Community Factory
- Your FoundriesFactory
- Advanced Topics
-
- foundries.io | https://docs.foundries.io/latest/ | 2020-02-17T02:12:52 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.foundries.io |
Hibernate.orgCommunity Documentation.
It is recommended to stick either to field or property annotation.
The Bean Validation specification does not enforce that groups have to be interfaces. Non interface classes could be used as well, but we recommend to stick to interfaces. constraints class, place a
@GroupSequence annotation on the class. The
defined groups in the annotation express the sequence of groups that
substitute
Default for this class. Example 2.19, “RentalCar” introduces a new class RentalCar with a
redfined default group. With this definition the check for all three
groups can be rewritten as seen in Example 2.20, “testOrderedChecksWithRedefinedDefault”.
Example 2.19. RentalCar
@GroupSequence({ RentalCar.class, CarChecks.class })
public class RentalCar extends Car {
public RentalCar(String manufacturer, String licencePlate, int seatCount) {
super( manufacturer, licencePlate, seatCount );
}
}, DriverChecks.class ).size() );
}
Due to the fact that there cannot be a cyclic dependency in the
group and group sequence definitions one cannot just add
Default to the sequence redefining
Default for a class. Instead the class itself
should be added!
Hibernate Validator implements all of the default constraints specified in Bean Validation as well as some custom ones. Table 2.2, “Built-in constraints” list all constraints available in Hibernate Validator.
On top of the parameters indicated in Table 2.2, “Built-in constraints” each constraint supports the
parameters
groups
and
payload. This is a requirement of the Bean
Validation specification.
In some cases these built-in constraints will not fulfill your requirements. In this case you can literally in a minute write your own constraints. We will discuss this in Chapter 3, Creating custom constraints | http://docs.jboss.org/hibernate/validator/4.1/reference/en-US/html/validator-usingvalidator.html | 2018-01-16T13:41:10 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.jboss.org |
Suppresses all output that is produced by the CFML within the tag's scope.
<cfsilent
[bufferoutput=boolean]
><!--- body ---></cfsilent>
This tag must have a body.
This tag is also supported within cfscript
if set to true (default) the output written to the body of the tag is buffered and in case of a exception also outputted. if set to false the content to body is ignored and not disabled when a failure in the body of the tag occur.
There are currently no examples for this tag. | http://docs.lucee.org/reference/tags/silent.html | 2018-01-16T13:27:29 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.lucee.org |
PyPICloud - PyPI backed by S3¶
This is an implementation of the PyPI server for hosting your own python packages. It stores the packages in S3 and dynamically generates links to them for pip.
After generating the S3 urls, pypicloud caches them in a database. Subsequent requests to download packages will use the already-generated urls in the db. Pypicloud supports using SQLAlchemy, Redis, or DynamoDB as the cache.
Pypicloud was designed to be fast and easy to replace in the case of server failure. Simply copy your config.ini file to a new server and run pypicloud there. The only data that needs to be persisted is in S3, which handles the redundancy requirements for you.
Code lives here:
User Guide¶
- Getting Started
- Configuration Options
- Storage Backends
- Caching Backends
- Access Control
- Deploying
- Upgrading
- Migrating Packages
- Extending PyPICloud
- HTTP API
- Developing
- Changelog | http://pypicloud.readthedocs.io/en/0.3.6/ | 2018-01-16T13:17:21 | CC-MAIN-2018-05 | 1516084886436.25 | [] | pypicloud.readthedocs.io |
Oracle Locator
Amazon RDS supports Oracle Locator
through the use of the
LOCATOR option.
Oracle Locator provides
capabilities that are typically required to support internet
and wireless service-based applications and partner-based GIS solutions.
Oracle Locator is a limited subset of Oracle Spatial.
For more information, see
Oracle Locator
in the Oracle documentation.
Important
If you use Oracle Locator, Amazon RDS automatically updates your DB instance to the latest Oracle PSU if there are security vulnerabilities with a Common Vulnerability Scoring System (CVSS) score of 9+ or other announced security vulnerabilities.
Amazon RDS supports Oracle Locator for the following editions and versions of Oracle:
Oracle Standard Edition (SE2) or Enterprise Edition, version 12.1.0.2.v6 or later
Oracle Standard Edition (SE, SE1) or Enterprise Edition, version 11.2.0.4.v10 or later
Prerequisites for Oracle Locator
The following are prerequisites for using Oracle Locator:
Your DB instance must be inside a virtual private cloud (VPC). For more information, see Determining Whether You Are Using the EC2-VPC or EC2-Classic Platform.
Your DB instance must be of sufficient class. Oracle Locator is not supported for the db.m1.small, db.t2.micro, or db.t2.small DB instance classes. For more information, see DB Instance Class Support for Oracle.
Your DB instance must have Auto Minor Version Upgrade enabled. Amazon RDS updates your DB instance to the latest Oracle PSU if there are security vulnerabilities with a CVSS score of 9+ or other announced security vulnerabilities. For more information, see Settings for Oracle DB Instances.
If your DB instance is version 11.2.0.4.v10 or later, you must install the
XMLDBoption. For more information, see Oracle XML DB.
Best Practices for Oracle Locator
The following are best practices for using Oracle Locator:
For maximum security, use the
LOCATOR Locator Option
The following is the general process for adding the
LOCATOR option to a DB instance:
Create a new option group, or copy or modify an existing option group.
Add the option to the option group.
Associate the option group with the DB instance.
There is a brief outage while the
LOCATOR option is added.
After you add the option, you don't need to restart your DB instance.
As soon as the option group is active, Oracle Locator is available.
To add the
LOCATOR for your DB instance.
For Major Engine Version, choose 11.2 or 12.1 for your DB instance.
For more information, see Creating an Option Group.
Add the LOCATOR.
Using Oracle Locator
After you enable the Oracle Locator option, you can begin using it. You should only use Oracle Locator features. Don't use any Oracle Spatial features unless you have a license for Oracle Spatial.
For a list of features that are supported for Oracle Locator, see Features Included with Locator in the Oracle documentation.
For a list of features that are not supported for Oracle Locator, see Features Not Included with Locator in the Oracle documentation.
Removing the Oracle Locator Option
You can remove the
LOCATOR option from a DB instance.
There is a brief outage while the option is removed.
After you remove the
LOCATOR option, you don't need to restart your DB instance.
Warning
Removing the
LOCATOR option can result in data loss if the DB instance is using data types that
were enabled as part of the option. Back up your data before proceeding. For more
information, see
Backing Up and Restoring Amazon RDS DB Instances.
To remove the
LOCATOR option from a DB instance, do one of the following:
Remove the
LOCATORoption from the option group it belongs to. This change affects all DB instances that use the option group. For more information, see Removing an Option from an Option Group.
Modify the DB instance and specify a different option group that doesn't include the
LOCATORoption. This change affects a single DB instance. You can specify the default (empty) option group or a different custom option group. For more information, see Modifying a DB Instance Running the Oracle Database Engine. | https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Options.Locator.html | 2018-01-16T13:57:17 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.aws.amazon.com |
Accept Publication Request¶
Navigate to the catalog record you would like to review and accept.
Review the contents and history of the catalog record to verify that it should be accepted for publication.
Click the catalog record’s Publish and Archive link.
Click the Approve Publication button.
The catalog record will be finalized and published.
During catalog record finalization, the following actions occur:
- Preservation formats are created for proprietary data files.
- Request persistent identifiers from the configured Handle service.
- Checksums are created for each file.
- DDI metadata is created for all files and the catalog record itself.
- An archive package is created and placed in the configured archive ingest location.
- The catalog record is marked as published. | https://docs.colectica.com/curation/org-admin/accept-publication/ | 2018-01-16T12:58:06 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.colectica.com |
Create a new choice record This record allows users to select the language as a valid option in a User record and the language picker. About this task You must create a choice record for a new translation in the Choices [sys_choice] table. Procedure Navigate to System Localization > Choices. Click New. Enter the following fields. Table: Enter sys_user. Element: Enter preferred_language. Language: Enter the two-character ISO 639.2 code for the language this choice record is a member of. For example, tr. The default is en. Label: Enter the name of the language selection as you want it to appear in the language picker. For example, Turkish. Value: Enter the two-character ISO 639.2 code for the new language selection. For example, tr. The instance uses this value to set the display language. Sequence: Enter a number to determine what order the option appears in the choice list if you do not want to list choices alphabetically. For example, 5. Click Submit. | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/localization/task/t_CreateANewChoiceRecord.html | 2018-01-16T13:29:24 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.servicenow.com |
requestAnimationFrame(fn)is not the same as
setTimeout(fn, 0)- the former will fire after all the frame has flushed, whereas the latter will fire as quickly as possible (over 1000x per second on a iPhone 5S).
setImmediateis executed at the end of the current JavaScript execution block, right before sending the batched response back to native. Note that if you call
setImmediatewithin a
setImmediatecallback, it will be executed right away, it won't yield back to native in between.
Promiseimplementation uses
setImmediateas its asynchronicity implementation.
InteractionManagerto make sure long-running work is scheduled to start after any interactions/animations have completed.
Interaction | https://docs.expo.io/versions/v36.0.0/react-native/timers/ | 2020-09-18T13:45:25 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.expo.io |
sf.decompositions.bloch_messiah¶
bloch_messiah(S, tol=1e-10, rounding=9)[source]¶
Bloch-Messiah decomposition of a symplectic matrix.
See Bloch-Messiah (or Euler) decomposition.
Decomposes a symplectic matrix into two symplectic unitaries and squeezing transformation. It automatically sorts the squeezers so that they respect the canonical symplectic form.
Note that it is assumed that the symplectic form is\[\begin{split}\Omega = \begin{bmatrix}0&I\\-I&0\end{bmatrix}\end{split}\]
where \(I\) is the identity matrix and \(0\) is the zero matrix.
As in the Takagi decomposition, the singular values of N are considered equal if they are equal after np.round(values, rounding).
If S is a passive transformation, then return the S as the first passive transformation, and set the the squeezing and second unitary matrices to identity. This choice is not unique.
For more info see:
- Parameters
S (array[float]) – symplectic matrix
tol (float) – the tolerance used when checking if the matrix is symplectic: \(|S^T\Omega S-\Omega| \leq tol\)
rounding (int) – the number of decimal places to use when rounding the singular values
- Returns
- Returns the tuple
(ut1, st1, vt1).
ut1and
vt1are symplectic orthogonal,
and
st1is diagonal and of the form \(= \text{diag}(s1,\dots,s_n, 1/s_1,\dots,1/s_n)\) such that \(S = ut1 st1 v1\)
- Return type
tuple[array]
Downloads | https://strawberryfields.readthedocs.io/en/latest/code/api/strawberryfields.decompositions.bloch_messiah.html | 2020-09-18T14:03:12 | CC-MAIN-2020-40 | 1600400187899.11 | [] | strawberryfields.readthedocs.io |
Left Function Function: Returns len leftmost characters of a string sourcestring. Syntax: left(byref sourcestr as string, len as byte) as string See Also: Right, Mid Part Description sourcestr String from which to take len leftmost characters. len Number of characters to take. Details Examples ** Tibbo Basic ** ss = left("ABCDE",3) ' result will be 'ABC | https://docs.tibbo.com/taiko/syscall_left | 2020-09-18T14:13:08 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.tibbo.com |
public static class CreateImageRequest.Builder extends Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
public CreateImageRequest.Builder invocationCallback(com.oracle.bmc.util.internal.Consumer<javax.ws.rs.client.Invocation.Builder> invocationCallback)
Set the invocation callback for the request to be built.
invocationCallback- the invocation callback to be set for the request
public CreateImageRequest.Builder retryConfiguration(RetryConfiguration retryConfiguration)
Set the retry configuration for the request to be built.
retryConfiguration- the retry configuration to be used for the request
public CreateImageRequest.Builder copy(CreateImageRequest o)
Copy method to populate the builder with values from the given instance.
public CreateImageRequest build()
Build the instance of CreateImageRequest as configured by this builder
Note that this method takes calls to
invocationCallback(com.oracle.bmc.util.internal.Consumer) into account, while the method
buildWithoutInvocationCallback() does not.
This is the preferred method to build an instance.
public CreateImageRequest.Builder createImageDetails(CreateImageDetails createImageDetails)
public CreateImageRequest.Builder opcRetryToken(String opcRetryToken)
public CreateImageRequest buildWithoutInvocationCallback()
public String toString()
toStringin class
Object | https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.12.3/com/oracle/bmc/core/requests/CreateImageRequest.Builder.html | 2020-09-18T15:22:48 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.cloud.oracle.com |
Integrating CodeScan into your GitLab pipeline is easy with our sfdx plugin! There are only a few lines to add to your .YML file to run codescan when a build is triggered.
The following is based on a docker pipeline with Java and Node installed in the container.
First, we'll need to add your CodeScan token as a variable we can access in our .YML file.
- Open your project and navigate to Settings>CI/CD then expend the Variables section.
- Add your token with the name CODESCAN_TOKEN and check the masked variable box . To learn how to generate a token, see our Generating a Security Token article.
Now you'll be able to access this variable by using $CODESCAN_TOKEN in your .YML file.
Add the following into your .YML file:
image: joeferner/node-java stages: - codescan scan: stage: codescan script: - mkdir /tmp/sfdx - wget -q -O /tmp/sfdx/sfdx-linux-amd64.tar.xz - tar xJf /tmp/sfdx/sfdx-linux-amd64.tar.xz -C /tmp/sfdx/ --strip-components 1 - /tmp/sfdx/install - echo y|sfdx plugins:install sfdx-codescan-plugin - sfdx codescan:run --token=$CODESCAN_TOKEN --projectkey=your_project_key --organization=your_organization_key -Dsonar.branch.name=$CI_COMMIT_REF_NAME -Dsonar.branch.target=$CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME
You will need to replace the placeholder variables (in bold) in the env section of the script with your Project Key and Organization Key.
The branches names and types can be set by the following parameters:
- sonar.branch.type: this is SHORT or LONG as described in the Branching Article
- sonar.branch.target: the comparison branch for SHORT type branches.
- sonar.branch.name: the name of the branch.
By default, the CodeScan SFDX plugin will fail if the Quality Gate fails. If you would prefer that the build passes despite the quality gate, use the --nofail tag when calling sfdx codescan:run.
You can find a complete list of flags and examples on our npm plugin page. | https://docs.codescan.io/hc/en-us/articles/360043510632-Integrating-CodeScan-in-GitLab | 2020-09-18T14:22:18 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.codescan.io |
expo-secure-storeprovides a way to encrypt and securely store key–value pairs locally on the device. Each Expo project has a separate storage system and has no access to the storage of other Expo projects. Please note that for iOS standalone apps, data stored with
expo-secure-storecan persist across app installs.
kSecClassGenericPassword. iOS has the additional option of being able to set the value's
kSecAttrAccessibleattribute, which controls when the value is available to be fetched.
SharedPreferences, encrypted with Android's Keystore system.
expo install expo-secure-store
If you're installing this in a bare React Native app, you should also follow these additional installation instructions.
import * as SecureStore from 'expo-secure-store';
.,
-, and
_.
kSecAttrService
Alias
keychainServiceoption, it will be required to later fetch the value.
kSecAttrAccessibleproperty. See Apple's documentation on keychain item accessibility. The available options are:
SecureStore.WHEN_UNLOCKED(default): The data in the keychain item can be accessed only while the device is unlocked by the user.
SecureStore.AFTER_FIRST_UNLOCK: The data in the keychain item cannot be accessed after a restart until the device has been unlocked once by the user. This may be useful if you need to access the item when the phone is locked.
SecureStore.ALWAYS: The data in the keychain item can always be accessed regardless of whether the device is locked. This is the least secure option.
SecureStore.WHEN_UNLOCKED_THIS_DEVICE_ONLY: Similar to
WHEN_UNLOCKED, except the entry is not migrated to a new device when restoring from a backup.
SecureStore.WHEN_PASSCODE_SET_THIS_DEVICE_ONLY: Similar to
WHEN_UNLOCKED_THIS_DEVICE_ONLY, except the user must have set a passcode in order to store an entry. If the user removes their passcode, the entry will be deleted.
SecureStore.AFTER_FIRST_UNLOCK_THIS_DEVICE_ONLY: Similar to
AFTER_FIRST_UNLOCK, except the entry is not migrated to a new device when restoring from a backup.
SecureStore.ALWAYS_THIS_DEVICE_ONLY: Similar to
ALWAYS, except the entry is not migrated to a new device when restoring from a backup.
kSecAttrService. Android: Equivalent of the public/private key pair
Alias.
keychainServiceoption, it will be required to later fetch the value.
kSecAttrService. Android: Equivalent of the public/private key pair
Alias. If the item is set with a keychainService, it will be required to later fetch the value. | https://docs.expo.io/versions/v36.0.0/sdk/securestore/ | 2020-09-18T13:14:44 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.expo.io |
mars.tensor.logical_or¶
mars.tensor.
logical_or(x1, x2, out=None, where=None, **kwargs)[source]¶
Compute the truth value of x1 OR x2 element-wise.
- x1, x2 : array_like
- Logical OR is applied to the elements of x1 and x2. They have to OR operation on elements of x1 and x2.
logical_and, logical_not, logical_xor bitwise_or
>>> import mars.tensor as mt
>>> mt.logical_or(True, False).execute() True >>> mt.logical_or([True, False], [False, False]).execute() array([ True, False])
>>> x = mt.arange(5) >>> mt.logical_or(x < 1, x > 3).execute() array([ True, False, False, False, True]) | https://docs.pymars.org/en/v0.1.0/tensor/generated/mars.tensor.logical_or.html | 2020-09-18T12:57:17 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.pymars.org |
sf.apps.qchem.dynamics¶
Functions used for simulating vibrational quantum dynamics of molecules.
Photonic quantum devices can be programmed with molecular data in order to simulate the quantum dynamics of spatially-localized vibrations in molecules [9]. To that aim, the quantum device has to be programmed to implement the transformation:
where \(\hat{H} = \sum_i \hbar \omega_i a_i^\dagger a_i\) is the Hamiltonian corresponding to the harmonic normal modes, \(\omega_i\) is the vibrational frequency of the \(i\)-th normal mode, \(t\) is time, and \(U_l\) is a unitary matrix that relates the normal modes to a set of new modes that are localized on specific bonds or groups in a molecule. The matrix \(U_l\) can be obtained by maximizing the sum of the squares of the atomic contributions to the modes [10]. Having \(U_l\) and \(\omega\) for a given molecule, and assuming that it is possible to prepare the initial states of the mode, one can simulate the dynamics of vibrational excitations in the localized basis at any given time \(t\). This process has three main parts:
Preparation of an initial vibrational state.
Application of the dynamics transformation \(U(t)\).
Generating samples and computing the probability of observing desired states.
It is noted that the initial states can be prepared in different ways. For instance, they can be Fock states or Gaussian states such as coherent states or two-mode squeezed vacuum states.
Algorithm¶
The algorithm for simulating the vibrational quantum dynamics in the localized basis with a photonic device has the following form:
Each optical mode is assigned to a vibrational local mode and a specific initial excitation is created using one of the state preparation methods discussed. A list of state preparations methods available in Strawberry Fields is provided here.
An interferometer is configured according to the unitary \(U_l^\dagger\) and the initial state is propagated through the interferometer.
For each mode, a rotation gate is designed as \(R(\theta) = \exp(i\theta \hat{a}^{\dagger}\hat{a})\) where \(\theta = -\omega t\).
A second interferometer is configured according to the unitary \(U_l\) and the new state is propagated through the interferometer.
The number of photons in each output mode is measured.
Samples are generated and the probability of obtaining a specific excitation in a given mode (or modes) is computed for time \(t\).
This module contains functions for implementing this algorithm.
The function
evolution()returns a custom
sfoperation that contains the required unitary and rotation operations explained in steps 2-4 of the algorithm.
The function
sample_fock()generates samples for simulating vibrational quantum dynamics in molecules with a Fock input state.
The function
sample_coherent()generates samples for simulating vibrational quantum dynamics in molecules with a coherent input state.
The function
sample_tmsv()generates samples for simulating vibrational quantum dynamics in molecules with a two-mode squeezed vacuum input state.
The function
prob()estimates the probability of observing a desired excitation in the generated samples.
The function
marginals()generates single-mode marginal distributions from the displacement vector and covariance matrix of a Gaussian state.
Functions
Contents
Downloads | https://strawberryfields.readthedocs.io/en/latest/code/api/api/strawberryfields.apps.qchem.dynamics.html | 2020-09-18T13:31:48 | CC-MAIN-2020-40 | 1600400187899.11 | [] | strawberryfields.readthedocs.io |
Log into the web portal with your log in credentials, you will need to either follow the link or access the website from the URL below:
After you have entered your details, you will be taken into the portal.
Creating a Prospect Site
Step 1: Select 'Sites' From the Toolbar
To create a new website, select 'Sites' from the toolbar within the Portal.
Should this be the first website you are going to create with Prospect, you will be greeted with a welcome message and the option to 'Get Started'
To then create a new site, simply click on 'Get Started'
If you already have an existing site with Prospect, you will have the option to select a site from the library or alternatively, create a new site.
Step 2: Select a Template
You will now have the option to select a template from the gallery. There are various categories to select a base template that matches your preferences.
Step 3: Customising your Template
To customise the template further, you can adjust the colour scheme to suit your branding. You also have the ability to select template colours from an image. Drag and drop an image of your company logo and Prospect will determine the primary and a complimentary secondary colour to style your template.
Step 4: Adjusting Further Fields
Your site is almost complete, there are just a few more fields to adjust.
Colour --> Your chosen colour scheme, you have the ability to modify here
Company --> The name of your company
Site Description --> What is the site being used for?
Staging Domain --> Hosted onprospectsoft.com,a link to preview and share before the site is published live to the web.
Show Product Images --> Do you want to display product images? (These are images already stored in your accounts system, you can import and modify all product images in the Product manager) | http://docs.prospect365.com/en/articles/1705943-creating-your-site | 2020-09-18T12:50:07 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.prospect365.com |
View Sightings Search Details Review the aggregate details of all sighting searches. Before you beginRole required: sn_si.analyst About this task. Procedure Navigate to a security incident. Select the Sightings Search Details tab from Show IoC Related List group to view the list of sightings searches. Note: This data can be shared with Trusted Security Circles. Table 1. Sightings Search Details Detail Description Observable List of all observables searched for by query. Observable type Internal sightings Count of internal sightings for all searches. External sightings Count of external sightings for all searches. (Received from threat sharing.) Sighting search Sightings Search identifier. Updated Date and time of last modification. | https://docs.servicenow.com/bundle/jakarta-security-management/page/product/security-operations-common/task/view-sightings-search-details.html | 2018-01-16T13:44:13 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.servicenow.com |
Notice: if you want to add more than one editor in one element, please use field “editor” instead.
The Textarea-HTML field creates a full WordPress tinyMCE content Editor. It is useful to create extra content areas.
Map Usage:
How it works?:
Example:
Register a new shortcode with filed type: textarea_html | http://docs.kingcomposer.com/available-param-types/textarea-html-field/ | 2018-01-16T13:33:23 | CC-MAIN-2018-05 | 1516084886436.25 | [array(['http://docs.kingcomposer.com/wp-content/themes/kc/images/docs/fields/text_html.jpg',
'KingComposer Textarea-HTML text'], dtype=object) ] | docs.kingcomposer.com |
Basic Sorting
RadGridView provides you with a built-in sorting functionality, which allows the user to easily sort the data by one or several columns.
This article is divided into the following topics:
Overview
The data gets sorted as the user clicks the header of a column. When sorted you should see the header of the column highlighted and the appropriate arrow showing if the applied sorting is ascending or descending.
Figure 1: RadGridView with applied sorting
By clicking on the header a second time, the sort direction is changed to descending and on the next click, the sorting will be cleared. The header goes into its normal state and the arrow disappears.
Sorting is a data operation and it is performed by building and executing a LINQ query over the source collection.
If the RadGridView is bound to a collection that inherits ICollectionView that has a CanSort property set to True, the RadGridView's sorting is disabled and the sorting mechanism of the collection is used instead.
SortMemberPath
You can set the SortMemberPath property of a column to specify the name of the property the data in the column will be sorted by. Use this if you need to sort the column by a property different than the one it is bound to.
<telerik:GridViewDataColumn
Disable Sorting
If you don't want your RadGridView to be sortable, you just have to set its CanUserSortColumns property to False:
Example 1: Disable sorting
<telerik:RadGridView
In case you want to disable sorting for a particular column only, you can set that column's IsSortable property to False:
Example 2: Disable sorting for a particular column
<telerik:GridViewColumn
Events
There are two events that are raised as the user apply sorting on any column. The first one is the Sorting event and it is raised before the data is sorted. The second one is the Sorted event and it is raised after the data is sorted.
Example 3: Handle the Sorting and Sorted events
<telerik:RadGridView
Sorting
The GridViewSortingEventArgs of the Sorting event provide you with the following properties:
- Cancel: A boolean property indicating whether the sorting operation should be canceled.
- Column: The GridViewColumn that is being sorted.
- DataControl: The instance of the GridViewDataControl that owns the column.
- OldSortingState: The old SortingState.
- NewSortingState: The new SortingState.
- IsMultipleColumnSorting: The a boolean value indicating whether the current sorting operation is a multiple column. You can check the Multiple-column Sorting article for more information.
Example 4: Cancel the sorting of a column
private void radGridView_Sorting(object sender, GridViewSortingEventArgs e) { e.Cancel = true; }
Private Sub radGridView_Sorting(ByVal sender As Object, ByVal e As GridViewSortingEventArgs) e.Cancel = True End Sub
To learn how to use the Sorting event to overwrite the built-in sorting functionality take a look at the Custom Sorting topic.
Sorted
The Sorted event allows you to get the instance of the column by which the data is sorted via its GridViewSortedEventArgs.
In the event handler, you can place some code that has to be executed when the data in the RadGridView gets sorted. For example, you can change the TextAlignment of the sorted column:
Example 5: Change the TextAlignment of the sorted column
private GridViewColumn previousColumn; private void radGridView_Sorted(object sender, GridViewSortedEventArgs e) { if (this.previousColumn != null) { this.previousColumn.TextAlignment = TextAlignment.Left; } e.Column.TextAlignment = TextAlignment.Right; this.previousColumn = e.Column; }
Private previousColumn As GridViewColumn Private Sub radGridView_Sorted(ByVal sender As Object, ByVal e As GridViewSortedEventArgs) If Me.previousColumn IsNot Nothing Then Me.previousColumn.TextAlignment = TextAlignment.Left End If e.Column.TextAlignment = TextAlignment.Right Me.previousColumn = e.Column End Sub
In this example, the previousColumn field is used to store the currently sorted column. This is done in order to revert its TextAlignment color when another column is selected.
Style the Sorted Header
By editing the template of the header cell, you are able to change its overall look and feel. Making use of the VisualStateManager also allows you to adjust the visual appearance in the different sorting states - descending, ascending and none. You can also change the visual element that represents the direction of the sorting. For more information, have a look at the Styling Column Headers article. | https://docs.telerik.com/devtools/wpf/controls/radgridview/sorting/basics | 2018-01-16T13:43:12 | CC-MAIN-2018-05 | 1516084886436.25 | [array(['images/RadGridView_BasicSorting_1.png',
'RadGridView with applied sorting'], dtype=object)] | docs.telerik.com |
Portfolios
What is a Portfolio? Back to Top
Portfolios vs Galleries - Some people understandably think of portfolios and galleries as the same thing, which is fair as they are both technically collections of images. However in our themes, they are two distinctly different things, and each serve a different purpose. Galleries are your site's slideshows, where Portfolios are collections of galleries. It might be easier to think of Portfolios as categories for galleries, as just like you can put multiple Posts on your blog into a single (or multiple) category, you can put a gallery into multiple Portfolios.
Portfolios allow you to arrange various galleries into logical collections and also give you an added page style to display those galleries, instead of being limited to only having them linked in the menu. They are also optional, you do not have to put your Galleries into Portfolios if you do not want to.
Creating and Adding to Portfolios Back to Top
Creating a Portfolio couldn't be easier, and is the same process as creating Categories for your Posts.
There are two ways you can add a Portfolio:
From the WordPress sidebar: To add a Portfolio from the the WP sidebar, simply click the Portfolio submenu item in the Gallery section of the sidebar. This will bring up the Portfolio menu. On the left side of the page are fields to add a new Portfolio, and on the right side, is a list of all of your current Portfolios.
Simply type the name you'd like to use for your Portfolio in the Name field and click the Add New Portfolio button at the bottom of the page.
When editing/creating a Gallery:
To add a Portfolio when editing a gallery, click the Add New Portfolio link in the Portfolios box in the right hand side of the page.
This will reveal a textbox where you can type the name for your new portfolio. Once you've entered a name, simply click the Add New Portfolio button.
When adding a Portfolio this way, it will automatically assign the gallery to the new Portfolio, as evidenced by the checkbox being checked next to the Portfolio's name.
Adding a gallery to a portfolio: To add a gallery to an existing portfolio, simply check the checkbox next to the Portfolio name in the Portfolios box when editing the gallery and click the Update button. This will assign the gallery to that portfolio.
Portfolio Pages Back to Top
When you create a portfolio, the theme will automatically create a page that will display the galleries that you assign to it, you do not need to manually create a page.
In most themes, the Portfolio pages are a grid of clickable images, with each link taking you to the respective gallery. If your theme supports customizing the Portfolio pages, you will find the settings located in the Theme Options panel under Settings->Galleries. Most of our themes use a grid based format for displaying the galleries on the portfolio pages and allow you to select how many columns/images you'd like per row as well as where the title of the gallery is displayed, i.e. Above the image, Below the image, or Overlaid on the image. The font settings for the gallery titles on the portfolio pages, are controlled by the Portfolio Title Font in the located in the Typography tab of the Styling section in the theme options.
Note: Portfolio pages are created automatically when Portfolio's are created and galleries are assigned to them. You do not need to manually create a page for portfolios.
Portfolio Page/Grid Images: The image used to denote the each gallery on the Portfolio pages are the Featured Images you set for each the respective gallery.
Portfolio Grid Order: You can dictate the order in which the galleries appear on your portfolio pages by setting the Order for the galleries individually. To set the order: When editing a gallery, in the right side of the page you'll see a box labeled Attributes, inside of that box is a textbox labeled Order, this number will dictate the order in which the galleries will appear on the portfolio page. By default, WordPress sets this value to 0, which in WordPress' eyes is actually 1 i.e. 0 = 1, 1 = 2 etc.
Note: WordPress' ordering beings at 0 instead of 1, and any new galleries added will automatically receive the order of 0.
Adding Portfolios to your Menu Back to Top
Once you've created your Galleries, assigned a Featured Image and added them to a Portfolio, you can now add a link to the Portfolio page to your Navigation Menu.
Note: The process is the same if you'd like to add Galleries to your menu individually.
In the menu editor, which is located under Appearance->Menus of the WordPress sidebar, you'll see a box in the left column labeled Portfolios. This box will show any Portfolios you have created with a checkbox next to the name. To add one or more to the menu, simply check the box next to the name, and click the Add to Menu button. This will add the portfolio(s) to the menu on the right. You can then drag & drop the box to the position you would like it displayed in.
Note: If you do not see the Portfolios box as shown in the image above, click the Screen Options tab in the upper right corner of the screen. This will reveal a panel with multiple checkboxes. Click the box next to Portfolios in the panel to add the Portfolios selection box to the menu editor. You can also check any other items you wish to be able to add as individual menu items as well.
| http://docs.rawfolio.com/portfolios | 2018-01-16T13:20:28 | CC-MAIN-2018-05 | 1516084886436.25 | [array(['img/MenuPortfolio1.jpg', None], dtype=object)
array(['img/MenuPortfolio2.jpg', None], dtype=object)] | docs.rawfolio.com |
CDAP UI¶
The CDAP UI is available for deploying, querying and managing the Cask Data Application Platform in all modes of CDAP except an in-memory CDAP.. | https://docs.cask.co/cdap/3.4.3/en/admin-manual/operations/cdap-ui.html | 2018-01-16T12:57:29 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.cask.co |
Installed with Case and Knowledge Management Several types of components are installed with Case and Knowledge Management. Tables installed with Case and Knowledge Management. HR Condition[sn_hr_core_condition] Details of HR conditions used as filters for any HR table. HR Conditions for Criteria[sn_hr_core_m2m_condition_criteria] Details of HR conditions used as filters for HR Criteria. HR Contact[sn_hr_core_contact] Basic contact information for HR contacts (relating to an HR profile). HR Criteria[sn_hr_core_criteria] Details of HR criteria used to define audiences for HR content. HR Criteria for Links[sn_hr_core_m2m_link_template] M2M between HR links and template lookup. Document Type[sn_hr_core_document_type] Details of document type for HR document templates. Employment Verification Letter HR Employee Relations Case[sn_hr_core_case_relations] Details of an employee relations case. HR Health Benefit[sn_hr_core_health_benefit] Subject table (benefit). HR Insurance Benefit[sn_hr_core_insurance_benefit] Subject table (insurance benefits). PDF Template[sn_hr_core_pdf_template] Details of HR .pdf document templates. Employment verification letters Offer letter Sample education agreementExtends HR Document Template. HR Portal Content[sn_hr_core_link] Details of content that displays on the HR Service Portal. Holiday Calendars Links to information about the company, executive team, product documentation, suggested reading, community, blogs, and more. Videos, tutorials HR Profile[sn_hr_core_profile] Subject table (profile). HR Retirement Benefit[sn_hr_core_retirement_benefit] Subject table (retirement benefit). HR Service[sn_hr_core_service] Details of HR Services offered to employees. HR Service Approval Option[sn_hr_core_service_approval_option] HR Service Option[sn_hr_core_service_option] HR Talent Management Case[sn_hr_core_case_talent_management] Details of a submitted Talent Management case. Candidate offer Employee travel visa request Request background check Request drug screen Work visa transfer request HR Task[sn_hr_core_task] The base table for HR tasks. The details of a task associated with a particular HR case. HR Template[sn_hr_core_template] Details of hr templates used to populate fields automatically for HR cases.Extends Template table [sys_template]. HR Tier Escalation[sn_hr_core_tier_definition] Details of the HR groups used for case escalation. HR Total Rewards Case[sn_hr_core_case_total_rewards] Total Rewards/Benefits based HR cases. HR Visa Category[sn_hr_core_visa_category] Type of travel visa. The base system provides business and work visa categories.] Workday interface table for job profile. HR Workday Job Tracker[sn_hr_wday_job_tracker] Tracks the workday sync jobs. When a run starts, a record is inserted in this table and updated at the end of the sync. The update includes the last run date and time and sync end date and time.] Details of a submitted Operation and HRIT cases. HR Account Access Request HR Accounts Inquiry HR Portal Support Request Password Reset Report Inquiry Report Request Setup New Hire HR ProfileExtends the HR Case [sn_hr_core_case] table. Bank Account[sn_hr_core_profile_bank_account] Details of bank account for direct deposit. mappings that pass information from the parent lifecycle event case to an activity. Activity Status[sn_hr_le_activity_status] Activity Set Context[sn_hr_le_activity_set_context] Client Role Rule[sn_hr_core_client_role_rule] Details of mapping conditions to client roles. CMDB HR Case Product Model[cmdb_hr_case_product_model] The product models used for HR case record producers. Compensation[sn_hr_core_compensation] Details on compensation for an employee. Compensation Bonus[sn_hr_core_bonus] Details on the type, percentage, and amount of bonus for an employee.Extends the Compensation [sn_hr_core_compensation] table. Compensation Salary[sn_hr_core_salary] Details of the salary and currency for an employee.Extends Compensation [sn_hr_core_compensation] table. Compensation Stocks[sn_hr_core_stocks] Details on amount, vesting schedule and dates, and quantity of stock for an employee. Direct Deposit[sn_hr_core_direct_deposit] Direct deposit information for an employee. Fulfiller Activity Configuration[sn_hr_le_fulfiller_activity_config] Details on Lifecycle Event Activities assigned and completed by a fulfiller. Fulfiller Activity Configuration Mapping[sn_hr_le_fulfiller_activity_config_mapping] Details on information from Lifecycle Events Case Table passed to another table. Job Profile[sn_hr_core_job_profile] Job profile description. Lifecycle Event Type [sn_hr_le_type] Details of Lifecycle Event Types that are used to contain bundles and activities. Matching Roles[sn_hr_core_matching_roles] Ordered Task [sn_hr_core_service_template] PDF Template Mapping [sn_hr_core_pdf_template_mapping] Position[sn_hr_core_position] Job position information. Relationship[sn_hr_core_relationship] Signature Image[signature_image] Contains images of captured signatures. Topic Category[sn_hr_core_topic_category] Details of topic categories used primarily to group common HR services and topic details for reporting purposes. Topic Detail[sn_hr_core_topic_detail] Details of topic details that provide a more granular level of categorization for reporting purposes. Tuition Reimbursement[sn_hr_core_tuition_reimbursement] Subject table (tuition reimbursement) Who is covered[sn_hr_core_who_is_covered] Collects information on people covered by benefits of an employee. Roles installed with Case and Knowledge Management and follow up on cases they created.User groups can replace this role. document_management_user sn_hr_core.case_writer sn_hr_core.kb_writer sn_hr_core.profile_writer skill_user survey_reader HR case reviewersn_hr_core.case_reader Can read HR cases and follow up on cases they created. sn_hr_core.profile_reader HR case workersn_hr_core.case_writer Can create and update HR cases. sn_hr_core.case_reader HR integrations admin[sn_hr_integrations.admin] sn_hr_integrations.user HR integrations user[sn_hr_integrations.user] None migration adminsn_hr_migration.admin Full control of the HR Data Migration functions. sn_hr_core including sensitive information (SSN, paycheck, and similar information). sn_hr_core.profile_reader HR position specialistsn_hr_core.secure_info_writer Can create, update, and delete HR position records.Can create and update HR profiles.Read and write all HR case and user information including sensitive information (SSN, paycheck, and similar information). sn_hr_core.secure_info_reader sn_hr_core.profile_writer HR Service Portal admin[sn_hr_sp.admin] Allows a delegated developer access to development areas like widget creation for the HR Service Portal.Can assign Service Portal roles.Can turn on Scoped Admin for Service Portal in sys_app.Can access the HR Service Portal. None HR Service Portal Alumnisn_hr_sp.hrsp_alumni Role assigned when employment status is offboarding or previous employee.Can access the HR Service Portal. sn_hr_core.hrsm_alumni HR Employee Alumnisn_hr_core.hrsm_alumni Role assigned when employment status is offboarding or previous employee.For customers that do not use the HR Service Portal. HR Service Portal Contingent employeesn_hr_sp.hrsp_contingent Role assigned to employee with a fixed term contract.Can access the HR Service Portal. sn_hr_core.hrsm_contingent Contingent Employeesn_hr_core.hrsm_contingent Role assigned to employee with a fixed term contract.For customers that do not use the HR Service Portal. HR Service Portal Contractorsn_hr_sp.hrsp_contractor Role is assigned when employment status is employed and type temporary.Can access the HR Service Portal. sn_hr_core.hrsm_contractor Contractorsn_hr_core.hrsm_contractor Role is assigned when employment status is employed and type temporary.For customers that do not use the HR Service Portal. HR Service Portal Employeesn_hr_sp.hrsp_employee Role assigned when employee type is permanent.Can access the HR Service Portal. sn_hr_core.hrsm_employee HR Employeesn_hr_core.hrsm.employee Role assigned a regular employee for customers that do not use the HR Service Portal. HR Service Portal New Hiresn_hr_sp.hrsp_new_hire Role assigned when employee is hired.Can access the HR Service Portal. sn_hr_core.hrsm_new_hire New Hiresn_hr_core.hrsm_new_hire Role assigned when employee is hired.For customers that do not use the HR Service Portal. Workday Integration Rolesn_hr_core.workday_integration A Workday integration specialist. Has access to the Workday integration module and can set integration properties. Also, has access to the Job Status module that lists Job Trackers. None User groups installed with Case and Knowledge Management Human Resources Scoped App: Core plugin adds the following user groups. User group Description HR Parent to other HR groups. It grants the HR manager role, and its child groups inherit the HR manager role. HR Admin Group members can perform all functions within Case and Knowledge Management Case and Knowledge Management Human Resources Scoped App: Core adds the following client scripts. Client script Table Description Reset priority on opened_for change HR Case [hr_case] Adjusts HR case priority based on whether the user is a VIP. Auto populate fields HR Case [hr_case] Automatically sets location and department fields in HR cases, based on details from the user associated with that record. Populate profile and assignment group HR Case [hr_case] Populates the Assignment Group and HR profile fields (if the Opened for user has an HR profile) in an HR case. Custom Knowledge Search HR Case [hr_case] Custom knowledge search in the HR case form view. End date must be after start date HR Employment History [hr_employment_history] Validates that the employment end date is not before the employment start date. Enforce unique user HR Profile[hr_profile] Prevents creating a profile when the selected user already has an HR profile. Hide Record Producer variables HR Case [hr_case] Hides record producer variables which would otherwise be displayed in the HR case form view. Highlight VIP employee HR Case [hr_case] Formats an HR case for a VIP user in the HR case list. Populate Category using template HR Case [hr_case] Populates the category based on the selected HR template. Populate fields using sys_user HR Profile[hr_profile] Updates fields in a new HR profile record when an existing user is selected. Populate HR profile onChange HR Case [hr_case] Updates the HR profile fields in an HR case automatically when the Opened for user is changed. Populate Opened for field onChange HR Case [hr_case] Updates the Opened for field when a profile is added to an HR case. Populate template using category HR Case [hr_case] Populates the template on an HR case when a category is changed. Start date must be before end date HR Employment History [hr_employment_history] Validates that the employment start date is not after the employment end date. Field Access [onLoad] hr_profile Sets HR Profile fields to read only if the user does not have the hr_case_writer role. update manager when department changes hr_case. Validate Email Address on submit hr_emergency_contact Ensures that the email address is valid when the form is submitted. Make Ack Type mandatory in HR Task. hr_task Make the Acknowledgement type field mandatory when assigned to an Opened for user on the HR case. User field is only writable for hr_admin hr_profile User field on the hr_profile form is read-only for all the users except for hr_admin users. Business rules installed with Case and Knowledge Management Case and Knowledge Management Case and Knowledge Management Case and Knowledge Management. Global Mobility Manages processes related to employee visa requests and work visa transfer requests.. Talent Management Manages processes related to visa and visa transfer requests, background checks, and drug screening requests.. | https://docs.servicenow.com/bundle/jakarta-hr-service-delivery/page/product/human-resources/reference/InstWithHR.html | 2018-01-16T13:45:34 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.servicenow.com |
Message-ID: <1140348548.763612.1386208316666.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_763611_1040780309.1386208316665" ------=_Part_763611_1040780309.1386208316665 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
We're getting some experience of proposals for projects now so w= e thought it was worthwhile writing down some of our suggestions on how you= can hopefully become more successful in working on the Jikes RVM this summ= er. This is intended to sup= plement the material provided by Google.
Certainly not! We hope that our involvement in the GSoC will introduce p= eople to our code base and make them experts, from undergraduates to PhD st= udents. Clearly the projects are challenging and you should describe your e= xperience in your applications, so we should be able to tell if you are bei= ng overly ambitious. To get feedback about this early, why not contact the = mentors?
Interest in certain projects on our proposal page is higher than for others. Every applicat= ion for a project will be considered and then scored by the mentors. = Its ok to have >1 applicant working on the same proposal. However, w= hen applying for the same project the mentors will be comparing your applic= ation to that of others. We therefore believe that to have as many students= as possible working on the Jikes RVM this summer, if you are interested in= >1 project you may increase your chances of being allocated at least on= e of your choices of projects by submitting >1 application. Google allow up t= o 20 applications per student, but we'd agree with their sentiment that= quality is more important than quantity.
When making your application you should try to convey to us that you hav= e a good understanding of what the project is, that you will be able to man= age your time effectively this summer, that this is a project that interest= s you and why, and your background.
Why not download and build the= source, you may be surprised at what building a Java VM in Java looks = like. On this site is information on a range of teaching resources and tuto= rials. You can also browse the source code through the API.
Contact the mentors directly or e-mail the researchers mailing list. | http://docs.codehaus.org/exportword?pageId=74940 | 2013-12-05T01:51:56 | CC-MAIN-2013-48 | 1386163038079 | [] | docs.codehaus.org |
)
There is mention of the beez template and the paths to the, say, the users stuff and the default.php file. I guess my question is, what happens if you are using the ja_purity or the rhuk_milkyway template, both of which also come as default templates when loading Joomla? Does editing the files located under the beez paths given provide the results that will be seen using those other two templates? (In other words, I don't see ANY of those paths under the ja_purity template, which is the one I'm using).
Thanks!
John | http://docs.joomla.org/index.php?title=Talk:How_to_override_the_output_from_the_Joomla!_core&oldid=29656 | 2013-12-05T02:10:49 | CC-MAIN-2013-48 | 1386163038079 | [] | docs.joomla.org |
TOPICS×
Understanding Notifications
Cloud Manager allows the user to receive notifications when the production pipeline starts and completes (successfully or unsuccessfully), at the start of a production deployment. These notifications are sent through the Adobe Experience Cloud Notification system.
The approval and scheduled notifications are only sent to users in the Business Owner, Program Manager, and Deployment Manager roles.
The notifications appear in a sidebar in Cloud Manager UI (User Interface) and throughout the Adobe Experience Cloud.
Click on the bell icon from the header to open the sidebar and view the notifications, as shown in the figure below:
The sidebar lists the most recent notifications.
By default, notifications are available in the web user interface across Adobe Experience Cloud solutions. Individual users can also opt for these notifications to be sent through email, either on an immediate or digest basis.
This will take the user to the Notifications Preferences screen in Adobe Experience Cloud.
The users can turn on email notifications and (optionally) select the types of notifications they want to receive over email.
You can also enable digesting from the Adobe Experience Cloud, as shown below: | https://docs.adobe.com/content/help/en/experience-manager-cloud-service/implementing/using-cloud-manager/notifications.html | 2020-08-03T15:21:04 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['/content/dam/help/experience-manager-cloud-service.en/help/implementing/cloud-manager/assets/notify-1.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-cloud-service.en/help/implementing/cloud-manager/assets/notify-2.png',
None], dtype=object) ] | docs.adobe.com |
Message-ID: <1965129158.273979.1596469847768.JavaMail.j2ee-conf@bmc1-rhel-confprod1> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_273978_1684320288.1596469847767" ------=_Part_273978_1684320288.1596469847767 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
In BMC Performance Manager, a Knowledge Module (KM) that is not loaded b= y the PATROL Agent before a PATROL Console with a loaded KM of the same nam= e connects to the agent. After a static KM is loaded by the agent, the stat= ic KM is never unloaded. The static KM continues to run for as long as the = agent runs, even if all PATROL Consoles with a registered interest disconne= ct from the PATROL Agent. If the PATROL Agent stops, static KMs are not rel= oaded. See also disable a K= M and preloaded KM.= | https://docs.bmc.com/docs/exportword?pageId=103874665 | 2020-08-03T15:50:47 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.bmc.com |
firewall.huawei
The tags beginning with firewall.huawei identify log events generated by the Huawei Firewall.
Tag structure
The full tag must have at least three levels. The first two are fixed as firewall.huawei. The third level identifies the technology type and currently it can only be ngfw. The fourth element identifies the application module identified in the event.
Therefore, the valid tags include:
- firewall.huawei.ngfw
- firewall.huawei.ngfw.aaa
- firewall.huawei.ngfw.cm
- firewall.huawei.ngfw.fw-log
- firewall.huawei.ngfw.ifnet
- firewall.huawei.ngfw.ifpdt
- firewall.huawei.ngfw.info
- firewall.huawei.ngfw.module
- firewall.huawei.ngfw.mstp
- firewall.huawei.ngfw.ntp
- firewall.huawei.ngfw.sec
- firewall.huawei.ngfw.shell
- firewall.huawei.ngfw.spr
- firewall.huawei.ngfw.ssh
Huawei log format
Huawei uses a fixed syslog format that contains key fields including the module name:
TimeStamp Hostname %% dd ModuleName/Severity/Brief (l): Description
In the following example, the event was generated by the SHELL module and informs of a login action.
2018-07-22 11:19:31 sysname %%01SHELL/4/LOGIN(l): access type:console vsys:root user:admin login from con0
For more information about the Huawei Firewall log event format, see the vendor documentation.
Devo Relay rule
You will need to define a relay rule that can correctly identify the event module and apply the corresponding tag. The events are identified by the source port that they are received on and by matching a format defined by a regular expression.
When the source conditions are met, the relay will apply a tag that begins with firewall.huawei.ngfw. A regular expression in the Source Data field describes the structure of the event data - specifically the syslog header that identifies the module. The module name is extracted from the event as a capturing group and appended as the fourth level of the tag.
In the example below the rule is defined with the following settings:
- Source Port → 13030 (this can be any free port)
- Source Data → %%[0-9]{2}([A-Z]+)/
- Target Tag → firewall.huawei.ngfw.\\D1
- Check the Stop processing and Sent without syslog tag boxes. | https://docs.devo.com/confluence/ndt/parsers-and-collectors/list-of-devo-parsers/network-and-application-security/firewall-huawei | 2020-08-03T14:12:25 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.devo.com |
Compare execution plans
Applies to:
SQL Server (all supported versions)
This topic describes how to compare similarities and differences between actual graphical execution plans by using SQL Server Management Studio Plan Comparison feature. This feature is available starting with SQL Server Management Studio v16.
Note
Actual execution plans are generated after the Transact-SQL queries or batches execute. Because of this, an actual execution plan contains runtime information, such as actual number of rows, resource usage metrics and runtime warnings (if any). For more information, see Display an Actual Execution Plan.
The ability to compare plans is something that database professionals may have to do for troubleshooting reasons:
- Find why a query or batch suddenly slowed down.
- Understand the impact of a query rewrite.
- Observe how a specific performance-enhancing change introduced to the schema design (like a new index) has effectively changed the execution plan.
The Plan Comparison menu option allows side-by-side comparison of two different execution plans, for easier identification of similarities and changes that explain the different behaviors for all the reasons stated above. This option can compare between:
- Two previously saved execution plan files (.sqlplan extension).
- One active execution plan and one previously saved query execution plan.
- Two selected query plans in Query Store.
Tip
Plan Comparison works with any .sqlplan files, even from older versions of SQL Server. Also, this option enables an offline compare, so there's no need to be connected to a SQL Server instance.
When two execution plans are compared, regions of the plan that do essentially the same are highlighted in the same color and pattern. Clicking on a colored region in one plan will center the other plan on the matching node in that plan. You can still compare unmatched operators and nodes of the execution plans, but in that case you must manually select the operators to compare.
Important
Only nodes considered to change the shape of the plan are used to check for similarities. Therefore, there may be a node which is not colored in the middle of two nodes that are in the same subsection of the plan. The lack of color in this case implies that the nodes were not considered when checking if the sections are equal.
To compare execution plans Compare Showplan.
Choose the second query plan file that you would like to compare with. The second file will open so that you can compare the plans.
The compared plans will open a new window, by default with one on top and one on the bottom. The default selection will be the first occurrence of an operator or node that is common in the compared plans, but showing differences between plans. All highlighted operators and nodes exist in both compared plans. Selecting an highlighted operator in the top or left plans automatically selects the corresponding operator in the bottom or right plans. Selecting the root node operator in any of the compared plans (the SELECT node in the picture below) also selects the respective root node operator in the other compared plan.
Tip
You can toggle the display of the execution plan comparison to side-by-side, by right-clicking a blank area of the execution plan and selecting Toggle Splitter Orientation.
Tip
All zooming and navigation options available for execution plans work in plan comparison mode. For more details, see Display an Actual Execution Plan.
A dual properties window also opens on the right side, in the scope of the default selection. Properties that exist in both compared operators but have differences will be preceded by the not equal sign (≠) for easier identification.
The Showplan Analysis comparison navigation window also opens on the bottom. Three tabs are available:
- In the Statement Options tab, the default selection is Highlight similar operations and the same highlighted operator or node in compared plans share the same color and line pattern. Navigate between similar areas in compared plans by clicking on a lime pattern. You can also choose to highlight differences in plans rather similarities, by selecting Highlight operations not matching similar segments.
Note
By default, database names are ignored when comparing plans to allow comparison of plans captured for databases that have differenty names, but share the same schema. For example when comparing plans from databases ProdDB and TestDB. This behavior can be changed with the Ignore database name when comparing operators option.
The Multi Statement tab is useful when comparing plans with multiple statements, by allowing the right statement pair to be compared.
In the Scenarios tab you can find an automated analysis of some of the most relevant aspects to look at in what relates to Cardinality Estimation differences in compared plans. For each listed operator on the left pane, the right pane shows details about the scenario in the Click here for more information about this scenario link, and possible reasons to explain that scenario are listed.
If this window is closed, right-click on a blank area of a compared plan, and select Showplan Compare Options to re-open.
To compare execution plans in Query Store
In Query Store, identify a query that has more than one execution plan. For more information on Query Store scenarios, see Query Store Usage Scenarios.
Use a combination of the SHIFT key and your mouse to select two plans for the same query.
Use the button Compare the plans for the select query in a separate window to start plan comparison. Then steps 4 through 6 of To compare execution plans are applicable. | https://docs.microsoft.com/en-us/sql/relational-databases/performance/compare-execution-plans?view=sql-server-ver15 | 2020-08-03T15:52:37 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.microsoft.com |
Follow these instructions:
1. Create an element with class “js-pushowl--subscribe”
You can turn any element into a Subscribe button by adding class="js-pushowl--subscribe". PushOwl will attach click listeners to all the elements having this class. Clicking on the element will allow visitors to subscribe to your online store.. Add the Element to your theme.
You can place this element anywhere in your store. Depending upon your requirements, insert the new Element at an appropriate place in your theme files. PushOwl will add the appropriate functionality to it visitor has subscribed
Example:
<button class="js-pushowl--subscribe"). | https://docs.pushowl.com/en/articles/2320713-custom-subscribe-button | 2020-08-03T15:34:38 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.pushowl.com |
Platform and Product Roles
When creating a new user or editing an existing user from the User Management page of the Insight Platform, you can assign them the Platform Admin role as well as 1 of 3 product roles.
Roles by organization
If your company uses the concept of organizations, note that product user roles only apply to the organizations the user is assigned to.
Platform Admin role
Platform Admin is a global, or platform-wide user role. A Platform Admin has full, administrative access to the Insight Platform and can perform all of the tasks outlined in the Platform Overview, including organization-wide operations.
Product access for Platform Admins
Platform Admins don’t have product access by default and can’t complete product-specific tasks unless assigned to a product.
If you want a user to have administrative capabilities on the platform as well as within each product they’re assigned, give them both the Platform Admin and product Admin user roles.
Product roles
An Insight Platform user’s product role determines what they are able to see and do in each of the Insight products they’re assigned. Here’s how product roles are defined at the Platform level.
Product roles sometimes vary
Many Insight products use these standard product user roles. This means that the way roles are defined at the Platform level, where they’re assigned, is how they are defined and implemented at the product level. However, some products interpret or apply these product user roles a little differently based on specific product use cases.
InsightAppSec
InsightConnect
InsightConnect uses standard Admin, Read Write, and Read Only product roles.
InsightIDR
Use a Read Only user for dashboard display
Because the Insight IDR Read Only user role provides non-expiring sessions, you can create a generic Read Only user and use them to display one of your organization’s dashboards 24/7.
InsightOps
InsightVM
Product roles assigned to InsightVM users at the Platform level are ignored in favor of the more detailed and specialized InsightVM user roles, which are assigned to users by an product admin in InsightVM. That means that Platform users who are also InsightVM users are given InsightVM permissions associated with whatever role they’re assigned in InsightVM. Platform users who are not also InsightVM users are treated as global administrators.
Rapid7 Services
Want a user who can only see reports?
Create a user with a Read Only user role without admin privileges if you only want to provide viewing access to reports.
tCell
tCell application roles
In addition to these product roles, tCell also has the concept of application roles. With application roles, user permissions can be scoped to a specific tCell application. These roles don’t restrict access to the app, only increase it. | https://docs.rapid7.com/insight/product-roles/ | 2020-08-03T15:22:53 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.rapid7.com |
“What’s new” documents¶
These document the changes between minor (or major) versions of h5py.
- What’s new in h5py 2.7.1
- What’s new in h5py 2.7
- What’s new in h5py 2.6
- What’s new in h5py 2.5
- What’s new in h5py 2.4
- What’s new in h5py 2.3
- What’s new in h5py 2.2
- Support for Parallel HDF5
- Support for Python 3.3
- Mini float support (issue #141)
- HDF5 scale/offset filter
- Field indexing is now allowed when writing to a dataset (issue #42)
- Region references preserve shape (issue #295)
- Committed types can be linked to datasets and attributes
movemethod on Group objects
- What’s new in h5py 2.1
- What’s new in h5py 2.0
- Enhancements unlikely to affect compatibility
- Changes which may break existing code
- Supported HDF5/Python versions
- Group, Dataset and Datatype constructors have changed
- Unicode is now used for object names
- File objects must be manually closed
- Changes to scalar slicing code
- Array scalars now always returned when indexing a dataset
- Reading object-like data strips special type information
- The selections module has been removed
- The H5Error exception class has been removed (along with h5py.h5e)
- File .mode property is now either ‘r’ or ‘r+
- Long-deprecated dict methods have been removed
- Known issues | https://docs.h5py.org/en/2.7.1/whatsnew/index.html | 2020-08-03T14:44:59 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.h5py.org |
Adding shortcuts to subscribed users' My Matters
Users can now add shortcuts to other user's My Matters that they have subscribed to from their My Matters page.
Navigate to your My Matters page.
Select New Shortcut.
Figure: New Shortcut button
The Create Shortcut dialog box appears.
From the available list select a desired user's name. The list of Matters you have subscribed for that user appears.
Select the desired Matter for which you wish to create a shortcut.
Select Create Shortcut. | https://docs.imanage.com/work-web-help/10.2.5/en-US/Adding_shortcuts_to_subscribed_users%27_My_Matters.html | 2020-08-03T15:42:42 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.imanage.com |
Networking
All deployments of the Insight Agent require proper connectivity to function. This article details the necessary whitelisting rules that you will need to configure on your assets so their corresponding agents can communicate with the Insight platform. Additionally, you may need to configure whitelisting rules for the agent directory if you use an endpoint security application in your environment.
- Insight Platform Connectivity Requirements
- Collector Proxy Requirements
- Endpoint Security Software Requirements
Proxy Support
The Insight Agent is now proxy-aware and supports a variety of proxy definition sources. See the Proxy Configuration page for more information.
IMPORTANT
The Insight Agent will not function if your organization decrypts SSL traffic via Deep Packet Inspection technologies.
Insight Platform Connectivity Requirements
The Insight Agent communicates with the Insight platform through the following channel. All endpoint URLs ending with this destination must be whitelisted for the designated port.
If you need an alternative to the URL whitelisting method shown previously, whitelist the following IP addresses for your selected region instead.
Collector Proxy Requirements
If you also use the Rapid7 Collector to proxy agent traffic, it requires the following additional connectivity:
Endpoint Security Software Requirements
Endpoint security applications (such as McAfee Threat Intelligence Exchange, CylancePROTECT, Carbon Black, and others) may flag, block, or delete the Insight Agent from your assets depending on your detection and response settings. To prevent this from happening, configure a whitelist rule for the agent directory so your endpoint security software does not target it accidentally.
Your whitelist rule must accommodate all subdirectories contained in the agent installation path. The following paths show default agent installation locations by operating system:
- Windows -
C:\Program Files\Rapid7\Insight Agent\
- Mac and Linux -
/opt/rapid7/ir_agent/ | https://docs.rapid7.com/insight-agent/networking/ | 2020-08-03T15:33:27 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.rapid7.com |
. Then, log in to the API Cloud and the API Publisher Web application will open automatically.
Let's create an publish an API. See the video tutorial here or a step-by-step walk-through of the video tutorial below.
Here's a step-by-step walk-though of the video tutorial:
- Log in to the API Publisher.
Click the Add link and provide the information given in the table below.
Click Add New Resource. After the resource is added, expand its
GETmethod, add the following parameters to it and click Implement.
You add these parameters as they are required to invoke the API using our integrated API Console in later tutorials.
The
Implementtab opens. Provide the information given in the table below. Click the Show More Options link to see the options that are not visible by default.
Click Manage to go to the
Managetab and provide the following information.
- Click Save & Publish. This will publish the API that you just created in the API Store so that subscribers can use it.
You have created an API. | https://docs.wso2.com/pages/viewpage.action?pageId=47515830 | 2020-08-03T15:37:14 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.wso2.com |
Step 1: Find your replacement code within SmartrMail. Instructions on finding this code are here:
Step 2: Navigate to the "My Themes" section of the Big Commerce dashboard and click Edit HTML/CSS
Step 3: Search for where your signup form is located. This is usually in the SideNewsletterBox.html file.
Step 4: Replace the <form> action attribute with the SmartrMail action attribute ex. ""
Step 5: Add the spam bot protection code:
<input name="subscribe_form[anti_bot]" type="text" style="display: none" />
Step 6 (optional): Replace the name attributes for the first and last name inputs. The name attributes are "FNAME" and "LNAME"
Step 7: Click Save | http://docs.smartrmail.com/en/articles/1372000-how-to-switch-over-newsletter-signup-forms-bigcommerce | 2020-08-03T14:22:33 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['https://downloads.intercomcdn.com/i/o/45142138/19c12d6bf08e6412c6640b27/embed-code.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/44841728/6c023647e74bf645b9767cc7/find+html.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/44842060/3341d93eb5e10845aa20fbeb/find+footer.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/44842371/7d3611229cc56b924998cdcb/change+action+2.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/44842925/fcc0eaaecb49bc22a9b9d572/change+name+3.gif',
None], dtype=object) ] | docs.smartrmail.com |
Stacked Bar Chart
- 3 minutes to read
Short Description
The Stacked Bar Chart is represented by the StackedBarSeriesView object, which belongs to Bar Series Views. This view displays all the points of its series. Series values are stacked at arguments included in multiple series. So, the height of each bar is determined by the total of all the series values for the category. So, the height of each bar is determined by the total of all the series values for the category.
A Stacked Bar chart is shown in the image below. Note that this chart type is based upon the XYDiagram, and so it can be rotated to show bars either vertically or horizontally.
Chart Type Characteristics
The table below lists the main characteristics of this chart type.
NOTE
For information on which chart types can be combined with the Stacked Bar Chart, refer to the Series Views Compatibility document.
Example
The following example demonstrates how to create a ChartControl with two series of the StackedBarS stackedBarChart = new ChartControl(); // Create two stacked bar series. Series series1 = new Series("Series 1", ViewType.StackedBar); Series series2 = new Series("Series 2", ViewType.StackedBar); // Add points to them series1.Points.Add(new SeriesPoint("A", 10)); series1.Points.Add(new SeriesPoint("B", 12)); series1.Points.Add(new SeriesPoint("C", 14)); series1.Points.Add(new SeriesPoint("D", 17)); series2.Points.Add(new SeriesPoint("A", 15)); series2.Points.Add(new SeriesPoint("B", 18)); series2.Points.Add(new SeriesPoint("C", 25)); series2.Points.Add(new SeriesPoint("D", 33)); // Add both series to the chart. stackedBarChart.Series.AddRange(new Series[] { series1, series2 }); // Access the view-type-specific options of the series. ((StackedBarSeriesView)series1.View).BarWidth = 0.8; // Access the type-specific options of the diagram. ((XYDiagram)stackedBarChart.Diagram).EnableAxisXZooming = true; // Hide the legend (if necessary). stackedBarChart.Legend.Visible = false; // Add a title to the chart (if necessary). stackedBarChart.Titles.Add(new ChartTitle()); stackedBarChart.Titles[0].Text = "A Stacked Bar Chart"; // Add the chart to the form. stackedBarChart.Dock = DockStyle.Fill; this.Controls.Add(stackedBarChart); }
TIP
A complete sample project is available in the DevExpress Code Examples database at. | https://docs.devexpress.com/WindowsForms/2973/controls-and-libraries/chart-control/series-views/2d-series-views/bar-series-views/stacked-bar-chart | 2020-08-03T15:02:46 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['/WindowsForms/images/seriesview_stackedbarseries.png5026.png',
'SeriesView_StackedBarSeries.png'], dtype=object) ] | docs.devexpress.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::WAF::Types::ListSizeConstraintSetsRequest
- Defined in:
- (unknown)
Overview
When passing ListSizeConstraintSetsRequest as input to an Aws::Client method, you can use a vanilla Hash:
{ next_marker: "NextMarker", limit: 1, }
Instance Attribute Summary collapse
- #limit ⇒ Integer
Specifies the number of
SizeConstraintSetobjects that you want AWS WAF to return for this request.
- #next_marker ⇒ String.
Instance Attribute Details
#limit ⇒ Integer
Specifies the number of
SizeConstraintSet objects that you want AWS
WAF to return for this request. If you have more
SizeConstraintSets
objects than the number you specify for
Limit, the response includes a
NextMarker value that you can use to get another batch of
SizeConstraintSet objects.
#next_marker ⇒ String
If you specify a value for
Limit and you have more
SizeConstraintSets than the value of
Limit, AWS WAF returns a
NextMarker value in the response that allows you to list another group
of
SizeConstraintSets. For the second and subsequent
ListSizeConstraintSets requests, specify the value of
NextMarker
from the previous response to get information about another batch of
SizeConstraintSets. | https://docs.amazonaws.cn/sdk-for-ruby/v2/api/Aws/WAF/Types/ListSizeConstraintSetsRequest.html | 2020-03-28T21:35:02 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.amazonaws.cn |
Managing Groups¶
QDS creates the following groups by default for every account: system-admin, system-user, dashboard-user, and everyone. These groups have roles associated with them, which provide access control to all the users of that group.
The following table lists the groups, their associated roles, and the operations that can be performed on these groups.
In the Control Panel page, click Manage Groups to create and modify user groups. The Manage Groups page is displayed as shown in the following figure.
Click the downward arrow in the Action column to modify or see users in a group. Click Modify and the Manage Group Members and Group Roles page is displayed as shown in the following figure.
Add/remove users in the Select Group Members field and add/remove roles in the Select Group Roles field. Click Update after making the changes.
Add a group by clicking the add icon
. The Create a New Group page is displayed as shown in the following
figure.
Enter a name in the Group Name text field. Add users from the Select Group Members and roles in the Select Group Roles fields. Click Create Group. Click Cancel to not save the group. | https://docs.qubole.com/en/latest/admin-guide/iam/groups-conf.html | 2020-03-28T21:54:45 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['../../_images/ManageGroups.png', '../../_images/ManageGroups.png'],
dtype=object)
array(['../../_images/ModifyGroups.png', '../../_images/ModifyGroups.png'],
dtype=object)
array(['../../_images/CreateGroup.png', '../../_images/CreateGroup.png'],
dtype=object) ] | docs.qubole.com |
Yclas Client login With all the plans we offer private professional support done by our developers. We will answer within 24 working hours and we will try our best to help you always! To use Private Professional Support login to Yclas and on My Account choose Support. Opening a ticket Once you are on the Support page press the button “New Ticket”. Before opening a ticket you need to enter a keyword or a topic. After you type the first two letter of the keyword/topic you will see all the related docs below, which you can click to open in a new tab. If you can’t find the answer of your question in the docs you can press the button called New Ticket, which becomes enabled only after you enter one or more keywords. When you click “New Ticket”, a ticket opening panel comes up, in which you could put the title and describe whatever you need help with. Replying to a ticket On the support page you can see a list of all the opened tickets and their status: Open, hold or closed. A ticket is on hold when you have received! Support does not include: Mentoring for technologies CSS, PHP, HTML, JavaScript or other technologies used in our software. Sign up or support for third party services. Customization of software or themes. | https://docs.yclas.com/use-yclas-support-system/ | 2020-03-28T20:43:20 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.yclas.com |
routine wordcase
Documentation for routine
wordcase assembled from the following types:
class Cool
(Cool) routine wordcase
Defined as:
sub wordcase(Str(Cool) , : = , Mu : = True)method wordcase(: = , Mu : = True)
Coerces the invocant (or in sub form, the first argument) to Str, and filters each word that smartmatches against
$where through the
&filter. With the default filter (first character to upper case, rest to lower) and matcher (which accepts everything), this title-cases each word:
say "perl 6 programming".wordcase; # OUTPUT: «Perl 6 Programming»
With a matcher:
say "have fun working on perl".wordcase(:where());# Have fun Working on Perl
With a customer filter too:
say "have fun working on perl".wordcase(:filter(), :where());# HAVE fun WORKING on PERL
class Str
(Str) routine wordcase
multi sub wordcase(Cool --> Str)multi sub wordcase(Str --> Str)multi method wordcase(Str: : = , Mu : = True --> Str)
Returns a string in which
&filter has been applied to all the words that match
$where. By default, this means that the first letter of every word is capitalized, and all the other letters lowercased. | https://docs-stage.perl6.org/routine/wordcase | 2020-03-28T21:38:07 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs-stage.perl6.org |
CScanner: A Cloud Security Scanner¶
This utility is intended to check your cloud configuration for compliance with your companies rules in an automated fashion, not unlike AWS Config.
For example, if you want to make sure that your port 22 is never open to the world, across all your cloud providers, you could do something like this:
connections: # Configure your connections here rules: - type: FIREWALL_PUBLIC_SERVICE_PROHIBITED protocol: "tcp" ports: - 22
You would then get a report detailing all your security groups across all your cloud providers and if they are compliant or are violating the rules.
Downloading¶
You can grab one of the releases from GitHub.
Running¶
To run the cscanner, simply point it to your config file:
java -jar cscanner.jar your-config-file.yaml
Make sure you have at least Java 8 to run this application. Note that you can use the
-h or
--help option to get a
full list of possible filtering and output options.
For detailed configuration options see Configuration.
Supported cloud providers¶
Currently the following cloud providers are supported:
Supported rules¶
Currently the following rule sets are supported: | https://docs.cscanner.io/ | 2020-03-28T21:09:36 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.cscanner.io |
Table
There are three types of tables in MemSQL: reference tables, sharded tables, and temporary tables.
Reference Tables
Reference tables are relatively small tables that do not need to be distributed and are present on every node in the cluster. They are both created and written to on the master aggregator. Reference tables are implemented via master-slave replication to every node in the cluster from the master aggregator. Replication enables reference tables to be dynamic: updates that you perform to a reference table on the master aggregator are quickly reflected on every machine in the cluster.
MemSQL aggregators can take advantage of reference tables’ ubiquity by pushing joins between reference tables and a distributed table onto the leaves. Imagine you have a distributed
clicks table storing billions of records and a smaller
customers table with just a few million records. Since the
customers table is small, it can be replicated on every node in the cluster. If you run a join between the
clicks table and the
customers table, then the bulk of the work for the join will occur on the leaves.
Reference tables are a convenient way to implement dimension tables.
Sharded Tables
Every database in the distributed system is split into a number of partitions. Each sharded table created is split with hash partitioning over the table’s primary key; a portion of the table is stored in each of its database’s partitions.
Partitions are implemented as databases on the leaves. For example, partition 3 of database
db is stored in a database
db_3 on one of the leaves. For every sharded table created inside
db, a portion of its data will reside in
db_3 on that leaf.
Although sharded tables in the same database share the same database containers on the leaves, no assumptions can be made about particular rows from different tables being co-located on a partition. If you join two tables on the column(s) they are sharded on, MemSQL can perform a co-located join, which will improve the speed of the join.
Temporary Tables
Temporary tables exist, storing data in memory, for the duration of a client session. MemSQL does not write logs or take snapshots of temporary tables. Temporary tables are designed for temporary, intermediate computations.
Unlike CREATE TABLE without the
TEMPORARY option,
CREATE TEMPORARY TABLE can be run on any aggregator, not just the master aggregator. Temporary tables are sharded tables, and can be modified and queried like any “permanent” table, including distributed joins.
Temporary tables are not persisted in MemSQL, which means they have high availability disabled. If a node in a cluster goes down and a failover occurs, all the temporary tables on the cluster lose data. Whenever a query references a temporary table after a node failover, it returns an error. For example,
"Temporary table <table> is missing on leaf <host> due to failover. The table will need to be dropped and recreated."
To prevent loss of data on node failover, use MemSQL tables that have high availability enabled.
Views cannot reference temporary tables because temporary tables only exist for the duration of a client session. Although MemSQL does not materialize views, views are available as a service for all clients, and so cannot depend on client session-specific temporary tables. Temporary tables do not support
ALTER TABLE. | https://docs.memsql.com/v7.0/concepts/table/ | 2020-03-28T21:32:13 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.memsql.com |
FIX: SharePoint 2013 Workflow recursion prevention – Part 2
Following FIX: SharePoint 2013 Workflow recursion prevention – Part 1, this post will walk you through the processes of designing your workflow that creates a list item on another list and invokes the workflow associated with that list using the new REST APIs released through SharePoint 2013 May 2014 CU as pointed out in part 1 of this series. If you haven’t read through part 1 of this series I suggest you do to get some background that will help you appreciate this post.
The scenario:
List1 has a workflow Workflow1 associated with it.
Workflow1 can be set to auto-start on item adding/updating or set to manually start or both, it doesn’t matter.
Workflow1 is designed to create a list item on List2.
List2 has a workflow Workflow2 associated with it.
Workflow2 should have manually start workflow option enabled. (Allow this workflow to be manually started)
Workflow2 simply logs the title of the list item in workflow history list.
NOTE: Both Workflow1 and Workflow2 are built on 2013 workflow platform.
Publish both the workflows and create an item on List1. Manually start Workflow1 on the list item and you should see that it completes successfully.
Now browse to List2 and you will see that a new list item was successfully created by Workflow1.
The problem:
But Workflow2 would never start off. Refer part 1 of this series, check ULS logs and you’ll see Workflow2 didn’t start because of workflow recursion prevention.
Fix:
The fix is to leverage the new REST APIs exposed to start off Workflow2 from Workflow1. Let’s open Workflow1 in SharePoint Designer’s workflow editor.
We drop a “Build Dictionary” action just above the create item in list action with the following name/value pairs of type string.
accept: application/json;odata=verbose
content-type: application/json;odata=verbose
Choose the dropdown against “(Output to Variable: dictionary)” and create a variable to type Dictionary named “ReqHeader”.
Choose the dropdown against “(Output to Variable: create)” and create a new variable named “NewItemGuid”. It will be of type GUID.
Drop a “Call HTTP Web Service” action after the create item in list action.
Click “this”, use the … button against the textbox titled “Enter the HTTP web service URL”. This will bring up string builder dialog box.
Click “Add or Change Lookup”, change “Data source” dropdown to select “Workflow Context” and for “Field from source” choose “Current Site URL” and hit OK.
Append /_api/web/lists/GetByTitle('List2')/items?$filter=GUID eq ' to the end of the current site URL token (watch out for the single open quote in the end, that is necessary too).
Click “Add or Change Lookup” again, change “Data source” dropdown to select “Workflow Variables and Parameters” and then choose “Variable: NewItemGuid” for “Field from source”. Hit OK.
The string builder dialog will now have the following string in it.
We are calling the REST method GetByTitle passing it the title of the list we are interested in (for this scenario it’s List2). We are then asking for all the items filtered on the specific GUID which is the GUID of the item we just created in the create item in list action.
Hit OK on string builder dialog. The “Call HTTP Web Service” dialog should now look like this. Ensure you select “HTTP GET” for the HTTP method. And hit OK.
Choose “Properties” option from the context menu of “Call HTTP Web Service” action.
Set “RequestHeaders” to the request header variable you created and create a new variable of type dictionary named “RespContent” for “ResponseContent” as shown below. Hit OK.
Drop a “Get an Item from a Dictionary” action after “Call HTTP Web Service” action and click “item by name or path” hyperlink. In the editor type d/results(0) .
Click the dictionary hyperlink and choose “Variable: RespContent” from the dropdown. Click the item hyperlink and again choose “Variable: RespContent” from the dropdown. End result will be.
There are plenty of articles out there that talk about how to parse the JSON to get to the actual data we need so I am not going to cover that here. But remember we can use Fiddler to determine the structure of JSON result and we’ll use that knowledge to decide on how we will get to the result we need (for e.g., d/results(0)).
What we did above is that we want the value at d/results(0) from the RespContent dictionary variable we get back from the web service call. We are then storing the result again back in RespContent because the result is also of type dictionary.
Drop another “Get an Item from a Dictionary” action. Click on “item by name or path” hyperlink and type Id. Click “dictionary” hyperlink and choose “Variable: RespContent”. Click the item hyperlink in “(Output to item)” and create a new variable named “NewItemId” of type integer as shown below.
We did all these to get the ID (the sequence number) value of the list item we just created. Phew!!! This is because we need to know the ItemID of the list item we just created as we need to pass it to the StartWorkflowOnListItemBySubscriptionId REST method later. But we are just 30% into our design
The other parameter that StartWorkflowOnListItemBySubscriptionId needs is the workflow SubscriptionId.
We drop another “Call HTTP Web Service” action and construct our next REST call. This time we call /_api/web/lists/GetByTitle('List2') . We set the same “ReqHeader” variable to this call. And set the same “RespContent” variable as we did before. The end result you should have in “Call HTTP Web Service” dialog is shown below.
And in designer we should see.
We then add another “Get an Item from a Dictionary” action. Ask for d/Id from “RespContent” variable and assign the output to a new variable named “List2Guid” of type GUID as shown below.
Now the variable “List2Guid” will have the GUID of List2. We need this to get the workflow subscriptionId that we can get by calling an already available REST API called /_api/SP.WorkflowServices.WorkflowSubscriptionService.Current/EnumerateSubscriptionsByList.
We now drop another “Call HTTP Web Service” action and set it up get all workflow subscriptions. Here’s how string builder should look like for this call.
And we set the “HTTP method” option for this “Call HTTP Web Service” action to “HTTP POST” as shown below.
We should now have the workflow design as shown below.
Now we have a tricky situation where we need to get the subscriptionId of the exact workflow that we are interested to start off. This is where Fiddler is very helpful. Again I’ll defer to the plethora of articles available online that tells us how to do a GET, POST REST calls using Fiddler. But I am going to rush on this piece a bit.
Through Fiddler (by submitting a POST request to the same REST method EnumerateSubscriptionsByList, we see the following output.
Workflow2 happens to be the 3rd in the list of workflows associated to List2 so I can get that out using the 2nd index. So we drop two more “Get an Item from a Dictionary” actions. In the first action we ask for d/results(2) from “RespContent” and we store the output again in “RespContent”. In the second action we ask for Id which happens to be the SubscriptionId of Workflow2 on list List2 from “RespContent” again and store the result in a new variable named “SubscriptionId” of type GUID. Our workflow design should now look like below.
Now we are at the final stage where we will start the workflow.
Drop our last “Call HTTP Web Service” action and call the new /_api/SP.WorkflowServices.WorkflowInstanceService.Current/StartWorkflowOnListItemBySubscriptionId REST method by passing in the SubscriptionId and NewItemId variables. Our string builder dialog should look like the below. (Ensure the parameter names of this method is exactly as it is shown below and that the values are enclosed within single quotes).
This will again be a POST request.
After setting the “ReqHeader” correctly using the “Properties” context menu option for the last “Call HTTP Web Service” action, we drop a “Set Workflow Status” action. The completed Workflow1 should look like below.
That’s it!!
We Save/Publish our workflow, go to List1 and create a new item. Start off Workflow1 manually on that new item. It should complete successfully. Then we go to List2 and we should see that a new item is created and Workflow2 was trigged off too.
If we click on “Stage 1”, we’ll see that Workflow2 logged the message as we designed.
I hope the information in this post was useful. Please check out the next part SharePoint 2013 Workflow recursion prevention – Part 3 to achieve the above objective from a Visual Studio SharePoint 2013 Workflow. | https://docs.microsoft.com/en-us/archive/blogs/sridhara/fix-sharepoint-2013-workflow-recursion-prevention-part-2 | 2020-03-28T22:35:12 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/69/59/metablogapi/1411.wlEmoticon-smile_5A870089.png',
'Smile'], dtype=object) ] | docs.microsoft.com |
Gloop service inputs and outputs
Input and output properties are optional for any type of service but in most cases, a service will be defining at least one of either. In Gloop, you can define as many input or output properties you'd like through the Input/Output view.
Injectable input properties input properties. In Gloop, they include the following:
- Arguments that can be injected to services called by Martini endpoints, listed and described in every endpoint type's respective page;
- Parameters injected by Gloop steps, listed and described in each type's respective page; and
- Input variables brought by HTTP invokes.
Injecting input properties
To inject an argument in Gloop, you must declare the variable under the
Input/Output view. You can use the designated hotkeys or do a right click, pick a type, and then rename the
variable. In Gloop and Flux, the names of injectable arguments must be prepended by the
$ character. From there,
you can use the parameter by mapping.
The example below shows how you can obtain the
HttpServletRequest
object in an HTTP-invokable Gloop service:
When a Gloop service is invoked via HTTP, a proxy is used to map inputs to the service. The proxy will then handle the injection of the variables for you. | https://docs.torocloud.com/martini/latest/developing/gloop/service/inout/ | 2020-03-28T20:13:04 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['../../../../placeholders/img/coder-studio/gloop-logging-http-request.png',
'HTTP request-logging Gloop service'], dtype=object)
array(['../../../../placeholders/img/coder/gloop-logging-http-request.png',
'HTTP request-logging Gloop service'], dtype=object) ] | docs.torocloud.com |
The
layouts/ folder contains different physical key layouts that can apply to different keyboards.
layouts/+ default/| + 60_ansi/| | + readme.md| | + layout.json| | + a_good_keymap/| | | + keymap.c| | | + readme.md| | | + config.h| | | + rules.mk| | + <keymap folder>/| | + ...| + <layout folder>/+ community/| + <layout folder>/| + ...
The
layouts/default/ and
layouts/community/ are two examples of layout "repositories" - currently
default will contain all of the information concerning the layout, and one default keymap named
default_<layout>, for users to use as a reference.
community contains all of the community keymaps, with the eventual goal of being split-off into a separate repo for users to clone into
layouts/. QMK searches through all folders in
layouts/, so it's possible to have multiple repositories here.
Each layout folder is named (
[a-z0-9_]) after the physical aspects of the layout, in the most generic way possible, and contains a
readme.md with the layout to be defined by the keyboard:
# 60_ansiLAYOUT_60_ansi
New names should try to stick to the standards set by existing layouts, and can be discussed in the PR/Issue.
For a keyboard to support a layout, the variable must be defined in it's
<keyboard>.h, and match the number of arguments/keys (and preferably the physical layout):
#define LAYOUT_60_ansi KEYMAP_ANSI
The name of the layout must match this regex:
[a-z0-9_]+
The folder name must be added to the keyboard's
rules.mk:
LAYOUTS = 60_ansi
LAYOUTS can be set in any keyboard folder level's
rules.mk:
LAYOUTS = 60_iso
but the
LAYOUT_<layout> variable must be defined in
<folder>.h as well.
You should be able to build the keyboard keymap with a command in this format:
make <keyboard>:<layout>
When a keyboard supports multiple layout options,
LAYOUTS = ortho_4x4 ortho_4x12
And a layout exists for both options,
layouts/+ community/| + ortho_4x4/| | + <layout>/| | | + ...| + ortho_4x12/| | + <layout>/| | | + ...| + ...
The FORCE_LAYOUT argument can be used to specify which layout to build
make <keyboard>:<layout> FORCE_LAYOUT=ortho_4x4make <keyboard>:<layout> FORCE_LAYOUT=ortho_4x12
Instead of using
#include "planck.h", you can use this line to include whatever
<keyboard>.h (
<folder>.h should not be included here) file that is being compiled:
#include QMK_KEYBOARD_H
If you want to keep some keyboard-specific code, you can use these variables to escape it with an
#ifdef statement:
KEYBOARD_<folder1>_<folder2>
For example:
#ifdef KEYBOARD_planck#ifdef KEYBOARD_planck_rev4planck_rev4_function();#endif#endif
Note that the names are lowercase and match the folder/file names for the keyboard/revision exactly.
In order to support both split and non-split keyboards with the same layout, you need to use the keyboard agnostic
LAYOUT_<layout name> macro in your keymap. For instance, in order for a Let's Split and Planck to share the same layout file, you need to use
LAYOUT_ortho_4x12 instead of
LAYOUT_planck_grid or just
{} for a C array. | https://beta.docs.qmk.fm/developing-qmk/qmk-reference/feature_layouts | 2020-03-28T19:55:42 | CC-MAIN-2020-16 | 1585370493120.15 | [] | beta.docs.qmk.fm |
WordPress
A plugin has been developed to embed Fluid Player in WordPress blogs:
Fluid Player can be easily embedded by using the custom [fluid-player] shortcode. The initial version accepts the following list of named parameters:
- video: path to actual video to be used by the player. If no value is passed it will fall back to the plugin sample video.
- vast_file: path to vast file (optional)
- vtt_file: path to VTT file (optional)
- vtt_sprite: path to VTT sprites file (optional)
- layout: any of the following themes are provided with the player: default/funky/metal, if no value is passed it will fall back to ‘default’
Provided below is a generic example of how such a call would look like:
[fluid-player video="foo.mp4" vast_file="vast.xml" vtt_file="thumbs.vtt" vtt_sprite="thumbs.jpg" layout="default"]
For more information visit the Wordpress hosted plugin page. | https://docs.fluidplayer.com/wordpress/ | 2020-03-28T21:15:09 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.fluidplayer.com |
A Trip Down Memory Lane
While waiting for Visual Studio to repair whatever was causing deploys to the emulator to fail, I came across this post containing images of Windows boot screens over time. I know I'm giving my age away, but the only one I don't remember is the first. If you're interested you can find a few more here. For those of you who think Windows 95 was the first version of Windows, take a look at the Windows 2.0 UI. Imagine...Windows and applications running in 640K. Of course, we didn't have Solitaire then either. One of the first things I remember doing after coming to work for MS was running Excel in Windows. The experience was unique because you typed excel.exe at the command prompt and it booted Windows 2.11 prior to starting Excel.
Good times, good times. | https://docs.microsoft.com/en-us/archive/blogs/bluecollar/a-trip-down-memory-lane | 2020-03-28T22:20:19 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
Small Basic - Pictionary Project
I started new project especially for non-native kids to understand Small Basic keywords with pictures.
Purpose
- Small Basic is a programming language for kids.
- Little difficult to understand English keywords especially for non-native kids.
- This project suggests to draw pictures about keywords in Small Basic.
Sample Pictures
Site
- Pictionary Project for Small Basic - home page (Sway)
- Pictionary for Small Basic - pictures (Google Photo) | https://docs.microsoft.com/en-us/archive/blogs/smallbasic/small-basic-pictionary-project | 2020-03-28T20:19:06 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
When you use Minishift, you interact with the following components:
the Minishift virtual machine (VM)
the Docker daemon running on the VM
the OpenShift cluster running on the Docker daemon
The Minishift architecture diagram outlines these components.
The
minishift binary, placed on the
PATH for easy execution, is used to
start,
stop and
delete the Minishift VM.
The VM itself is bootstrapped off of a pluggable, live ISO image.
Some Minishift commands, for example
docker-env, interact with the Docker daemon, whilst others communicate with the OpenShift cluster, for example the
openshift command.
Once the OpenShift cluster is up and running, you interact with it using the
oc binary.
Minishift caches this binary under
$MINISHIFT_HOME (per default ~/.minishift).
minishift oc-env is an easy way to add the
oc binary to your
PATH.
For more.
The runtime behavior of Minishift can be controlled through flags, environment variables, and persistent configuration options.
The following precedence order is applied to control the behavior of Minishift. Each action in the following list takes precedence over the action below it:
Use command line flags as specified in the Flags section.
Set environment variables as described in the Environment Variables section.
Use persistent configuration options as described in the Persistent Configuration section.
Accept the default value as defined by allows you to specify command line flags you commonly use through environment variables. To do so, apply the following rules to the flag you want to set as an environment variable.
Apply
MINISHIFT_ as a prefix to the flag you want to set as an environment variable.
For example, the
vm-driver command.
A common example is the URL of the ISO to be used.
Usually, you specify it with the
iso-url flag of the
minishift start command.
Applying the above rules, you can also specify this URL by setting the environment variable as
MINISHIFT_ISO_URL.
Using persistent configuration allows you to control Minishift behavior without specifying actual command line flags, similar to the way you use environment variables.
You can also define global configuration using
--global flag.
Global configuration is applicable to all profiles.
Minishift maintains a configuration file in $MINISHIFT_HOME/config/config.json. This file can be used to set commonly-used command line flags persistently for individual profiles. The global configuration file is maintained at $MINISHIFT_HOME/config/global.json.
Flags which can be used multiple times per command invocation, like
docker-env or
insecure-registry, need to be comma-separated when used with the
config set command.
For example, from the CLI, you can use
insecure-registry like this:
$ minishift start --insecure-registry hub.foo.com --insecure-registry hub.bar.com
As part of the OpenShift cluster provisioning, 100 persistent volumes are created for your OpenShift cluster.
This allows applications to make persistent volumes claims.
The location of the persistent data is determined in the
host-pv-dir flag of the
minishift start command and defaults to /var/lib/minishift/openshift.local.pv on the Minishift VM.
If you are behind an HTTP/HTTPS proxy, you need to supply proxy options to allow Docker and OpenShift to work properly.
To do this, pass the required flags during
minishift start.
For example:
$ minishift start --http-proxy --https-proxy
In an authenticated proxy environment, the
proxy_user and
proxy_password must be a part of proxy URI.
$ minishift start --http-proxy http://<proxy_username>:<proxy_password>@YOURPROXY:PORT \ --https-proxy https://<proxy_username>:<proxy_password>@YOURPROXY:PORT
You can also use the
--no-proxy flag to specify a comma-separated list of hosts that should not be proxied.
Using the proxy options will transparently configure the Docker daemon as well as OpenShift to use the specified proxies. | https://docs.okd.io/3.11/minishift/using/basic-usage.html | 2020-03-28T20:59:53 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.okd.io |
$ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry
In addition to the
configs.imageregistry.operator.openshift.io and ConfigMap
resources, configuration is provided to the Operator by a separate secret
resource located within the
openshift-image-registry namespace.
The
image-registry-private-configuration-user secret provides
credentials needed for storage access and management. It overrides the default
credentials used by the Operator, if default credentials were found.
For Azure registry storage the secret is expected to contain one key whose value is the contents of a credentials file provided by Azure:
REGISTRY_STORAGE_AZURE_ACCOUNTKEY
Create an OKD secret that contains the required key.
$ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry
During installation, your cloud credentials are sufficient to create Azure Blob Storage, and the Registry Operator automatically configures storage.
A cluster on Azure with user-provisioned infrastructure.
To configure registry storage for Azure, provide Registry Operator cloud credentials.
For Azure storage the secret is expected to contain one key:
REGISTRY_STORAGE_AZURE_ACCOUNTKEY
Create an Azure storage container.
Fill in the storage configuration in
configs.imageregistry.operator.openshift.io/cluster:
$ oc edit configs.imageregistry.operator.openshift.io/cluster storage: azure: accountName: <account-name> container: <container-name> | https://docs.okd.io/latest/registry/configuring_registry_storage/configuring-registry-storage-azure-user-infrastructure.html | 2020-03-28T21:24:50 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.okd.io |
Currently Bluetooth support is limited to AVR based chips. For Bluetooth 2.1, QMK has support for RN-42 modules and the Bluefruit EZ-Key, the latter of which is not produced anymore. For more recent BLE protocols, currently only the Adafruit Bluefruit SPI Friend is directly supported. BLE is needed to connect to iOS devices. Note iOS does not support mouse input.
Not Supported Yet but possible:
HC-05 boards flashed with RN-42 firmware. They apparently both use the CSR BC417 Chip. Flashing it with RN-42 firmware gives it HID capability.
Sparkfun Bluetooth Mate
HM-13 based boards
Currently The only bluetooth chipset supported by QMK is the Adafruit Bluefruit SPI Friend. It's a Nordic nRF5182 based chip running Adafruit's custom firmware. Data is transmitted via Adafruit's SDEP over Hardware SPI. The Feather 32u4 Bluefruit LE is supported as it's an AVR mcu connected via SPI to the Nordic BLE chip with Adafruit firmware. If Building a custom board with the SPI friend it would be easiest to just use the pin selection that the 32u4 feather uses but you can change the pins in the config.h options with the following defines:
define AdafruitBleResetPin D4
define AdafruitBleCSPin B4
define AdafruitBleIRQPin E6
A Bluefruit UART friend can be converted to an SPI friend, however this requires some reflashing and soldering directly to the MDBT40 chip.
This requires some hardware changes, but can be enabled via the Makefile. The firmware will still output characters via USB, so be aware of this when charging via a computer. It would make sense to have a switch on the Bluefruit to turn it off at will.
Use only one of these
BLUETOOTH_ENABLE = yes (Legacy Option)
BLUETOOTH = RN42
BLUETOOTH = AdafruitEZKey
BLUETOOTH = AdafruitBLE
This is used when multiple keyboard outputs can be selected. Currently this only allows for switching between USB and Bluetooth on keyboards that support both. | https://beta.docs.qmk.fm/using-qmk/hardware-features/feature_bluetooth | 2020-03-28T21:21:05 | CC-MAIN-2020-16 | 1585370493120.15 | [] | beta.docs.qmk.fm |
CSCDomainManager
- 1 minute to read
-
- DarkLight
CSCDomainManager is a web-based portfolio management platform consolidating domains alongside social media usernames, SSL digital certificates, and DNS. Integrate CSCDomainManager with the Axonius Cybersecurity Asset Management Platform.
NOTE
Axonius uses the DomainManager API
Adapter Parameters
- CSC API Server (required, default:) - The CSC API host.
- Zone Name (required) - The DNS zone name.
- User Token (required) - User token supplied by CSC (e.g xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx)
- API Key (required) - API key supplied by CSC.
- Verify SSL (required, default: False) - Verify the SSL certificate offered by the host supplied in CSC API Server. CSC API Server.
- If supplied, Axonius will utilize the proxy when connecting to the host defined for this connection.
- If not supplied, Axonius will connect directly to the host defined for this connection.
- Choose Instance (required, default: 'Master') - The Axonius node to utilize when connecting to CSC API? | https://docs.axonius.com/docs/cscdomainmanager | 2020-03-28T20:05:15 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['https://cdn.document360.io/95e0796d-2537-45b0-b972-fc0c142c6893/Images/Documentation/image%281023%29%28210%29.png',
'image.png'], dtype=object) ] | docs.axonius.com |
This feature is available with our 2.8.0 release and on all sites hosted at Yclas.com This feature gives you and your users two-factor authentication. You can protect your account with both your password and your phone. How to configure 2 step authentication: Login to your admin panel. Go to Settings -> General. Activate 2 Step Authentication. Press Save. How to enable the 2 Step Authentication on your profile: Login to your website. Go to Edit Profile. If you don’t have Google Authenticator app installed on your mobile phone, you can choose Android or iOS below to get one. Run the app on your mobile phone, click Set up account from the options and scan the QR code. In the “2 Step Authentication” section, scan the QR code. 6. Once the QR code is scanned, you will see the account created with a verification code. (There’s no need to write down or memorize the verification code because it changes every 30 seconds.) 7. Now you will be redirected to enter the verification code and press “Send”. If the code is valid 2 Step Authentication will be enabled, otherwise you will have to scan QR code and enter the verification code again. How to use it: Go to your website and choose to log in. Enter your Email and Password and click Login. Now you will be redirected to enter the Verification Code (Run the Google Authenticator app to find the verification code.) Click Send. Now you are logged into your website! | https://docs.yclas.com/2-step-authentication/ | 2020-03-28T20:42:11 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.yclas.com |
Responsive
Please refer to the responsive section ↗ of the Digital Foundations documentation for an introduction to designing responsively.
Note on Grid & Breakpoints
The grid and breakpoints laid out here were designed specifically for the wework.com product. This responsive setup was designed for the content, but could easily be applied to other designs. It may be that a different grid and breakpoints are more suitable for other products. The grid and breakpoints can change to be anything, but the system components are universal.
Responsive Grid
Rivendell uses a 12 column grid. Each column has a % width (of its container). Each gutter (between columns) has a fixed 30px width. The gutters have a fixed px width, as opposed to also being %, to retain ample spacing between elements at any browser width.
Breakpoints
Breakpoints are a designated point (often brower width, but can also be landscape or portrait orientation) where the CSS styles change. Below are the responsive breakpoints:
Desktop
The full extent of a template. The only restriction should be a max-width on the container to stop content filling too much of the width of large desktop screens, for a comfortable scan of content and reading width. The responsive grid will adapt to any max-width, to suit the project. For example, wework.com is set to 1200px max-width.
Breakpoint 1 (bp1)
The first relief from desktop. Breaks at 1090px browser width, to cover tablet portrait screens. Allows for small changes to desktop designs, for relief for more 'complicated' content-heavy templates containing tables, or multiple tiles/card patterns.
Breakpoint 2 (bp2)
Breaks at 790px browser width, to cover tablet portrait screens. By and large, all templates should be designed to work down to at least 790px width without any major changes. At this breakpoint, templates should prioritize tablet and mobile.
Breakpoint 3 (bp3)
CSS specifically targeted at mobile. Breaks at 490px browser width, to cover the vast majority of mobile screen sizes.
Responsive Template
The download below is a good starting point for designing new templates with the Rivendell design system. This particular example is from wework.com, with the desktop max-width set to 1200px. The grid and 4 core artboards for each breakpoint are set up and ready to start populating, and stress-test how your template will work at different browser sizes. The smaller artboard to the left is ‘above the fold’ on a highly trafficked 1440x800px desktop browser window.
| http://rivendell-docs.netlify.com/responsive/ | 2018-02-18T05:14:33 | CC-MAIN-2018-09 | 1518891811655.65 | [array(['../patterns/masthead-1.png', 'Responsive design'], dtype=object)
array(['template-weworkcom.png', 'Responsive Sketch template'],
dtype=object) ] | rivendell-docs.netlify.com |
Introduction¶
Jan Ilavsky, “Nika - software for 2D data reduction”, J. Appl. Cryst. (2012), vol. 45, pp. 324-328. DOI:10.1107/S0021889812004037. Please e-mail me, if you need copy.
Manual 1.3.2 for Nika version 1.75 for Igor 7.x
Jan 27, 2018
Jan Ilavsky
Description¶
This is manual for Nika set of macros developed for Igor Pro (Wavemetrics, Inc,) Igor 7.0x. These macros are designed to process 2D (CCD and other area detectors) data from small-angle and wide-angle scattering instruments. The purpose is to process (normalize, background correct, calibrate,…) 2D data from experiment and convert these into 1D “line outs” data – providing correctly calibrated Intensity, q (\(2\Theta\) or d), and errors.
Nika was designed to provide number of methods to extract the data:
- Sector and circular averages (“cake”)
- Intensity along linear and elliptical path (vertical/horizontal lines, line under an angle and ellipse of arbitrary aspect ratio)
- Intensity along linear path but for Grazing incidence geometry
- Intensity vs azimuthal angle image intended for manual inspection of geometry.
Disclaimer:
These macros represent a collaborative work in progress and it is very likely that not all features are finished at any given time. Therefore, some features may not work fully.. | http://saxs-igorcodedocs.readthedocs.io/en/stable/Nika/Introduction.html | 2018-02-18T05:11:41 | CC-MAIN-2018-09 | 1518891811655.65 | [] | saxs-igorcodedocs.readthedocs.io |
Changes
This topic will summarize the new functionality introduced in the library with helpful links to places in the documentation that describe in greater detail the new functionality and how it can be used.
What's New in 2014 Q3
What's New
Mail Merge support, which can be used to generate documents from a template document (containing merge fields) and data source. Read more.
Document Variables that enable users to define variables in the document and use document variable fields. Read more.
Export of table styles to HTML.
Import/export HTML preserving white spaces through non-breaking spaces.
Import and export document theme to DOCX file format.
Introduced lists export/import to HTML.
What's Fixed
Table border calculator is not working correctly for table with empty rows.
'Style' element is not correctly imported when it is outside the 'head' element.
Incorrect export of nested table elements.
Converted Border class to immutable type.
Importing empty string causes exception.
'Class' attribute is exported when ExportSettings.StylesExportMode is None.
NullReference exception is thrown when FieldResult is empty string or null.
Properties of Paragraphs without StyleId are not exported when StylesExportMode is Inline.
Importing HTML containing only an image causes exception.
Line breaks are not exported to HTML.
Underline is not exported to HTML.
HtmlFormatProvider crashes when html element is present in the body.
Support for negative indent.
Table column's widths are not respected when importing from HTML and exporting to DOCX.
Importing from HTML imports table borders as inside borders.
StyleProperty.GetActualValue() throws exception when style is not added to a document.
RestartAfterLevel property in ListLevel class has inappropriate default value.
RestartAfterLevel does not work correctly when exported to RTF format.
Style applied to div is applied over paragraphs after the div.
Exporting to HTML document containing hyperlink with StylesExportMode Inline causes exception.
When importing from HTML paragraph style is not respected.
Default font size is not exported correctly to RTF. | https://docs.telerik.com/devtools/document-processing/libraries/radwordsprocessing/changes-and-backward-compatibility/changes | 2018-02-18T05:07:14 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.telerik.com |
Organizing performance metrics presets
Organize groups of performance metrics graphs with custom named presets. Clone, rename, share, or delete performance metrics views on the Dashboard.
Clone, rename, share, or delete performance metrics views on the Dashboard. Saving groups of graphs with named presets allows customizing, organizing, and viewing different groups of related metrics graphs for analysis. Add metrics in performance graphs for each preset view as you prefer.
Procedure
- Click Dashboard in the left navigation pane.
- At the top of the Dashboard page, hover on the preset tab and click the drop-down arrow to open the menu.
- To clone a preset view of metrics graphs:
- Click Clone.
- For multiple clusters, select either To This Cluster to clone the preset view for the current cluster, or To Different Cluster to clone the view to a different cluster.
- In the Save Preset dialog, enter a name for the preset and click Save.
- If the preset already exists in the destination cluster, a dialog prompts you to overwrite or cancel.
- If there are any issues with incompatible schema or metrics, a warning appears. Click Continue Clone to proceed with cloning the preset, or click Cancel.
- To set the default preset, click Make Default.
- To rename a preset, click Rename, enter a new name and click Save.
- To delete a preset, click Delete. The original installed default view cannot be deleted.
- (Admins only) To share a preset view, click Share with all users. A globe icon in the view tab indicates the preset view is visible to all users.The Share... menu option is not available if authentication is disabled.
- To view another preset, click the preset name tab at the top of the Dashboard. Tabs appear in alphabetical order. | https://docs.datastax.com/en/opscenter/6.1/opsc/online_help/opscMetricsPresets_t.html | 2018-02-18T04:37:43 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.datastax.com |
Create new Form Element
You can add new form elements to BuddyForms to support your plugin specific fields or add a form element with a special functionality.
With the buddyforms_form_element_add_field filter you can add a new form element to the Form Builder.
Parameters
- $form_fields (array) The Form Fields.
- $form_slug (string) The Form Slug
- $field_type (string) The Field Type like Textbox, Hidden...
- $field_id (string) with mod5 Generated Field ID
Return Value
- $form_fields (array)
Code Example
I have created a gist as starting point with an BuddyForms New Form Element Example
You can save the value stored in $_POST[ $customfield[ 'slug' ] ] or use for your purpose.
To store the value as custom post meta you can use update_post_meta function Read the WordPress codex for more information: | https://docs.buddyforms.com/article/148-create-new-form-element | 2018-02-18T04:34:38 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.buddyforms.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.