click_data.num_archived_logs_to_use
The number of archived click logs to use from each archive directory.
Key: click_data.num_archived_logs_to_use
Type: String
Can be set in: collection.cfg
Description
This option allows the selection of the number of logs to use from each archive directory.
It can be useful to limit the click data that is included in indexes to a certain degree of recentness. This helps keep the search results as relevant as possible over collections with changing information. This option can be set to:
- n, where the last n records from each archive directory (as determined by alphabetical sort of the click logs) will have their click data included in building the index.
- all, where every available click log in the archive directories will have their click data included in building the index.
The logs are usually archived each time a collection is updated. This means that if your collection is updated once per day, then setting this option to '5' will include the last 5 days worth of click logs.
Note: The number of logs to use applies to all archive directories.
Default Value
click_data.num_archived_logs_to_use=all
Examples
click_data.num_archived_logs_to_use=5
click_data.num_archived_logs_to_use=all
click_data.num_archived_logs_to_use=100
Retry Schedule
Svix attempts to deliver each webhook message based on a retry schedule with exponential backoff.
The schedule
Each message is attempted based on this schedule:
- Immediately
- 5 seconds
- 5 minutes
- 30 minutes
- 2 hours
- 5 hours
- 10 hours
- 10 hours (in addition to the previous)
Your customers can also manually retry each message at any time from the application portal. Additionally, if an endpoint is removed or disabled, delivery attempts to the endpoint will be disabled as well.
Failed delivery handling
After the conclusion of the above attempts the message will be marked as Failed for this endpoint, and you will get a webhook of type message.attempt.exhausted notifying you of this error.
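A minimal sketch of consuming this notification, assuming a Flask endpoint receives your operational webhooks (the payload fields shown are illustrative; consult the Svix event schema for the authoritative shape, and verify webhook signatures in production):

from flask import Flask, request

app = Flask(__name__)

@app.route("/svix-operational-webhooks", methods=["POST"])
def handle_operational_webhook():
    payload = request.get_json()
    if payload.get("type") == "message.attempt.exhausted":
        # e.g. alert your team, or flag the endpoint in your own records
        print("Delivery exhausted:", payload.get("data"))
    return "", 204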
Configure Spinnaker on AWS for Disaster Recovery
Spinnaker disaster recovery
The following guide describes how to configure a Spinnaker™ deployment on AWS to be more resilient and perform disaster recovery (DR). Spinnaker does not function in multi-master mode, which means that active-active is not supported at this time. Instead, this guide describes how to achieve an active-passive setup. This results in two instances of Spinnaker deployed into two regions that can fail independently.
Requirements
- The passive instance will have the same permissions as the active instance
- The active instance is configured to use AWS Aurora and S3 for persistent storage
- Your Secret engine/store has been configured for disaster recovery
- All other services integrated with Spinnaker, such as your Continuous Integration (CI) system, are configured for disaster recovery
What a passive instance is
A passive instance is a fully deployed Spinnaker installation that does not serve traffic until it is activated. In this guide, the passive instance is reachable through the us-west.spinnaker.acme.com and api.us-west-spinnaker.acme.com load balancers.
To make a passive version of Spinnaker, use the same configuration files as the current active installation for your starting point. Then, modify them to deactivate certain services before deployment.
To keep the configurations in sync, set up automation to create a passive Spinnaker configuration every time a configuration is changed for the active Spinnaker. An easy way to do this is to use Kustomize Overlays.
Configuration modifications
Make sure you set replicas to 0 for all Spinnaker services in the passive installation so that it is deployed but not running. When the active Spinnaker is failing, the following actions need to be taken:
Activating the passive Spinnaker
Perform the following tasks when you make the passive Spinnaker into the active Spinnaker:
- Use the same version of Operator or Halyard to deploy the passive Spinnaker installation that was used to deploy the active Spinnaker.
- Start the passive Spinnaker's services and update DNS to point at them. The recovery time depends on the time needed to start the Spinnaker services and the time it takes to update DNS. Most Spinnaker services that fail can be recovered; Spinnaker will recover the affected systems in case of a failure, such as database corruption. The current Spinnaker RPO target is 24 hours maximum, tied to the last snapshot of the database.

[Figure: Diagram of Armory deployment on AWS with disaster recovery]
Access Throttling¶
Control Panel Location:
This section of the Control Panel allows you to manage the Throttling feature. See Throttling Control for more information regarding this feature.
Settings¶
Enable throttling?¶
Allows you to enable or disable this feature.
Require IP?¶
Set the system to deny a visitor access if the user’s IP address cannot be determined while throttling is enabled.
Maximum page loads¶
The maximum number of page loads allowed during the time interval below.
Time interval¶
The number of seconds during which the above number of page loads are allowed.
Lockout time¶
The length of time in seconds that a user will be unable to use your site.
Lock out action¶
The action to take when a visitor exceeds the limits above.
URL to redirect to¶
If you choose the URL Redirect option above, this preference enables you to set the destination URL.
Intelligence
This topic points you to resources that you can use to learn more about the business intelligence (BI) and reporting tools that are available in Microsoft Dynamics 365 for Finance and Operations, Enterprise edition.
- Information access and reporting
- Tech Talk: Reporting options (video)
- Finance and Operations: Business intelligence (blog)
Analytical workspaces
Finance and Operations delivers interactive reports that are seamlessly integrated into application workspaces.

[Figure: Example of Power BI in a workspace]

To learn more, see the following topics:
- Printing in Finance and Operations applications
- Install the Document Routing Agent to enable network printer devices
- Electronic reporting overview
- Manage the Electronic reporting configuration lifecycle
- Create an Electronic reporting configuration
Financial reporting
Standard financial reports are provided that use the default main account categories in Finance and Operations. You can use the report designer to create or modify traditional financial statements, such as income statements and balance sheets. You can then share the results with other members of your organization. Examples of financial reporting include balance sheets, cash flow, and summary trial balance year over year.
To learn more, see the following topics:
- Financial reporting for Finance and Operations
- Generate a financial report
- Financial report components
Technical reference reports
The following reports provide reference information about the objects in Finance and Operations:
Problem
After waiting five minutes, none of your Python agent data appears in the New Relic UI.
Solution
To troubleshoot missing data:
- Generate some traffic to your application, then wait at least five minutes for data to appear.
- Run newrelic-admin validate-config /path/to/newrelic.ini to verify your agent configuration and connectivity.
- Check the agent log file for errors.
Verifying the Zones are Painted
When the ink and paint process is completed, it is always a good idea to verify that every zone was painted properly.
Welcome to Louie’s documentation!¶
Contents:
Louie provides Python programmers with a straightforward way to dispatch signals between objects in a wide variety of contexts. It is based on PyDispatcher, which in turn was based on a highly-rated recipe in the Python Cookbook.
Louie is licensed under The BSD License.
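A minimal sketch of the dispatch API, assuming the PyDispatcher-style top-level connect/send functions:

import louie

def on_greeting(name):
    # Louie only passes the named arguments the receiver accepts.
    print('Hello,', name)

louie.connect(on_greeting, signal='greeting')  # receive from any sender
louie.send('greeting', name='world')           # calls on_greeting(name='world')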
Installing Louie¶
Louie uses pip for installation, and is distributed via the Python Package Index.
In many modern Python environments, and also starting with Python 3.4, pip is installed by default.
Run this command:
pip install louie
Development¶
You can track the latest changes in Louie using the GitHub repo.
Using git¶
Clone the Louie repo using git, e.g.:
git clone
Run this command inside your git repo directory to use Louie directly from source code in that directory:
cd louie pip install -e .
If you want to revert to the version installed in
site-packages,
you can do so:
pip uninstall louie
Request for all CAs not received
At system initialization, the custom server control program has requested the list of custom servers to start, and the request has failed or a timeout has occurred. The most probable cause is that the database needs recovering.
Recover the database and restart the system. Call Blueworx Support.
Yellow
Log, System Monitor
Install Hummingbot in the Cloud¶
Using Hummingbot as a long-running service can be achieved with the help of cloud platforms such as Google Cloud Platform, Amazon Web Services, and Microsoft Azure. You may read our blog about running Hummingbot on different cloud providers.
Below, we show you how to set up a new Virtual Machine Instance on each major cloud platform.
Tip: Access Cloud Instances on your Phone
Use Hummingbot's Telegram Integration to connect to your cloud instance without a computer. Note that this has limited functionality and remains a work in progress.
Google Cloud Platform¶
- Navigate to the Google Cloud Platform console
- Create an instance of Compute Instance
- Select “New VM Instance”, then pick Ubuntu 18.04 LTS
- Click on "SSH" to SSH into the newly created VM instance
Amazon Web Services¶
- Navigate to the AWS Management Console
- Click on "Launch a Virtual Machine"
- Select Ubuntu Server 18.04 LTS (HVM)
- Click on "Review and Launch", and then "Launch"
- Select “create a new key pair”, name the key pair (e.g. hummingbot), download key pair, and then click on “Launch Instances”.
Click on “View Instances”
To connect to the instance from the terminal, click on “Connect” and then follow the instructions on the resulting page.
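If you prefer to connect manually from a terminal, a typical session looks like the following, assuming you named the key pair hummingbot and are using the default ubuntu user of Ubuntu AMIs:

chmod 400 hummingbot.pem
ssh -i hummingbot.pem ubuntu@<instance-public-ip>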
Microsoft Azure¶
- Navigate to the Virtual Machines console.
- Click on the "Add" button in the top-left corner.
- Choose a name for the resource group and for the VM itself.
- Select Ubuntu 18.04 LTS for the image type and Standard D2s v3 for the size.
- Under "Administrator Account", choose password and select a username and password.
- Under "Inbound Port Rules", select SSH and HTTP.
- Scroll up to the top and click on "Management" tab.
- Choose a valid name for your diagnostics storage account.
- Go to the "Review and Create" tab, click on "Create".
- While your VM is being created, download and install PuTTY for your OS.
- After your VM has been initialized, copy the public IP address.
- Open the PuTTY app and paste the IP address into the host name, then open.
ToolStripDropDown.BackgroundImageLayoutChanged Event
Definition
Occurs when the value of the BackgroundImage property changes.
public: event EventHandler ^ BackgroundImageLayoutChanged;
[System.ComponentModel.Browsable(false)] public event EventHandler BackgroundImageLayoutChanged;
member this.BackgroundImageLayoutChanged : EventHandler
Public Custom Event BackgroundImageLayoutChanged As EventHandler
Attributes
BrowsableAttribute
Examples
The following code example demonstrates how to handle the BackgroundImageLayoutChanged event.
private void ToolStripDropDown1_BackgroundImageLayoutChanged(Object sender, EventArgs e)
{
    MessageBox.Show("You are in the ToolStripDropDown.BackgroundImageLayoutChanged event.");
}
Private Sub ToolStripDropDown1_BackgroundImageLayoutChanged(sender as Object, e as EventArgs) _
    Handles ToolStripDropDown1.BackgroundImageLayoutChanged
    MessageBox.Show("You are in the ToolStripDropDown.BackgroundImageLayoutChanged event.")
End Sub
Remarks
For more information about handling events, see Handling and Raising Events.
Project prices report (ProjPriceList)
The data in this report comes from the following sources:
- ProjPriceListDP.processReport method
- TmpProjPriceList table
Note
To determine where the data in the temp tables comes from, view the cross-references for the ProjPriceListDP class.
See also
Sales price - Subscription (form)
Sales price - expenses (form)
Sales price - hour (form)
What happens if I let my seat expire?
If you have a seat that is set to expire it will be removed from your account at the end of your billing cycle.
You can find more information on your billing cycle by selecting Billing Information from the Settings menu.
If you have no active seats at the end of your billing cycle then the following will happen:
- Your account will become read-only and your zapcodes will be unpublished
- After ten days your content will be deleted and your zapcodes archived
At any point during this process it's super simple to add a seat again - just select the Get Seats option from the Settings button.
If you do so before your billing cycle ends then all your content and zapcodes will remain live and in your account.
Let us know at [email protected] if you have any questions.
Cutting 300 calories a day can help you lose weight
Even healthy, young, slim people can benefit from cutting 300 calories a day from their diet — a simple lifestyle change that can lead to a big payoff in heart…
Jul 11, 2019 23:22 UTC Nutrition
Acer Predator Triton 900: A love/hate relationship
The Acer Predator Triton 900 looks insane. It's a big, fast 17-inch gaming laptop with one of the most clever designs we've seen. But it also has some quirks, starting with the keyboard. Subscribe to CNET: CNET playlists: Download the new CNET app: Like us on Facebook: Follow us on Twitter: Follow us on Instagram:
Jul 11, 2019 23:02 UTC Gadgets
Hospitality startup Sonder books $1B+ valuation
Venture capital-backed hospitality startup Sonder has raised a $210 million Series D with an estimated $1.01 billion valuation.
Jul 11, 2019 23:00 UTC Entrepreneurship
Lunds & Byerlys to Close All 14 Pharmacies July 11, 2019
Lunds & Byerlys is shuttering all 14 of its pharmacies, the company announced Thursday. In a statement, the company said the decision stems from “significant…
Jul 11, 2019 22:55 UTC Health-Care
N.J. town warns its residents after coyote tests positive for rabies
There were no reports of anyone being attacked by the animal, but residents of the Hunterdon County town were still notified of the positive test.
Jul 11, 2019 22:49
Three more health workers infected in Ebola outbreak
The new cases raise the total number of healthcare workers infected in this outbreak to 131, including 41 deaths. Health workers make up 5% of all the victims of…
Jul 11, 2019 21:57 UTC Health-Care
How To Use Girlboss' Career Networking Site & Start Making Connections
As every entrepreneurial gal knows, networking is key to making connections that nurture your professional growth. The input of like-minded folks is integral to…
Jul 11, 2019 21:36 UTC Entrepreneurship
People "machine-enhanced" by AI
Trying to establish which New Zealand industries will benefit from artificial intelligence (AI) is a bit like wondering which businesses benefitted from the invention…
Jul 11, 2019 21:16 UTC Artificial-Intelligence
Many teen girls pressured by partners to get pregnant
By Lisa Rapaport. (Reuters Health) - Nearly one in eight sexually active teen girls are pressured by their partners to have unprotected sex and try to conceive…
Jul 11, 2019 21:14
Experts' tips for evaluating SAP cloud computing options
Before your company deploys applications on a non-SAP or SAP cloud computing platform, experts strongly suggest you consider evaluating the application's…
Jul 11, 2019 20:58 UTC Computing
Running Sparkling Water on Kerberized Hadoop Cluster¶
Sparkling Water can run on a kerberized Hadoop cluster and also supports Kerberos authentication for clients and Flow access. This tutorial shows how to configure Sparkling Water to run on a kerberized Hadoop cluster. If you are also interested in using Kerberos authentication, please read Enabling Kerberos Authentication.
Sparkling Water supports kerberized clusters in both the internal and external backends.
Internal Backend¶
To make Sparkling Water aware of the Kerberized cluster, you can call:
bin/sparkling-shell --conf "spark.yarn.principal=PRINCIPAL" --conf "spark.yarn.keytab=/path/to/keytab"
or you can create the Kerberos ticket in before hand using
kinit and call just
./bin/sparkling-shell
In this case Sparkling Water will use the created ticket and we don’t need to pass the configuration details.
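For example, a ticket can be created from a keytab with standard Kerberos tooling before starting the shell:

kinit -kt /path/to/keytab PRINCIPAL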
External Backend¶
In the External Backend, we are also starting an H2O cluster on YARN, and we need to make sure it is secured as well.
You can start Sparkling Water as:
bin/sparkling-shell --conf "spark.yarn.principal=PRINCIPAL" --conf "spark.yarn.keytab=/path/to/keytab"
In this case, the value of
spark.yarn.principal and
spark.yarn.keytab properties will also be used to set
spark.ext.h2o.external.kerberos.principal and
spark.ext.h2o.external.kerberos.keytab correspondingly. These options
are used to set up Kerberos on the H2O external cluster via Sparkling Water.
You can also set the
spark.ext.h2o.external.kerberos.principal and
spark.ext.h2o.external.kerberos.keytab
options directly.
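For example, using the option names above:

bin/sparkling-shell --conf "spark.ext.h2o.external.kerberos.principal=PRINCIPAL" --conf "spark.ext.h2o.external.kerberos.keytab=/path/to/keytab"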
The simplest way to start Sparkling Water is:
./bin/sparkling-shell
In this case we assume that the ticket has been created using kinit and it will be used for both Spark and the external H2O cluster.
The same configuration is also valid for PySparkling and RSparkling.
Generate SMTP Credentials for a User
Simple Mail Transfer Protocol (SMTP) credentials are necessary to send email through Email Delivery. Each user is limited to a maximum of two SMTP credentials. If more than two are required, SMTP credentials must be generated on other existing users or more users must be created. Users must also be granted permission to work with Email Delivery through an IAM policy. For example:
Allow group <group name> to use approved-senders in compartment <compartment name>
Using the Console
Here are some of the sites that offer online training for Kubernetes:
Scalable Microservices with Kubernetes (Udacity)
Introduction to Kubernetes (edX)
Getting Started with Kubernetes (Pluralsight)
Hands-on Introduction to Kubernetes (Instruqt)
Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)
Certified Kubernetes Administrator Preparation Course (LinuxAcademy.com)
Kubernetes the Hard Way (LinuxAcademy.com)
Certified Kubernetes Application Developer Preparation Course (KodeKloud.com)
Single Sign-On (SSO) is a paid-only feature. Wish to upgrade? Contact your Sales Representative.
Metricly from the first step.
netuitive-api
Your Tenant Name (optional)
Add your tenant name to the Relay State field if you do not want to enter it when logging into Metricly from Azure. Your tenant name is the company name you used when you signed up for a Metricly account. Contact support if you do not know your tenant name.
Get Started with Performance Analytics Premium

With Performance Analytics Premium you can define your own key metrics, breakdowns, and visualizations to present exactly the data you want for any process.

About this task

After Performance Analytics Premium has been activated, complete the following steps to configure Performance Analytics and begin collecting scores.

Procedure

1. Clearly define the data you want to present. Before creating any records, identify what you want to measure and how you want to present that data. Ensure the data is actionable: changes in scores should provide usable feedback into the performance of individuals, groups, or processes. Tip: It can be helpful to create a sketch of your planned scorecards and dashboards to help identify your key metrics and visualizations.
2. Review the configuration records such as indicators, breakdowns, data collection jobs, widgets, and dashboards that are provided by default. Review the optional content packs as well. Use the provided configuration records whenever possible, or use them as a template to create your own configuration. Configuration records that allow you to analyze many common processes, such as Incident Management or HR Management, are provided by default.
3. If the provided configuration records do not meet your needs, create your own configuration by completing the following steps.
   - Define indicator sources for the tables you want to analyze. Indicator sources form the basis of the data that is collected and can be reused for multiple indicators. An indicator source can specify a filter to include a subset of table data, such as to include only open incidents.
   - Create automated indicators to define the key metrics you want to analyze. Automated indicators track scores collected regularly and automatically from the instance.
   - Create breakdown sources to define which breakdown elements are available to group and filter scores for further analysis. A breakdown element is a single possible value for a field, such as the Hardware assignment group or the Critical priority. A breakdown source defines the set of available breakdown elements from a table, such as the assignment groups that you can group and filter scores by. A breakdown source can define a filtered set of elements. For example, you can add the filter condition [Active][is][true] to a breakdown source on the Groups table to include only active assignment groups as breakdown elements.
   - Create breakdowns to define how you want to group and filter collected scores. Breakdowns organize data and allow you to analyze or compare subsets of the indicator data. Breakdowns associate indicator scores with elements from a breakdown source, such as to organize incident scores based on the value of the Assignment group field using elements from a breakdown source on the Groups table. For example, you can break down incident data by priority or by assignment group, such as to show scores only for incidents with a certain priority, or to compare scores across assignment groups.
4. After you have defined the data you want to collect and any breakdowns you want to apply, set up and run data collection jobs to populate the scores.
   - Create and schedule data collection jobs to collect data and populate indicator scores.
   - You can manually run historical data collection to collect scores for existing records. Run a historical data collection job once after defining new indicators, then use a scheduled data collection job to keep the scores updated.
   - Check the job log to see if the data collection jobs have run successfully.
5. After you have confirmed that the data collection jobs ran successfully, view the collected scores and create widgets to visualize the data.
   - View the scorecards for your indicators to ensure the scores were populated as expected. Scorecards display a detailed view of data for a single indicator.
   - Create widgets to define how to visualize the collected scores and add the widgets to a dashboard. Widgets allow for additional visualization and formatting options that are not available from scorecards. You can create any number of widgets for your indicators. Widgets only display scores and do not modify the underlying indicator data.

What to do next

After successfully implementing a simple Performance Analytics configuration, consider taking advantage of these advanced options to refine your data:
- Define bucket groups to break down data in user-defined ranges.
- Create scripts to do more advanced data collection, or to organize scores into bucket groups.
- Apply time series to view aggregate data over different time ranges.
- Apply multi-level breakdowns to group and filter data by multiple dimensions.
- Create indicator groups to organize indicators, and to use with widgets that display multiple indicators such as a scorecard list widget.
- Define improvement goals by creating targets and thresholds.
- Create formula indicators to generate scores based on a formula, or manual indicators to manually enter scores.
Add a "set active without explicit mipLevel, cubemapFace and depthSlice respect the mipLevel, cubemapFace and depthSlice values that were specified when creating the RenderTargetIdentifier.. | https://docs.unity3d.com/es/2018.2/ScriptReference/Rendering.CommandBuffer.SetRenderTarget.html | 2019-10-14T04:33:12 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.unity3d.com |
Platform Requirements
EdgeX Foundry is an operating system (OS)-agnostic and hardware (HW)-agnostic IoT edge platform. At this time the following platform minimums are recommended:
Memory: minimum of 1 GB. When considering memory for your EdgeX platform, consider your use of a database; Redis is the current default. Redis is an open source (BSD licensed), in-memory data structure store, used as a database and message broker in EdgeX. Redis is durable and uses persistence only for recovering state; the only data Redis operates on is in-memory. Redis uses a number of techniques to optimize memory utilization. Antirez and Redis Labs have written a number of articles on the underlying details, and those strategies have continued to evolve. When thinking about your system architecture, consider how long data will be living at the edge and consuming memory (physical or physical + virtual).
Hard drive space: minimum of 3 GB of space to run the EdgeX Foundry containers, but you may want more depending on how long sensor and device data is to be retained. Approximately 32GB of storage is minimally recommended to start.
EdgeX Foundry has been run successfully on many systems, including, but not limited to the following systems
- Windows (ver 7 - 10)
- Ubuntu Desktop (ver 14-20)
- Ubuntu Server (ver 14-20)
- Ubuntu Core (ver 16-18)
- Mac OS X 10
Info
EdgeX is agnostic with regards to hardware (x86 and ARM), but only releases artifacts for x86 and ARM 64 systems. EdgeX has been successfully run on ARM 32 platforms but has required users to build their own executables from source. EdgeX does not officially support ARM 32.
Layout
Tabris.js uses the native platform capabilities to lay out UIs. As display density (pixels per inch) widely varies among mobile devices, the pixel measures in Tabris.js are always expressed as Device Independent Pixels (DIP). The density of a device’s display can be accessed by
tabris.device.scaleFactor. The value represents the number of native pixels per Device Independent Pixel.
Layout Property
The property
layout of the
Composite widget contains a layout manager that is responsible for arranging the children of the composite. Different subclasses of
Composite have different default values of
layout:
NavigationView,
TabFolder and
CollectionView do not support
layout since their children cannot be freely arranged. Same goes for the
ContentView instance attached to
AlertDialog.
LayoutData Properties
All widgets have a property
layoutData that influences how the widget will be arranged. The exact syntax supported by
layoutData is described here, but most commonly it is assigned a plain object containing any of these properties:
Example:
widget.layoutData = {left: 10, top: 20};
All of these properties are also available as widget properties that delegate to the
layoutData. When setting one of these the value of
layoutData is updated accordingly:
widget.left = 15;
console.log(widget.layoutData.left); // 15
Setting a field of
layoutData directly is not allowed since the property always returns an immutable object of the type
LayoutData.
widget.layoutData.left = 15; // WRONG!!
The main difference of setting the
layoutData property as opposed to the individual delegates is that it implicitly resets all layout properties to
'auto'. It can therefore be used to completely change the layout of a widget without any regard to the current one.
How
layoutData is interpreted depends on the layout manager of the parent and will be explained below.
The layout and layoutData values of the same widget instance are not relevant to each other. layout deals with the size and position of a widget's children, while layoutData is relevant to a widget's own size and position.
LayoutData Shorthand
LayoutData supports some string aliases for either centering or stretching the widget:
widget.layoutData = 'center';   // equivalent to: widget.layoutData = {centerX: 0, centerY: 0}
widget.layoutData = 'stretch';  // equivalent to: widget.layoutData = {left: 0, top: 0, right: 0, bottom: 0};
widget.layoutData = 'stretchX'; // equivalent to: widget.layoutData = {left: 0, right: 0}
widget.layoutData = 'stretchY'; // equivalent to: widget.layoutData = {top: 0, bottom: 0}
Used as a JSX element
Composite also supports a special shorthand syntax. It allows setting the alias string directly in the tag, omitting the attribute:
widget = <Composite stretch/>; // equivalent to: widget = <Composite layoutData='stretch'/>;
In general JSX allows setting an attribute to
true by omitting the value. This is useful since all layoutData properties except
width and
height can be set to
true:
widget = <Composite centerX baseline/>;
// same as: widget = <Composite centerX={true} baseline={true}/>;
// equivalent to: widget = <Composite centerX={0} baseline={'prev()'}/>;
Properties “bounds” and “absoluteBounds”
The
layoutData property always reflects the values set by the application, not the actual outcome of the layout process. For example, if
width is left on
'auto' it will always be
'auto', not the actual on-screen widget width. However, that value can be obtained via the read-only properties
bounds and
absoluteBounds. They provide the position and size of any widget in relation to its parent or assigned contentView respectively.
Note that there is a short delay needed for the layout calculation before changes to
layoutData are reflected in
bounds. You can be notified of any changes of
bounds by listening to the
resize or
boundsChanged events. (They are fired at the same time.) However, there is no event to get notified when the
absoluteBounds property changes, specifically its
top and
left values may change without a
resize event.
The initial value of
bounds until the first layout pass is
{left: 0, top: 0, width: 0, height: 0}. That is also the value for any widget not attached to a parent.
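For example, a short sketch using the boundsChanged change event mentioned above to observe the layout outcome:

widget.onBoundsChanged(({value: bounds}) => console.log(bounds));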
ConstraintLayout
This is the default layout used by
Composite and most of its subclasses like
ContentView. It supports all
layoutData properties and each child can be arranged freely based on its own content, the parent’s dimensions and its sibling’s sizes and positions. It has no properties and thus there is never a reason to change the default instance.
Properties “width” and “height”
The
width and
height properties define the dimensions of the widget in DIPs. The value can be a positive float,
0 or
'auto'. The default value is
'auto'.
If
width is
'auto' (or not specified), the actual width is computed based on the position of the left and right edge defined by the
left and
right properties. If either
left or
right is also
'auto', the widget will shrink to its intrinsic width, i.e. the minimal width required to display its content.
The same logic applies to
height/
top/
bottom.
Properties “top”, “right”, “bottom”, “left”
The
top,
right,
bottom and
left properties put a constraint on the the position of the child’s edge. For detailed syntax see ConstraintValue. The position may be given as an absolute (
number) or relative (percentage) distance in relation to either the parent’s opposing edge or a sibling’s opposing edge.
Sibling references are resolved dynamically, that is, if a referenced widget is added or removed later, or its
excludeFromLayout property changes, the layout will adjust. When a sibling selector does not match any of the current siblings, it will be treated like an offset of zero.
Properties “centerX” and “centerY”
These properties allow positioning a widget relative to its parent’s center.
A numeric value (may be 0 or negative) for
centerX defines the distance of this widget’s vertical center from the parent’s vertical center in DIPs. The default value is
'auto', which indicates that the
left and
right properties take priority. Can also be set to
true, which is treated like
0.
The same logic applies for
centerY in relation to
top/
bottom.
Property “baseline”
Defines the vertical position of the widget relative to another widget’s text baseline. The value must be a reference to a sibling widget, for example via
'prev()' or
'#id'. (For more examples see left/right/top/bottom properties above.) Can also be set to
true, which is treated like
'prev()'.
This property is only supported for widgets that contain text, i.e. both the actual and the referenced widget must be one of
TextView,
TextInput, or
Button.
For multiline texts, the platforms differ: Android aligns on the first line, iOS on the last line.
This property cannot be used in combination with any of
top,
bottom, and
centerY.
Z-Order
When the layout definition results in widgets overlapping one another, the z-order (drawing order) is defined by the order in which the widgets are appended to their parent. New widgets will be rendered on top of those widgets that have already been appended. This is the same order as given via the parent’s
children() method, with the last child in the returned
WidgetCollection being placed on top of all other siblings.
This order can be changed via the
insertAfter and
insertBefore:
child.insertAfter(parent.children().last()); // now drawn on top of all other children
In this example
child may or may not already be a child of
parent, the outcome will be the same.
The
elevation property overrides the default z-order. Any widget with an
elevation of
1 will be drawn on top of any sibling with an
elevation of
0, regardless of child order.
Fallback position
If all of
left,
right, and
centerX are
'auto', the widget will be positioned as though
left was set to
0.
If all of
top,
bottom,
centerY and
baseline are
'auto', the widget will be positioned as though
top was set to
0.
Consequently, when there is no
layoutData specified at all, the widget will be be displayed in the top left corner while still respecting the parent’s padding.
Example
widget.layoutData = {
  left: 10,             // 10px from left edge
  top: ["#label", 10],  // label's bottom edge + 10px, i.e. 10px below label
  right: ["30%", 10]    // 30% + 10px from right edge, i.e. at 70% - 10px
  // no height or bottom given, i.e. auto-height
};
StackLayout
The
StackLayout is the default layout manager of the
Stack widget, but can also be used on
Composite,
Canvas,
Page and
Tab. It’s a convenient way of arranging widgets in a vertical line, like a single-column table.
StackLayout is just a helper; everything it can do can also be achieved with ConstraintLayout.
StackLayout has two properties, both of which can be set only via its own constructor or the constructor of
Stack. They are:
The order in which the children are arranged vertically corresponds to the order in which they are appended to the composite. The first child is placed at the very top of the composite, the second below that, etc. The last widget will be placed below all others and any remaining space of the composite (if it is higher than needed) will be left blank. The order may be changed at any time by re-inserting a child at any given position using
insertAfter and
insertBefore.
The horizontal layout of each child is controlled by the
alignment property. If it is set to
'left',
'right' or
'centerX', all children will have their intrinsic width and placed at the left, right or horizontal center of the composite. If
alignment is
'stretchX', all children will take all the available horizontal space. The composite’s padding will be respected in all cases.
When alignment is set to stretchX, the width of the composite needs to be determined by either its width property or its left and right properties. It cannot be computed based on the children's intrinsic size.
Examples:
<Stack alignment='right' padding={4} spacing={24}>
  <TextView>lorem</TextView>
  <TextView>ipsum dolor</TextView>
  <TextView>sit amet</TextView>
</Stack>
new Page({ layout: new StackLayout({alignment: 'right', padding: 4, spacing: 24}) });
The
layoutData of children managed by a
StackLayout is interpreted differently from
ConstraintLayout:
Properties “width” and “height”
Like in
ConstraintLayout, the
width and
height properties define the dimensions of the widget in DIPs.
If
width/
height is
'auto' (or not specified) the widget will shrink to its intrinsic width/height. However, if
width is
'auto' and the
alignment of
StackLayout is
'stretchX' the width of the widget is determined by the width of the parent.
Properties “left”, “right” and “centerX”
If all of
left,
right and
centerX are set to
'auto' (or not specified), the horizontal position of the widget is controlled by the
alignment of
StackLayout. If one of more of them are set to any other value they all behave like they do when controlled by
ConstraintLayout. The
alignment is ignored in that case.
Properties “top” and “bottom”
In a stack layout these properties control the distance to the preceding (for
top) and following sibling (
bottom) in DIPs. If set to
'auto', the
'spacing' of
StackLayout is determining the distance. If both
top and
bottom are set to a numeric value the widget will be stretched vertically, assuming it is the first widget to have that configuration and there is enough horizontal space available. The LayoutData alias
'stretchY' has the same effect, as it stands for
{top: 0, bottom: 0}:
new Stack().append(
  new TextView({text: 'Top'}),
  new TextView({top: 0, bottom: 0, text: 'Stretch'}),
  new TextView({text: 'Bottom'}),
);
Same code, but using JSX and layoutData shorthand syntax:
<Stack>
  <TextView>Top</TextView>
  <TextView stretchY>Stretch</TextView>
  <TextView>Bottom</TextView>
</Stack>
Properties “baseline” and “centerY”
These properties are not supported by StackLayout.
brainrender can be used with Jupyter notebooks in two ways:
you can embed a window with your rendered scene
you can have your scene be rendered in a pop-up window.
For an example of how to use
brainrender with
jupyter have a look here.
If you want to have your scene be rendered in a new window, then you just need to set this option before you create your
scene.
from vedo import embedWindow
embedWindow(None)
After this everything will work exactly the same as usual, and you will have access to all of
brainrender's features!
When embedding renderings in Jupyter Notebook not all of
brainrender's functionality will work! If you want to support all of
brainrender's features you should not embed renderings in the notebooks.
Note that this is due to the backend (k3d) used to embed the renderings, not because of brainrender.
If you want to embed your scene anyway, you can do that as either a k3d panel or with itkwidgets by setting:
embedWindow('k3d') # or 'itkwidgets'
If you chose
k3d then, to render your scene:
from vedo import show
# scene.render now will prepare the scene for rendering,
# but it won't render anything yet
scene.render()
# to actually display the scene we use vedo's `show`
# method to show the scene's actors
show(scene.actors)
and with
itkwidgets:
from ipywidgets import VBox, Button
VBox([show(scene.actors)])
SCP (put) command in configuration jobs
You can use the Configuration Jobs feature of Citrix ADM to create configuration jobs, send email notifications, and check execution logs of the jobs created. A job is a set of configuration commands that you can create and run on a single managed instance or on multiple managed instances. For example, you can use configuration jobs for device upgrades.
Configuration jobs in Citrix ADM use Secure Shell (SSH) commands to configure instances, and you can configure a configuration job to use secure copy (SCP) to securely transfer files. SCP is based on the SSH protocol. One of the SCP commands that you can include in a configuration job is the “put” command. You can use the “put” command in configuration jobs to upload or transfer one or more files stored in a local directory on your system to Citrix ADM and then to a directory on the NetScaler instance or instances.
Note The file is uploaded to Citrix ADM and it is later copied (put) to the selected NetScaler instances. The uploaded file is stored in Citrix ADM and is deleted only when the job is deleted. This is necessary for jobs scheduled to execute at a later time.
The command has the following syntax:
put <local_filename> <remote_path/remote_filename>
where,
<local_filename> is the name of the local file to be uploaded.
<remote_path / remote_filename> is the path to a remote directory, and the name to assign to the file when it is copied to that directory.
While creating the configuration job, you can convert the local and remote file name parameters into variables. This lets you assign different files to these parameters for the same set of NetScaler instances every time you execute the job. Also, when you use a file at multiple places in a job and if you want to rename the file, you can redefine the variable instead of changing the file name at all places.
To use the put command to upload files in a configuration job:
Navigate to Networks > Configuration Jobs.
On the Jobs page, click Create Job.
On the Create Job page, enter the name of the job in the Job name field, and in the Configuration Editor pane, enter the “put” command.
For example, if you want to create a configuration job that copies a SSL certificate file saved on your local system to multiple NetScaler instances, you can add a “put” command that uses a variable instead of the name of a particular file, and define the variable type as “file”.
put ssl-file /nsconfig/ssl-file
In this example,
ssl-file - This is the name of the file that needs to be uploaded in the NetScaler instance.
/nsconfig/ssl-file - This is the destination folder on the instance where the ssl-file will be put after the execution of the task.
In the command that you just entered, select the file name that you want to convert to a variable, and then click Convert to Variable, as shown in the following figure.
Verify that the file name has been enclosed by dollar signs (indicating that it is now a variable), and then click the variable.
Specify the details of the variable, such as name, display name, and type.
From the Type drop-down list, select File. Click Save. Declaring the variable as a “File” type allows you to upload files to Citrix ADM.
Click Next and select the NetScaler instances to which to copy the files.
On the Specify Variable Values tab, select Common Variables Values for all Instances section, select the file from the local storage on your system, click Upload to upload the file to Citrix ADM, and click Next.
On the Job Preview tab, you can evaluate and verify the commands to be run on each instance or instance group.
On the Execute tab, you can execute the job now or schedule it to be executed at a later time. You can also choose what action Citrix ADM should take if the command fails. You can also create an Email notification to receive notification about the success or failure of the job, and other details. Click Finish.
You can see the job details by navigating to Networks > Configuration Jobs, and selecting the job that you just configured. Click Details, and then click Variable Details to list the variables added to your job.
Quick Start
This guide will get EdgeX up and running on your machine in as little as 5 minutes. We will skip over lengthy descriptions for now. The goal here is to get you a working IoT Edge stack, from device to cloud, as simply as possible.
When you need more detailed instructions or a breakdown of some of the commands you see in this quick start, see either the Getting Started - Users or Getting Started - Developers guides.
Setup
The fastest way to start running EdgeX is by using our pre-built Docker images. To use them you'll need to install the following:
- Docker
- Docker Compose
Running EdgeX
Once you have Docker and Docker Compose installed, you need to:
- download / save the latest docker-compose file
- issue command to download and run the EdgeX Foundry Docker images from Docker Hub
This can be accomplished with a single command as shown below (please note the tabs for x86 vs ARM architectures).
curl -o docker-compose.yml; docker-compose up
curl -o docker-compose.yml; docker-compose up
Verify that the EdgeX containers have started:
docker-compose ps
Connecting a Device
EdgeX Foundry provides a Random Number device service which is useful for testing; it returns a random number within a configurable range. Configuration for running this service is in the docker-compose.yml file you downloaded at the start of this guide, but it is disabled by default. To enable it, uncomment the following lines in your docker-compose.yml:
device-random:
  image: edgexfoundry/docker-device-random-go:1.2.1
  ports:
    - "127.0.0.1:49988:49988"
  container_name: edgex-device-random
  hostname: edgex-device-random
  networks:
    - edgex-network
  environment:
    <<: *common-variables
    Service_Host: edgex-device-random
  depends_on:
    - data
    - command
docker-compose up -d device-random
This starts the device service and registers a device named Random-Integer-Generator01, which will start sending its random number readings into EdgeX.
You can verify that those readings are being sent by querying the EdgeX core data service for the last 10 event records sent for Random-Integer-Generator01:
curl http://localhost:48080/api/v1/event/device/Random-Integer-Generator01/10
Controlling the Device
Reading data from devices is only part of what EdgeX is capable of. You can also use it to control your devices - this is termed 'actuating' the device. When a device registers with the EdgeX services, it provides a Device Profile that describes both the data readings available from that device, and also the commands that control it.
When our Random Number device service registered the device
Random-Integer-Generator01, it used a profile which defines commands for changing the minimum and maximum values for the random numbers it will generate.
Note
The URLs won't be exactly the same for you, as the generated unique IDs for both the Device and the Command will be different. So be sure to use your values for the following steps.
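For example, a GET call through the command service takes the following shape (both placeholders are hypothetical; substitute the IDs from your own installation):

curl http://localhost:48082/api/v1/device/<device-id>/command/<command-id>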
Warning
Notice that localhost replaces edgex-core-command here. That's because the EdgeX Foundry services are running in Docker. Docker recognizes the internal hostname edgex-core-command, but when calling the service from outside of Docker, you have to use localhost to reach it.
This command will return a JSON result that looks like this:
{ "device": "Random-Integer-Generator01", "origin": 1592231895237359000, "readings": [ { "origin": 1592231895237098000, "device": "Random-Integer-Generator01", "name": "RandomValue_Int8", "value": "-45", "valueType": "Int8" } ], "EncodedEvent": null }
A call to GET of the Random-Integer-Generator01 device's GenerateRandomValue_Int8 operation through the command service results in the next random value produced by the device in JSON format.
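The PUT call has the same shape but carries the new settings in its body; assuming the profile names the parameters Min and Max (check your device profile), it might look like:

curl -X PUT -d '{"Min": "0", "Max": "100"}' http://localhost:48082/api/v1/device/<device-id>/command/<command-id>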
Warning
Again, also notice that localhost replaces edgex-core-command.
There is no visible result of calling PUT if the call is successful.
A call to the device's PUT command through the command service will return no results.
Now every time we call GET on this command, the returned value will be between 0 and 100.
Exporting Data
EdgeX provides exporters (called application services) for a variety of cloud services and applications. To keep this guide simple, we're going to use the community provided 'application service configurable' to send the EdgeX data to a public MQTT broker hosted by HiveMQ. You can then watch for the EdgeX event data via HiveMQ provided MQTT browser client.
First add the following application service to your docker-compose.yml file right after the 'rulesengine' service (around line 255). Spacing is important in YAML, so make sure to copy and paste it correctly.
app-service-mqtt: image: edgexfoundry/docker-app-service-configurable:1.1.0 ports: - "127.0.0.1:48101:48101" container_name: edgex-app-service-configurable-mqtt hostname: edgex-app-service-configurable-mqtt networks: - edgex-network environment: <<: *common-variables edgex_profile: mqtt-export Service_Host: edgex-app-service-configurable-mqtt Service_Port: 48101 MessageBus_SubscribeHost_Host: edgex-core-data Binding_PublishTopic: events Writable_Pipeline_Functions_MQTTSend_Addressable_Address: broker.mqttdashboard.com Writable_Pipeline_Functions_MQTTSend_Addressable_Port: 1883 Writable_Pipeline_Functions_MQTTSend_Addressable_Protocol: tcp Writable_Pipeline_Functions_MQTTSend_Addressable_Publisher: edgex Writable_Pipeline_Functions_MQTTSend_Addressable_Topic: EdgeXEvents depends_on: - consul - data
Note
This adds the application service configurable to your EdgeX system. The application service configurable allows you to configure (versus program) new exports - in this case exporting the EdgeX sensor data to the HiveMQ broker at broker.mqttdashboard.com port 1883. You will be publishing to EdgeXEvents topic.
Save the compose file and then execute another compose up command to have Docker Compose pull and start the configurable application service.
docker-compose up -d
Using the HiveMQ provided client tool, connect to the same public HiveMQ broker your configurable application service is sending EdgeX data to.
Then, use the Subscriptions area to subscribe to the "EdgeXEvents" topic.
You must subscribe to the same topic - EdgeXEvents - to see the EdgeX data sent by the configurable application service.
You will begin seeing your random number readings appear in the Messages area on the screen.
Once subscribed, the EdgeX event data will begin to appear in the Messages area on the browser screen.
Next Steps
Congratulations! You now have a full EdgeX deployment reading data from a (virtual) device and publishing it to an MQTT broker in the cloud, and you were able to control your device through commands into EdgeX.
It's time to continue your journey by reading the Introduction to EdgeX Foundry, what it is and how it's built. From there you can take the Walkthrough to learn how the microservices work together to control devices and read data from them as you just did. | https://docs.edgexfoundry.org/1.2/getting-started/quick-start/ | 2021-06-13T00:12:07 | CC-MAIN-2021-25 | 1623487586465.3 | [array(['EdgeX_GettingStartedUsrActiveContainers.png', 'image'],
dtype=object)
array(['EdgeX_GettingStartedRandomIntegerData.png', 'image'], dtype=object)] | docs.edgexfoundry.org |
- Product
- Customers
- Solutions
This guide walks you through how to configure AWS PrivateLink for use with Datadog.
The overall process consists of configuring an internal endpoint in your VPC that local Datadog Agents can send data to. Your VPC endpoint is then peered with the endpoint within Datadog’s VPC.
Connect to the AWS console and create a new VPC endpoint:
Select Find service by name.
Fill the Service Name text box according to which service you want to establish AWS PrivateLink for:
Hit the verify button. If it does not return Service name found, reach out to the Datadog support team.
Choose the VPC and subnets that should be peered with the Datadog VPC service endpoint.
Make sure that for Enable DNS name the Enable for this endpoint is checked:
Choose the security group of your choice to control what can send traffic to this VPC endpoint.
Note: If you want to forward logs to Datadog through this VPC endpoint, the security group must accept inbound and outbound traffic on port
443.
Hit Create endpoint at the bottom of the screen. If successful, you will see this:
Click on the VPC endpoint ID to check its status.
Wait for the status to move from Pending to Available. This can take up to 10 minutes.
Once it shows Available, the AWS PrivateLink is ready to be used.
If you are collecting logs data, ensure your Agent is configured to send logs over HTTPS. If it’s not already there, add the following to the Agent
datadog.yaml configuration file:
logs_config: use_http: true
If you are using the container Agent, set the following environment variable instead:
DD_LOGS_CONFIG_USE_HTTP=true
This configuration is required when sending logs to Datadog via AWS PrivateLink. More information about this is available in the Agent log collection documentation.
Restart your Agent to send data to Datadog through AWS PrivateLink.
To route traffic to Datadog’s PrivateLink offering in
us-east-1 from other regions, use inter-region Amazon VPC peering.
Inter-region VPC peering enables you to establish connections between VPCs across different AWS regions. This allows VPC resources in different regions to communicate with each other using private IP addresses.
For more information, see the Amazon VPC peering documentation.
Additional helpful documentation, links, and articles: | https://docs.datadoghq.com/agent/guide/private-link/?lang_pref=en | 2021-06-12T23:09:29 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.datadoghq.com |
Working in a Hybrid Environment
In some cases, as a developer or contributor, you want to work on a particular micro service. Yet, you don't want to have to download all the source code, and then build and run for all the micro services. In this case, you can download and run the EdgeX Docker containers for all the micro services you need and run your single micro service (the one you are presumably working on) natively or from a developer tool of choice outside of a container. Within EdgeX, we call this a "hybrid" environment - where part of your EdgeX platform is running from a development environment, while other parts are running from the Dockerized containers. This page outlines how to do hybrid development.
As an example of this process, let's say you want to do coding work with/on the Virtual Device service. You want the rest of the EdgeX environment up and running via Docker containers. How would you set up this hybrid environment? Let's take a look.
Get and Run the EdgeX Docker Containers
- If you haven't already, follow the Getting Started with Docker Guide before continuing.
Since you plan to work with the virtual device service, you probably don't need or want to run all the micro services. You just need the few that the Virtual Device will be communicating with or that will be required to run a minimal EdgeX environment. So you will need to run Consul, Redis, Core Data, Core Metadata, Support Notifications, and Core Command.
Based on the instructions found in the Getting Started with Docker, locate and download the appropriate Docker Compose file for your development environment. Next, issue the following commands to start this set of EdgeX containers - providing a minimal functioning EdgeX environment.
docker-compose up -d consul docker-compose up -d redis docker-compose up -d notifications docker-compose up -d metadata docker-compose up -d data docker-compose up -d command
Note
These notes assume you are working with the EdgeX Genva release. Some versions of EdgeX may require other or additional containers to run.
Run the command below to confirm that all the containers have started.
docker-compose ps
Get, Build and Run the (non-Docker) Service
With the EdgeX containers running, you can now download, build and run natively (outside of a container) the service you want to work on. In this example, the virtual device service is used to exemplify the steps necessary to get, build and run the native service with the EdgeX containerized services. However, the practice could be applied to any service.
Get the service code
Per Getting Started Go Developers, pull the micro service code you want to work on from GitHub. In this example, we assume you want to get the device-virtual-go.
git clone
Build the service code
At this time, you can add or modify the code to make the service changes you need. Once ready, you must compile and build the service into an executable. Change folders to the cloned micro service directory and build the service.
cd device-virtual-go/ make build
Change the configuration
Depending on the service you are working on, you may need to change the configuration of the service to point to and use the other services that are containerized (running in Docker). In particular, if the service you are working on is not on the same host as the Docker Engine running the containerized services, you will likely need to change the configuration.
Examine the configuration.toml file in the cmd/res folder of the device-virtual-go. Note that the Registry (located in the [Registry] section of the configuration) and all the "clients" (located in the [clients] section of the configuration file) suggest that the "Host" of these services is "localhost". These and other host configuration elements need to change when the services are not running on the same host. If you do have to change the configuration, save the configuration.toml file after making changes.
Run the service code natively.
The executable created by the make command is usually found in the cmd folder of the service.
cd cmd ./device-virtual
Check the results
At this time, your virtual device micro service should be communicating with the other EdgeX micro services running in their Docker containers. Give the virtual device a few seconds or so to initialize itself and start sending data to Core Data. To check that it is working properly, open a browser and point your browser to Core Data to check that events are being deposited. You can do this by calling on the Core Data API that checks the count of events in Core Data http://[host].48080/api/v1/event/count.
Note
If you choose, you can also import the service into GoLand and then code and run the service from GoLand. Follow the instructions in the Getting Started - Go Developers to learn how to import, build and run a service in GoLand.
| https://docs.edgexfoundry.org/1.2/getting-started/Ch-GettingStartedHybrid/ | 2021-06-12T22:59:22 | CC-MAIN-2021-25 | 1623487586465.3 | [array(['../EdgeX_GettingStartedHybridBuild.png', 'image'], dtype=object)
array(['../EdgeX_GettingStartedHybridRun.png', 'image'], dtype=object)
array(['../EdgeX_GettingStartedHybridResults.png', 'image'], dtype=object)
array(['../EdgeX_GettingStartedHybridGoLand.png', 'image'], dtype=object)] | docs.edgexfoundry.org |
[−][src]Function tempfile::
spooled_tempfile
pub fn spooled_tempfile(max_size: usize) -> SpooledTempFileⓘ
Notable traits for SpooledTempFile
impl Read for SpooledTempFileimpl Write for SpooledTempFile
Create a new spooled temporary file..
Examples
use tempfile::spooled_tempfile; use std::io::{self, Write}; let mut file = spooled_tempfile(15); writeln!(file, "short line")?; assert!(!file.is_rolled()); // as a result of this write call, the size of the data will exceed // `max_size` (15), so it will be written to a temporary file on disk, // and the in-memory buffer will be dropped writeln!(file, "marvin gardens")?; assert!(file.is_rolled()); | https://docs.rs/tempfile/3.2.0/tempfile/fn.spooled_tempfile.html | 2021-06-13T00:12:53 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.rs |
Create, update, and delete records from SQL Server¶
In this example, we’ll be using Skuid’s OData data source type to pull in info from a SQL Server database.
Step 1: Create a new data source¶
- Navigate to Configure > Data Sources > Data Sources.
- Click New Data Source.
- Choose OData as the data source type and name the data source.
- Click Next Step.
- Enter the URL of the service.
- Select your OData version.
- Click Save.
- Click OK.
If prompted to create a remote site setting, click OK to have Skuid create one for you (if you don’t already have one for this URL). Click Cancel to create the Remote Site Setting yourself.
Step 2: Go to the page where you want to add this model or create a new page.¶
Click Compose to view a list of all Skuid Pages, or Compose > New Page to create a new one.
Step 3: Create a New Model.¶
- Click Models.
- Click to add a new Model.
- Choose OData as the Data Source Type.
- Choose the data source you created earlier in this tutorial.
- Start typing and choose the External Object from the list.
Step 4: Choose the fields you want to show in your page.¶
- Click Fields.
- Check the fields you want to include. Best Practice: Always include the Id field.
- Use Search to aid you in your quest.
Success! You are now able to search, update, create, and delete records from your SQL Server Database right from Salesforce.¶
Notice that there’s no visual difference between this table and any other Skuid table on data from Salesforce or any other source, so you can view your data as a cohesive whole, regardless of where it comes from. Feel free to add a Title Component to this page though, to remind yourself, like I did, that “This Data comes from SQL Server!”
Troubleshooting Tips¶
- Need help with debugging? See the data troubleshooting topic.
- You can also hop over to community.skuid.com at any time to ask questions, report problems and give feedback. | https://docs.skuid.com/v11.2.7/en/data/odata/odata-sql.html | 2021-06-13T00:16:45 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.skuid.com |
Use the timeline
Use the timeline
This topic assumes that you're comfortable running simple searches to retrieve events. If you're not sure, go back to the last topic where you searched with keywords, wildcards, and Booleans to pinpoint an error.
Back at the Flower & Gift shop, let's continue with the customer (10.2.1.44) you were assisting. He reported an error while purchasing a gift for his girlfriend. You confirmed his error, and now you want to find the cause of it.
Continue with the last search, which showed you the customer's failed purchase attempts.
1. Search for:
sourcetype=access_combined_wcookie 10.2.1.44 purchase NOT 200 NOT 404
In the last topic, you really just focused on the search results listed in the events viewer area of this dashboard. Now, let's take a look at the timeline.
The location of each bar on the timeline corresponds to an instance when the events that match your search occurred. If there are no bars at a time period, no events were found then.
2. Mouse over one of the bars.
A tooltip pops up and displays the number of events that Splunk found during the time span of that bar (1 bar = 1 hr).
The taller the bar, the more events occurred at that time. Often seeing spikes in the number of events or no events is a good indication that something has happened.
3. Click one of the bars, for example the tallest bar.
This updates your search results to show you only the events at the time span. Splunk does not run the search when you click on the bar. Instead, it gives you a preview of the results zoomed-in at the time range. You can still select other bars at this point.
4. Double-click on the same bar.
Splunk runs the search again and retrieves only events during that one hour span you selected.
You should see the same search results in the Event viewer, but, notice that the search overrides the time range picker and it now shows "Custom time". (You'll see more of the time range picker later.) Also, each bar now represents one minute of time (1 bar = 1 min).
One hour is still a wide time period to search, so let's narrow the search down more.
5. Double-click another bar.
Once again, this updates your search to now retrieve events during that one minute span of time. Each bar represents the number of events for one second of time.
Now, you want to expand your search to see everything else, if anything happened during this minute.
6. Without changing the time range, replace your previous search in the search bar with:
*
Splunk supports using the asterisk (*) wildcard to search for "all" or to retrieve events based on parts of a keyword. Up to now, you've just searched for Web access logs. This search tells Splunk that you want to see everything that occurred at this time range:
This search returns events from all the logs on your server. You expect to see other user's Web activity--perhaps from different hosts. But instead you see a cluster of mySQL database errors. These errors were causing your customer's purchases to fail. Now, you can report this issue to someone in the IT Operations team.
When you're ready, proceed to the next topic to learn about searching over different time ranges.
This documentation applies to the following versions of Splunk: 4.3 , 4.3.1 , 4.3.2 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/latest/User/Timelinetutorial | 2012-05-27T11:07:36 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Change the time range
Change the time range
This topic assumes that you're familiar with running ad hoc searches and using the timeline. If you're not sure, review the previous topics on searching and using the timeline.
This topic shows you how to narrow the scope of your investigative searching over any past time range. If you have some knowledge about when an event occurred, use it to target your search to that time period for faster results.
It's your second day of work with the Customer Support team for the online Flower & Gift shop. You just got to your desk. Before you make yourself a cappuccino, you decide to run a quick search to see if there were any recent issues you should be aware of.
1. Return to the Search dashboard and type in the following search over all time:
error OR failed OR severe OR (sourcetype=access_* (404 OR 500 OR 503))
This searches for general errors in your event data over the course of the last week. Instead of matching just one type of log, this searches across all the logs in your index. It matches any occurrence of the words "error", "failed", or "severe" in your event data. Additionally, if the log is a Web access log, it looks for HTTP error codes, "404", "500", or "503".
This search returns a significant amount of errors. You're not interested in knowing what happened over All time, even if it's just the course of a week. You just got into work, so you want to know about more recent activity, such as overnight or the last hour. But, because of the limitations of this dataset, let's look at yesterday's errors.
2. Drop down the time range picker and change the time range to Other > Yesterday.
3. Selecting a time range from this list automatically runs the search for you. If it doesn't, just hit Enter.
This search returns events for general errors across all your logs, not just Web access logs. (If your sample data file is more than a day old, you can still get these results by selecting Custom time and entering the last date for which you have data.) Scroll through the search results. There are more mySQL database errors and some 404 errors. You ask the intern to get you a cup of coffee while you contact the Web team about the 404 errors and the IT Operations team about the recurring server errors.
Up to now, you've run simple searches that matched the raw text in your events. You've only scratched the surface of what you can do in Splunk. When you're ready to proceed, go on to the next topic to learn about fields and how to search with fields.
This documentation applies to the following versions of Splunk: 4.3 , 4.3.1 , 4.3.2 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/latest/User/Timerangetutorial | 2012-05-27T11:07:39 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Start Splunk
Start Splunk
When you start Splunk, you're starting up two processes on your host,
splunkd and
splunkweb:
splunkdis a distributed C/C++ server that accesses, processes and indexes streaming IT data and handles search requests.
splunkwebis a Python-based application server that provides the Splunk Web interface that you use to search and navigate your IT data and manage your Splunk deployment.
Windows
To start Splunk on Windows, you have three options:
- start Splunk from the Start menu.
- use the Windows Services Manager to start and stop
splunkdand
splunkweb.
- open a cmd window and go to \Program Files\Splunk\bin and type
> splunk start
Mac OS X
Open a terminal or shell to access the CLI. Go to
/Applications/splunk/bin/, and type:
$ ./splunk start
If you have administrator or root privileges you can simplify CLI usage by setting a Splunk environment variable. For more information about how to do this, read "About the CLI" in the Admin manual.
Accept the Splunk license
After you run the start command, Splunk displays the license agreement and prompts you to accept the license before the startup continues.
After you accept the license, the startup sequence displays. At the very end, Splunk tells you where to access Splunk Web:
The Splunk Web interface is at
If you run into any problems starting up Splunk, see Start Splunk for the first time in the Installation manual.
Other commands you might need
If you need to stop, restart, or check the status of your Splunk server, use these CLI commands:
$ splunk stop $ splunk restart $ splunk status
Launch Splunk Web
Splunk's interface runs as a Web server and after starting up, Splunk tells you where the Splunk Web interface is. Open a browser and navigate to that location.
Splunk Web runs by default on port 8000 of the host on which it's installed. If you are using Splunk on your local machine, the URL to access Splunk Web is.
If you are using an Enterprise license, launching Splunk for the first time takes you to this login screen. Follow the message to authenticate with the default credentials:
If you are using a Free license, you do not need to authenticate to use Splunk. In this case, when you start up Splunk you won't see this login screen. Instead, you will be taken directly to Splunk Home or whatever is set as the default app for your account.
When you sign in with your default password, Splunk asks you to create a new password.
You can either Skip this or change your password to continue.
Welcome to Splunk
When you log into Splunk for the first time, you should see Splunk Home. This app is designed to help you get started using Splunk. Before you can start using Splunk, you need to add some data.
The Welcome tab includes quick links to:
- Add data: this takes you to the interface where you can define data inputs.
- Launch search app: this takes you to Splunk's search interface, where you can start searching your data.
Use the system navigation bar at the upper right corner to access any apps (under App) and configuration pages (in Manager) for your Splunk server. This system bar is available in every Splunk page, though not all of the same options will be there.
When you're ready, proceed to the next topic in this tutorial to Add data to Splunk.
This documentation applies to the following versions of Splunk: 4.2 , 4.2.1 , 4.2.2 , 4.2.3 , 4.2.4 , 4.2.5 , 4.3 , 4.3.1 , 4.3.2 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/latest/User/StartSplunk | 2012-05-27T11:07:21 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Start searching
Start. Splunk will return. If a term or phrase doesn't exist in your data, you won't see it listed in search assistant..
When you're ready to proceed, go to the next topic to learn how to investigate and troubleshoot interactively using the timeline in Splunk.
This documentation applies to the following versions of Splunk: 4.3 , 4.3.1 , 4.3.2 View the Article History for its revisions.
In Step 6: "Mouse-over an instance of "404" in your search results and alt-click (for Windows, use ctrl-click)."
I've found in Windows XP SP2 it is alt-click to enable a "NOT 404"
In Step 6: "Mouse-over an instance of "404" in your search results and alt-click (for Windows, use ctrl-click)."
I've found in Windows 2008 it is alt-click to enable a "NOT 404"
Thanks. I've corrected Step 6! | http://docs.splunk.com/Documentation/Splunk/latest/User/Startsearchingtutorial | 2012-05-27T11:07:24 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Use a subsearch
Use a subsearch
The last topic introduced search commands, the search pipeline, and drilldown actions. If you're not familiar with them, review more ways to search.
This topic walks you through another search example and shows you two approaches to getting the results that you want.
Back at the Flower & Gift shop, your boss asks you to put together a report that shows the customer who bought the most items yesterday and what he or she bought.
Part 1: Break the search down.
Let's see which customer accessed the online shop the most yesterday.
1. Use the
top command and limit the search to Yesterday:
sourcetype=access_* action=purchase | top limit=1 clientip
Limit the
top command to return only one result for the
clientip. If you wanted to see more than one "top purchasing customer", change this limit value. For more information about usage and syntax, refer to the the "top" command's page in the Search Reference Manual.
This search returns one
clientip value that you will now use to complete your search.
2. Use the
stats command to count this VIP customer's purchases:
sourcetype=access_* action=purchase clientip=10.192.1.39 | stats count by clientip
This search used the
count() function which only returns the count of purchases for the clientip. You also want to know what he bought, so let's use another
stats function.
3. One way to do this is to use the
values() function:
sourcetype=access_* action=purchase clientip=10.192.1.39 | stats count, values(product_id) by clientip
This adds a column to the table that lists what he bought by product ID.
The drawback to this approach is that you have to run two searches each time you want to build this table. The top purchaser is not likely to be the same person at any given time range.
Part 2: Let's use a subsearch instead.
1. Use a subsearch to run the searches from Part 1 inline. Type or copy/paste in:
sourcetype=access_* action=purchase [search sourcetype=access_* action=purchase | top limit=1 clientip | table clientip] | stats count, values(product_id) by clientip
Because the
top command returns
count and
percent fields as well, you use the
table command to keep only the
clientip value.
These results should match the previous result, if you run it on the same time range. But, if you change the time range, you might see different results because the top purchasing customer will be different!
2. Reformat the results so that it's easier to read:
sourcetype=access_* action=purchase [search sourcetype=access_* action=purchase | top limit=1 clientip | table clientip] | stats count, values(product_id) as product_id by clientip | rename count AS "How much did he buy?", product_id AS "What did he buy?", clientip AS "VIP Customer"
For more information about the usage and syntax for the sort command, see the sort command in the Search Reference manual.
While this report is perfectly acceptable, you want to make it better. For example, you don't expect your boss to know the shop items by their product ID numbers. You want to display the VIP customer's purchases by the product names, rather than the cryptic product ID. When you're ready continue on to the next topic to learn about adding more information to your events using field lookups.
This documentation applies to the following versions of Splunk: 4.3 , 4.3.1 , 4.3.2 View the Article History for its revisions.
One thing to note (or improve the docs) is that a subsearch can only be used where the explicit action deals with search vs. transformation of data at the search language. Example: One cannot take a search like "sourcetype=top | multikv" and place a subsearch at the end of it, as multikv isn't expecting a subsearch as an argument. one can however "pipe to append" as in "| append [search some stuff|fields some field]" or "join". it is not obvious when you can and cannot use a subsearch within a command. | http://docs.splunk.com/Documentation/Splunk/latest/User/Subsearchtutorial | 2012-05-27T11:07:27 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
regmon-filters.conf
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
regmon-filters.conf
The following are the spec and example files for regmon-filters.conf.
regmon-filters.conf.spec
# Copyright (C) 2005-2010 Splunk Inc. All Rights Reserved. Version 4.0 # # This file contains potential attribute/value pairs to use when configuring Windows registry # monitoring. The regmon-filters.conf file is used in conjunction with sysmon.conf, and # contains the specific regular expressions you create to refine and filter the hive key paths # you want Splunk to monitor. You must restart Splunk to enable configurations. # # To learn more about configuration files (including precedence) please see the documentation # located at [<stanza name> * Name of the filter being defined. proc = <string> * Regex specifying process image that you want Splunk to monitor. hive = <string> * Regex specifying the registry key path that you want Splunk to monitor. type = <string> * Regex specifying the type(s) of registry event that you want Splunk to monitor. This must be a subset of those defined for the event_types attribute in regmon-filters.conf. baseline = <int 0|1> * Whether or not to establish a baseline value for the keys this filter defines. baseline_interval = <int> * The threshold, in seconds, for how long Spunk has to have been down before re-taking the snapshot. disabled = <int 0|1> * Disables or enables a given filter.
regmon-filters.conf.example
# Copyright (C) 2005-2010 Splunk Inc. All Rights Reserved. Version 4.0 # # This file contains example registry monitor filters # # spec outlined in regmon-filters.conf.spec. [default] disabled = 1 baseline = 0 baseline_interval = 86400 [User keys] proc = \\Device\\.* hive = \\REGISTRY\\USER\\.* type = set|create|delete|rename. | http://docs.splunk.com/Documentation/Splunk/4.0.4/Admin/Regmon-filtersconf | 2012-05-27T05:55:43 | crawl-003 | crawl-003-021 | [] | docs.splunk.com |
Venti
Version 2 Changes Back to Top
Along with our other themes, there are many changes to Venti in the version 2 release. Most of which won't affect your site when upgrading, but there are a few key changes that you'll need to address upon updating. Below is a list of the more notable changes to the Venti theme. Items that need attention are noted.
Venti Additions & Changes
- Completely Redesigned Admin Interface
- Easier to use Gallery system and galleries now support videos as well.
- short code system.. You can find a list of the shortcodes here: Version 2 Shortcodes
-. | http://docs.rawfolio.com/venti | 2018-01-16T13:14:20 | CC-MAIN-2018-05 | 1516084886436.25 | [array(['/img/Venti.jpg', None], dtype=object)] | docs.rawfolio.com |
Since ver 2.5
The CSS field allows adding any CSS properties for element or children tags inside that element. You can also create the styling for different screen sizes (responsive). It works completely automatically, you only need to register it on the map and not further processed anywhere.
To make sure that it works, you have to use filters to add master class in your element ==> Use CSS system for my Element
Map Usage:
Example:
Register a new shortcode with filed type: css | http://docs.kingcomposer.com/available-param-types/css-field/ | 2018-01-16T13:35:44 | CC-MAIN-2018-05 | 1516084886436.25 | [array(['http://docs.kingcomposer.com/wp-content/themes/kc/images/docs/fields/css.jpg',
'KingComposer Editor'], dtype=object) ] | docs.kingcomposer.com |
ApSIC Xbench has a very powerful search engine. For example, you can search by source term, target term, or both source and target term. ApSIC Xbench also allows you to search using regular expressions or Microsoft Word wildcards, and combine them using the PowerSearch mode.
Likely, most of your searches will be done by source term. However, your need to search for a term will not originate while you are in the interface of ApSIC Xbench, but when you are translating within Word or within some other CAT application such as Trados Studio, memoQ, Wordfast, Déjà Vu, or from a note in your email program such as Microsoft Outlook.
This is why ApSIC Xbench is accessible system-wide from any application with a single key combination (Ctrl+Alt+Insert).
The following 5 steps describe how you should interact with ApSIC Xbench. The starting point for this scenario is an open document with Microsoft Word in the foreground and an ApSIC Xbench project loaded in the background.
You will notice that, especially for software options, it is faster to search and paste than to type the target software options manually. Thus, you are more productive and your translations are more consistent at the same time.
Familiarize yourself with the above procedure until you feel it is intuitive enough. Try it with words that you know are exact matches so you can get familiar with the paste step. | https://docs.xbench.net/user-guide/search-terms/ | 2018-01-16T13:10:26 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.xbench.net |
Core
Window
Core Flyout Window
Core Flyout Window
Class
Flyout
Definition
public : sealed class CoreWindowFlyout : ICoreWindowFlyout
public sealed class CoreWindowFlyout : ICoreWindowFlyout
Public NotInheritable Class CoreWindowFlyout Implements ICoreWindowFlyout
- Attributes
-
Remarks
Note
: This class is not agile, which means that you need to consider its threading model and marshaling behavior. For more info, see Threading and Marshaling (C++/CX).
Constructors
Creates an instance of the CoreWindowFlyout class at the supplied position.
public : CoreWindowFlyout(Point position)
public CoreWindowFlyout(Point position)
Public Sub New(position As Point)
The pixel position on the screen where the flyout is to originate. The position provides the upper-leftmost corner of the flyout.
- See Also
-
CoreWindowFlyout(Point, String) CoreWindowFlyout(Point, String) CoreWindowFlyout(Point, String)
Creates an instance of the CoreWindowFlyout class at the specified position with the supplied title.
public : CoreWindowFlyout(Point position, Platform::String title)
public CoreWindowFlyout(Point position, String title)
Public Sub New(position As Point, title As String)
The pixel position on the screen where the flyout is to originate. The position provides the upper-leftmost corner of the flyout.
- title
- Platform::String String String
The title of the flyout.
- See Also
-
Properties
Gets or sets the delegate called when the back button on the flyout flyout is selected.
Gets the set of user interface commands available on the flyout. user interface commands available on the flyout.
Gets or sets the index of the flyout window's default command.
public : unsigned int DefaultCommandIndex { get; set; }
public uint DefaultCommandIndex { get; set; }
Public ReadWrite Property DefaultCommandIndex As uint
- Value
- unsigned int uint uint
The index value of the flyout window's default command (such as OK).
Gets or sets a value that indicates whether any UI interaction event message is slightly delayed or not. This delay prevents a user from accidentally invoking an action on the flyout window.
public : int IsInteractionDelayed { get; set; }
public int IsInteractionDelayed { get; set; }
Public ReadWrite Property IsInteractionDelayed As int
- Value
- int int.
Methods
Displays the flyout flyout, as well as information about the action.
Events
Is fired when the flyout )) | https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Core.CoreWindowFlyout | 2018-01-16T13:40:47 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.microsoft.com |
Magnum now support policy in code [1], which means if users didn’t modify any of policy rules, they can leave policy file (in json or yaml format) empty or just remove it all together. Because from now, Magnum keeps all default policies under magnum/common/policies module. Users can still modify/generate the policy rules they want in the policy.yaml or policy.json file which will override the default policy rules in code only if those rules show in the policy file.
[1].
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/releasenotes/magnum/unreleased.html | 2018-01-16T13:05:58 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.openstack.org |
Configuration Options
A newer version is available; see the version menu above for details.
RBAC Service Configuration Options
There are various configuration options for the RBAC service. Each section can exist in its own file or in separate files.
General RBAC Config
The following is general configuration of the RBAC service and is not required, but when present must be under the ‘rbac’ section:
rbac: { # Duration in hours that a password reset token is viable password-reset-expiration: 24 # Duration in minutes that a session is viable session-timeout: 60 failed-attempts-lockout: 10 }
password-reset-expiration
When a user doesn’t remember their current password, an administrator can generate a token for them to change their password. The duration, in hours, that this generated token is valid can be changed with, you have to also create a new file or Puppet will reset back to 10 when it next runs. Create the file in an RBAC section of will allow access to the RBAC APIs as the
‘api_user’. This user is by default an administrator, but permissions
for the ‘api_user’ can be changed. By default there is no certificate
whitelist.
Authentication
You need to authenticate requests to the RBAC API using a certificate listed in RBAC’s certificate whitelist, located at
/etc/puppetlabs/console-services/rbac-certificate-whitelist. Note that if you edit this file, you must restart the
pe-console-services service for your changes to take effect. You can attach the certificate using the command line as demonstrated in the example curl query below. You must have the whitelist certificate name and the private key to run the script.
You do not need to use an agent certificate for authentication. You can use
puppet cert generate to create a new certificate specifically for use with the API.
Example Query
The following query will return https://<DNS NAME OF CONSOLE>:4433/rbac-api/v1/users --cert /etc/puppetlabs/puppet/ssl/certs/<WHITELISTED CERTNAME>.pem --key /etc/puppetlabs/puppet/ssl/private_keys/<WHITELISTED CERTNAME>.pem --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem -H "Content-Type: application/json"
RBAC Database Config
Credential information for the RBAC service is stored in a PostgreSQL database. The configuration of that database is found in the ‘rbac-database’ section of the config, like below:
will use to store credentials.
user
This is the username the RBAC service should use to connect to the PostgreSQL database.
This is the password the RBAC service should use to connect to the PostgreSQL database. | https://docs.puppet.com/pe/2015.2/rbac_config.html | 2018-01-16T13:08:34 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.puppet.com |
tox configuration and usage examples¶
- Basic usage
- a simple tox.ini / default environments
- specifying a platform
- whitelisting non-virtualenv commands
- depending on requirements.txt or defining constraints
- using a different default PyPI url
- installing dependencies from multiple PyPI servers
- further customizing installation
- forcing re-creation of virtual environments
- passing down environment variables
- setting environment variables
- special handling of PYTHONHASHSEED
- Integration with “setup.py test” command
- Ignoring a command exit code
- Compressing dependency matrix
- Prevent symbolic links in virtualenv
- pytest and tox
- unittest2, discover and tox
- nose and tox
- General tips and tricks
- Using tox with the Jenkins Integration Server
- Development environment
- Platform specification | http://tox.readthedocs.io/en/latest/examples.html | 2018-01-16T13:18:25 | CC-MAIN-2018-05 | 1516084886436.25 | [] | tox.readthedocs.io |
Resources¶
A resource represents an instrument, e.g. a measurement device. There are multiple classes derived from resources representing the different available types of resources (eg. GPIB, Serial). Each contains the particular set of attributes an methods that are available by the underlying device.
You do not create this objects directly but they are returned by the
pyvisa.highlevel.ResourceManager.open_resource() method of a
pyvisa.highlevel.ResourceManager. In general terms, there
are two main groups derived from
pyvisa.resources.Resource,
pyvisa.resources.RegisterBasedResource and
pyvisa.resources.RegisterBasedResource.
Note
The resource Python class to use is selected automatically from the resource name. However, you can force a Resource Python class:
>>> from pyvisa.resources import MessageBasedResource >>> inst = rm.open('ASRL1::INSTR', resource_pyclass=MessageBasedResource)
The following sections explore the most common attributes of
Resource and
MessageBased (Serial, GPIB, etc) which are the ones you will encounte more
often. For more information, refer to the API.
Attributes Resource¶
session¶
Each communication channel to an instrument has a session handle which is unique. You can get this value:
>>> my_device.session 10442240
If the resource is closed, an exception will be raised:
>>> inst.close() >>> inst.session Traceback (most recent call last): ... pyvisa.errors.InvalidSession: Invalid session handle. The resource might be closed.
timeout¶
Very most VISA I/O operations may be performed with a timeout. If a timeout is set, every operation that takes longer than the timeout is aborted and an exception is raised. Timeouts are given per instrument in milliseconds.
For all PyVISA objects, a timeout is set with
my_device.timeout = 25000
Here,
my_device may be a device, an interface or whatever, and its timeout is
set to 25 seconds. To set an infinite timeout, set it to
None or
float('+inf') or:
del my_device.timeout
To set it to immediate, set it to 0 or a negative value. (Actually, any value smaller than 1 is considered immediate)
Now every operation of the resource takes as long as it takes, even indefinitely if necessary.
Attributes of MessageBase resources¶
Chunk length¶
If you read data from a device, you must store it somewhere. Unfortunately, PyVISA must make space for the data before it starts reading, which means that it must know how much data the device will send. However, it doesn’t know a priori.
Therefore, PyVISA reads from the device in chunks. Each chunk is 20 kilobytes long by default. If there’s still data to be read, PyVISA repeats the procedure and eventually concatenates the results and returns it to you. Those 20 kilobytes are large enough so that mostly one read cycle is sufficient.
The whole thing happens automatically, as you can see. Normally you needn’t worry about it. However, some devices don’t like to send data in chunks. So if you have trouble with a certain device and expect data lengths larger than the default chunk length, you should increase its value by saying e.g.
my_instrument.chunk_size = 102400
This example sets it to 100 kilobytes.
Termination characters¶
Somehow the computer must detect when the device is finished with sending a message. It does so by using different methods, depending on the bus system. In most cases you don’t need to worry about termination characters because the defaults are very good. However, if you have trouble, you may influence termination characters with PyVISA.
Termination characters may be one character or a sequence of characters. Whenever this character or sequence occurs in the input stream, the read operation is terminated and the read message is given to the calling application. The next read operation continues with the input stream immediately after the last termination sequence. In PyVISA, the termination characters are stripped off the message before it is given to you.
You may set termination characters for each instrument, e.g.
my_instrument.read_termination = '\r'
(‘r’ is carriage return, usually appearing in the manuals as CR)
Alternatively you can give it when creating your instrument object:
my_instrument = rm.open_resource("GPIB::10", read_termination='\r')
The default value depends on the bus system. Generally, the sequence is empty,
in particular for GPIB. For RS232 it’s
\r.
You can specify the character to add to each outgoing message using the
write_termination attribute.
query_delay and send_end¶
There are two further options related to message termination, namely
send_end and
query_delay.
send_end is a boolean. If it’s
True (the
default), the EOI line is asserted after each write operation, signalling the
end of the operation. EOI is GPIB-specific but similar action is taken for
other interfaces.
The argument
query_delay is the time in seconds to wait after
each write operation. So you could write:
my_instrument = rm.open_resource("GPIB::10", send_end=False, delay=1.2)
This will set the delay to 1.2 seconds, and the EOI line is omitted. By the way, omitting EOI is not recommended, so if you omit it nevertheless, you should know what you’re doing. | http://pyvisa.readthedocs.io/en/stable/resources.html | 2018-01-16T12:58:57 | CC-MAIN-2018-05 | 1516084886436.25 | [] | pyvisa.readthedocs.io |
All content with label 2lcache+async+grid+hot_rod+hotrod+infinispan+jboss_cache+listener+lock_striping+out_of_memory+release+scala+user_guide+xaresource.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, partitioning, query, deadlock, intro, pojo_cache, archetype, jbossas, nexus,
guide, schema, cache, s3, amazon, memcached, jcache, test, api, xsd, ehcache, maven, documentation, roadmap, youtube, userguide, write_behind, ec2, 缓存, hibernate, aws, interface, clustering, setup, eviction, gridfs, fine_grained, concurrency, index, events, hash_function, configuration, batch, buddy_replication, loader, xa, pojo, write_through, cloud, remoting, mvcc, tutorial, notification, presentation, murmurhash2, jbosscache3x, read_committed, xml, distribution, jira, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, br, websocket, transaction, interactive, build, demo, cache_server, installation, client, non-blocking, migration, jpa, filesystem, tx, article, gui_demo, eventing, client_server, infinispan_user_guide, murmurhash, standalone, repeatable_read, snapshot, webdav, docs, consistent_hash, batching, store, whitepaper, jta, faq, as5, jgroups, lucene, locking, rest
more »
( - 2lcache, - async, - grid, - hot_rod, - hotrod, - infinispan, - jboss_cache, - listener, - lock_striping, - out_of_memory, - release, - scala, - user_guide, - xaresource )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/2lcache+async+grid+hot_rod+hotrod+infinispan+jboss_cache+listener+lock_striping+out_of_memory+release+scala+user_guide+xaresource | 2019-09-15T14:50:24 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.jboss.org |
All content with label client+coherence+gridfs+infinispan+installation+non-blocking+query+rest+s.
Related Labels:
json, expiration, publish, datagrid, interceptor, server, replication, transactionmanager, dist, release, deadlock, rest_security, archetype, lock_striping, jbossas, nexus, guide, schema, listener,
cache, amazon, memcached, grid, test, jcache, api, xsd, ehcache, maven, documentation, jboss, wcm, write_behind, ec2, 缓存, hibernate, jwt, getting, aws, getting_started, interface, custom_interceptor, setup, clustering, eviction, ls, concurrency, examples, jboss_cache, import, index, events, hash_function, configuration, batch, buddy_replication, loader, write_through, cloud, remoting, tutorial, notification, jbosscache3x, read_committed, xml, distribution, jose, started, cachestore, data_grid, resteasy, hibernate_search, cluster, websocket, transaction, async, interactive, xaresource, build, gatein, searchable, demo, scala, command-line, as7, migration, jpa, filesystem, json_encryption, gui_demo, eventing, shell, client_server, testng, infinispan_user_guide, standalone, repeatable_read, hotrod, webdav, snapshot, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, jsr-107, jgroups, locking, favourite, json_signature, hot_rod
more »
( - client, - coherence, - gridfs, - infinispan, - installation, - non-blocking, - query, - rest, - s )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/client+coherence+gridfs+infinispan+installation+non-blocking+query+rest+s | 2019-09-15T14:51:06 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.jboss.org |
Creating a NuGet from Existing Library Projects
Existing PCL or .NET Standard libraries can be turned into NuGets via the Project Options window:
Right-click on the library project in the Solution Pad and choose Options.
Go to the NuGet Package > Metadata section and enter all the required information in the General tab:
Optionally, add additional metadata in the Details tab.
Once the metadata is configured, you can right-click on the project and choose Create NuGet Package and the .nupkg NuGet package file will be saved in the /bin/ folder (either Debug or Release, depending on configuration).
To create the NuGet package on every build or deploy, go to the NuGet Package > Build section and tick Create a NuGet Package when building the project:
Note
Building the NuGet package can slow down the build process. If this box is not ticked, you can still generate a NuGet package manually at any time from the project context menu (shown in step 4 above).
Verifying the Output
NuGet packages are also ZIP files, so it's possible to inspect the internal structure of the generated package.
This screenshot shows the contents of a PCL-based NuGet – only a single PCL assembly is included:
Related Links
Feedback | https://docs.microsoft.com/en-us/xamarin/cross-platform/app-fundamentals/nuget-multiplatform-libraries/existing-library | 2019-09-15T14:32:49 | CC-MAIN-2019-39 | 1568514571506.61 | [array(['existing-library-images/nuget-output.png',
'Files contained in the NuGet package'], dtype=object)] | docs.microsoft.com |
.
- Extract the file you downloaded in the previous step and do the following:
- Copy the
org.wso2.carbon.apimgt.migrate.client-1.9.X.jarfile to
<APIM_1.9.0_HOME>/repository/components/dropins. If you use a clustered/distributed API Manager setup, copy the JAR file to all nodes.
- Copy the
migration-scriptfolder into
<APIM_1.9.0_HOME>/. If you use a clustered/distributed API Manager setup, copy the migration-script folder to the node that hosts your database.
If you are not using the MySQL database with the API Manager, change the query inside
<APIM_1.9.0_HOME>/migration-scripts/18-19-migration/drop-fk.sqlaccording to your database type. The scripts for each database type are given in the table:
Rename the
<lastAccessTimeLocation>element in the
<APIM_1.9 1.9.0, back up and delete the
<APIM_1.9.0_HOME>/
solrdirectory and restart the server.. | https://docs.wso2.com/pages/viewpage.action?pageId=47515350 | 2019-09-15T14:00:56 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.wso2.com |
The Browsers page in New Relic Browser provides information about your end users' experience with your app based on which browser they use, such as Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, and Apple Safari. This page includes:
- Top browsers by throughput (pages per minute or ppm)
- Average page load time by platform type (mobile, tablet, desktop)
Drill-down charts also segment the selected browser type by version; for example, Chrome 31, 32, 33, etc. This helps you quickly determine whether problems with page load timing may be related to a specific browser type or platform, or whether the problem is more widespread. include:
- Request queuing (black): Wait time between the web server and the application code. Large numbers indicate a busy application server.
- Web application (purple): Time spent in the application code.
- Network (brown): The network latency, or time it takes for a request to make a round trip over the Internet.
- DOM processing (yellow): In the browser, parsing and interpreting the HTML and retrieving assets. Measured by the browser's DOMContentLoaded event.
- Page rendering (blue): In the browser, displaying the HTML, running in-line JavaScript, and loading images. Measured by the browser's Load event.
Note: For apps that have been deployed using the copy/paste method,) | https://docs.newrelic.com/docs/browser/new-relic-browser/additional-standard-features/browsers-problem-patterns-type-or-platform | 2019-09-15T14:53:17 | CC-MAIN-2019-39 | 1568514571506.61 | [array(['https://docs.newrelic.com/sites/default/files/styles/inline_660px/public/thumbnails/image/screen-browser-browsers.png?itok=5OWey7XB',
'screen-browser-browsers.png screen-browser-browsers.png'],
dtype=object)
array(['https://docs.newrelic.com/sites/default/files/styles/inline_660px/public/thumbnails/image/screen-browser-browsers-example.png?itok=uiYlwFcT',
'Browser detail view (example) Browsers detail view (example)'],
dtype=object) ] | docs.newrelic.com |
Contents Now Platform Capabilities Previous Topic Next Topic Create notification channels Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create notification channels You can add channels to receive your notifications. A notification channel is an email account or voice message system that you have access to. Before you beginRole required: user About this taskNotification channels include email addresses, service providers for SMS messages, and mobile applications. You can create voice notification channels to support applications like Notify.Note: If you are using the ServiceNow mobile application or a custom push application, you do not need to create a push channel for your mobile device. The system automatically creates a channel for the mobile app after you initially log in to your instance from your mobile device. Procedure Click the gear icon in the banner frame to open the System Settings window, and click the Notifications tab. Click Create Channel. Complete the fields on the New Channel form. Field Description Name A descriptive name for the channel, such as the device or email account. Type The type of channel: Email: for email messages.Note: All users with an email address have a primary email channel, which is created automatically after a notification is sent to them. SMS: for SMS messages. Voice: for phone messages. If you are using the ServiceNow mobile application or a custom push application, the system automatically creates a channel for the mobile app after you initially log in to your instance from your mobile device. Email address The email address of the channel. Phone number The phone number for SMS messages or for voice messages. Service provider The service provider for SMS messages. Click Save. The system creates and enables the channel, and adds it to the list of notification channels. What to do nextTo receive notifications on your new notification channel, you must enable the channel for individual notifications. After you enable the channel for a notification, you can set conditions to further control the notifications that you receive on the channel. For more information, see Apply notification conditions. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/notification/task/create-channel.html | 2019-09-15T14:42:45 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.servicenow.com |
Ping Sender¶
The ping sender is a minimalistic program whose sole purpose is to deliver a telemetry ping. It accepts the following parameters:
- the URL the ping will be sent to (as an HTTP POST command);
- the path to an uncompressed file holding the ping contents.
Once the ping has been read from disk the ping sender will try to post it once, exiting with a non-zero value if it fails. If sending the ping succeeds then the ping file is removed.
The content of the HTTP request is gzip encoded. The request comes with a few additional headers:
User-Agent: pingsender/1.0
X-PingSender-Version: 1.0. Even if this data is already included by the user agent, this header is needed as the pipeline is not currently storing use agent strings and doing that could require storing a fair chunk of redundant extra data. We need to discern between pings sent using the ping sender and the ones sent using the normal flow, at the end of the ingestion pipeline, for validation purposes.
Note
The ping sender relies on libcurl for Linux and Mac build and on WinInet for Windows ones for its HTTP functionality. It currently ignores Firefox or the system proxy configuration.
In non-debug mode the ping sender doesn’t print anything, not even on error, this is done deliberately to prevent startling the user on architectures such as Windows that would open a separate console window just to display the program output. If you need runtime information to be printed out compile the ping sender with debugging enabled.
The pingsender is not supported on Firefox for Android (see bug 1335917) | http://firefox-source-docs.mozilla.org/toolkit/components/telemetry/internals/pingsender.html | 2019-09-15T15:06:20 | CC-MAIN-2019-39 | 1568514571506.61 | [] | firefox-source-docs.mozilla.org |
GLOBAL ECONOMY Crisis of confidence hits euro area as dollar thrives on uncertainty / p. 6 AI & THE CENTRAL BANKS How algorithms facilitate better interpretation of central bank policy / p. 32 BY THE NUMBERS Key figure forecasts for the Nordic and global economies / p. 36 3 / 2019 TIME OF FEAR Policy choices and Nordic divergence come to the fore / p. 5
Print
Download PDF file | https://docs.nordeamarkets.com/nordea-economic-outlook/economic-outlook-2019/EO-en-03-2019/?utm_source=press&utm_medium=ncom&utm_campaign=EO201903 | 2019-09-15T13:56:37 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.nordeamarkets.com |
- Categories:
Warehouse & Resource Monitor DDL
SHOW RESOURCE MONITORS¶
Lists all the resource monitors in your account for which you have access privileges. a
levelcolumn with the following values:
WAREHOUSE: The resource monitor is assigned to one or more warehouses and, therefore, is monitoring the credit usage for the assigned warehouse(s).
ACCOUNT: The resource monitor is assigned at the account-level and, therefore, monitoring the credit usage for your entire account.
NULL: The resource monitor is not assigned to the account or any warehouses and, therefore, is not monitoring any credit usage. | https://docs.snowflake.net/manuals/sql-reference/sql/show-resource-monitors.html | 2019-09-15T13:49:56 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.snowflake.net |
Files-Only Installation (Reporting Services)
Files, Report Designer Preview, the Reporting Services Configuration tool, and the Reporting Services command line utilities (rsconfig.exe, rskeymgmt.exe and rs.exe). It does not apply to shared features such as SQL Server Management Studio or Business Intelligence Development Studio, Tool. | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/cc281380%28v%3Dsql.100%29 | 2019-09-15T14:53:23 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.microsoft.com |
This topic provides instructions on how to configure the CAS inbound authenticator and the WSO2 Identity Server and demonstrates this integration using a sample app (cas-client-webapp).
This procedure was tested using Java 8. The current version of the CAS Inbound authenticator is not supported with a tenant user. CAS Version 1.0.2 Inbound Authenticator is supported by WSO2 Identity Server versions 5.2.0 and CAS Version 2.0.1 Inbound Authenticator is supported by WSO2 Identity Server versions 5.3.0.
If you are using CAS authenticator version 2.0.2, go to the v2.0.2 tag of the identity-outbound-auth-cas GitHub repository to view the documentation
See the following sections for more information on configuring this integration.
Prerequisites
Download WSO2 Identity Server from the WSO2 Identity Server product page and install it by following the instructions in the Installing the Product topic.
Download the sample CAS client webapp (cas-client-webapp.war) from
Download the CAS Version 1.0.2 Inbound Authenticator JAR from the store for this authenticator and CAS Version 2.0.1 Inbound Authenticator JAR from the store for this authenticator.
If you want to upgrade the CAS Inbound Authenticator (.jar) in your existing IS pack, please refer upgrade instructions.
- The CAS login URL is required if you want to use it in your own app. It must be:
https://<IS_IP>:9443/
identity/cas/login
Configuring cas-client-webapp
- Generate Keystore to enable 'https' request in your web container (e.g., Tomcat).
Use the following "keytool" command inside the "web-container/bin" (e.g.,
<TOMCAT_HOME/bin>) directory to create a keystore with the self-signed certificate. During the keystore creation process, you need to assign a password and fill in the certificate’s details.
keytool -genkey -alias localhost -keyalg RSA -keystore "PATH_TO_CREATE_KEYSTORE/KEYSTORE_NAME".
Tip: Here
localhostis the same name as the machine's hostname.
Add the following connector in the
server.xmlfile in your web-container (e.g.,
<TOMCAT_HOME>/conf/server.xml)
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="PATH_TO_CREATED_KEYSTORE/KEYSTORE_NAME" keystorePass="KEYSTORE_PASSWORD" />
Tip: KEYSTORE_PASSWORD is the password you assigned to your keystore via the "keytool" command.
- To establish the trust between cas-client-webapp and CAS-Server (WSO2 IS), take the following steps:
- Go to the
<IS_HOME>/repository/resources/security/directory and execute the following command to create a certificate file for the wso2carbon JKS.
keytool -export -alias wso2carbon -file wso2.crt -keystore wso2carbon.jks -storepass wso2carbon
- Inside the above directory use the following command to import the CAS server certificate (
wso2.crt) into the system truststore of the CAS client. You will be prompted for the keystore password, which is by default changeit.
keytool -import -alias wso2carbon -file wso2.crt -keystore PATH-TO-jre/lib/security/cacerts
Deploying CAS artifacts
- Place the
cas-client-webapp.warfile into the webapps directory of the web-container (e.g.,
<TOMCAT_HOME>/webapps).
- Place the
org.wso2.carbon.identity.sso.cas-1.0.2.jarfile (for Identity Server 5.3.0, use the
cas-2.0.1.jarfile instead as described in the note below) into the
<IS_HOME>/repository/components/dropinsdirectory and restart the Identity Server.
Configuring the service provider
Now, you are ready to configure WSO2 Identity Server by adding a new service provider .
- Run WSO2 Identity Server.
- Log in to the management console as an administrator.
In the Identity section under the Main tab, click Add under Service Providers.
Enter cas-client-webapp in the Service Provider Name text box and click Register.
In the Inbound Authentication Configuration section, click CAS Configuration .
Configure the Service Url:
Service URL refers to the URL of the application that the client is trying to access.
Go to Claim Configuration and click to add the requested claims. (This is required to show requested claims as user attributes in the cas-client-webapp; otherwise, no attributes will be shown.) Add the Service Provider Claim name that corresponds to the Local Claim URI and mark it as Requested Claim.
- Click Update to save the changes. Now you have configured the service provider.
Testing the sample
- To test the sample, navigate to
https://[server-address]/cas-client-webapp/in your browser (i.e., go to the following URL:).
- The basic authentication page appears. Use your IS username and password.
- If you have successfully logged in, you will see the following CAS Home page of cas-client-webapp with the authenticated user and user attributes. | https://docs.wso2.com/display/ISCONNECTORS/Configuring+CAS+Inbound+Authenticator | 2019-09-15T14:54:26 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.wso2.com |
This section describes some recommended performance tuning configurations to optimize the WSO2 Message Broker. It assumes that you have set up the MB on a server running Unix/Linux, which is recommended for a production deployment. It is recommended to have at least one MB server node for failover. Therefore, a clustered deployment is recommended for most production systems with at least two MB server nodes. you to carry out load tests on your environment to tune the ESB accordingly. (be sure to include the leading * character).
* soft nofile 4096 * hard nofile 65535
Optimal values for these parameters depend on the environment.
JVM-level settings
If one or more worker nodes in a clustered deployment require access to the management console, increase the entity expansion limit as follows in the
<MB_HOME>/bin/wso2server.bat file (for Windows) or the
<MB_HOME>/bin/wso2server.sh file (for Linux/Solaris). The default entity expansion limit is 64000.
-DentityExpansionLimit=10000.
MB-level settings
The following sections describe how you can configure the MB-level settings to optimize performance. | https://docs.wso2.com/display/MB320/Performance+Tuning+Guide | 2019-09-15T14:01:27 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.wso2.com |
8.5.100.21
Configuration Database Maintenance Scripts 8.5.x Release Notes
Helpful Links
Releases Info
Product Documentation
Genesys Products
What's New
This release includes only resolved issues.
Resolved Issues
This release contains the following resolved issues:
The Configuration Database multi-language initialization script can now be executed successfully on the MS SQL server with collation set to Latin1_General_100_CI_AS_KS_WS_SC. (MFWK-19331)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.100.21.
This page was last modified on May 8, 2018, at 10:17.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/RN/latest/fr-cdb-mnt-sc85rn/fr-cdb-mnt-sc8510021 | 2019-09-15T13:53:30 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.genesys.com |
CSGMesh¶
Inherits: CSGPrimitive < CSGShape < VisualInstance < Spatial < Node < Object
Category: Core
Description¶
This CSG node allows you to use any mesh resource as a CSG shape, provided it is closed, does not self-intersect, does not contain internal faces and has no edges that connect to more then two faces. | https://docs.godotengine.org/en/stable/classes/class_csgmesh.html | 2019-09-15T14:38:47 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.godotengine.org |
This command allows you to quickly kick a member from the server.
LewdBot requires the
Kick Members permission for this feature.
L!kick
This will make LewdBot send a help block with the description about this command along with all it's available usages.
L!kick <member>
Specifying a member will make LewdBot kick that member from the server.
L!kick <member> <reason>
Specifying a reason along with the member will make LewdBot kick that member from the server and register the reason on the server's Audit Logs. | https://docs.notfab.net/list-of-commands/some-command/kick | 2019-09-15T14:12:46 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.notfab.net |
Can I change the SKU on an existing Amazon listing?
Unfortunately, Amazon does not allow you to change existing SKUs - unless you remove your existing listing and create a new one.
The SKU is used as a key identifier in your feeds, similar to the Item ID on eBay: When you change a price on eBay, you tell eBay the new price for a particular Item ID - but when you change a price on Amazon, you tell Amazon "This is the new price for the SKU ..."
So in order to change the SKU on an already listed item, you can either visit seller central to delete the SKU and then delete the item from WP-Lister's database - or you use the "Remove from Amazon" bulk action and wait until your product removal feed has been processed.
See Deleting Amazon listings for more details. | https://docs.wplab.com/article/107-can-i-change-the-sku-on-an-existing-amazon-listing | 2019-09-15T14:47:51 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.wplab.com |
View Metrics with JMX
You can use JConsole to view metrics provided by your Web Engagement Server. To do this, you can start Web Engagement Server as a:
Once you have connected, you can view your metrics in a JConsole JMX panel.
You may also want to look into some of the other tools that are available for viewing your Web Engagement metrics.
Connect to Web Engagement started as a local java process.
- Run jconsole.exe from the jdk/bin directory.
In the New Connection dialog, specify the Web Engagement launcher java process.
If the Web Engagement Server was started via a BAT file in the same host where the JMX console is opened, this launcher process is the com.genesys.launcher.bootstrap.Bootstrap process from the Local Process list.
Connect to Web Engagement Server started on a remote host.
If the Web Engagement Server was started remotely as a server, follow these steps:
- Run jconsole.exe from the jdk/bin directory.
- Open the launcher.ini file and uncomment all of the lines that appear under the following:
; uncomment and configure the properties below to use JMX
- Save your changes.
- Restart the Web Engagement Server application.
- Specify host:JMX port in the Remote Process section, as shown in the screenshot on the left.
Open the JMX panel to view the metrics.
- Click Connect in the New Connection dialog. The JMX panel opens.
- Open the MBeans tab and expand com.genesyslab.gemc.metrics. All of the Web Engagement metrics are there.
- To refresh the metrics, click Refresh.
Other Tools
We have just explained how to use the JConsole tool bundled with Oracle Java (TM) to view your metrics, but there are several other tools you can use to do this:
- The EJTools JMX Browser
- Panoptes
- jManage
- MC4J
- Zabbix
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GWE/latest/Deployment/jmx | 2019-09-15T13:56:01 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.genesys.com |
Dictionary¶
Category: Built-In Types
Description¶
Dictionary type. Associative container which contains values referenced by unique keys. Dictionaries are always passed by reference.
Erasing elements while iterating over them is not supported.
Creating a dictionary:
var d = {4: 5, "A key": "A value", 28: [1, 2, 3]}
To add a key to an existing dictionary, access it like an existing key and assign to it:
d[4] = "hello" # Add integer 4 as a key and assign the String "hello" as its value. d["Godot"] = 3.01 # Add String "Godot" as a key and assign the value 3.01 to it.
Tutorials¶
Method Descriptions¶
- void clear ( )
Clear the dictionary, removing all key/value pairs.
- Dictionary duplicate ( bool deep=False )
Creates a copy of the dictionary, and returns it.
Returns
true if the dictionary is empty.
Erase a dictionary key/value pair by key. Returns
true if the given key was present in the dictionary,
false otherwise. Do.
Returns
true if the dictionary has all of the keys in the given array.
Returns a hashed integer value representing the dictionary contents.
Returns the list of keys in the
Dictionary.
Returns the size of the dictionary (in pairs).
Returns the list of values in the
Dictionary. | https://docs.godotengine.org/en/3.1/classes/class_dictionary.html | 2019-09-15T14:01:44 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.godotengine.org |
Contents Now Platform Capabilities Previous Topic Next Topic Orchestration databus Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Orche workflow | https://docs.servicenow.com/bundle/madrid-servicenow-platform/page/administer/orchestration-activity-designer/concept/c_OrchestrationDatabus.html | 2019-09-15T14:50:41 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.servicenow.com |
Configuring the Metric Registrar
This topic describes the Metric Registrar for Pivotal Application Service (PAS). It also includes information about enabling and configuring the Metric Registrar.
Overview
The Metric Registrar allows app developers to export custom app metrics in a format that Loggregator can consume. App developers can then use the custom metrics to monitor apps with PCF Metrics and configure autoscaling rules with PCF Autoscaler.
App developers can export custom metrics to Loggregator by configuring their apps in one of the following ways:
- Metrics Endpoint—Publish and register a Prometheus Exposition metrics endpoint to an app. The Metric Registrar will then poll this endpoint every 35 seconds and convert the metrics found in the response to Loggregator metrics.
- Structured Log—Modify your app to emit metrics using a specific JSON or DogStatsD format. The Metric Registrar then converts all matching log messages into Loggregator metrics or events.
For more information about installing the Metric Registrar Plugin and registering your app, see Emitting Custom App Metrics to the Metric Registrar.
For more information about the components and products mentioned, see the following: - Loggregator - PCF Metrics - PCF Autoscaler
Architecture
The following diagram illustrates how the Metric Registrar sends your custom app metrics to Loggregator. The components of the Metric Registrar are as follows:
- The cf CLI plugin
- The
metric_registrar_endpoint_workerand
metric_registrar_log_workerjobs running on the Doppler VM of the PAS deployment
- The
metric_registrar_orchestratorand
metric_registrar_smoke_testjobs running on the Clock Global VM of the PAS deployment
Click the image for a larger representation.
Configure the. | https://docs.pivotal.io/pivotalcf/2-6/metric-registrar/index.html | 2019-09-15T14:20:34 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.pivotal.io |
More Network dashboards
Web Center. The Web Center can also be used to profile the type of content that clients are requesting and how much bandwidth is being used by each client.
Use the filtering options at the top of the screen to limit which items are shown. Configure new data inputs through Splunk Settings or search for particular traffic events directly through Incident Review.
Dashboard Panels
Troubleshooting
For information about troubleshooting, see "Troubleshooting Network dashboards" in this topic.
Web Search
The Web Search dashboard assists in searching for web events that are of interest based upon the criteria defined by the search filters. The dashboard is used in ad-hoc searching of web data, but is also the primary destination for drilldown searches used in the "'Web Search dashboard panels.
The Web Search dashboard displays no results by default unless it was opened in response to a drilldown action, or the user updates a filter, selects a time range, and chooses on the device(s).
Dashboard Panels
Troubleshooting Network Dashboards
1. This dashboard references data from various data models..
This documentation applies to the following versions of Splunk® Enterprise Security: 3.2, 3.2.1, 3.2.2, 3.3.0, 3.3.1, 3.3.2, 3.3.3
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/ES/3.3.1/User/MoreNetworkdashboards | 2019-09-15T14:56:19 | CC-MAIN-2019-39 | 1568514571506.61 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Edje Library Documentation
Edje Graphical Design LibraryThese routines are used for Edje.
- Version:
- 1.7
- Date:
- 2003-2012
Please see the Authors page for contact details.
Introduction are also allowed to transition over a period of time, allowing animation. Programs and animations can be run in "parallel"..
For details of Edje's history, see the Edje History section.
What does Edje require?
Edje requires fairly little on your system. to use the Edje runtime library you need:
- Evas (library)
- Ecore (library)
- Eet (library)
- Embryo (library)
- Eina (library)
- Lua 5.1 (library)
Evas needs to be build with the JPEG, PNG and EET image loaders enabled at a minimum. You will also need the buffer engine (which requires the software_generic engine) as well.
Ecore (library) needs the ECORE and ECORE_EVAS modules built at a minimum. It's suggested to build all the Ecore modules. You will beed the Buffer engine support built into Ecore_Evas for edje_cc to function.
How to compile and test Edje
Now you need to compile and install Edje.
./configure make sudo make install
You now have it installed and ready to go, but you need input data. There are lots of examples in SVN, the best one is Enlightenment's own theme file.
You may use different tools to edit and view the generated ".edj" files, for instance:
- edje_player (provided by Edje)
- edje_codegen (provided by Edje)
- Since:
- 1.8.0
- editje ()
- edje_viewer ()
So how does this all work? following example:
#include <Eina.h> #include <Evas.h> #include <Ecore.h> #include <Ecore_Evas.h> #include <Edje.h> #define WIDTH 320 #define HEIGHT 240 static Evas_Object *create_my_group(Evas *canvas, const char *text) { Evas_Object *edje; edje = edje_object_add(canvas); if (!edje) { EINA_LOG_CRIT("could not create edje object!"); return NULL; } if (!edje_object_file_set(edje, "edje_example.edj", "my_group")) { int err = edje_object_load_error_get(edje); const char *errmsg = edje_load_error_str(err); EINA_LOG_ERR("could not load 'my_group' from edje_example.edj: %s", errmsg); evas_object_del(edje); return NULL; } if (text) { if (!edje_object_part_text_set(edje, "text", text)) { EINA_LOG_WARN("could not set the text. " "Maybe part 'text' does not exist?"); } } evas_object_move(edje, 0, 0); evas_object_resize(edje, WIDTH, HEIGHT); evas_object_show(edje); return edje; } int main(int argc, char *argv[]) { Ecore_Evas *window; Evas *canvas; Evas_Object *edje; const char *text; ecore_evas_init(); edje_init(); window = ecore_evas_new(NULL, 0, 0, WIDTH, HEIGHT, NULL); if (!window) { EINA_LOG_CRIT("could not create window."); return -1; } canvas = ecore_evas_get(window); text = (argc > 1) ? argv[1] : NULL; edje = create_my_group(canvas, text); if (!edje) return -2; ecore_evas_show(window); ecore_main_loop_begin(); evas_object_del(edje); ecore_evas_free(window); edje_shutdown(); ecore_evas_shutdown(); return 0; }
The above example requires the following annotated source Edje file:
// compile: edje_cc edje_example.edc collections { group { name: "my_group"; // must be the same as in edje_example.c parts { part { name: "background"; type: RECT; // plain boring rectangle mouse_events: 0; // we don't need any mouse event on the background // just one state "default" description { state: "default" 0.0; // must always exist color: 255 255 255 255; // white // define part coordinates: rel1 { // top-left point at (0, 0) [WIDTH * 0 + 0, HEIGHT * 0 + 0] relative: 0.0 0.0; offset: 0 0; } rel2 { // bottom-right point at (WIDTH * 1.0 - 1, HEIGHT * 1.0 - 1) relative: 1.0 1.0; offset: -1 -1; } } } part { name: "text"; type: TEXT; mouse_events: 1; // we want to change the color on mouse-over // 2 states, one "default" and another "over" to be used // on mouse over effect description { state: "default" 0.0; color: 255 0 0 255; // red // define part coordinates: rel1 { // top-left at (WIDTH * 0.1 + 5, HEIGHT * 0.2 + 10) relative: 0.1 0.2; offset: 5 10; } rel2 { // bottom-right at (WIDTH * 0.9 - 6, HEIGHT * 0.8 - 11) relative: 0.9 0.8; offset: -6 -11; } // define text specific state details text { font: "Sans"; // using fontconfig name! size: 10; text: "hello world"; } } description { state: "over" 0.0; inherit: "default" 0.0; // copy everything from "default" at this point color: 0 255 0 255; // override color, now it is green } } // do programs to change color on text mouse in/out (over) programs { program { // what triggers this program: signal: "mouse,in"; source: "text"; // what this program does: action: STATE_SET "over" 0.0; target: "text"; // do the state-set in a nice interpolation animation // using linear time in 0.1 second transition: LINEAR 0.1; } program { // what triggers this program: signal: "mouse,out"; source: "text"; // what this program does: action: STATE_SET "default" 0.0; target: "text"; // do the state-set in a nice interpolation animation // using linear time in 0.1 second transition: LINEAR 0.1; } } } } }
One should save these files as edje_example.c and edje_example.edc then:
gcc -o edje_example edje_example.c `pkg-config --cflags --libs eina evas ecore ecore-evas edje` edje_cc edje_example.edc ./edje_example "some text"
Although simple, this example illustrates that animations and state changes can be done from the Edje file itself without any requirement in the C application.
Before digging into changing or creating your own Edje source (edc) files, read the Edje Data Collection reference.
Edje History.
Examples on Edje's usage
What follows is a list with various commented examples, covering a great part of Edje's API:
- Note:
- The example files are located at /Where/Enlightenment/is/installed/share/edje/examples
- Edje basics example
- Edje basics example 2
- Swallow example
- Swallow example 2
- Table example
- Box example - basic usage
- Box example - custom layout
- Edje Color Class example
- Edje Animations example
- Edje animations example 2
- Edje signals and messages
- Edje Signals example 2
- Edje Text example
- Dragable parts example
- Perspective example | http://docs.enlightenment.org/auto/edje/ | 2013-05-18T19:40:59 | CC-MAIN-2013-20 | 1368696382764 | [] | docs.enlightenment.org |
View tree
Close tree
|
Preferences
|
|
Feedback
|
Legislature home
|
Table of contents
Search
Up
Up),
UWS 12.07(1)(a)
(a)
For fixed term and probationary appointee, one of the following occurs:
UWS 12.07(1)(a)1.
1.
The appointment expires under its own terms;
UWS 12.07(1)(a)2.
2.
The staff member fails to accept an alternate appointment.
UWS 12.07(1)(b)
(b)
For academic staff on indefinite appointment one of the following occurs:
UWS 12.07(1)(b)1.
1.
The staff member is reappointed to the position from which laid off. Failure to accept such reappointment would terminate the academic staff member's association with the institution;
UWS 12.07(1)(b)2.
2.
The staff member accepts an alternative continuing position in the institution. Failure to accept an alternate appointment would not terminate the academic staff member's association with the institution;
UWS 12.07(1)(b)3.
3.
The staff member resigns;
UWS 12.07(1)(b)4.
4.
The staff member fails to notify the chancellor or his/her designee not later than December 1, of each year while on layoff status, as to his/her location, employment status, and desire to remain on layoff status. Failure to provide such notice of desire to remain on layoff status shall terminate the academic staff member's association with the institution;
UWS 12.07(1)(b)5.
5.
A period of 3 years lapses.
UWS 12.07 History
History:
Cr.
Register, October, 1975, No. 238
, eff. 11-1-75.
UWS 12.08.08 History
History:
Cr.
Register, October, 1975, No. 238
, eff. 11-1-75.
UWS 12.09.09 History
History:
Cr.
Register, October, 1975, No. 238
, eff. 11-1-75.
UWS 12.10.10 History
History:
Cr.
Register, October, 1975, No. 238
, eff. 11-1-75.
UWS 12.11
UWS 12.11
Rights of academic staff members on layoff.
An academic staff member on layoff status in accord with the provisions of this chapter has the reemployment rights guaranteed by
s.
UWS 12.09
or
12.10
, and has the following minimal rights:
UWS 12.11(1)
(1)
Such voluntary participation in fringe benefit programs as is permitted by institutional policies;
UWS 12.11(2)
(2)
Such continued use of campus facilities as is allowed by policies and procedures established by the institution; and
UWS 12.11(3)
(3)
Such participation in institutional activities as is allowed by the policies and procedures established by the institution.
UWS 12.11 History
History:
Cr.
Register, October, 1975, No. 238
, eff. 11-1-75.
Next file:
Chapter UWS 13
/code/admin_code/uws/12
true
administrativecode
/code/admin_code/uws/12/08
administrativecode/UWS 12.08
administrativecode/UWS 12? | http://docs.legis.wisconsin.gov/code/admin_code/uws/12/08 | 2013-05-18T19:31:20 | CC-MAIN-2013-20 | 1368696382764 | [] | docs.legis.wisconsin.gov |
rizione.
Elenco dei parametri
link_identifier
An LDAP link identifier, returned by ldap_connect().
dn
The distinguished name of an LDAP entity.
entry
Valori restituiti
Restituisce
TRUE in caso di successo,
FALSE in caso di fallimento.
Vedere anche:
- ldap_mod_add() - Add attribute values to current attributes
- ldap_mod_replace() - Replace attribute values with new ones
jonatam dot ribeiro at hotmail dot com ¶
11 days ago
Anonymous ¶
7 years ago
For.
thomas dot thiel at tapgmbh dot com ¶
10 years ago
and please don't forget:
you can't delete all attributes, when at least one is required.
JoshuaStarr at aelana dot com ¶
11 years ago ¶
11 years ago
¶
12 years ago ¶
12 years ago ¶
12 years ago
To remove all instances of an attribute you can use ldap_modify with an empty value for that attribute.
$entry["mail"] = "";
$result = ldap_modify($connID, $dn, $entry);
arimus at apu dot edu ¶
13 years ago. | http://docs.php.net/manual/it/function.ldap-mod-del.php | 2013-05-18T19:43:03 | CC-MAIN-2013-20 | 1368696382764 | [array(['/images/notes-add.gif', 'add a note'], dtype=object)] | docs.php.net |
Talk:Upgrading a Joomla 1.5 template to Joomla 2.5
confusing sentence
The wording in the following sentence needs improving. It doesn't make sense.
<fieldset name="basic"> wraps the parameters in a slider and using name="basic" labels that slider as "Basic Options" and name="advanced" labels it as "Advanced Options".
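For reference, here is a minimal sketch of the structure that sentence is trying to describe, as it would appear in a Joomla 2.5 templateDetails.xml (the field names below are illustrative only, not from the article):

<config>
  <fields name="params">
    <!-- name="basic" makes this group render as the "Basic Options" slider -->
    <fieldset name="basic">
      <field name="logoFile" type="media" label="Logo" description="Site logo" />
    </fieldset>
    <!-- name="advanced" makes this group render as the "Advanced Options" slider -->
    <fieldset name="advanced">
      <field name="templateColour" type="text" label="Template Colour" description="Main template colour" />
    </fieldset>
  </fields>
</config>

A possible rewording of the sentence: "Wrapping parameters in <fieldset> groups them into a slider; the name attribute sets the slider's label, so name="basic" produces a 'Basic Options' slider and name="advanced" produces an 'Advanced Options' slider."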
Sitename File not referenced
The updated syntax below fails to say which file it applies to:

<?php echo $mainframe->getCfg('sitename'); ?> is now $app->getCfg('sitename'), where $app = JFactory::getApplication();
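For anyone hitting this, a minimal working sketch — assuming the code lives in a template file such as templates/<your-template>/index.php, since the article does not say which file is meant:

<?php
// Guard against direct access (standard in any Joomla file).
defined('_JEXEC') or die;

// Joomla 1.5 style (no longer works in 2.5):
// echo $mainframe->getCfg('sitename');

// Joomla 2.5 style:
$app = JFactory::getApplication();
echo $app->getCfg('sitename');
?>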
public final class JavaBeanAccessorMethodAuthorizer extends Object implements MethodInvocationAuthorizer
MethodInvocationAuthorizerthat allows any method execution that follows the design patterns for accessor methods described in the JavaBean specification 1.01; that is, any method whose name begins with 'get' or 'is'. For additional security, only methods belonging to classes in user-specified packages will be allowed. If a method does not match the user-specified parameters, or belongs to the 'org.apache.geode' package, then the decision of whether to authorize or not will be delegated to the default
RestrictedMethodAuthorizer. Some known dangerous methods, like
Object.getClass(), are also rejected by this authorizer implementation (see
RestrictedMethodAuthorizer.isPermanentlyForbiddenMethod(Method, Object)). When used as intended, with all region entries and OQL bind parameters following the JavaBean specification 1.01, this authorizer implementation addresses all four of the known security risks:
Java Reflection,
Cache Modification,
Region Modificationand
Region Entry Modification. It should be noted that the
Region Entry Modificationsecurity risk still potentially exists: users with the
DATA:READ:RegionNameprivilege will be able to execute any method whose name starts with 'is' or 'get' on the objects stored within the region and on instances used as bind parameters of the OQL, providing they are in the specified packages. If those methods do not fully follow the JavaBean 1.01 specification that accessors do not modify the instance's state then entry modifications are possible. Usage of this authorizer implementation is only recommended for secured clusters on which the Operator has full confidence that all objects stored in regions and used as OQL bind parameters follow JavaBean specification 1.01. It might also be used on clusters on which the entries stored are immutable.
Cache,
MethodInvocationAuthorizer,
RestrictedMethodAuthorizer
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
initialize
public JavaBeanAccessorMethodAuthorizer(Cache cache, Set<String> allowedPackages)
JavaBeanAccessorMethodPackages- the packages containing classes for which 'is' and 'get' methods will be authorized.
public JavaBeanAccessorMethodAuthorizer(RestrictedMethodAuthorizer restrictedMethodAuthorizer, Set<String> allowedPackages)
JavaBeanAccessorMethodAuthorizerobject and initializes it so it can be safely used in a multi-threaded environment.
restrictedMethodAuthorizer- the default
RestrictedMethodAuthorizerto use.
allowedPackages- the packages containing classes for which 'is' and 'get' methods will be authorized.
public Set<String> getAllowedPackages() | http://gemfire-910-javadocs.docs.pivotal.io/org/apache/geode/cache/query/security/JavaBeanAccessorMethodAuthorizer.html | 2020-11-24T07:20:58 | CC-MAIN-2020-50 | 1606141171126.6 | [] | gemfire-910-javadocs.docs.pivotal.io |
basket plugin
T | evaluate basket()
Basket finds all frequent patterns of discrete attributes (dimensions) in the data. It then returns the frequent patterns that passed the frequency threshold in the original query. Basket is guaranteed to find every frequent pattern in the data, but isn't guaranteed to have polynomial runtime. The runtime of the query is linear in the number of rows, but it might be exponential in the number of columns (dimensions). Basket is based on the Apriori algorithm originally developed for basket analysis data mining.
Syntax
T | evaluate basket( arguments
)
Returns
Basket returns all frequent patterns appearing above the ratio threshold of the rows. The default threshold is 0.05. Each pattern is represented by a row in the results.
The first column is the segment ID. The next two columns are the count and percentage of rows, from the original query, that are captured by the pattern. The remaining columns are from the original query. Their value is either a specific value from the column or a wildcard value, which is by default null, meaning a variable value.
Arguments (all optional)
T | evaluate basket([Threshold, WeightColumn, MaxDimensions, CustomWildcard, CustomWildcard, ...]
)
All arguments are optional, but they must be ordered as above. To indicate that the default value should be used, use the string tilde value - '~'. See examples below.
Available arguments:
Threshold - 0.015 < double < 1 [default: 0.05]
Sets the minimal ratio of the rows to be considered frequent. Patterns with a smaller ratio won't be returned.
Example:
T | evaluate basket(0.02)
WeightColumn - column_name
Considers each row in the input according to the specified weight. By default, each row has a weight of '1'. The argument must be a name of a numeric column, such as int, long, real. A common use of a weight column, is to take into account sampling or bucketing/aggregation of the data that is already embedded into each row.
Example:
T | evaluate basket('~', sample_Count)
MaxDimensions - 1 < int [default: 5]
Sets the maximal number of uncorrelated dimensions per basket, limited by default, to minimize the query runtime.
Example:
T | evaluate basket('~', '~', 3)
CustomWildcard - "any_value_per_type"
Sets the wildcard value for a specific type in the result table that will indicate that the current pattern doesn't have a restriction on this column. Default is null. The default for a string is an empty string. If the default is a good value in the data, a different wildcard value should be used, such as
*.
For example:
T | evaluate basket('~', '~', '~', '*', int(-1), double(-1), long(0), datetime(1900-1-1))
Example
StormEvents | where monthofyear(StartTime) == 5 | extend Damage = iff(DamageCrops + DamageProperty > 0 , "YES" , "NO") | project State, EventType, Damage, DamageCrops | evaluate basket(0.2)
Example with custom wildcards
StormEvents | where monthofyear(StartTime) == 5 | extend Damage = iff(DamageCrops + DamageProperty > 0 , "YES" , "NO") | project State, EventType, Damage, DamageCrops | evaluate basket(0.2, '~', '~', '*', int(-1)) | https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/basketplugin | 2020-11-24T07:37:39 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.microsoft.com |
Backlinks Rename Page Add to book Export to PDF Book Creator Add this page to your book Book Creator Remove this page from your book Manage book (0 page(s)) Help The License key is also available as a USB-Dongle. We use the products from MARX Software Security. To use the Dongles you have to do following steps: minimum .NET Framework V 4.6.0 (check version / install) install MARX "CBIOS Server Windows" as a service" plug in the Dongle only using S7-FileLogger: install S7-FileLogger for Dongle start the program Under you will find the license informations | https://docs.traeger.de/en/company/dongle | 2020-11-24T05:48:17 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.traeger.de |
Installing the Real End User Experience Monitoring Software Edition silently
Where you are in the Installation process
The server-silent-options-linux file is used to configure the Real End User Experience Monitoring Software Edition silent installation.
This file is located in the Disk1 directory of the Real End User Experience Monitoring Software Edition installation file structure.
Notes
- If you are planning to install the Real User Analyzer component, you must have an activation key file from BMC Customer Support. You will need this file to complete the installation of the Real User Analyzer.
- You must have an encrypted user password for the Real User Analyzer and Collector silent installation.
Encrypting a password for silent Real End User Experience Monitoring Software Edition installation
The Maintenance Tool enables you to create an encrypted password that the Real User Analyzer and Collector installation process requires for importing a keystore. You must use an encrypted password so that the KeyStore password is not exposed in the server-silent-options.txt file.
You can run the Maintenance Tool in a GUI or from the command line interface (CLI).
- To open the Maintenance Tool, go to the temporaryDirectory/Disk1/utility directory.
The temporaryDirectory is the place where you downloaded the installation files.
- Run the EuemMaintenanceTool utility and click the Encrypt tab.
- Enter your password in the Password and Confirm Password fields and click Encrypt.
- Copy and paste the value from the Encrypted Password field to the server-silent-options-linux.txt file for the
collector_password,
analyzer_password, and
-confirm_passwordparameter. open the Maintenance Tool, go to the temporaryDirectory\Disk1\utility directory.
The temporaryDirectory is the place to which you downloaded the installation files.
Run the following command, entering your password for the
-passwordand
-confirm_passwordoptions. For example:
./EuemMaintenanceTool.sh -silent -encrypt -encrypt -password=security -confirm_password=security
Note
For Linux installations, any password that includes special characters requires single quotation marks before and after the password. For example:
./EuemMaintenanceTool.sh -silent -encrypt -encrypt -password='$123456$' -confirm_password='$123456$'
- Copy and paste the encrypted password output to the server-silent-options-linux.txt file for your silent installation. configure the silent installation file
On the system where you want to install the Real User Analyzer and Collector, complete the following procedure:
Use a text editor to open the server-silent-options-linux file.
Use the default installation directory or enter a new directory for the installation:
-P installLocation=/opt/bmc/euem
(Optional) To prevent the Analyzer or Collector installation, add a # symbol to beginning of the relevant line.
-A featureEuemAnalyzer -A featureEuemCollector
Verify the configuration information and change the port numbers, disk space, or amount of RAM for the Analyzer or Collector, if required.
Ensure that the port numbers you enter are available.
-J analyzer_logrotate_dir=/etc/logrotate.d -J euem_hostname= -J max_memory= -J analyzer_activation_key_file_path= -J analyzer_company_name_for_activation= -J analyzer_http_port=80 -J analyzer_https_port=443 -J analyzer_shutdown_port=8005 -J analyzer_sessions_port=22033 -J analyzer_pages_port=22032 -J analyzer_objects_port=22031 -J analyzer_username= -J analyzer_password= -J analyzer_confirm_password= -J analyzer_database_port=3306 -J analyzer_data_disk_space=570 -J analyzer_optional_ram= -J collector_http_port=8080 -J collector_https_port=8443 -J collector_shutdown_port=8004 -J collector_username= -J collector_password= -J collector_confirm_password= -J collector_database_port=5432 -J collector_data_disk_space=28 -J collector_optional_ram= -J snmp_port=161 -J snmp_active=false -J demo_mode=false
Silent installation file field descriptions
Save and close the server-silent-options-linux.txt file.
To install the Real End User Experience Monitoring Software Edition silently
- In a command line, navigate to the Disk1 folder in the installation file structure.
Enter the following command:
./setup.bin -i silent -DOPTIONS_FILE=<Full File Path>/server-silent-options-linux.txt
Notes
- After the installation, a new user is created for the Real End User Experience Monitoring database.
- If the installation fails, you must uninstall what has been installed before rerunning the installation. For more information about the uninstall process, see Uninstalling Real End User Experience Monitoring Software Edition
- The Real User Analyzer and Collector installation euem_install_log.txt log file is located in the /tmp folder.
To verify the Real End User Monitoring Software Edition installation.
Real End User Experience Monitoring log on to view each log.
Next Step in the Installation process
Step 7 Phase B — Now that you have successfully installed Real End User Experience Monitoring Software Edition, you must install Cloud Probe. | https://docs.bmc.com/docs/TSOperations/113/installing-the-real-end-user-experience-monitoring-software-edition-silently-843620128.html | 2020-11-24T07:14:49 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.bmc.com |
To get started with Kommander, download and install the latest version of Konvoy.
Release SummaryRelease Summary
Kommander provides a command center for all your cloud native management needs in public Information as a Service (IaaS), on-premises, and edge environments. Kommander provides a multi-tenant experience to create, secure, and configure Kubernetes clusters and cloud native workloads. Additionally, Kommander enables teams to unlock federated and cost management, across multiple clusters, whether they are a new Konvoy cluster or existing 3rd party/DIY distribution.
Supported VersionsSupported Versions
Improvements in Kommander 1.1.2Improvements in Kommander 1.1.2
Kommander 1.1.2 was released on 25, August 2020, here are the improvements you can expect when upgrading to Kommander 1.1.1:
- Update YAKCL to v0.3.3
Component VersionsComponent Versions
- Addon:
1.1.2-1
- Chart:
0.8.44
- auto-provisioning (yakcl):
0.3.3
- kommaner-federation (yakcl):
0.3.3
- kommander-licensing (yakcl):
0.3.3
- UI:
3.126.1
- kommander-karma:
0.3.10
- kubeaddons-catalog:
0.1.11
- kommander-thanos:
0.1.15
- kubecost:
0.1.12
- grafana:
4.6.3
Improvements in Kommander 1.1.1Improvements in Kommander 1.1.1
Kommander 1.1.1 was released on 13, August 2020, here are the improvements you can expect when upgrading to Kommander 1.1.1:
- Aggregated cost monitoring data in Kommander is now available 5 minutes after clusters are attached
Component VersionsComponent Versions
- Addon:
1.1.1-1
- Chart:
0.8.4.3
- auto-provisioning (yakcl):
0.1.7
- kommaner-federation (yakcl):
0.1.7
- kommander-licensing (yakcl):
0.1.7
- UI:
3.126.1
- kommander-karma:
0.3.10
- kubeaddons-catalog:
0.1.11
- kommander-thanos:
0.1.15
- kubecost:
0.1.12
- grafana:
4.6.3
New Features and Capabilities in Kommander 1.1.0New Features and Capabilities in Kommander 1.1.0
Kommander 1.1.0 was released on 16, July 2020.
Centralized Cost MonitoringCentralized Cost Monitoring
Kubecost, running on Kommander, provides centralized cost monitoring for all managed clusters. This feature, installed by default in every Kommander cluster, provides a centralized view of Kubernetes resources used on all managed clusters. For more information go to Centralized Cost Monitoring
Automatic Federation of AuthN/Z and Monitoring StackAutomatic Federation of AuthN/Z and Monitoring Stack
When attaching non-Konvoy clusters, such as Amazon EKS, Azure AKS, Google GKE, and On-Premises Kubernetes clusters, Kommander will federate a subset of standard Konvoy Addons and charts to enable SSO, AuthN/Z, observability, and cost monitoring.
Improved RBACImproved RBAC
Kommander has enhanced Access Controls for users at the global, workspace, and project levels allowing greater flexibility and security when assigning roles. See Granting Access to Kubernetes Resources in Kommander
Other ImprovementsOther Improvements
Beyond new features, here are the improvements you can expect when upgrading to Kommander 1.1:
- Added guidance on what to do when cluster deletion fails
- Added flag (
--skip-credentials-display) to Kommander so Konvoy does not display login information in logs
- Added ability to disable federation of monitoring stack
- Added autogenerated labels to advanced cluster create form
- Added Support for Kommander to attach the cluster it is running on as managed Cluster
- Added ability to access the managed cluster’s Kubernetes API with a valid management cluster token
- Added possibility to set metadata for Workspaces
- Added a way to deploy generic KUDO services
- Added quota support for projects
- Added Cluster Overview tab to Cluster Details page in UI
- Added a way to add clusters to projects based on labels
- Added support for nonResourceUrls in Roles in the UI
- Added loading indicators for cluster creation form
- Added Cluster ID to UI for easier identification in other dashboards
- Allowed default federated addon values to be overriden
- Attaching clusters to Kommander is now considered GA
- Display users Username in the UI
- Enabled License validation
- Generated labels are now hidden in cluster overview pages
- Hid Kubeconfig download link for clusters where that config might not be available
- Improved Grafana dashboard names for addons
- Improved auth token transport for managed clusters
- Improved context handling for multiple contexts when attaching cluster
- Improved display of resource allocations in the UI
- Improved attach Cluster Flow in UI
- Improved K8s Version Selector and Support for managed clusters created through the UI
- Improved AWS tags to include cluster name.
- Improved Error messaging when trying to delete roles or groups that are used by policies
- Improved performance for querying available versions in cluster create form
- Improved Cluster Status Visualisation in the UI
- Removed suffixes from federated ConfigMaps and Secrets
- Removed namespace suffix from projects and platform service names
- Removed old, unsupported versions from version selector in create cluster form
- Renamed “Cloud Provider” to “Infrastructure Provider” to better fit on premise
- Simplifed kommander’s grafana dashboard job
- UI now trims input values to remove leading, trailing, and duplicate spaces
- Updated catalog API for v1beta2 addons
Fixed IssuesFixed Issues
- Allow deleting clusters retry after failing. For example when there are permission issues.
- Disabled “View Logs” Link in UI for managed clusters not running Kibana
- Federate karma-proxy only to Konvoy clusters
- Fixed missing Grafana home dashboard
- Fixed cluster deletion on detail page not working
- Fixed a bug where Kommander addon was not successfully deploying on Azure
- Fixed Kommander to not show unofficially supported versions of Kubernetes by default
- Fixed possible data collision bug related to clusters
- Fixed a possible crash-loop situation for kubeaddons when cert-manager is not ready yet
- Fixed and improved LDAP Identity provider handling
- Fixed display of “nothing to report” situations vs actual errors
- Fixed bugs related to access control
- Fixed naming of roles in projects
- Fixed an issue where projects were not created due to a bug in project name suffix handling
- Fixed version selector when creating clusters through the UI
- Fixed listing of workspaces and projects for limited users
- Fixed
skipMetadataApiCheckbeing removed from
cluster.yaml
- Fixed project links on projects overview page
- Fixed kubeconfig download for certain clusters/users
- Fixed uninstall from Kommander addon
- Fixed number value saving in cluster creation form
- Fixed federating Kommander internal addons to managed clusters
- Fixed UI leaking sensitive data in some error messages
- Fixed UI not showing some error messages
- Fixed UI not allowing user to change context when attaching cluster
- Prevent minor version upgrades for Kubernetes due to compatibility issues
- Kommander Grafana was unavailable after self-attaching host cluster as managed cluster
- UX Bugs and Improvements
Known IssuesKnown Issues
- Currently, you can upgrade any managed Konvoy cluster via Kommander. However, after doing so, you will not be able to delete the managed cluster from the Kommander UI nor can you upgrade the Kubernetes version greater than the version the managed cluster was originally deployed with.
Component VersionsComponent Versions
- Addon:
1.1.0-56
- Chart:
0.8.41
- auto-provisioning:
0.1.6
- kommander-federation:
0.1.6
- kommander-licensing:
0.1.6
- UI:
3.126.1
- kommander-karma:
0.3.10
- kubeaddons-catalog:
0.1.11
- kommander-thanos:
0.1.15
- kubecost:
0.1.10
- grafana:
4.6.3 | https://docs.d2iq.com/dkp/kommander/1.1/release-notes/ | 2020-11-24T06:15:43 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.d2iq.com |
TOPICS×
What Should I Test?
Test results must be clear and meaningful so that you can feel confident making large dollar decisions based on those results.
Although you can test various page layouts with Sensor and Site, Adobe suggests that you focus on testing high-value, strategic business initiatives, or new or redesigned website functionality that address the goals that you have set for your website as well as for your business. You can test for such issues as best price guarantees, personalization functionality, market offers (for example, packages or bundles), creative design, and application processes.
The following concepts are most important when developing your controlled experiment:
- Understand the right changes to make. This requires some research into how your website functions and the business processes underlying the front-end website. You want to make changes that provide the most impact and can be tested easily.
- Small changes can have significant impact. Not all of the changes that you test need to be drastic to have a significant impact on your business. Always be open to making small, but very important changes.
Supported Methodologies
Many types of experiments with many different goals can be performed using Site. The following list provides a few examples:
- Altering pages, content, and website processes to improve conversion rates.
- Changing marketing campaigns, promotions, cross-sells, and up-sells to increase revenue.
- Varying page load times to understand customer quality of service and the actual value of infrastructure performance.
To reach these goals, Site supports the following types of methodologies for controlled experimentation and testing:
- Page Replacement: Replace static URL X with static URL Y. This methodology is of limited use in a dynamic environment.
- Dynamic URI Replacement: This is a variant of Page Replacement that replaces static page X with dynamic page Y to render dynamic content.
- Object Replacement: Replace fixed object X with fixed object Y.
- Content Replacement: Replace content set X (multiple objects, pages, table, and so on) with content set Y.
- Experiment Variable Replacement: Replace JavaScript object /writeCookie_X.js with JavaScript object /writeCookie_Y.js to write a cookie that can be used by a back-end system to serve particular content.
Controlled experiments are based on URI replacement, not query string replacement. The URI within a particular URL is highlighted in the following example:
For example, in your controlled experiment you could specify that the control group URI index.asp be replaced with the test group URI index2.asp to determine which page design would result in more value. | https://docs.adobe.com/content/help/en/data-workbench/using/experiments/c-wht-test-.html | 2020-01-17T22:40:08 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.adobe.com |
TOPICS×
Character encoding
Image Serving supports image catalogs with ISO‑8859‑1 and UTF‑8 encoding.
A byte order mark (BOM) is used to specify the encoding for each file. For UTF-8, the BOM is the byte sequence EF BB BF . UTF-8 encoding is assumed when this character sequence is detected at the very beginning of each image catalog file. Any other byte sequence results in the file being interpreted as being encoded to the ISO-8859-1 standard.
Many contemporary applications, when configured for UTF-8, inserts the BOM automatically. | https://docs.adobe.com/content/help/en/dynamic-media-developer-resources/image-serving-api/image-serving-api/image-catalog-reference/file-formats/r-is-cat-character-encoding.html | 2020-01-17T23:08:54 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.adobe.com |
EXOS Platform Options¶
Extreme EXOS Ansible modules support multiple connections. This page offers details on how each connection works in Ansible and how to use it.
Topics
Connections Available¶
EXOS does not support
ansible_connection: local. You must use
ansible_connection: network_cli or
ansible_connection: httpapi
Using CLI in Ansible¶
Example CLI
group_vars/exos.yml¶
ansible_connection: network_cli ansible_network_os: ex EXOS OS version exos_command: commands: show version when: ansible_network_os == 'exos'
Using EXOS-API in Ansible¶
Example EXOS-API
group_vars/exos.yml¶
ansible_connection: httpapi ansible_network_os: ex EXOS-API Task¶
- name: Retrieve EXOS OS version exos_command: commands: show version when: ansible_network_os == 'ex.
See also | https://docs.ansible.com/ansible/devel/network/user_guide/platform_exos.html | 2020-01-17T23:03:11 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.ansible.com |
- Reference
The Coveo Cloud platform offers a set of APIs allowing developers to manage all platform aspects (see Coveo Cloud V2 API Overview).
Under the hood, the Coveo Cloud administration console uses many of the platform APIs. The available API reference pages are organized below in sections similar to those of the Coveo Cloud administration console main menu to help you more easily find the appropriate API to perform a specific task.
- You may want to use the Official Coveo Cloud APIs JavaScript Client to query Coveo Cloud APIs.
- You can search for Coveo Cloud API Endpoints from connect.coveo.com.
The Coveo Cloud platform does not support VPC peering.
Content
Search
Analytics
Organization
The pages in this section present API documentation generated from the Swagger specifications using Redoc.
Display Mode
People also viewed | https://docs.coveo.com/en/4/cloud-v2-developers/coveo-cloud-v2-api-reference | 2020-01-17T21:07:19 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.coveo.com |
DeployGate makes it easy to share your in-development iOS and Android apps, allowing developers to seamlessly progress through the prototyping, development, testing, and marketing stages of app distribution.
Register and log into DeployGate.
Go to
Account Settings.
Then, a Profile page will be shown. You will be able to find the API key at the end of the page.
You can use the following parameters in the JSON recipe script for Monaca CI. For more information, please refer to DeployGate API documentation.
HockeyApp brings mobile DevOps to your apps with beta distribution, crash reporting, user metrics, feedback, and powerful workflow integrations.
Register and log into HockeyApp.
Go to
Account Settings.
In the
Account Settings page, go to
API Tokens tab. In this page,
you can find all of your API tokens or create a new one. Assuming
you haven’t created an API token yet, let’s create one as shown in
the screenshot below:
Once the API token is successfully created, you will be able to see it at the bottom of the page.
You can use the following parameters in the JSON recipe script for Monaca CI. For more information, please refer to HockeyApp API documentation.
Appetize.io allows you to run Android and iOS apps in your browser. By using this service, it is possible to check the operation of an application without iOS certificates or provisioning profiles.
Let’s try experiencing Appetize.io’s services with this demo.
Register and log into Appetize.io.
Enter your email in the Request an API token form and click on the Request button to acquire the API token.
After getting the API Token, you are ready to add Appetize.io to Monaca. Please do as follows:
From the Monaca Cloud IDE menu, go to Configure → Deploy Services.
Click on Add Deploy Service.
Select AppetizeIo and fill in the required information:
Config Alias: a unique identifier for each service
API Token: API Token provided by Appetize.io
Then, click on Add. That's it. You can now use Appetize's simulator to run your built apps.
You can use the following parameters in the JSON recipe script for Monaca CI. For more information, please refer to Appetize.io documentation.
In addition to the above services, we are planning to add more deployment services. Currently, we are working on the following service:
HipChat
Get HipChat notifications for Sentry issues.
- Go to the project settings page in Sentry that you’d like to link with HipChat
- Click All Integrations, find the HipChat integration in the list, and click configure
- Click Enable Plugin
- Click the Enable Integration link OR add the integration through the HipChat Marketplace
- Log in to HipChat and specify which HipChat room you would like to send Sentry issues to
- Follow the Sentry configuration flow and select the organizations and projects you would like notifications for
- Sentry issues will appear in your HipChat room as they occur, depending on the alert rules you have specified in your project settings
There are more than 150 color-space conversion methods available in OpenCV, but we will look at only the two most widely used: BGR \(\leftrightarrow\) Gray and BGR \(\leftrightarrow\) HSV.
For color conversion, we use the function cv2.cvtColor(input_image, flag) where flag determines the type of conversion.
For BGR \(\rightarrow\) Gray conversion we use the flag cv2.COLOR_BGR2GRAY. Similarly for BGR \(\rightarrow\) HSV, we use the flag cv2.COLOR_BGR2HSV. To get the other flags, just run the following commands in your Python terminal:
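The command block itself was lost from this copy; in the original tutorial it is essentially the following:

    >>> import cv2
    >>> flags = [i for i in dir(cv2) if i.startswith('COLOR_')]
    >>> print(flags)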
Now that we know how to convert a BGR image to HSV, we can use this to extract a colored object. In HSV, it is easier to represent a color than in BGR color-space. In our application, we will try to extract a blue colored object. So here is the method: take each frame of the video, convert it from BGR to HSV, threshold the HSV image for a range of blue, and then extract the blue object alone.
Below is the code, which is commented in detail:
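The code did not survive extraction here; the following reconstruction matches the original tutorial, including its blue range of [110, 50, 50] to [130, 255, 255]:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)

    while True:
        # Take each frame
        _, frame = cap.read()

        # Convert BGR to HSV
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        # Define range of blue color in HSV
        lower_blue = np.array([110, 50, 50])
        upper_blue = np.array([130, 255, 255])

        # Threshold the HSV image to get only blue colors
        mask = cv2.inRange(hsv, lower_blue, upper_blue)

        # Bitwise-AND mask and original image
        res = cv2.bitwise_and(frame, frame, mask=mask)

        cv2.imshow('frame', frame)
        cv2.imshow('mask', mask)
        cv2.imshow('res', res)

        k = cv2.waitKey(5) & 0xFF
        if k == 27:  # press Esc to exit
            break

    cv2.destroyAllWindows()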
Below image shows tracking of the blue object:
This is a common question found on stackoverflow.com. It is very simple and you can use the same function, cv2.cvtColor(). Instead of passing an image, you just pass the BGR values you want. For example, to find the HSV value of green, try the following commands in a Python terminal:
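The commands were stripped from this copy; for green they run along these lines:

    >>> import cv2
    >>> import numpy as np
    >>> green = np.uint8([[[0, 255, 0]]])   # a 1x1 image holding the BGR value of green
    >>> hsv_green = cv2.cvtColor(green, cv2.COLOR_BGR2HSV)
    >>> print(hsv_green)
    [[[ 60 255 255]]]

Here H is 60, which is the value the bounds in the next paragraph are built from.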
Now you take [H-10, 100, 100] and [H+10, 255, 255] as the lower bound and upper bound respectively. Apart from this method, you can use any image editing tool like GIMP, or any online converter, to find these values, but don't forget to adjust the HSV ranges.
All functions support the methods documented below, inherited from sympy.core.function.Function.
Base class for applied mathematical functions.
It also serves as a constructor for undefined function classes.
Examples
First example shows how to use Function as a constructor for undefined function classes:
    >>> from sympy import Function, Symbol
    >>> x = Symbol('x')
    >>> f = Function('f')
    >>> g = Function('g')(x)
    >>> f
    f
    >>> f(x)
    f(x)
    >>> g
    g(x)
    >>> f(x).diff(x)
    Derivative(f(x), x)
    >>> g.diff(x)
    Derivative(g(x), x)
In the following example Function is used as a base class for my_func that represents a mathematical function my_func. Suppose that it is well known, that my_func(0) is 1 and my_func at infinity goes to 0, so we want those two simplifications to occur automatically. Suppose also that my_func(x) is real exactly when x is real. Here is an implementation that honours those requirements:
    >>> from sympy import Function, S, oo, I, sin
    >>> class my_func(Function):
    ...
    ...     nargs = 1
    ...
    ...     @classmethod
    ...     def eval(cls, x):
    ...         if x.is_Number:
    ...             if x is S.Zero:
    ...                 return S.One
    ...             elif x is S.Infinity:
    ...                 return S.Zero
    ...
    ...     def _eval_is_real(self):
    ...         return self.args[0].is_real
    ...
    >>> x = S('x')
    >>> my_func(0) + sin(0)
    1
    >>> my_func(oo)
    0
    >>> my_func(3.54).n()  # Not yet implemented for my_func.
    my_func(3.54)
    >>> my_func(I).is_real
    False
In order for my_func to become useful, several other methods would need to be implemented. See source code of some of the already implemented functions for more complete examples.
Attributes and methods:

- as_base_exp(): Returns the method as the 2-tuple (base, exponent).
- fdiff(argindex=1): Returns the first derivative of the function.
- is_commutative: Returns whether the function is commutative.
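For a quick illustration of these members on a concrete function (a sketch; outputs follow SymPy's printing conventions):

    >>> from sympy import sin, Symbol
    >>> x = Symbol('x')
    >>> sin(x).as_base_exp()
    (sin(x), 1)
    >>> sin(x).fdiff()
    cos(x)
    >>> sin(x).is_commutative
    True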
I am happy to announce the release of JBossOSGi-1.0.0.
Highlights of this release:
- Wiki documentation now available in Confluence
- OSGi service invocation from EJB3 and Webapps
- Migration to the Arquillian test framework
- Better support for execution environments
For details please have a look at the latest version of our User Guide.
Here are the change log details
Enjoy
room_add();
Returns: Real
This function will create a new, empty room and add it to your game, returning its index to be stored in a variable for all further code that deals with this room. The new room is created with the default room settings, with its persistence set to false.
You can define a Cloud Function either from the Telerik Platform portal or by using the Backend Services Metadata API.
In this article:
Structure
You start defining a Cloud Function by giving it a name. The name and its format is important because it becomes a part of the RESTful endpoint that you use to execute the Cloud Function. It must be unique within the app. Choose a name that is short, descriptive, and URL-safe.
When defining the Cloud Function body, always start with the following code. Executing it results in an empty JSON object and status code 200.
    Everlive.CloudFunction.onRequest(function (request, response, done) {
        done();
    });
The request object contains information about the Cloud Function request, including the HTTP method that was used, the query string, and the principal who executed it on the client, along with any posted data and headers.
Use the response object to customize the response, including status code, headers, modified operation result, and additional data.
The body of the Cloud Function is set using the body property of the response object.
response.body = 'My custom result';
To set the HTTP status code, use the statusCode property:
response.statusCode = 200;
The response headers can be set using the headers property:
response.headers = { "HTTP-header-name": "header-value" }
Example
The following Cloud Function manipulates the response body and headers to return an HTML document instead of the default JSON.
    Everlive.CloudFunction.onRequest(function (request, response, done) {
        response.body = "<html><head></head><body><h2>Sample body</h2></body></html>";
        response.headers = { "Content-type": "text/html" };
        done();
    });
Passing Parameters
You can pass parameters to the Cloud Function as query string parameters in the URL.
Inside the Cloud Function, you have access to the query string through the request.queryString object.
    Everlive.CloudFunction.onRequest(function (request, response, done) {
        if (request.queryString) {
            response.body = request.queryString;
            done();
        } else {
            done();
        }
    });
For a request whose query string is ?a=1&b=2&c=3, the result of the above function is as follows:
{ "a": "1", "b": "2", "c": "3" }
From the body of a Cloud Function, you can access your content types, send push notifications and SMSs, make external requests, as well as take other actions supported by the Cloud Code API.
For example, you could pass an array of values that identify content type items that you want to delete: ["MessageText1", "MessageText2", "MessageText3"]
The following code defines a filter expression (query.where().isin()) that is used to delete items from the Messages content type. The expression matches all items that have one of the specified values in their Message field.
    Everlive.CloudFunction.onRequest(function (request, response, done) {
        if (request.queryString && request.queryString.Delete) {
            var filterCondition = JSON.parse(request.queryString.Delete);
            var query = new Everlive.Sdk.Query();
            query.where().isin("Message", filterCondition);
            var data = Everlive.Sdk.$.data('Messages');
            data.destroy(query, function (data) {
                console.log("Deleted items: " + data.result);
                done();
            }, function (err) {
                console.log("Error occurred: " + err.message);
            });
        } else {
            done();
        }
    });
Permissions
Each Cloud Function has permissions that control who can execute it. Read more about permissions in the security section. | https://docs.telerik.com/platform/backend-services/dotnet/server-side-logic/cloud-code/cloud-functions/cloud-functions-defining | 2017-11-17T23:21:51 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.telerik.com |
RadCalendarView: Getting Started
In this article, you will learn how to get started with RadCalendarView for Android: how to initialize the calendar, how to set the dates that are displayed and how to create a calendar that looks like this one:
Adding the calendar instance
You can easily add RadCalendarView in the layout file for the main activity of your project:
<com.telerik.widget.calendar.RadCalendarView android:
You can access the control from the activity in order to be able to apply further modifications:
    protected override void OnCreate (Bundle bundle)
    {
        base.OnCreate (bundle);
        SetContentView (Resource.Layout.Main);

        RadCalendarView calendarView = FindViewById<RadCalendarView> (Resource.Id.calendarView);
    }
Display Date
By default when the calendar is loaded, it shows the current month. If you need to change the month that is currently visible, you can use the method setDisplayDate(long). If you want the month that is visible to be January 2014, you need to set the display date to a time during this month. Here's an example:
    Calendar calendar = new GregorianCalendar(2014, Calendar.January, 1);
    calendarView.DisplayDate = calendar.TimeInMillis;
Here we used the 1st of January, but the result would have been the same if we had chosen another date from the same month. If you need to get the current display date, you can use the getDisplayDate() method.
Since the result will be of type long (just as the parameter for the method that sets the display date), which may not seem very meaningful, you can use it along with setTimeInMillis() of an instance of type java.util.Calendar, which provides a more easily readable date representation.
Week numbers
At this point the calendar already looks like the screen shot from the beginning of the article. The only difference is that we still don't see the week numbers. RadCalendarView provides three options for the week numbers:
- None: Week numbers are not displayed
- Inline: Week numbers are displayed inside the first cell of each week
- Block: Week numbers are displayed inside a separate cell in the beginning of each week
By default the selected option is None, which explains why the numbers are not currently visible. You can get the current value by using the getWeekNumbersDisplayMode() method and modify it with the method setWeekNumbersDisplayMode(WeekNumbersDisplayMode). Here's how to make the calendar display the week numbers inside the cell of the first date of each week:
calendarView.WeekNumbersDisplayMode = WeekNumbersDisplayMode.Inline;
You can also specify the display mode for the week numbers by using the XML attribute weekNumberDisplayMode:
    <RelativeLayout xmlns:
        <com.telerik.widget.calendar.RadCalendarView android:
    </RelativeLayout>
And that's all. Now when you run the application, you will see an instance of RadCalendarView displaying the current month and showing information about the week numbers for each week just as in the image from the beginning of the article.
Unique constraint tool
Djangae unique constraint support
The App Engine Datastore doesn't support unique constraints, with the exception of named keys. Djangae exploits this uniqueness of the key to allow it to provide unique constraints and unique-together constraints on other fields, as you would get with a SQL database.
It does this by storing a separate table of Marker entities, with one marker per unique constraint per model instance. The key of the marker is an encoded combination of the model/table and the name(s) and value(s) of the unique field(s) in the constraint. Each marker entity also stores an instance attribute pointing back to the instance that actually uses the unique value. That attribute is updated after the marker is created (because the marker is created first for new instances, before we even know their keys) so it might be invalid in some situations.
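To make that scheme concrete, a marker key can be pictured as being assembled roughly like this (a simplified sketch, not Djangae's actual code; the names are illustrative):

    import hashlib

    def unique_marker_key(table, field_names, values):
        # One marker per unique constraint per instance: the key encodes the
        # model/table plus the constrained field names and their values.
        value_part = "|".join("%s:%s" % (f, v) for f, v in zip(field_names, values))
        digest = hashlib.md5(value_part.encode("utf-8")).hexdigest()
        return "%s|%s" % (table, digest)

    # Two instances sharing a value for a unique field map to the same marker
    # key, so the Datastore's named-key uniqueness enforces the constraint.
    key = unique_marker_key("myapp_user", ["email"], ["[email protected]"])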
The Uniquetool
The tool plugs into Django Admin and allows examining or repairing the Markers for a given model. There are 3 actions you can perform for any model defining a unique constraint. To perform an action, create a "Unique action" object in the Django admin, setting the "Model" to the model which you want to check and the "Action type" to the action which you want to perform. Creation of the object will trigger a task in the background, the status of which will be shown on the object in the admin site.
Check
Examines all instances and verifies that all of their unique values are properly backed by Marker entities in the datastore.
Repair
Ensures that all instances with unique values own their respective markers. Missing markers are recreated, and ones missing the instance attr are pointed at the right instance. In case a marker already exists but points to a different instance, it is logged, as it is an actual integrity problem and has to be resolved manually (change one of your instance's values so it is unique).
This action is useful when migrating an existing model to start using the unique constraint support.
Clean
TODO: Docs needed.
UniquenessMixin
If you want to mark a ListField or a SetField as unique, or use the unique_together Meta option with any of these fields, it is also necessary to use the UniquenessMixin:
    from django.db import models

    from djangae.fields import SetField, ListField
    from djangae.db.constraints import UniquenessMixin

    class Princess(UniquenessMixin, models.Model):
        name = models.CharField(max_length=255)
        potential_princes = SetField(models.CharField(max_length=255), unique=True)
        ancestors = ListField(models.CharField(max_length=255), unique=True)
Difference between revisions of "Accessing the database using JDatabase"
Usage

Connection
Joomla supports a generic interface for connecting to the database, hiding the intricacies of a specific database server from the Framework developer, thus making Joomla code easier to port from one database platform to another. For example, inserting a record using insertObject():

    $db = JFactory::getDbo();

    $profile = new stdClass();
    $profile->user_id = 42;
    $profile->profile_key = 'custom.message';
    $profile->profile_value = 'Inserting a record using insertObject()';
    $profile->ordering = 1;

    $result = $db->insertObject('#__user_profiles', $profile);
Difference between revisions of "Components Weblinks Links Edit"
- Access. The viewing level access for this item.
- Language. Item language.
- Version Note. Optional field to identify this version of the item in the item's Version History window.
Publishing
Options
- Count Clicks. Whether or not to keep track of how many times this web link has been opened.
Toolbar
At the top left you will see the toolbar for an Edit Item or New Item.
Intermediate Beginners/Selected Advanced Beginner and Intermediate Topics
This Namespace has been archived - Please Do Not Edit or Create Pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.
These articles and tutorials cover more advanced topics for Joomla! administrators and web masters. If you are brand new to Joomla! or to using a CMS (content management system), you should start on the beginners page.
Difference between revisions of "Modifying a Joomla! Template"
Templates are just a group of XML, PHP, HTML and image files that are stored in the templates directory of your site. You can edit these files directly or use the Template Manager.
Before You Begin
Copy an Existing Template
Create a new template by copying an existing template:
Discover the New Template
Using the Template Manager
Editing the Template
See J3.4:Editing a template with Template Manager.
Difference between revisions of "Why don't contact images display correctly in version 1.5.8?"
To avoid this problem in the future it would be much better if the image location wasn't hard-wired into the code, but used the global configuration location for images.
Difference between revisions of "JString::substr replace"
Description
UTF-8-aware substr_replace. Replaces text within a portion of a string.
Synopsis

    public static JString::substr_replace($str, $repl, $start, $length=NULL)
Defined in libraries/joomla/utilities/string.php (line 387).
Screen.banners.15
Sticky: please clarify
You may also want some banners that didn't show up for a while (due to randomisation) to appear more often: make 'em sticky.
--CirTap (talk • contribs) 15:24, 3 June 2008 (EDT) | https://docs.joomla.org/index.php?title=Help15_talk:Screen.banners.15&direction=prev&oldid=21623 | 2015-06-30T06:23:39 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Users User Manager Edit
Toolbar
At the top right you will see the toolbar.
Quick tips
- Name, Login Name, and Email are required.
- If you did not fill in a particular language, editor, help site and/or time zone, the default settings from the Global Configuration, Language Manager and/or Template Manager are set. | https://docs.joomla.org/index.php?title=Help17:Users_User_Manager_Edit&oldid=59338 | 2015-06-30T06:40:06 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "Site Global Configuration"
Site

- Offline Message. If Custom Message is selected, then the message in the text area beneath is used.
- Offline Image. The image that will be displayed on the site when the site is offline.
- Default Editor. The default editor to use when creating articles.
- Default Captcha. The default Captcha to use when called by extensions. Remember to correctly configure the associated plugin when selecting an option here. By default, this is set to None.
- Default Access Level. The default access level to the site. The options here are from the Help25:Users Access Levels.
- Show Joomla! Version. It shows the Joomla! version in the generator meta tag.
- Unicode Aliases. If Yes, non-Latin characters are allowed in the alias (and URL). If No, a title that includes non-Latin characters will produce a default alias value of the current time and date (for example, "2012-12-31-17-54"). The default setting is No.
System Settings
- Cache. Chooses whether the site caches or not. Default setting is ON - Progressive Caching.
- Cache Handler. This setting sets how the cache operates. There is only one caching mechanism which is file-based.
- Cache Time. This setting sets the maximum length of time (in minutes) for a cache file to be stored before it is refreshed. The default setting is 15 minutes.
Permissions

Joomla! version 2.5 will install with the same familiar back-end permissions as that of version 1.5. However, with 2.5 the permissions can be configured per user group and per action, for example:
- Edit Own
- Edit existing objects... | https://docs.joomla.org/index.php?title=Help25:Site_Global_Configuration&curid=23009&diff=81834&oldid=81191 | 2015-06-30T05:34:28 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |