Difference between revisions of "Developers/Database" (Joomla! Documentation, Portal:Developers)

Revision as of 02:32, 1 September 2012:
* Accessing the database using JDatabase
* Using the JTable class
* Connecting to an external database
* Using nested sets

The edit appended <noinclude>[[Category:Landing subpages|{{PAGENAME}}]]</noinclude> to the "Using nested sets" line, placing the page in Category: Landing subpages.
To turn on the picture password, you need to first set a device password. A device password is also needed in case you forget your picture password, or if you need to enter a password to access your BlackBerry device from a computer.
If your device uses BlackBerry Balance technology, in your Balance settings, make sure that you turned off Use as my device password.
Rather than type a password to unlock your device, you can use a secret picture and number gesture and unlock your device with one hand. Choose a picture and a number, and set up the combination. The number of combinations is nearly endless!
About source types
Backend Release 2019-11-12
Bug Fixes
- Data deployments to a sandbox are now working as expected when you move more than 200 records.
- During commit and deploy operations, the Search Layout metadata was not moved to the destination org, although the changes were correctly recorded in the promotion and feature branches. This issue has been fixed.
The Options
You can go to the Options (formerly Settings) section by tapping the top-right icon when looking at your own Profile.
Here, we'll walk you through everything you can do within the Options section of the WOO Sports App, including:
· Invite Friends
· Find Friends
· Edit Profile
· Followers / Following
· Settings
· Support
· About
Note:
If the iOS or Android screenshot is missing from the feature docs, it means the feature is not yet available on that platform.
Options
Here you can change formatting, contact us, check the WOO Tricktionary, and view our User Guide for instant help at your fingertips. The Options are put into 5 different segments:
· Invite People to WOO
· Account
· Settings
· Support
· About
In iOS, these segments are highlighted by the grey label-rows.
In Android, these segments are highlighted by the subheader in Blue text.
Invite People to WOO
We'd love for you to tell more people about WOO; the more people on WOO, the more complete the WOO Community will be. You can do that by word of mouth (of course), but you can also use the Invite Friends feature within the Options.
Invite Friends
Have a friend you wish was on WOO? Send them an invite through Facebook, copy the link, or select a friend from the contacts on your phone. Get them in the Game!
Find Friends
To find new friends on WOO, the Find Friends function is a great way to track down someone you know.
__
Account
Edit Profile
Here, you can (you guessed it) edit your profile. You can upload new Profile or Cover Photo, and change or add certain elements to your Profile.
Customizable per Sport:
· Cover Photo
· Profile Photo
· Homespot
__
Set for all Sports:
· Gender
· Stance (locked)
· Date of Birth
· Weight
Did You Know?
You can have a different Cover Photo, Profile Photo, and Homespot for each sport: Kite (Big Air & Freestyle), Wake (Cable & Boat), and Snow (Snowboard & Freeski)
Note:
Homespot
Your Homespot would be the spot you usually ride at. If you can't find your homespot, please check out how to Submit a New Spot and we will make sure we add your homespot to the WOO Spot database.
Stance
While your stance is visible in your Edit Profile, you can't change it. You select your Stance the first time you log into WOO Wake or WOO Snow. If you accidentally selected the wrong stance, feel free to contact us at [email protected].
Followers / Following
These will show a list of users that either you follow (Following) or that follow you (Followers) within the WOO Community, similar to Search Friends.
You can also tap the magnifying glass in the top right corner to go to Search Friends and find new riders on WOO to follow.
Note:
the riders you follow, and the riders that are following you, do not transfer across sports. So if you want to follow your friend in WOO Wake as well as in WOO Kite, please make sure to give him / her a follow within both WOO Wake and WOO Kite. We're sure he / she will appreciate it both times!
__
Settings
Units
Set the measurement units of the WOO Sports App to either Metric (meters) or Imperial / US (feet & miles). The values affected by this setting are Height, Total Height, and Weight (in your Profile).
Time Format (Android Only)
Set the time format of the WOO Sports App to a 24h clock or a 12h clock. For example:
24h clock: 16:00
12h clock: 4:00 PM
Push Notifications (Android Only)
With this slider you can turn all notifications coming to you from the WOO Sports App off or on. Curious what Push Notifications the WOO Sports App might send your way? Check out Notifications for more information.
Current Sport
Press this to change between sports (e.g. from WOO Kite to WOO Snow); it opens a popup where you can select which sport you want to go to. There are also other ways to quickly change sports; read Changing Sports to find out more.
App Version
This number shows you which version of the WOO Sports App you currently have installed on your phone. Maybe the number doesn't mean anything to you, but it can mean a lot to us!
Connected WOO
Tap here and you'll be taken to My WOO. It shows the WOO-ID of the WOO that your version of the WOO Sports App last connected to. It can show you one of two things:
· WOO-XXXX-XXXX
· None
Connect Smartwatch
WOO 3.0 units can be paired with some smartwatches; read more about the Smartwatch Integration. The listing here shows you whether you have connected a smartwatch at all, and if so, the last time you synchronized the WOO information with your smartwatch. It can show you one of two things:
· last sync: 19/1/2018
· No smartwatch connected
Support
Tricktionary
The WOO is able to detect and score well-performed tricks! Tap here to see which tricks the WOO can already recognize. This will take you to the WOO Tricktionary for the sport you're currently in:
WOO Kite - Freestyle: Tricktionary
WOO Wake (Cable & Boat): Tricktionary
WOO Snow (Snowboard & Ski): Tricktionary
How to WOO
How to WOO shows you a quick video of just how easy it is to ride with WOO! Just strap your WOO in, press the Button to start recording, and off you go!
WOO User Guide
Links to the User Guide
Report a Problem
If you saw a bug, noticed a crash, or saw anything weird at all, feel free to shoot us a message through pressing this button.
Give Feedback
If you have any tips for us, got some feedback on the product, or just generally feel like talking to someone from the WOO Team, don't hesitate to reach out!
Buy a WOO
Don't have a WOO yet? Get your hands on one! Links to our Find a Shop page.
About
This last section of the Options includes some things to read and gives you the ability to log out of the WOO Sports App.
Blog
Stay up to date on the latest WOO stuff! Links to the Blog.
Terms & Conditions
Links to our Terms & Conditions.
Privacy Policy
Links to our Privacy Policy.
Log Out
Does what you might expect: logs you out of the WOO Sports App.
This toolbar is a combination of the default server context toolbar and resource context toolbar which are displayed on the properties panel, except that you must select a server and possibly a resource when you invoke actions from this toolbar.
Ok, so you'd like an auto refresh rather than using a filter and hitting F5 as you cull, correct? I'll have to do some research on how much that would affect the speed performance for other actions. In the mean time, if this is a feature you'd like to see, please upvote Steve's suggestion or leave a comment.
Hi Marziah - thanks for the engagement on this topic. The key behavior is very similar if not exactly what LR provides. I have a list of photos based on filter criteria. For example, on my first pass I often mark photos I like as "3." Upon completion I would like to be able to say "Show me all the pics >3" and then I see that list. In that list, I might realize that I have too many photos and I want to narrow this down. This is very common when I take portraits - I have too many and want to get this from 10 similar pics to just one or two. So when I am reviewing, on a picture that I don't select, I mark it a 2. At this point, I would like the pic to disappear from the list. I want to avoid refiltering the list, etc. It should say "Oh, you have a list with a filter of >3 and this pic is a 2, so let me drop this from the list." This is how LR handles this. The upside is that I often am not ready to delete these pics, I just want to get to the best ones. If I can quickly reduce this by changing stars on that filtered list and it updates, this keeps me out of LR just that much longer. I hope this additional detail makes sense!
Steve Chadwick
On the culling process, I often look at two photos and then I want to demote one photo and not see it again, but I am not ready to delete it either. In Lightroom, the user can say show me pictures > 3 star, then on a pic, give it 2 stars and it drops from the list. This would be a killer feature for your software - I know others would use it. Heck, it's even in LR :) so you know it's common as use-cases go. This would leverage your super fast viewing capabilities and then extend this to the fine-tuning/culling process.
What is AWS Elemental MediaConnect?
AWS Elemental MediaConnect is a service that lets you ingest live video content into the cloud and distribute it to destinations all over the world, both inside and outside the AWS cloud.
This is the API Reference for MediaConnect. This guide is for developers who need detailed information about the MediaConnect API actions, data types, and errors. To access MediaConnect using the REST API, use the following endpoint:
https://mediaconnect.<region>.amazonaws.com
For descriptions of MediaConnect features and step-by-step instructions on how to use them, see the AWS Elemental MediaConnect User Guide.
Alternatively, you can use one of the AWS SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see AWS SDKs.
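As a rough illustration, not part of the reference itself, here is a minimal sketch using the AWS SDK for Python (boto3); the region is a placeholder and credentials are assumed to be already configured:

import boto3

# Create a MediaConnect client; boto3 resolves the regional endpoint for you
client = boto3.client("mediaconnect", region_name="us-west-2")

# List the flows in this account and print each flow's name and status
response = client.list_flows()
for flow in response.get("Flows", []):
    print(flow["Name"], flow["Status"])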
To find User Manual pages on new and updated features in Unity 2019.3, click on the link above or search for “NewIn20193”.
This page lists Unity 2019.3 User Manual pages that describe new and updated features.
To find out more about the new features, changes, and improvements to this Unity version, see the 2019.3 Release Notes.
If you are upgrading existing projects from an earlier version to 2019.3, read the Upgrade Guide to 2019.3 for information about how your project may be affected.
- A message should be displayed indicating that the network connection was successfully removed.
- Click Done to close the dialog and return to the GUI status display.
1. Open the Admin panel of your Magento store.
2. Go to System -> Configuration as shown below in the image.
3. Go to the Design section and expand HTML Head as shown below. Paste the chatbot script next to "Miscellaneous Scripts", as shown below.
4. Save everything.
Now chatbot is installed in your store.
To find your IntelliTicks chatbot script, check the link below.
Extending QML - Inheritance and Coercion Example
This example builds on:
- Extending QML - Object and List Property Types Example
- Extending QML - Adding Types Example
The Inheritance and Coercion Example shows how to use base classes to assign types of more than one type to a property. It specializes the Person type developed in the previous examples into two types: a Boy and a Girl.
BirthdayParty {
    host: Boy {
        name: "Bob Jones"
        shoeSize: 12
    }
    guests: [
        Boy { name: "Leo Hodges" },
        Boy { name: "Jack Smith" },
        Girl { name: "Anne Brown" }
    ]
}
Boy::Boy(QObject *parent)
    : Person(parent)
{
}

Girl::Girl(QObject *parent)
    : Person(parent)
{
}
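The example also registers the new types with the QML type system so they can be created from QML. A sketch of that registration, assuming the same module name and version used by the earlier examples in this series:

qmlRegisterType<Person>("People", 1, 0, "Person");
qmlRegisterType<Boy>("People", 1, 0, "Boy");
qmlRegisterType<Girl>("People", 1, 0, "Girl");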
Overview
Affirm Promotional Messaging allows you to display the monthly payment price and information modal on your site by using Affirm's API and runtime Javascript. The monthly payment messaging is dynamically generated into placeholder elements on your pages to inform customers about the availability of installment financing, which will help drive increased average order value and conversion rates.
Before you start
- Ensure your messaging is compliant or your merchant account can be disabled. Review our Compliance & Guidelines.
Review our Marketing guidelines for guidance.
On-site marketing
We recommend adding Affirm on-site marketing components to the following areas of your site, to maximize conversion lift:
Affirm marketing assets
There are Affirm logos, buttons, and banners available in a variety of sizes. Additional logos are located at the bottom of our press page:
Example
Product page
Add Affirm Monthly Payment Messaging to your product page to see up to a 10-percent increase in add-to-cart.
Instructions
1. Add monthly payment messaging embed code to your product description page
2. A modal will be displayed by default when a user clicks on the Monthly Payment Messaging
Example
Example code
- Product page
- Single price
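The example embed code itself is not reproduced here. As a hedged illustration only, a product-page placement typically adds a placeholder element that Affirm's runtime (affirm.js) fills in; the class name, attributes, and amount below are assumptions to be checked against Affirm's current integration docs:

<!-- Affirm monthly payment messaging placeholder; the amount is in cents and is only an example -->
<p class="affirm-as-low-as" data-page-type="product" data-amount="59900"></p>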
Cart page
Add Affirm Monthly Payment Messaging to your cart page and see up to a 15-percent increase in conversion.
Instructions
1. Add monthly payment messaging embed code to your cart page
2. A modal will be displayed by default when a user clicks on the Monthly Payment Messaging
3. Also provide an in-line description of Affirm or link to your landing page.
Example
Example code
- Product/Cart page
- Single price displayed multiple places
The checkout flow will include Affirm as a selectable payment method, but it's also another opportunity to educate the customer on the Affirm value prop.
Instructions
1. Selecting Affirm expands payment method description
2. Add monthly payment messaging embed code to the Affirm payment method description
3. A modal will be displayed by default when a user clicks on the Monthly Payment Messaging
4. Also provide an in-line description of Affirm or link to your landing page.
Education and Landing Page
Instructions
1. Use standard Affirm copy as your starting point
2. Submit your page/copy to our Merchant Success team for approval
3. Link to your landing page where Affirm is mentioned on your site
4. Link to your landing page for campaigns/promotions around Affirm
Example
1. Download a promotional banner
2. Integrate a site modal or link to your education/landing page
The eBay – NetSuite Integration App helps retailers combine the selling power of eBay with the proven back-office features of NetSuite. With the eBay integration app, sellers can focus on increasing sales and manage orders without needing to manually sync their eBay and NetSuite accounts.
With the eBay - NetSuite integration app, you receive our expertise in NetSuite, eBay, and cloud integrations. Using this integration app, you can quickly and easily integrate order, product, and customer information between eBay and NetSuite without requiring additional development or IT support.
Prebuilt functionality
The eBay - NetSuite integration app comes with prebuilt integration flows that synchronize your Customers, Sales Orders, Fulfillments, Inventory levels, Pricing, and Items between NetSuite and eBay, eliminating the overhead associated with dual-entry and maintenance between multiple systems.
Integration flows
At a high level, the integration app consists of the following key integration flows:
- Import Customers from eBay to NetSuite
- Import Sales Orders from eBay to NetSuite
- Export Fulfillments from NetSuite to eBay
- Export Inventory Levels & Pricing from NetSuite to eBay
- Export Products, Variations from NetSuite to eBay
Workflow diagram
A workflow diagram explains the flow of information between eBay and NetSuite.
Installing plugins¶
Godot features an editor plugin system with numerous plugins developed by the community. Plugins can extend the editor's functionality with new nodes, additional docks, convenience features, and more.
Finding plugins¶
The preferred way to find Godot plugins is to use the Asset Library. While it can be browsed online, it's more convenient to use it directly from the editor. To do so, click the AssetLib tab at the top of the editor:
You can also find assets on code hosting websites such as GitHub.
Note
Some repositories describe themselves as "plugins" but may not actually be editor plugins. This is especially the case for scripts that are intended to be used in a running project. You don't need to enable such plugins to use them. Download them and extract the files in your project folder.
One way to distinguish editor plugins from non-editor plugins is to look for
a
plugin.cfg file in the repository that hosts the plugin. If the
repository contains a
plugin.cfg file in a folder placed in the
addons/ folder, then it is an editor plugin.
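For reference, a minimal plugin.cfg has roughly this shape (the field values are placeholders):

[plugin]

name="My Plugin"
description="What the plugin does."
author="Your Name"
version="1.0"
script="plugin.gd"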
Installing a plugin¶
To install a plugin, download it as a ZIP archive. On the Asset Library, this can be done using the Download button, either from the editor or using the Web interface.
On GitHub, if a plugin has tags (versions) declared, go to the Releases tab to download a stable release. This ensures you download a version that was declared to be stable by its author.
On GitHub, if the plugin doesn't have any tags declared, use the Download ZIP button to download a ZIP of the latest revision:
Extract the ZIP archive and move the
addons/ folder it contains into your
project folder. If your project already contains an
addons/ folder, move the
plugin's
addons/ folder into your project folder to merge the new folder
contents with the existing one. Your file manager may ask you whether to write
into the folder; answer Yes. No files will be overwritten in the process.
Enabling a plugin¶
To enable the freshly installed plugin, open Project > Project Settings at the top of the editor, then go to the Plugins tab. If the plugin was packaged correctly, you should see it in the list of plugins. Click on the gray Inactive text and choose Active to enable the plugin. The word Active will display in green to confirm the plugin was enabled.
You can use the plugin immediately after enabling it; there's no need to restart the editor. Likewise, disabling a plugin can be done without having to restart the editor.
Pipeline Specification¶
This document discusses each of the fields present in a pipeline specification. To see how to use a pipeline spec to create a pipeline, refer to the pachctl create pipeline section.
JSON Manifest Format¶
{ "pipeline": { "name": string }, "description": string, "metadata": { "annotations": { "annotation": string }, "labels": { "label":, }, "sidecar_resource_limits": { "memory": string, "cpu": number }, "datum_timeout": string, "datum_tries": int, "job_timeout": string, "input": { <"pfs", "cross", "union", "cron", or "git" see below> }, "s3_out": bool, "output_branch": string, "egress": { "URL": "s3://bucket/dir" }, "standby": bool, "cache_size": string, "enable_stats": bool, "service": { "internal_port": int, "external_port": int }, "spout": { "overwrite": bool \\ Optionally, you can combine a spout with a service: , "s3": bool } ------------------------------------ "cross" or "union" input ------------------------------------ "cross" or "union": [ { "pfs": { "name": string, "repo": string, "branch": string, "glob": string, "lazy" bool, "empty_files": bool "s3": bool } }, { "pfs": { "name": string, "repo": string, "branch": string, "glob": string, "lazy" bool, "empty_files": bool "s3": bool } } ... ] ------------------------------------ "cron" input ------------------------------------ "cron": { "name": string, "spec": string, "repo": string, "start": time, "overwrite": bool } ------------------------------------ "join" input ------------------------------------ "join": [ { "pfs": { "name": string, "repo": string, "branch": string, "glob": string, "join_on": string "lazy": bool "empty_files": bool "s3": bool } }, { "pfs": { "name": string, "repo": string, "branch": string, "glob": string, "join_on": string "lazy": bool "empty_files": bool "s3": requirements:
- Include only alphanumeric characters, _ and -.
- Begin or end with only alphanumeric characters (not _ or -).
- Not exceed 63 characters in length.
Description (optional)¶
description is an optional text field where you can add information about the pipeline.
Metadata¶
This parameter enables you to add metadata to your pipeline pods by using Kubernetes'
labels and
annotations. Labels help you to organize and keep track of your cluster objects by creating groups of pods based on the application they run, resources they use, or other parameters. Labels simplify the querying of Kubernetes objects and are handy in operations.
Similarly to labels, you can add metadata through annotations. The difference is that you can specify any arbitrary metadata through annotations.
Both parameters require a key-value pair. Do not confuse this parameter with
pod_patch, which adds metadata to the user container of the pipeline pod. For more information, see Labels and Selectors and Kubernetes Annotations in the Kubernetes documentation. There are also environment variables that are automatically injected into the container, such as:
PACH_JOB_ID – the ID of the current job.
PACH_OUTPUT_COMMIT_ID – the ID of the commit in the output repo for the current job.
<input>_COMMIT – the ID of the input commit. For example, if your input is the images repo, this will be images_COMMIT.
For a complete list of variables and descriptions, see Configure Environment Variables.

Parallelism Spec (optional)¶
parallelism_spec describes how many workers Pachyderm starts for the pipeline; the default value is "constant=1", meaning a single worker. Because spouts and services are designed to be single instances, do not modify the default parallelism_spec value for these pipelines.
Resource Requests (optional)¶
resource_requests describes the amount of resources that the pipeline workers will consume. Knowing this in advance enables Pachyderm to schedule big jobs on separate machines, so that they do not conflict, slow down, or terminate.
This parameter is optional, and if you do not explicitly add it in the pipeline spec, Pachyderm creates Kubernetes containers with the following default resources:
- The user container requests 0 CPU, 0 disk space, and 64MB of memory.
- The init container requests the same amount of CPU, memory, and disk space that is set for the user container.
- The storage container requests 0 CPU and the amount of memory set by the cache_size parameter.
The
resource_requests parameter enables you to overwrite these default values.
The memory field is a string that describes the amount of memory, in bytes, that each worker needs. Allowed SI suffixes include M, K, G, Mi, Ki, Gi, and others. The cpu field is a number that describes how many CPU cores each worker needs; fractional values are allowed. The disk field is a string that describes the amount of ephemeral disk space, in bytes, that each worker needs, with the same SI suffixes allowed.
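As a hedged illustration, these fields might be combined in a pipeline spec like this (the values are arbitrary examples, not recommendations):

"resource_requests": {
    "memory": "2G",
    "cpu": 1,
    "disk": "10G"
}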
Sidecar Resource Limits (optional)¶
sidecar_resource_limits determines the upper threshold of resources allocated to the sidecar containers.
This field can be useful in deployments where Kubernetes automatically applies resource limits to containers, which might conflict with Pachyderm pipelines' resource requests. Such a deployment might fail if Pachyderm requests more than the default Kubernetes limit. The
sidecar_resource_limits enables you to explicitly specify these resources to fix the issue.
Datum Timeout (optional)¶
datum_timeout determines the maximum execution time allowed for each datum. The value must be a string that represents a time value, such as
1s,
5m, or
15h. This parameter takes precedence over the parallelism or number of datums, therefore, no single datum is allowed to exceed this value. By default,
datum_timeout is not set, and the datum continues to be processed as long as needed.
Datum Tries (optional)¶
datum_tries is an integer, such as
1,
2, or
3, that determines the number of times a job attempts to run on a datum when a failure occurs. Setting
datum_tries to
1 will attempt a job once with no retries. Only failed datums are retried in a retry attempt. If the operation succeeds in retry attempts, then the job is marked as successful. Otherwise, the job is marked as failed.
Job Timeout (optional)¶
job_timeout determines the maximum execution time allowed for a job. It differs from
datum_timeout in that the limit is applied across all workers and all datums. This is the wall time, which means that if you set
job_timeout to one hour and the job does not finish the work in one hour, it will be interrupted. When you set this value, you need to consider the parallelism, total number of datums, and execution time per datum. The value must be a string that represents a time value, such as
1s,
5m, or
15h. In addition, the number of datums might change over jobs. Some new commits might have more files, and therefore, more datums. Similarly, other commits might have fewer files and datums. If this parameter is not set, the job will run indefinitely until it succeeds or fails.
S3 Output Repository¶
s3_out allows your pipeline code to write results out to an S3 gateway endpoint instead of the typical
pfs/out directory. When this parameter is set to
true, Pachyderm includes a sidecar S3 gateway instance container in the same pod as the pipeline container. The address of the output repository will be
s3://<output_repo>. If you enable
s3_out, verify that the
enable_stats parameter is disabled.
If you want to expose an input repository through an S3 gateway, see
input.pfs.s3 in PFS Input.
See Also:
Input¶
While most types of pipeline specifications require an input repository, there are exceptions, such as a spout, which does not need an input.
{ "pfs": pfs_input, "union": union_input, "cross": cross_input, "cron": cron_input }
PFS Input¶
PFS inputs are the simplest inputs; they take input from a single branch on a single repo.

{ "name": string, "repo": string, "branch": string, "glob": string, "lazy": bool, "empty_files": bool, "s3": bool }

input.pfs.name is the name of the PFS input that appears in the INPUT field when you run the pachctl list job command. If an input name is not specified, it defaults to the name of the repo.

input.pfs.repo is the name of the Pachyderm repository with the data that you want to process.
input.pfs.branch is the
branch to watch for commits. If left blank, Pachyderm sets this value to
master.
input.pfs.glob is a glob pattern that is used to determine how the input data is partitioned.
input.pfs.s3 sets whether the sidecar in the pipeline worker pod should include a sidecar S3 gateway instance. This option enables an S3 gateway to serve on a pipeline-level basis and, therefore, ensure provenance tracking for pipelines that integrate with external systems, such as Kubeflow. When this option is set to
true, Pachyderm deploys an S3 gateway instance alongside the pipeline container and creates an S3 bucket for the pipeline input repo. The address of the input repository will be
s3://<input_repo>. When you enable this parameter, you cannot use glob patterns. All files will be processed as one datum.
Another limitation for S3-enabled pipelines is that you can only use either a single input or a cross input. Join and union inputs are not supported.
If you want to expose an output repository through an S3 gateway, see S3 Output Repository.

Cron Input¶
When "overwrite" is disabled, ticks accumulate in the cron input repo. When "overwrite" is enabled, Pachyderm erases the old ticks and adds new ticks with each commit. If you do not add any manual ticks or run pachctl run cron, only one tick file per commit (for the latest tick) is added to the input repo.
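For illustration, a minimal cron input that ticks once a minute and keeps only the latest tick might look like this (the input name and interval are arbitrary examples):

"input": {
    "cron": {
        "name": "tick",
        "spec": "@every 60s",
        "overwrite": true
    }
}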
Join Input¶
A join input enables you to join files that are stored in separate Pachyderm repositories and that match a configured glob pattern. A join input must have the
glob and
join_on parameters configured to work properly. A join can combine multiple PFS inputs.
You can specify the following parameters for the
join input.
input.pfs.name — the name of the PFS input that appears in the INPUT field when you run the pachctl list job command. If an input name is not specified, it defaults to the name of the repo.
input.pfs.repo — the name of the Pachyderm repository with the data that you want to join with other data.
input.pfs.glob — a wildcard pattern that defines how a dataset is broken up into datums for further processing. When you use a glob pattern in joins, it creates a naming convention that Pachyderm uses to join files. In other words, Pachyderm joins the files that are named according to the glob pattern and skips those that are not.
You can specify the glob pattern for joins in parentheses to create one or multiple capture groups. A capture group can include one or multiple characters. Use standard UNIX globbing characters to create capture groups, including the following:
? — matches a single character in a filepath. For example, if you have files named file000.txt, file001.txt, file002.txt, and so on, you can set the glob pattern to /file(?)(?)(?) and the join_on key to $2, so that Pachyderm matches only the files that have the same second character.
* — matches any number of characters in the filepath. For example, if you set your capture group to /(*), Pachyderm matches all files in the root directory.
If you do not specify a correct glob pattern, Pachyderm performs the cross input operation instead of join.
input.pfs.lazy — see the description in PFS Input.
input.pfs.empty_files — see the description in PFS Input.
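Putting these pieces together, here is a sketch of a join input that pairs files from two hypothetical repos on their second captured character (the repo names and the glob are made-up examples):

"input": {
    "join": [
        { "pfs": { "repo": "readings", "glob": "/file(?)(?)(?).txt", "join_on": "$2" } },
        { "pfs": { "repo": "labels", "glob": "/file(?)(?)(?).txt", "join_on": "$2" } }
    ]
}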
Git Input¶
When you use a git input, you configure a webhook on GitHub and paste the pipeline's Githook URL into the 'Payload URL' field.

Enable Stats (optional)¶
The
enable_stats parameter turns on statistics tracking for the pipeline. When you enable the statistics tracking, the pipeline automatically creates and commits datum processing information to a special branch in its output repo called
"stats". This branch stores information about each datum that the pipeline processes, including timing information, size information, logs, and
/pfs snapshots. You can view these statistics by running the
pachctl inspect datum and
pachctl list datum commands, as well as through the web UI. Do not enable statistics tracking for S3-enabled pipelines.
Once turned on, statistics tracking cannot be disabled for the pipeline. You can turn it off by deleting the pipeline, setting
enable_stats to
false or completely removing it from your pipeline spec, and recreating the pipeline from that updated spec file. While the pipeline that collects the stats exists, the storage space used by the stats cannot be released.
Note
Enabling stats results in slight storage use increase for logs and timing information. However, stats do not use as much extra storage as it might appear because snapshots of the
/pfs directory, which are the largest stored assets, do not require extra space.

Chunk Spec (optional)¶
chunk_spec specifies how the datums are divided into chunks of work. A chunk is the unit of work that workers claim. Each worker claims one or more datums and commits a full chunk once it has finished processing it.

chunk_spec.number, if nonzero, specifies that each chunk should contain number datums. Chunks may contain fewer datums if the total number of datums does not divide evenly. If you lower the chunk number to 1, it will update after every datum; the cost is extra load on etcd, which can slow other operations down. The default value is 2.
PPS Mounts and File Access¶
Mount Paths¶
The root mount point is at
/pfs, which contains:
- /pfs/input_name which is where you would find the datum.
- Each input will be found here by its name, which defaults to the repo name if not specified.
- /pfs/out which is where you write any output.
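For illustration, a minimal sketch of what the pipeline's user code sees at runtime, assuming a single PFS input named data; the copy loop is just a placeholder transform:

import os
import shutil

input_dir = "/pfs/data"   # each input is mounted under its configured name
output_dir = "/pfs/out"   # files written here become the pipeline's output commit

for root, _, files in os.walk(input_dir):
    for name in files:
        # placeholder "transform": copy every input file into the output
        shutil.copy(os.path.join(root, name), os.path.join(output_dir, name))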
To ensure high performance and availability, vSAN clusters must meet certain bandwidth and network latency requirements.
The bandwidth requirements between the primary and secondary sites of a vSAN stretched cluster depend on the vSAN workload, amount of data, and the way you want to handle failures. For more information, see VMware vSAN Design and Sizing Guide.
Usage - quick start and tutorial
To get started with the Savvy architecture, follow the quick start instructions. This procedure will install the example customized image on your device and allow you to get a sense for the types of customizations permitted by the Savvy API. After experimenting with the sample customizations, you will probably want to experiment with your own customizations. The tutorial section explains how to do so.
Quick Start
This procedure will install Canonical’s example customization package onto a device. After installing the example customizations, you will be able to explore how the Savvy API can be used to change the look, feel, and behavior of Ubuntu Touch to suit your requirements. It requires:
- device supported by Ubuntu Touch
- then use ubuntu-device-flash to install Ubuntu Touch on your device
Note: the quick start assumes you have already been able to successfully install your device with the default Ubuntu image. If you encounter difficulty, please refer to the installation instructions.
Tutorial
Obtaining the code
The Savvy code is hosted in Launchpad and requires the use of bzr. The bzr branch contains example code, a test suite, and this documentation. To download it:
$ bzr branch lp:savilerow
The project layout is very straightforward:
doc/ src/ tests/
As you can see:
doc/contains this documentation
src/contains the example customization code
tests/contains the test suite
The
src/ directory will be of greatest interest, as it demonstrates all the
possible customizations permitted by the Savvy API.
Making a modification
In this example, we will set a different default wallpaper. Looking in the directory src/system/custom we will modify the following files:
. ├── etc │ ├── dconf │ ├── dconf_profile │ └── dconf_source │ └── db │ └── custom.d │ ├── custom.key │ └── locks │ └── custom.lock └── usr └── share └── backgrounds └── ringtel_wallpaper_plain.png
You may remove all other files and directories in
src/system/custom. The first
step is to add a new wallpaper. In this example, we will assume the filename
is wallpaper.jpg. Copy the file to
usr/share/backgrounds:
└── usr └── share └── backgrounds ├── ringtel_wallpaper_plain.png └── wallpaper.jpg
Next, you will need to edit custom.key and custom.lock:
$ cat etc/dconf_source/db/custom.d/custom.key # if the keys have uppercase letters, make them lowercase here, for # example com.canonical.Unity.Lenses -> com/canonical/unity/lenses [org/gnome/desktop/background] picture-uri=''
The contents of custom.lock should be:
$ cat etc/dconf_source/db/custom.d/locks/custom.lock /org/gnome/desktop/background/picture-uri
That's it! Now we are ready to test the results.
Note: the provided sample wallpaper is png format, but in this example, we are using an image in jpg format. Both file formats are supported.
Building the tarball
After making changes above, we will rebuild the tarball from the top-level directory:
$ pwd /home/ubuntu/Projects/savvy/tutorial $ ls doc/ src/ tests/ $ tar -Jcvf custom.tar.xz -C src/ system/ $ ls custom.tar.xz doc/ src/ tests/
This command will produce a new, unsigned tarball in the parent directory.
Installing the tarball
The Ubuntu Image Architecture requires that all tarballs installed on the system must be signed. In this tutorial, we have not signed the new custom tarball so we must use a developer shortcut which does not check for GPG signatures.
THIS SHOULD NOT BE USED FOR DAILY USERS. It is only to be used for testing an unreleased recovery image.
Download the recovery images and ubuntu_command from here:
To install the recovery:
$ adb reboot bootloader $ fastboot flash recovery ~/Downloads/mako-recovery.img $ fastboot reboot
Now to install the customization tarball.
Note: ensure that you have the
ubuntu_command from the above link:
$ adb push custom.tar.xz /cache/recovery $ adb push ~/Downloads/ubuntu_command /cache/recovery $ adb reboot recovery
Finishing up
After rebooting the device, you should have a new background. There are many more customizations possible with the Savvy API, and the rest of this documentation explains them. If there are customizations that the Savvy API does not support, please file a bug and the team will do its best to respond.
db.killOp()¶
Description¶
- db.killOp(opid)¶
Terminates an operation as specified by the operation ID. To find operations and their corresponding IDs, see db.currentOp().
The db.killOp() method has the following parameter: opid (number), the identifier of the operation to terminate.
Warning
Terminate running operations with extreme caution. Only use db.killOp() to terminate operations initiated by clients and do not terminate internal database operations.
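A short mongo shell sketch of the workflow; the opid value here is illustrative:

db.currentOp()   // list in-progress operations and note the "opid" of the one to stop
db.killOp(345)   // terminate the operation whose opid is 345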
ALJ/KOT/tcg Mailed 8/25/2004
PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA
Resolution ALJ-184
Administrative Law Judge Division
August 19, 2004
RESOLUTION ALJ-184. Adopting annual process for setting hourly rates to use in calculating compensation awards to intervenors.
In today's resolution, we adopt an annual process for setting and updating hourly rates for use by intervenors in seeking compensation for substantially contributing to a Commission decision, as provided in the statutory intervenor funding program. (Pub. Util. Code §§ 1801-1812. Unless otherwise stated, all citations to statute are to the Public Utilities Code.) The hourly rates that we establish through this process will govern intervenors and their representatives who have recently participated in our proceedings, and will provide guidance to other intervenors and representatives.
In Decision (D.) 03-10-061 and D.03-10-062, we directed the Executive Director and Chief Administrative Law Judge to "develop a comprehensive process for the Commission to annually set rates for intervenor attorney, expert, and paralegal fees...." On October 29, 2003, the Executive Director and Chief Administrative Law Judge wrote to over 40 regular participants in our proceedings, including frequent intervenors and utilities from the various regulated industries. Their letter invited comments and suggestions to begin development of this annual process. Specifically, the Commission sought input on the following questions:
1. What annual process do you recommend for setting hourly rates?
2. How would the annual process you recommend meet (1) the standards of Section 1806, and (2) the goals of D.03-10-062, specifically, "promote fairness in awards, both in absolute and relative terms" and "increase administrative efficiency [so that intervenors are paid] on a more expedited basis"?
3. Consistent with Section 1806, what information should the Commission accept or require in setting hourly rates?
Aglet Consumer Alliance (Aglet), SBC Pacific Bell (SBC), Pacific Gas and Electric (PG&E), Southern California Edison (SCE), The Utility Reform Network (TURN), AT&T Communications of California, Inc., Greenlining Institute (Greenlining), and Grueneich Resource Advocates served opening comments, on November 14, 2003. Latino Issues Forum (LIF) served opening comments on November 25, 2003. SCE and Greenlining served reply comments on December 2, 2003 and PG&E, Aglet, LIF, SBC, and TURN served reply comments on December 3, 2003.
The comments raise three main issues, which we discuss and resolve below. We expect, however, to refine the process over time, based on our experience and suggestions by everyone involved.
Commenters differ on whether the process should produce individual rates for particular advocates or ranges of rates based upon general levels of training and experience. Some commenters suggest that the number of advocates eligible to claim intervenor compensation is sufficiently small that standardized rates for general levels of training and experience are unnecessary and cannot accurately account for different levels of experience and skill. Some commenters suggest that we adopt default rates based on general levels of training and experience but allow advocates to seek higher rates if they feel their specific training, experience, and skill warrant. Others recommend adopting ranges of rates based on training and experience, allowing advocates to present evidence of where they fall within the range.
After reviewing the comments, we propose to adopt rates for individual advocates based on their specific training and experience, taking into consideration the compensation of persons with comparable training and experience. With the additional data that we intend to gather, we can adopt fair rates for these advocates for a particular calendar year.
We intend that, in general, when we adopt a rate for a particular advocate for a particular calendar year, the intervenor seeking to recover fees for that advocate's work in that calendar year will use that rate in calculating the intervenor's compensation request. This generalization is subject to several qualifications. We observe, first, that historically we have augmented an advocate's rate by a "multiplier" in consideration of various specific factors on a case-by-case basis. We will continue that practice, but because a multiplier is case-specific, it does not actually change the adopted hourly rate for that advocate. Second, an intervenor may request an adjustment to an adopted hourly rate but must show good cause for doing so. For example, if a court or
regulatory agency awarded the advocate a higher hourly rate for work in the same calendar year, the intervenor may ask us to use the higher rate. The burden is on the intervenor to justify the higher rate, and in the example just given, we would expect the intervenor to address, among other things, the standard used by the court or agency in setting the higher rate and the comparability of the work performed at the Commission to the work performed at the court or agency.
Finally, the adopted rate carries our expectation about the level of the advocate's performance; to the extent that the advocate performs above or below that level in a particular proceeding we would consider augmenting or reducing the hourly rate. For example, we expect that advocates with experience before the Commission have a certain level of knowledge about our Rules of Practice and Procedure and filing requirements, so a seasoned advocate who fails to follow these rules would not be performing at a level consistent with what we would expect from someone of that training and experience. Thus, in that circumstance, we may consider awarding a lower hourly rate for the advocate's work in that proceeding. Similarly, an advocate who surpasses expectations may ask us to award a higher hourly rate. For example, where an advocate served ably in the dual role of attorney and expert, eliminating the intervenor's need to employ separate individuals for each role, we may consider awarding a higher hourly rate for that advocate's work in that proceeding.
Of necessity, we can adopt specific hourly rates only for those advocates who already have experience at the Commission. We also encourage new intervenors and advocates to participate in our proceedings. The annual process will develop information that will enable prospective intervenors to project reasonable rates by referring to ranges of training and experience revealed in that process. Particularly for attorney advocates, we have found from over 20 years of setting hourly rates that the rates tend to fall within three ranges, based on length of relevant experience and roughly corresponding to the associate, partner, and senior partner levels within a law firm. We expect to continue to specify these general ranges, which should be utilized by new intervenors and advocates in developing their proposed hourly rates.
Section 1806 requires that the Commission "take into consideration the market rates paid to persons of comparable training and experience who offer similar services" when awarding compensation to advocates eligible for intervenor compensation. For this consideration, we must have sufficient data about the training and experience of advocates of both intervenors and others offering similar services on behalf of utilities and this Commission. We also need information about the "comparable market rate" for those service providers that are paid by utilities and the Commission. Commenters
propose various types of information be gathered during a proceeding to set hourly rates.
So that we may assess the training and experience of Commission practitioners, we propose that current or prospective intervenors that expect to make requests for compensation for work in a given calendar year submit information about the training and experience of the personnel they expect to perform work on their behalf. The information submitted must cover both attorneys and non-attorneys. Intervenors must include the past rates adopted for their advocates in their filing and a proposed rate for the upcoming year. On the same date as the intervenor filing, respondent utilities1 must submit a list of the training and experience of in-house personnel who have worked on matters before the Commission during the prior calendar year.2 The utilities must prepare a similar list for outside counsel, experts, or other service providers who have supported the utilities' efforts before the Commission during the prior calendar year. Each of the utilities' lists must identify the title of the individual and type of service provided, describe the individual's training (for example, degrees and years obtained), and indicate the individual's experience appearing or supporting work before the Commission.
We agree with commenters that we currently have insufficient information regarding the "market rate for services paid by the ... public utility, ... to persons of comparable training and experience who are offering similar services." (§ 1806.) Therefore, we direct the utilities to provide this information for all persons identified on the above-described lists. For in-house personnel, the utilities must develop an effective hourly rate by identifying salary, benefits and other compensation, and an allocation of overheads for each individual listed. For outside service providers, the utilities must identify the rates charged to the utility (and the usual billing rate, if different) for each individual listed.3
Hourly rates paid by the Commission itself to its staff and consultants are also relevant under Section 1806, which says in part that the compensation we award "may not, in any case, exceed the comparable market rate for services paid by the commission or the public utility, whichever is greater" (emphasis added). We assume that hourly rates in the private sector generally exceed those paid by the Commission, but we will test this assumption by having our Executive Director review the data provided by the utilities. Following this review, the Executive Director will report instances, with appropriate data, in which the Commission has paid rates exceeding those paid by the utilities. Absent any such instances, the report need only note that fact, without further data.
In addition, we encourage intervenors and other interested persons to submit other information, for example, market surveys or benchmarking studies. We also invite independent experts or individuals with specialized knowledge of billing information to submit relevant information at the same time as intervenors and utilities submit their data.
As a general matter, Section 1806 requires us to look first to the compensation of practitioners before this Commission in setting rates for intervenors because of the statute's requirement to consider the costs of providers of similar services. However, we allow intervenors and others, when appropriate, to refer to rates charged or awarded for work in other forums.
Commenters propose different timing for the annual process. Some commenters suggest that rates be set for a base year and then adjusted annually by some type of index (for example, the Consumer Price Index) for some period of time before the base rate is re-evaluated. Some commenters suggest that rates be based on prior year data and applied retroactively to the awards for the past year. Others suggest that we adopt rates prospectively for the coming year. Others suggest that it is sufficient if rates are adopted for a given calendar year by April of that same year, as requests for compensation for work performed during January through March are unlikely to be resolved before April.
We agree with TURN that intervenors are unlikely to request an award of compensation for work performed in a given year prior to April of that same year. Therefore, our procedure is designed to adopt rates no later than April 30 for use that calendar year.
As described above, we are requiring utilities to submit data on compensation paid to in-house and outside representatives for the prior calendar year. We will adjust the prior year rates by the Consumer Price Index to bring them to a current year basis. The rates requested by each intervenor will be compared against the adjusted utility rates and other data submitted to assess whether the intervenor requests the market rate for persons of comparable training and experience who are offering similar services.
We do not at this time adopt a base year rate with subsequent annual adjustments based on an index of general inflation; we agree with certain parties that market rates for advocates do not necessarily move in lockstep with inflation rates. We are open to considering an index that is more narrowly targeted to cost increases for the professional services that we compensate through the intervenor compensation program.
We will use the following generic schedule for the annual process beginning for 2005 calendar year rates:
January 15 Utilities submit data/Intervenors submit proposed rates and supporting information
February 5 Filings (by intervenors, utilities, or other interested persons) describing how January 15 data do or do not support proposed rates for particular advocates
March 23 Draft decision adopting rates
April 22 Commission adopts hourly rates
This timing would begin for 2005 calendar year rates.
The draft resolution contemplated doing the same process in the middle of this year to derive 2004 calendar year rates. Several considerations, including our review of comments on the draft resolution, prompt us to revise our approach to 2004.
First, there are reasonable concerns and questions about how the new process will work. To address them, we will institute the rulemaking for 2005 rates soon after our adoption of this resolution. We build into the rulemaking time to work out implementation issues before launching the above schedule. This implementation phase will include submission of preliminary data sets for utilities' 2003 costs of representation in our proceedings. Following submission, there will be a workshop. Our intent is that this implementation process will help all the participants reach a common understanding on matters such as level of detail, format, and aggregation.
Second, we will use an alternative approach, discussed in the draft resolution, for establishing 2004 rates. Under this approach, we will adopt an escalation factor and allow intervenors to use that factor to calculate award requests for work done in 2004. In other words, where we have approved an hourly rate for an advocate for 2003, an intervenor may escalate that rate by the factor when seeking compensation for that advocate's work done in 2004. There will be a rebuttable presumption that a rate so escalated is reasonable.
The comments contain information that supports an escalation factor of 8%.4 In fact, 8% is at the low end of the information; however, we note that the Of Counsel surveys (which TURN and Aglet regularly rely on and which report annualized increases exceeding 10% in recent years) do not appear to reflect changes in public sector salaries. The latter, which are relevant to hourly rate determination under Section 1806, have not, at least at the State level, kept pace with private sector salaries. Consequently, under these limited circumstances, we find an 8% escalation factor is reasonable.
An intervenor may still make an individualized showing in appropriate circumstances, e.g., regarding an advocate new to our proceedings, or an advocate who (in the intervenor's opinion) had progressed to a significantly higher level of expertise since we had last set an hourly rate for that advocate. Similarly, a utility could oppose an increase to an advocate's hourly rate, whether the increase was predicated on the escalation factor or an individualized showing.
VI. Nature of the Annual Process
The annual process should provide greater certainty to intervenors and reduce controversy in particular award requests. We want to keep the annual process short and informal because we recognize that the cost of a slow burdensome process might outweigh the hoped-for benefits. Thus, we will use notice-and-comment procedure for receiving input from utilities, intervenors, and other participants. Analysis of the data should be straightforward, and we see no need for evidentiary hearings.
We will formalize the process, however, to the extent of issuing an order instituting rulemaking. The reports and comments produced for the annual process shall be submitted for filing in the corresponding rulemaking docket.
We anticipate some concern regarding confidentiality, particularly for personal financial data. We note that we have granted confidential treatment for the personal financial data submitted by intervenors to establish "significant financial hardship," which is one component of eligibility to claim intervenor compensation. Utilities must provide cost data, as described above, but they may aggregate the data and may omit the names of individuals, provided that the utility certifies that the data submitted comply fully with the requirements of Part IV above. Further, when submitting information claimed to be confidential, the party asserting the claim must submit a redacted (public) and an unredacted (sealed) version of the document containing the information and must state the statutory basis for asserting confidentiality under the Public Records Act. (Gov. Code § 6250 et seq.)
VII. Comments on Draft Resolution
As provided by Section 311(g)(1) and Rule 77.7(c) of the Commission's Rules of Practice and Procedure, this resolution was mailed in draft for public review and comment. We received comments from Aglet, Greenlining, PG&E, SBC, SCE, TURN (joined by Utility Consumers' Action Network), Valencia Water Company (Valencia), and Verizon California Inc. (Verizon). We received replies from AT&T, LIF, WorldCom, Inc. (MCI), SBC, SCE, and TURN (joined by Aglet).
In general, commenters expressed support for the proposed annual process. Several utility commenters assert that the cost information was excessive in detail, and that further steps should be taken to protect personal privacy and confidentiality when appropriate. In response, we have changed the draft resolution in various ways, in particular, adopting a proposal by TURN and Aglet to reduce the cost information burden. Further, we will not try to do a full-scale proceeding for calendar year 2004 rates this year, as contemplated by the draft resolution. Instead, we authorize (with certain qualifications) the use of an escalation factor by intervenors seeking new hourly rates for calendar year 2004, and we require data filings and a workshop in preparation for the first (calendar year 2005) formal rulemaking fully implementing the annual process.
Besides those issues discussed above, only two other issues appear in the comments. First, several commenters debate our use of "multipliers," which we mentioned in Part III of the draft resolution solely to explain that the annual process makes no change to our historic practice regarding their use. The subject is otherwise beyond the scope of this resolution. Similarly, these commenters debate whether intervenors do or do not face greater risks or delays than litigants in other forums in recovering their fees and costs. The debate is irrelevant for purposes of setting hourly rates under Section 1806.
Second, Valencia asks that "small" utilities (in essence, those with annual California revenues less than $500 million) be excluded from the annual process. We will retain the requirement that a utility participate in the annual process if we ordered the utility to pay an award for intervenor work performed in any of the three calendar years preceding the calendar year for which we are setting hourly rates. In practice, intervenor awards involving small utilities are infrequent, but they occur often enough that, consistent with § 1806, we should have data on costs of representation incurred by those utilities.
Findings
1. To date, the hourly rates used for calculating intervenor compensation awards have been developed and updated largely on a case-by-case basis.
2. An annual process for developing and updating hourly rates may be preferable to the case-by-case approach, in that the annual process may reduce controversy, avoid redundant litigation, and improve the perceived and actual fairness of the adopted hourly rates.
3. The annual process set forth in this resolution should be implemented with the understanding that the process may be refined over time.
4. It is reasonable under the circumstances to use an 8% escalation factor, as described in Part V, to set hourly rates for work performed in calendar year 2004.
Order
5. The annual process set forth in this resolution is adopted for developing and updating hourly rates of intervenors' representatives.
6. To set hourly rates for calendar year 2005, the Commission will institute a Rulemaking utilizing the adopted annual rate process. The annual process, with such refinements as the Commission may adopt over time, will be implemented through annual rulemakings, beginning with calendar year 2005.
7. For calendar year 2004 only, the Commission will use a blend (described in Part V of the discussion) of an escalation factor and current procedures to set hourly rates.
This resolution is effective today.
I hereby certify that this resolution was adopted by the Public Utilities Commission
at its regular meeting of August 19, 2004, and that the following Commissioners approved it.
MICHAEL R. PEEVEY
President
CARL W. WOOD
LORETTA M. LYNCH
GEOFFREY F. BROWN
SUSAN P. KENNEDY
Commissioners
1 The "respondent utilities" will be each utility that we have required to pay an award for intervenor work performed in any of the three calendar years preceding the calendar year for which we are setting hourly rates.
2 This listing must include in-house utility witnesses, attorneys, and project managers; however, the utility may limit the listing to those persons who have participated in Commission proceedings during the past three years or will participate in the upcoming calendar year.
3 To the extent that this information suggests logical ranges for comparing compensation rates for persons with similar experience, we encourage the utilities to group them accordingly.
4 The most remarkable information comes from PG&E, whose comments attach copies of two opinions by federal district court judge Vaughn R. Walker (N.D. Cal.). On the one hand, Judge Walker approves hourly rates for certain attorneys in 2001-02 that are somewhat lower than some rates this Commission has approved for the corresponding timeframe. On the other hand, Judge Walker uses census data for the San Francisco metropolitan area that indicate (under his methodology) an increase in the average hourly billing rate of almost 27% from 2001 to 2002. This calculation does not necessarily tell us about hourly rate escalation in more recent years, which concern us here; further, we note that Section 1806, which governs our determination of hourly rates, does not depend on census data and differs from the statute and judicial precedents to which the federal court is subject. | http://docs.cpuc.ca.gov/PUBLISHED/FINAL_RESOLUTION/39813.htm | 2014-08-20T06:48:37 | CC-MAIN-2014-35 | 1408500800767.23 | [] | docs.cpuc.ca.gov |
This section is intended to provide sample configurations and script examples common to long-term operation of a Jive installation. As opposed to the Run Book (Linux), these operations are common to a new installation, but generally not for day-to-day operation of this platform.
Jive includes several command-line tools you can use to perform maintenance tasks with your managed instance. With these tools, you can start and stop the application, upgrade the application, collect information needed by Jive support, and more.
You'll find these documented in the Application Management Command Reference.
The Jive platform is capable of encrypting HTTP requests via SSL or TLS. Enabling encryption of HTTP traffic requires several steps on a platform-managed host. For more about this, see Enabling SSL Encryption.
If you have purchased the Document Conversion module, see Setting Up a Document Conversion Node. Some documents -- including PDFs and those from Microsoft Office -- are supported in a preview view in Jive. If you want to convert content from its native format into a form that can be previewed without altering the original document, you'll need the Document Conversion module, which you must deploy on a server that is separate from your core Jive production instances.
For content search, all binary content uploaded to Jive, such as .doc, .ppt, .pdf, .txt, or .xls, goes through a process where Jive extracts the plain text from the documents so it can be used in the search index. By default, the output for this process is stored on the web app node in /usr/local/jive/applications/<instance-name>/home/search/search-text-extraction (these are encrypted files). However, we strongly recommend you change this to an NFS-mounted directory on a different node. In clustered environments, the NFS directory must be shared by all web app nodes in the cluster.
To change the location, set the jive.text.extract.dir property, for example:
export CUSTOM_OPTS="-Djive.text.extract.dir=/path/to/chosen/directory"
When you install On-Premise search onto a multi-homed machine and you use the default host value of 0.0.0.0., On-Premise search may not choose the desired network interface. Therefore, if you are running On-Premise search on a multi-homed machine, you need to explicitly configure which network interface you want to bind to by changing the host values in the serviceDirectory.json file. For more on this, see Configuring Services Directory for On-Premise Search. | http://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.sbs.online_6.0/admin/PostInstallationTasks.html | 2014-08-20T06:49:52 | CC-MAIN-2014-35 | 1408500800767.23 | [] | docs.jivesoftware.com |
Localization of user interface elements
Certain content that is displayed by the viewer is subject to localization. This includes user interface element tool tips and an error message displayed when the video cannot play.
Every textual content in the viewer that can be localized is represented by a special Viewer SDK identifier called SYMBOL. Any SYMBOL has a default associated text value for the English locale ( "en" ) supplied with the out-of-the-box viewer. It may also have values for as many other locales as needed, for example:
"en":{
    "Video360Player.ERROR":"Your Browser does not support HTML5 Video tag or the video cannot be played.",
    "PlayPauseButton.TOOLTIP_SELECTED":"Play"
},
"fr":{
    "Video360Player.ERROR":"Votre navigateur ne prend pas en charge la vidéo HTML5 tag ou la vidéo ne peuvent pas être lus.",
    "PlayPauseButton.TOOLTIP_SELECTED":"Jouer"
}
Struct google_youtube3::ChannelSectionSnippet
pub struct ChannelSectionSnippet {
    pub style: Option<String>,
    pub localized: Option<ChannelSectionLocalization>,
    pub title: Option<String>,
    pub position: Option<u32>,
    pub channel_id: Option<String>,
    pub type_: Option<String>,
    pub default_language: Option<String>,
}
Basic details about a channel section, including title, style and position.
This type is not used in any activity, and only used as part of another schema.
Fields
style: Option<String>
The style of the channel section.
localized: Option<ChannelSectionLocalization>
Localized title, read-only.
title: Option<String>
The channel section's title for multiple_playlists and multiple_channels.
position: Option<u32>
The position of the channel section in the channel.
channel_id: Option<String>
The ID that YouTube uses to uniquely identify the channel that published the channel section.
type_: Option<String>
The type of the channel section.
default_language: Option<String>
The language of the channel section's default title and description. | https://docs.rs/google-youtube3/1.0.5+20170130/google_youtube3/struct.ChannelSectionSnippet.html | 2020-09-18T10:47:51 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.rs |
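As a rough illustration, a snippet can be built field by field; the generated google-apis structs implement Default, so unused fields can be left to it (the values below are arbitrary placeholders):
let snippet = ChannelSectionSnippet {
    title: Some("Popular uploads".to_string()),
    type_: Some("singlePlaylist".to_string()),
    position: Some(0),
    ..Default::default()
};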
# How to work with our Figma designs
While working with our figma designs there are two concepts that you need to be familiar with:
- All of the components (not always with the same names as in the code) can be found under the 'components' section in the bottom-left corner.
- Most of the components have a style applied to them. It means that if you click on a text field of a particular element you can find the name of a style (instead of a raw color/size) referring to the style guide. You can find these styles among the global SCSS vars.
GridView.SynchronizeVisual(BaseView) Method
Synchronizes a View's display settings with the specified View.
Namespace: DevExpress.XtraGrid.Views.Grid
Assembly: DevExpress.XtraGrid.v20.1.dll
Declaration
public override void SynchronizeVisual( BaseView viewSource )
Public Overrides Sub SynchronizeVisual( viewSource As BaseView )
Parameters
viewSource — A BaseView object whose display settings are applied to the current View.
Remarks
This method overrides the ColumnView.SynchronizeVisual method to implement synchronization of display settings specific to Grid Views. The SynchronizeVisual method forces the current View to have the same look and feel as the View specified by the method's parameter.
The method is used internally. Generally, you will have no need to call it in your applications.
ADACS GPU Acceleration of LALSuite¶
As part of the 2018A semester of the ADACS Software Support Program, ADACS developer Dr. Greg Poole developed a GPU accelerated version of the LALSuite XLALSimIMRPhenomPFrequencySequence routine, which generates gravitational wave models of binary mergers of compact objects. Details and results of this work are presented here. Subsequent sections will describe how to download, install and use the ADACS GPU-accelerated version of LALSuite as well as present a Python package developed to aid testing and to demonstrate the use of the LALSuite SWIG wrappers which permit users to interface these routines from Python.
LALSuite Development Details¶
Users need to be aware of the following minor changes to LALSuite:
- A very slight change to the compilation procedure.
- An additional parameter has been added to SimIMRPhenomPFrequencySequence() to accommodate a new one-time-allocation buffer.
- Two function calls have been added for allocating and deallocating this buffer.
Changes to compilation¶
To compile the code with GPU acceleration, an NVidia GPU must be available with all relevant software installed, and the configuration step of the compilation must have the --enable-cuda switch added. See the Installation section for more details.
Changes to the LALSimulation API¶
When calling lalsimulation.SimIMRPhenomPFrequencySequence, an additional parameter has been added at the end of the function’s parameter list. This permits the passing of a one-time-allocated memory buffer (returned from a call to lalsimulation.PhenomPCore_buffer()), greatly increasing the speed of repeated calls (during the generation of MCMC chains, for example). Pass a NULL pointer (or None in Python) to ignore this functionality.
The LALSuite calls for allocating and deallocating this buffer are as follows:
- buf=lalsimulation.PhenomPCore_buffer(n_freq_max, n_streams) and
- lalsimulation.free_PhenomPCore_buffer(buf), respectively.
Some notes on the LALSimulation buffer¶
The memory buffer takes two parameters as input: n_freq_max and n_streams. The first should be set to the maximum number of frequencies that need to be generated by calls to lalsimulation.SimIMRPhenomPFrequencySequence using it, and the second is the number of asynchronous streams to be used.
The ADACS implementation of GPU acceleration for SimIMRPhenomPFrequencySequence() is asynchronous-enabled. This means that multiple independent streams can run concurrently, allowing simultaneous uploading of input arrays and downloading of results from the card. Presently, because time was not available to alter the LALSuite memory allocators to enable the allocation of pinned memory, this implementation is suboptimal and it is recommended that asynchronous functionality not be used by setting n_streams to be less-than-or-equal to 0. See Figures 1 and 2 below for a comparison.
Note
Be sure to call lalsimulation.free_PhenomPCore_buffer(buf) on the buffer returned by buf=lalsimulation.PhenomPCore_buffer() to free the memory allocated for the buffer, when it is no longer needed.
Note
See the lal_cuda.SimIMRPhenomP module and the lal_cuda executables for concrete practical examples of how to use the ADACS branch of LALSuite.
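For orientation, a minimal sketch of the buffer lifecycle looks like the following (the n_freq_max value of 2048 is an arbitrary assumption, and the long argument list of SimIMRPhenomPFrequencySequence is elided here — see the lal_cuda examples for complete calls):
import lalsimulation as lalsim
buf = lalsim.PhenomPCore_buffer(2048, 0)  # n_streams <= 0 selects the recommended synchronous mode
# ... repeated calls to lalsim.SimIMRPhenomPFrequencySequence(..., buf) go here ...
lalsim.free_PhenomPCore_buffer(buf)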
The lal_cuda Python package¶
A Python package (called lal_cuda) for running regression tests on the LALSuite routines discussed above – and for illustrating their use via Python – has been developed. See the lal_cuda Python API and lal_cuda Executables sections for an account of it’s API and of the executable scripts it provides.
Performance gains¶
The performance gains obtained from this work are presented in the two figures below. In short: speed-up factors of as high as approximately 8.5 have been obtained, although this result is a strong function of the number of frequencies being simulated.
Figure 1: Time-per-call (in milliseconds) of lalsimulation.SimIMRPhenomPFrequencySequence.
Figure 2: Factor of speed-ups of calls to lalsimulation.SimIMRPhenomPFrequencySequence relative to the baseline case.
dtype=object)
array(['_images/timings.speedup.png', '_images/timings.speedup.png'],
dtype=object) ] | adacs-ss18a-rsmith-python.readthedocs.io |
trackingServerSecure
Adobe collects data on your site by receiving an image request generated by the visitor. The trackingServerSecure variable determines the location an image request is sent over HTTPS. It also determines the location visitor cookies are stored. If this variable is not defined correctly, your implementation can experience data loss.
Changing this value makes AppMeasurement look for cookies in a different location. Unique visitor count can temporarily spike in reporting as visitor cookies are set in the new location.
SSL Tracking Server in Adobe Experience Platform Launch
This value is set in the SSL Tracking Server field of the Adobe Analytics extension in Adobe Experience Platform Launch.
If this field is left blank, it defaults to the value in the trackingServer variable.
s.trackingServerSecure in AppMeasurement and Launch custom code editor
The s.trackingServerSecure variable is a string that contains the location to send image requests. It is almost always a subdomain of your site. Modern privacy practices in browsers commonly make third-party cookies unreliable. If this variable is blank, it uses the value in the s.trackingServer variable.
The value for this variable is almost always a first-party domain, such as data.example.com . See First-party cookies in the Experience Cloud in the Core Services user guide for more information on the first-party cookie process.
The individual who initially configures the first-party cookie implementation also defines the domain and subdomain used. For example:
s.trackingServerSecure = "data.example.com";
CNAME records usually point to a subdomain on ssl.d1.sc.omtrdc.net.
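For illustration only, such a record might look like the following; the exact target hostname is issued by Adobe during the secure first-party cookie implementation, so treat the namespace below as a placeholder:
data.example.com.  IN  CNAME  <your-namespace>.ssl.d1.sc.omtrdc.net.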
Components Installer Configuration
From Joomla! Documentation
Contents
Description
Installer Options configuration allows setting of parameters used globally for Installer.
How to Access
From the administrator area, select Extensions → Manage from the drop-down menu of the Administration screen. Click the Options button on top.
Screenshot
Details
Preferences Tab
- Joomla! Extensions Directory. (Show message/Hide message) Show or hide the information at the top of the installer page about the Joomla! Extensions Directory.
- Updates Caching (in hours). For how many hours should Joomla cache update information. This is also the cache time for the Update Notification Plugin, if enabled.
- Minimum Stability. The minimum stability of the extension updates you would like to see. Development is the least stable, Stable is production quality. If an extension doesn't specify a level it is assumed to be Stable.
Permissions Tab
This section shows permissions for Installer. The screen shows as follows.
At the top left of the Installer Options window you will see the toolbar.
The functions are:
- Save. Saves the Installer options and stays in the current screen.
- Save & Close. Saves the Installer options and closes the current screen.
- Cancel. Closes the current screen and returns to the previous screen without saving any modifications you may have made.
- Help. Opens this help screen.
Price Navigation
Price navigation can be used to distribute products by price range in layered navigation. You can also split each range in intervals. There are a few ways to calculate price navigation:
- Automatic (Equalize Price Ranges)
- Automatic (Equalize Product Counts)
- Manual.
Example: Price navigation steps
Configure price navigation
On the Admin sidebar, go to Stores > Settings > Configuration.
In the left panel, expand Catalog and choose Catalog underneath.
Expand the Layered Navigation section.
By default, Display Product Count is set to Yes. If necessary, deselect the Use system value checkbox to change this setting.
Set Price Navigation Steps Calculation to one of the methods in the following sections.
When complete, click Save Config.
Method 1: Automatic (equalize price ranges)
Leave Price Navigation Steps Calculation set to Automatic (Equalize Price Ranges) (default). This setting uses the standard algorithm for price navigation.
Method 2: Automatic (equalize product counts)
If necessary, first deselect the Use system value checkbox to change these settings.
Set Price Navigation Steps Calculation to Automatic (equalize product counts).
To display a single price when multiple products have the same price, set Display Price Interval as One Price to Yes.
For Interval Division Limit, enter the threshold for a number of products within a price range.
The range cannot be further split beyond this limit. The default value is 9.
Automatic (equalize product counts)
Method 3: Manual
If necessary, first deselect the Use system value checkbox to change these settings.
Set Price Navigation Steps Calculation to Manual.
Enter a value to determine the Default Price Navigation Step.
Enter the Maximum Number of Price Intervals allowed, up to 100.
TargetedTriggerAction.Target Property
Gets the target object. If TargetObject is set, returns TargetObject. Else, if TargetName is not set or cannot be resolved, defaults to the AssociatedObject.
Namespace: System.Windows.Interactivity
Assembly: System.Windows.Interactivity (in system.windows.interactivity.dll)
Syntax
'Declaration Protected ReadOnly Property Target As Object
'Usage Dim value As Object value = Me.Target
protected Object Target { get; }
protected: property Object^ Target { Object^ get (); }
/** @property */ protected Object get_Target ()
protected function get Target () : Object
Remarks
In general, this property should be used in place of AssociatedObject in derived classes.
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
See Also
Reference
TargetedTriggerAction Class
TargetedTriggerAction Members
System.Windows.Interactivity Namespace | https://docs.microsoft.com/en-us/previous-versions/visualstudio/design-tools/expression-studio-4/ff726388(v=expression.40) | 2020-09-18T12:24:44 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.microsoft.com |
Artifacts#
Note: Using Artifacts during the beta period is free. Once the artifacts system is in the general availability, additional charges will apply based on usage.
Artifacts are used to persist files that are either final deliverables or intermediary/debugging files.
Semaphore has three levels of artifact store: job, workflow and project.
Each job and workflow gets its own namespaced artifact store, which is handy for storing debugging data or build artifacts that need to be promoted through the pipeline.
On the project level there is a single artifact store that usually stores final deliverables of CI/CD pipelines.
Job Artifacts#
Each job has an artifact store. You can view stored files from a job's page. Look for "Job Artifacts" button. The main use-case for job level artifacts is storing logs, screenshots and other types of files that make debugging easier.
To upload files to the job level artifacts store, use the built-in artifact CLI.
artifact push job <my_file_or_dir>
If you want to upload artifacts only in the case of a failed job, using epilogue in combination with the on_fail condition is a usual pattern.
blocks:
  - name: Build app
    task:
      jobs:
        - name: Job 1
          commands:
            - make test
      epilogue:
        on_fail:
          commands:
            - artifact push job logs/test.log
            - artifact push job screenshots
Since job level debugging artifacts become irrelevant some time after a job has finished, you can set artifacts to expire with the --expire-in flag.
artifact push job --expire-in 2w logs/test.log
For more details about uploading artifacts check the artifact CLI reference.
Workflow Artifacts#
As in the case of jobs, each workflow also gets its own artifact store. On the workflow page look for the "Workflow Artifacts" button.
Workflow artifacts can be used for storing various build and test reports and build artifacts. Promoting build artifacts through blocks and pipelines of a workflow is another common use-case.
The following example illustrates how the executable app, available in the workflow level artifact store, can be downloaded in the downstream blocks of the pipeline. In the same way, artifacts can be downloaded from any other pipeline of the same workflow.
blocks:
  - name: Build app
    task:
      jobs:
        - name: Make
          commands:
            - make
            - artifact push workflow app
  - name: Test
    task:
      jobs:
        - name: Unit tests
          commands:
            - artifact pull workflow app
            - make test
        - name: Integration tests
          commands:
            - artifact pull workflow app
            - make integration-tests
For more details about uploading and downloading artifacts see artifact CLI reference.
Project Artifacts#
Project level artifacts are great for storing final deliverables of the CI/CD process. To access them in the UI, look for the "Project Artifacts" button on the project page.
To upload project artifacts from any job of any workflow you need to use:
artifact push project myapp-v1.25.tar.gz
Similarly, if you want to download a file from the project level artifact store, use the pull command.
artifact pull project myapp-v1.25.tar.gz | https://docs.semaphoreci.com/essentials/artifacts/ | 2020-09-18T10:15:42 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.semaphoreci.com |
Upgrade Sensu
Upgrade to Sensu Go 6.0 from a 5.x deployment
IMPORTANT: Before you upgrade to Sensu 6.0, use sensuctl dump to create a backup of your existing installation.
You will not be able to downgrade to a Sensu 5.x version after you upgrade your database to Sensu 6.0 in step 3 of this process.
To upgrade your Sensu Go 5.x deployment to 6.0:
Install the 6.0 packages or Docker image.
Restart the services.
NOTE: For systems that use systemd, run sudo systemctl daemon-reload before restarting the services.
# Restart the Sensu agent
sudo service sensu-agent restart
# Restart the Sensu backend
sudo service sensu-backend restart
Run a single upgrade command on one of your Sensu backends to migrate the cluster:
sensu-backend upgrade
- Add the --skip-confirm flag to skip the confirmation in step 4 and immediately run the upgrade command.
sensu-backend upgrade --skip-confirm
NOTE: If you are deploying a new Sensu 6.0 cluster rather than upgrading from 5.x, you do not need to run the sensu-backend upgrade command.
Enter y or n to confirm if you did not add the --skip-confirm flag. If the database has already been upgraded, you will see a response that the upgrade command has already been run.
Upgrade to the latest 5.x.
The library that utilizes different kinds of libraries fast, without extreme complexity, with support for TorneAPI.
While waiting, PHPDocs are built every night - and they are located at TorneLIB-5.0/
It was never released. | https://docs.tornevall.net/exportword?pageId=5537814 | 2020-09-18T11:00:40 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.tornevall.net |
Introduction - Up and running in 10 minutes
At SCADE, we are working on streamlining the installation over the next couple of versions. Thanks to all that provided valuable feedback. Given a fast internet connection, installation should not take longer than 10 minutes. Please find the instructions below
Lastest installation changes
With the 0.9.14 release, we had to add Java SDK installation back to the installation steps because of the Apple Notarization Process.
With version 0.9.15 we fixed a problem that stopped SCADE from compiling on Android if you had multiple JVMs installed
Prerequisites
Supported versions
SCADE 1.0 is supporting the following versions
- Swift 5.0 on iOS
- 🆕Swift 5.0 on Android
- iOS X 10.14 and greater
- Android 5.0 and greater
Requirements
- OS requirement : Swift, and therefore SCADE, requires OSX version 10.14 or higher
- XCode 10 including Swift 5.0 is installed on the machine
- Android NDK R11c up to release 17
- Android SDK 24.4.1 or higher
- Newly added step Installation of Java SDK 8, not higher
- Harddisk space: You need about 800 MB of hard disk space
Ensure correct Java SDK version
Check for correct version
Your requirements for SCADE are as such
- a Java 8 JRE to run the SCADE IDE
- a Java 8 SDK to compile (every SDK includes a JRE)
So let us check if you have a Java 8 SDK installed. In a terminal window, run
- /usr/libexec/java_home -V command and the
- java -version command
You should have an output similar to this:
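Illustrative output (version numbers, vendor strings, and paths will differ on your machine):
Matching Java Virtual Machines (1):
    1.8.0_252, x86_64: "Zulu 8" /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home

openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b14)
OpenJDK 64-Bit Server VM (build 25.252-b14, mixed mode)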
Install Java 8 SDK if not available
In case you have no Java 8 SDK listed, please download it here.
Download a Java8 SDK from i.e. Zulu and install DMG image
After downloading the DMG file, execute the installer. Java 8 SDK will be installed
Set the -vm parameter if you have multiple Java virtual machines installed
You need to set the -vm parameter if you have multiple JRE installed to make sure SCADE knows where to find the correct JRE.
macOS starts Java programs using the Java framework that is listed at the top of the output of the /usr/libexec/java_home -V command.
If you have more than one Java framework installed and the Java 8 one is not at the top of the list, you need to tell SCADE the location of the Java 8 framework. See this example:
- The top entry is a Java 12 framework and Java 12 will be used
- But you want the Java 8 entry highlighted in green
So to specify your JRE, execute the following steps:
Open the eclipse.ini file located at /Applications/Scade.app/Contents/Eclipse/ in a text editor
Add the two lines into the file as shown below
- Add one line starting with -vm directly above the -vmargs line
- Add the location of the JRE framework and add /bin
- Save the file
See a sample eclipse.ini file:
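For illustration, the relevant part of eclipse.ini might look like this — the JDK path below is an assumption; use the Java 8 path reported by /usr/libexec/java_home -V on your machine, with /bin appended, and leave the existing -vmargs entries unchanged:
-vm
/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/bin
-vmargs
...existing entries...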
You are all set and SCADE should startup nicely.
Step 1 - Install SCADE
Download and extract SCADE from here
- Download SCADE here
- If your Mac did not automatically unzip the zip file, doubleclick to open the archive
- Scade.dmg becomes available in your folder
- Double click on Scade.dmg
- The Install dialog opens. Drag and drop Scade into the Applications folder
- Confirm that you want to overwrite any content
Start Scade
- Run SCADE from your applications directory
You find SCADE using Command + Space and searching for SCADE
Start it
2a. If you get an error like the one displayed below, make sure to set the correct VM as described here: Setup SCADE with multiple VMs installed
2b. When you start SCADE for the very first time, you need to confirm this dialog
Choose Workspace
You will now be asked to provide a workspace folder. The workspace folder will contain the following assets:
- SCADE projects
- SCADE SDK and binary built during compilation process
- SCADE GitHub downloads
- SCADE Fusion libraries
Workspace location
Please make sure that the workspace location you choose is OUTSIDE of the directory where SCADE resides. In this way, you can update the software without impacting your work.
Congrats. You installed SCADE successfully and can start developing and give it a first try. The following setup steps are necessary for compiling to Android:
Step 2 - Install Android Studio
SCADE uses the following parts of Android Studio
- the Android SDK
- the Android NDK
- (optionally) the Android Simulator to run Android binaries
If you haven't previously installed Android Studio, you must install it now.
- Download and install Android Studio here
- Start Android Studio, open any project or create new empty project
- Select Tools → SDK Manager in the main menu
- In the SDK Platforms tab, select the following components:
- Android 6.0
- In the SDK Tools tab, select the following components:
- Android SDK Platform-Tools
- Android SDK Tools
- Google Play services
- NDK
- Support Repository / Google Repository
- Click Apply and wait for selected components to be installed
Results
Android SDK will be installed in the /Android/Library/sdk directory
Android NDK will be installed in the /Android/Library/sdk/ndk-bundle
Install Android NDK v17
SCADE supports Android NDK v17 and lower at the moment. The current version of Android Studio comes with a different version. Please install Android NDK v17 following these steps
Download NDK here
Extract the NDK to a directory of your choice, for example /Users/<username>/android-ndk-r17c.
We use /Users/flangel/android-ndk-r17c in this example
Step 3 - Verify Swift and XCode Location
Especially if you have multiple versions of Xcode installed, it's critical that you are using the correct version. Double-check using these commands:
- swift -version
- xcrun swift -version
All these commands should point to Swift 5.0
If this doesn't show Swift 5.0, try setting the command line tools as described in Command Line Tools Setup.
Step 4 - Configure Android support
The following settings make it possible to compile to Android. Without the settings, you will be able to develop and run on iOS and in our SCADE simulator, but not be able to compile to Android.
Open the preferences settings and configure the paths. When you press the apply button, we do check the validity of the path.
Here are the default directories from the above installation steps:
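Based on the steps above, typical values are (adjust if you installed to different locations):
Android SDK path: /Android/Library/sdk
Android NDK path: /Users/<username>/android-ndk-r17c (the r17c directory you extracted in Step 2)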
Step 5 - Configure Java SDK support
Make sure that your Java path is the same one you used for your Java 8 SDK installation.
Step 6 - Configure iOS support
The following settings make it possible to compile to iOS. Without the settings, you will be able to develop and run on iOS and in our SCADE simulator, but not be able to compile to iOS.
Configure files
Apple requires two different files for compiling apps
- a certificate file for signing the app, for instance cert.p12
- a mobile provisioning file, for instance prov.mobileprovision
You need to specify the location of these files in the build file of your app.
- Open the build file build.yaml that can be found in the root directory of your project
- Scroll to the codeSignSettings section
- Modify the entries as shown below to reflect the location of your files
Make sure certificate is in keychain
The certificate needs to be part of the keychain. Double-click on it and make sure it's part of your keychain:
Certificates and password support
Certificates can be password protected. Currently, we don't support password protected certificates, but will add this feature in the future.
Set Command Line Tool option
Sometimes the command line tools use the wrong version.
- Run XCode 10.x
- Set Command Line Tools to use 10.x
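Alternatively, the same selection can be made from a terminal — the path below assumes Xcode 10 is installed as /Applications/Xcode.app:
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
xcrun swift -version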

Claiming DAPP Tokens¶
Automatic¶
The auto claim mechanism does not require participants to push an action themselves to claim the tokens. This is handled by the website automatically at the end of each cycle.
Manual¶
The manual claim function is always available and participants can claim their DAPP Tokens immediately after the cycle ends by sending an explicit action depending on the track they selected.
Instant Registration Track
Regular Registration Track
Login with the wallet of your choice and enter your account in the “payer” field (YOUR_ACCOUNT_HERE) and hit “Push Transaction”. | https://docs.liquidapps.io/en/v2.0/tokens/claiming.html | 2020-09-18T09:42:40 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.liquidapps.io |
Middleware
Returning Method Not Allowed
When the path matches, but the HTTP method does not, your application should return a 405 Method Not Allowed status in response.
To enable that functionality, we provide Mezzio\Router\Middleware\MethodNotAllowedMiddleware via the mezzio-router package.
This middleware triggers when the following conditions occur:
- The request composes a RouteResult attribute (i.e., routing middleware has completed), AND
- the route result indicates a routing failure due to the HTTP method used (i.e., RouteResult::isMethodFailure() returns true).
When these conditions occur, the middleware will generate a response:
- with a 405 Method Not Allowed status, AND
- an Allow header indicating the HTTP methods allowed.
Pipe the middleware after the routing middleware; if using one or more of the implicit methods middleware, this middleware must be piped after them, as it will respond for any HTTP method!
$app->pipe(RouteMiddleware::class);
$app->pipe(ImplicitHeadMiddleware::class);
$app->pipe(ImplicitOptionsMiddleware::class);
$app->pipe(MethodNotAllowedMiddleware::class);
// ...
$app->pipe(DispatchMiddleware::class);
(Note: if you used the Mezzio skeleton, this middleware is likely already in your pipeline.)
Sample Programs: Motion Trails shows how to create motion trails in panda. The basic process is this: after rendering the scene, you copy the scene into a texture. You apply that texture to a full-screen quad. You integrate that quad into the rendering of the next frame. That creates a feedback loop.
The basic steps are: create a texture that will hold the contents of the main window. Tell the main window to copy its output into this texture using setupRenderTexture. Obtain a full-screen quad containing this texture using getTextureCard. Position this quad in the scene.
You can get a lot of different effects by simply moving the quad a little bit. By putting it behind the actor, the actor is fully visible, and surrounded by trails. By putting it in front of the actor, the trails overlap the actor. By rotating the quad slightly, you get a whirlpool. By offsetting it or scaling it, you can cause the trails to move away from the actor. You can colorize it, adjust its transparency, and otherwise tweak it in a number of ways.
Back to the List of Sample Programs:
Sample Programs in the Distribution
Article sections
Overview
If you are an author, you may want to integrate Easy Social Share Buttons for WordPress right into your WordPress product.
A separate "Extended" license is required for each of your products where you will bundle the plugin.
What Will Be The Difference For Your Customers Using Bundled Plugin Version
The bundled inside product version of Easy Social Share Buttons for WordPress has the same core features as the one they can buy directly. The differences are:
- The bundled version can’t be registered by users (with the code of your product). Your customers will see a theme integrated license message instead of not registered.
- The bundled versions do not have access to direct support from us (AppsCreo) as a plugin author. You will be responsible to answer questions from your customers or contact us for issues you can’t solve.
- The bundled versions do not receive automatic plugin updates. You as a developer should provide an update for your customers.
- You are not allowed to bundle the free plugin extensions that come with the plugin. And your customers won’t have access to download those free extensions.
Versions of Easy Social Share Buttons bundled into a product for sale can be updated to a full version by your customers. To do this, they should make a direct purchase of the plugin and activate it with the purchase code. The activation will change their state to activated (not theme integrated) and they will have access to all features listed above.
Theme Integration Tutorial
Set Theme Integrated License of Plugin
This step is important to avoid further messages from your customers “Why plugin is not activated or not licensed”. To complete and change the license state to theme activated you need to place the following simple code inside functions.php file of your theme. This code can be also placed in a plugin (separate or core plugin of your theme/product).
add_filter('essb_is_theme_integrated', 'my_theme_essb_is_in_theme');
function my_theme_essb_is_in_theme() {
    return true;
}
Adding Plugin With TGM Activation Class
To meet new theme submission requirements follow these steps:
- Include TGM Activation class into your theme PHP files (ex. in functions.php file)
- Add settings for TGM Activation class and hook them to the tgmpa_register action
<?php /** * Include the TGM_Plugin_Activation class. */ require_once dirname( __FILE__ ) . '/class-tgm-plugin-activation.php'; add_action( 'tgmpa_register', 'my_theme_register_essb_plugins' ); /** * Register the required plugins for this theme. * * The variable passed to tgmpa_register_plugins() should be an array of plugin * arrays. * * This function is hooked into tgmpa_init, which is fired within the * TGM_Plugin_Activation class constructor. */ function my_theme_register_essb_plugins() { /** * Array of plugin arrays. Required keys are name and slug. * If the source is NOT from the .org repo, then source is also required. */ $plugins = array( // This is an example of how to include a plugin pre-packaged with a theme array( 'name' => 'Easy Social Share Buttons for WordPress', // The plugin name 'slug' => 'easy-social-share-buttons3', // The plugin slug (typically the folder name) 'source' => get_stylesheet_directory() . '', // The plugin source 'required' => true, // If false, the plugin is only 'recommended' instead of required 'version' => '4.2', // ) ); // Change this to your theme text domain, used for internationalising strings $theme_text_domain = 'tgmpa'; /** * Array of configuration settings. Amend each line as needed. * If you want the default strings to be available under your own theme domain, * leave the strings uncommented. * Some of the strings are added into a sprintf, so see the comments at the * end of each line for what each argument will be. */ $config = array( 'domain' => $theme_text_domain, // Text domain - likely want to be the same as your theme. 'default_path' => '', // Default absolute path to pre-packaged plugins 'parent_menu_slug' => 'themes.php', // Default parent menu slug 'parent_url_slug' => 'themes.php', // Default parent URL slug 'menu' => 'install-required-plugins', // Menu slug 'has_notices' => true, // Show admin notices or not 'is_automatic' => false, // Automatically activate plugins after installation or not 'message' => '', // Message to output right before the plugins table 'strings' => array( 'page_title' => __( 'Install Required Plugins', $theme_text_domain ), 'menu_title' => __( 'Install Plugins', $theme_text_domain ), 'installing' => __( 'Installing Plugin: %s', $theme_text_domain ), // %1$s = plugin name 'oops' => __( 'Something went wrong with the plugin API.', $theme_text_domain ), 'notice_can_install_required' => _n_noop( 'This theme requires the following plugin: %1$s.', 'This theme requires the following plugins: %1$s.' ), // %1$s = plugin name(s) 'notice_can_install_recommended' => _n_noop( 'This theme recommends the following plugin: %1$s.', 'This theme recommends the following plugins: %1$s.' ), // %1$s = plugin name(s) 'notice_cannot_install' => _n_noop( 'Sorry, but you do not have the correct permissions to install the %s plugin. Contact the administrator of this site for help on getting the plugin installed.', 'Sorry, but you do not have the correct permissions to install the %s plugins. Contact the administrator of this site for help on getting the plugins installed.' ), // %1$s = plugin name(s) 'notice_can_activate_required' => _n_noop( 'The following required plugin is currently inactive: %1$s.', 'The following required plugins are currently inactive: %1$s.' ), // %1$s = plugin name(s) 'notice_can_activate_recommended' => _n_noop( 'The following recommended plugin is currently inactive: %1$s.', 'The following recommended plugins are currently inactive: %1$s.' 
), // %1$s = plugin name(s) 'notice_cannot_activate' => _n_noop( 'Sorry, but you do not have the correct permissions to activate the %s plugin. Contact the administrator of this site for help on getting the plugin activated.', 'Sorry, but you do not have the correct permissions to activate the %s plugins. Contact the administrator of this site for help on getting the plugins activated.' ), // %1$s = plugin name(s) 'notice_ask_to_update' => _n_noop( 'The following plugin needs to be updated to its latest version to ensure maximum compatibility with this theme: %1$s.', 'The following plugins need to be updated to their latest version to ensure maximum compatibility with this theme: %1$s.' ), // %1$s = plugin name(s) 'notice_cannot_update' => _n_noop( 'Sorry, but you do not have the correct permissions to update the %s plugin. Contact the administrator of this site for help on getting the plugin updated.', 'Sorry, but you do not have the correct permissions to update the %s plugins. Contact the administrator of this site for help on getting the plugins updated.' ), // %1$s = plugin name(s) 'install_link' => _n_noop( 'Begin installing plugin', 'Begin installing plugins' ), 'activate_link' => _n_noop( 'Activate installed plugin', 'Activate installed plugins' ), 'return' => __( 'Return to Required Plugins Installer', $theme_text_domain ), 'plugin_activated' => __( 'Plugin activated successfully.', $theme_text_domain ), 'complete' => __( 'All plugins installed and activated successfully. %s', $theme_text_domain ), // %1$s = dashboard link 'nag_type' => 'updated' // Determines admin notice type - can only be 'updated' or 'error' ) ); tgmpa( $plugins, $config ); } ?>
UI Virtualization
RadGridView's API supports UI Virtualization, which processes only those visual elements that are loaded in its viewable area. This reduces the memory footprint of the application and speeds up the loading time, thus immensely enhancing the UI performance.
The grid control utilizes horizontal and vertical virtualization and introduces container recycling for speed improvement and reduction in memory usage. This is of great importance when the control is bound to large data sets. The container recycling pushes further the speed of horizontal and vertical scrolling, allowing RadGridView to reuse the existing containers for the different data items from the source collection instead of creating new ones.
You should not work with the visual elements of RadGridView (GridViewCell, GridViewRow, etc.) directly, as this will result in inconsistent behavior due to the container recycling mechanism. Instead, you should use the underlying data items as explained in the Style Selectors section.
These techniques, combined with the outstanding LINQ-based data engine, guarantee fast performance.
Both EnableColumnVirtualization and EnableRowVirtualization properties of RadGridView are set to True by default.
If UI Virtualization is disabled, all the visual elements will be loaded once RadGridView is visualized and its items are populated. This can lead to serious performance issues and additional loading time. Disabling the virtualization is strongly discouraged.
Do not place RadGridView in controls/panels which will measure it with infinity as this will disable the UI Virtualization. For example, ScrollViewer, StackPanel and Grid with Row.Height=Auto or Column.Width=Auto will measure it in that way. You can place it in RowDefinition with Height="*" instead.
You can check the topic on Styling or content mixed-up on scrolling on some issues with styling the visual elements. | http://docs.telerik.com/devtools/silverlight/controls/radgridview/features/ui-virtualization | 2017-06-22T18:35:50 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.telerik.com |
Backing Up Files
Please note: only home directories on configured workstations will be backed up by the server.
Backups are currently scheduled for Tue., Thur., and Saturday mornings at 1:00am.
The external drives can be used to setup a simple backup procedure. One way would be to invoke the rsync command:
rsync -avux --progress 'from' 'to'
Other options for backing up:
There are a growing number of options to backup via the web, some of which are even free for small amounts of data (<2-5 GB typically). This seems to be a particularly good way to backup PC and Mac laptops. One nice option. | https://docs.astro.columbia.edu/wiki/Backing%20Up%20Data | 2017-06-22T18:26:14 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.astro.columbia.edu |
Decred Pull Requests and Contributions¶
All code for Decred is kept on GitHub. This document provides some basic info on how we handle code contributions and some basic info on how to contribute. It is based partially on a similar document from btcsuite.
Initial Preparation¶
A good first step is to read the Code Contribution Guidelines documentation to get a good understanding of the policies used by the project. That document is primarily focused on the Go codebase but it is still a good start.
The following examples will be split into two sections, one for the Go projects (dcrd, dcrwallet, gominer, etc), and one for projects that do not use Go (decrediton, Paymetheus, dcrdocs, etc). In all cases, be sure to check out the README.md in each project if you need help setting the particular project up.
General Model¶
With this process we are trying to make contributing simple while also maintaining a high level of code quality and clean history. Members of the Decred team must follow the same procedures as external contributors.
Our model for hacking code code changes on a branch.
- Push these changes to your own forked GitHub repo.
- When your code or you can ask someone on irc/slack to review.
- ALL code must be reviewed and receive at least one approval before it can go in. Only team members can give official approval, but comments from other users are encouraged.
- If there are changes requested, make those changes and commit them to your local branch.
- Push those changes to the same branch you have been hacking on. They will show up in the PR that way and the reviewer can then compare to the previous version.
- Once your code is approved,.
Go¶
For projects using Go, you can follow this procedure. dcrd will be used as the example. This assumes you already have go1.6 or newer installed and a working
$GOPATH.
One time setup¶
- Fork dcrd on GitHub
- Run the following commands to obtain dcrd, all dependencies, and install it:
$ mkdir -p $GOPATH/src/github.com/decred/ $ git clone $GOPATH/src/github.com/decred/dcrd $ go get github.com/Masterminds/glide $ cd $GOPATH/src/github.com/decred/dcrd $ glide install $ go install $(glide nv)
- Add a git remote for your fork:
$ git remote add <yourname><yourname>/dcrd.git
Other projects¶
For projects not written in Go, the initial setup will depend on the project. I will use dcrdocs as an example here, but the basic steps are the same for any of the projects. Specific setup can be seen in the project README.md (for example how to install mkdocs to work on dcrdocs or electron for decrediton).
One time setup¶
- Fork dcrdocs on GitHub
- Run the following commands to obtain dcrd, all dependencies, and install it:
$ mkdir -p code/decred $ cd code/decred $ git clone $ cd dcrdocs
- Add a git remote for your fork:
$ git remote add <yourname><yourname>/dcrdocs.git
- Create a pull request with the GitHub UI. You can request a reviewer on the GitHub web page or you can ask someone on irc/slack.
There are a few other things to consider when doing a pull request. In the case of the Go code, there is significant test coverage already. If you are adding code, you should add tests as well. If you are fixing something, you need to make sure you do not break any existing tests. For the Go code, there is a script
goclean.sh in each repo to run the tests and the any static checkers we have. NO code will be accepted without passing all the tests. In the case of the node.js code (decrediton) all code must pass eslint. You can check this with the command
npm run lint.
Finally, each repo has a LICENSE. Your new code must be under the same LICENSE as the existing code and assigned copyright to ‘The Decred Developers’. In most cases this is the very liberal ISC license but a few repos are different. Check the repo to be sure.
If you have any questions for contributing, feel free to ask on irc/slack or GitHub. Decred team members (and probably community members too) will be happy to help. | https://docs.decred.org/advanced/contributing/ | 2017-06-22T18:19:25 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.decred.org |
RT 4.4.1 Documentation
Writing extensions
- Introduction
- Getting Started
- Extension Directories
- Callbacks
- Adding and Modifying Menus
- Changes to RT
- Preparing for CPAN
Introduction
RT has a lot of core features, but sometimes you have a problem to solve that's beyond the scope of just configuration. The standard way to add features to RT is with an extension. You can see the large number of freely available extensions on CPAN under the RT::Extension namespace to get an idea what's already out there. We also list some of the more useful extensions on the Best Practical website at
After looking through those, you still may not find what you need, so you'll want to write your own extension. Through the years there have been different ways to safely and effectively add things onto RT. This document describes the current best practice which should allow you to add what you need and still be able to safely upgrade RT in the future.
Getting Started
There are a few modules that will set up your initial sandbox for you to get you started. Install these modules from CPAN:
- Module::Install::RTx
Sets up your extension to be installed using Module::Install.
- Dist::Zilla::MintingProfile::RTx
Provides some tools for managing your distribution. Handy even if you're not putting your code on CPAN.
If this is your first time using Dist::Zilla, you can set up your CPAN details by running:
dzil setup
You can read about Dist::Zilla and the
dzil command at.
Change to the directory that will be the parent directory for your new extension and run the following, replacing Demo with a descriptive name for your new extension:
dzil new -P RTx RT-Extension-Demo
You'll see something like:
[DZ] making target dir /some-dir/RT-Extension-Demo [DZ] writing files to /some-dir/RT-Extension-Demo [DZ] dist minted in ./RT-Extension-Demo
If you're stuck on a name, take a look at some of the existing RT extensions. You can also ask around IRC (#rt on irc.perl.org) to see what people think makes sense for what the extension will do.
You'll now have a directory with the basic files for your extension. Included is a gitignore file, which is handy if you use git for your version control like we do. If you don't use git, feel free to delete it, but we hope you're using some sort of version control for your work.
Extension Directories
There are several places to put code to provide your new features and if you follow the guidelines below, you'll make sure things get installed in the right places when you're ready to use it. These standards apply to RT 4.0 through 4.4 and any differences between them are noted below.
Module Code
In your new extension directory you'll already have a
lib/RT/Extension/Demo.pm file, which is just a standard perl module. As you start writing code, you can use all of the standard RT libraries because your extension will be running in the context of RT and those are already pulled in. You can also create more modules under
lib as needed.
Mason Code
RT provides callbacks throughout its Mason templates to give you hooks to add features. The easiest way to modify RT is to add Mason template files that will use these callbacks. See "Callbacks" for more information. Your Mason templates should go in an
html directory with the appropriate directory structure to make sure the callbacks are executed.
If you are creating completely new pages for RT, you can put these under the
html directory also. You can create subdirectories as needed to add the page to existing RT paths (like Tools) or to create new directories for your extension.
CSS and Javascript
Where these files live differs between RT 4.2 and above, and RT 4.0 and below; if you need your extension to be compatible with both, you may need to provide both configurations. On RT 4.2 and above, create a
static directory at the top level under your extension, and under that a
css directory and a
js directory. Before RT 4.2, you should create
css and
js directories in
html/NoAuth/.
To add files to RT's include paths, you can use the "AddStyleSheets" in RT and "AddJavascript" in RT methods available in the RT module. You can put the lines near the top of your module code (in your "Demo.pm" file). If you set up the paths correctly, you should only need to set the file names like this:
RT->AddStyleSheets('myextension.css'); RT->AddJavaScript('myextension.js');
Creating Objects in RT
If you need to have users create a group, scrip, template, or some other object in their RT instance, you can automate this using an initialdata file. If you need this, the file should go in the
etc directory. This will allow users to easily run the initialdata file when installing with:
make initdb
Module::Install Files
As mentioned above, the RT extension tools are set up to use Module::Install to manage the distribution. When you run
perl Makefile.PL
for the first time, Module::Install will create an
inc directory for all of the files it needs. Since you are the author, a
.author directory (note the . in the directory name) is created for you in the
inc directory. When Module::Install detects this directory, it does things only the author needs, like pulling in modules to put in the
inc directory. Once you have this set up, Module::Install should mostly do the right thing. You can find details in the module documentation.
Tests
Test Directory
You can create tests for your new extension just as with other perl code you write. However, unlike typical CPAN modules where users run the tests as a step in the installation process, RT users installing extensions don't usually run tests. This is because running the tests requires your RT to be set up in development mode which involves installing some additional modules and having a test database. To prevent users from accidentally running the tests, which will fail without this testing setup, we put them in a
xt directory rather than the typical
t directory.
Writing Extension Tests
If you want to write and run tests yourself, you'll need a development RT instance set up. Since you are building an extension, you probably already have one. To start with testing, set the
RTHOME environment variable to the base directory of your RT instance so your extension tests run against the right instance. This is especially useful if you have your test RT installed in a non-standard location.
Next, you need to subclass from RT::Test which gives you access to the test RT and a test database for running tests. For this, you'll create a Test.pm file in your
lib tree. The easiest way to set up the test module to pull in RT::Test is to look at an example extension. RT::Extension::RepeatTicket, for example, has a testing configuration you can borrow from.
You'll notice that the file included in the extension is lib/RT/Extension/RepeatTicket/Test.pm.in. This is because there are paths that are set based on your RT location, so the actual Test.pm file is written when you run Makefile.PL with appropriate paths substituted when Makefile.PL is run. Module::Install provides an interface to make this easy with a
substitute feature. The substitution code is in the Makefile.PL file and you can borrow that as well.
Once you have that set up, add this to the top of your test files:
use RT::Extension::Demo::Test tests => undef;
and you'll be able to run tests in the context of a fully functioning RT instance. The RT::Test documentation describes some of the helper methods available and you can look at other extensions and the RT source code for examples of how to do things like create tickets, queues, and users, how to set rights, and how to modify tickets to simulate various RT tasks.
If you have a command-line component in your extension, the easiest way to test it is to set up a
run method using the Modulino approach. You can find an example of this approach in RT::Extension::RepeatTicket in the bin directory.
Patches
If you need to provide patches to RT for any reason, you can put them in a
patches directory. See "Changes to RT" for more information.
Callbacks
The RT codebase, mostly the Mason templates, contains hooks called callbacks that make it easy to add functionality without changing the RT code itself. RT invokes callbacks by looking in the source directories for files that might have extra code.
Directory Structure
RT looks in the local/plugins directory under the RT base directory for extensions registered with the
@Plugins configuration. RT then uses the following structure when looking for callbacks:
local/plugins/[ext name]/html/Callbacks/[custom name]/[rt mason path]/[callback name]
The extension installation process will handle some of this for you by putting your html directory under local/plugins/[ext name] as part of the installation process. You need to make sure the path under
html is correct since that is installed as-is.
The
Callbacks directory is required. The next directory can be named anything and is provided to allow RT owners to keep local files organized in a way that makes sense to them. In the case of an extension, you should name the directory the same as your extension. So if your extension is
RT::Extension::Demo, you should create a RT-Extension-Demo directory under Callbacks.
The rest of the path is determined by the RT Mason code and the callback you want to use. You can find callbacks by looking for calls to the
callback method in the RT Mason code. You can use something like this in your base RT directory:
# find share/html/ | xargs grep '\->callback'
As an example, assume you wanted to modify the ticket update page to put something after the Time Worked field. You run the above and see there is a callback in share/html/Ticket/Update.html that looks like this:
$m->callback( %ARGS, CallbackName => 'AfterWorked', Ticket => $TicketObj );
You look at the Update.html file and see that the callback is located right after the Time Worked field. To add some code that RT will run at that point, you would create the directory:
html/Callbacks/RT-Extension-Demo/Ticket/Update.html/
Note that Update.html is a file in the RT source, but it becomes a directory in your extension code. You then create a file with the name of the callback, in this case AfterWorked, and that's where you put your code. So the full path and file would be:
html/Callbacks/RT-Extension-Demo/Ticket/Update.html/AfterWorked
If you see a callback that doesn't have a
CallbackName parameter, name your file Default and it will get invoked since that is the default callback name when one isn't provided.
Callback Parameters
When you look at callbacks using the method above, the other important thing to consider is the parameter list. In addition to the
CallbackName, the other parameters listed in the callback will be passed to you to use as you develop your extension.
Getting these parameters is important because you'll likely need them in your code, getting data from the current ticket object, for example. These values are also often passed by reference, which allows you to modify them, potentially changing the behavior of the RT template when it continues executing after evaluating your code.
Some examples are adding a
Limit call to modify search results on a DBIx::SearchBuilder object, or setting a flag like
$skip_update for a callback like this:
$m->callback( CallbackName => 'BeforeUpdate', ARGSRef => \%ARGS, skip_update => \$skip_update, checks_failure => $checks_failure, results => \@results, TicketObj => $TicketObj );
There are many different callbacks in RT and these are just a few examples to give you idea what you can do in your callback code. You can also look at other extensions for examples of how people use callbacks to modify and extend RT.
Adding and Modifying Menus
You can modify all of RT's menus using callbacks as described in "Callbacks". The file in RT that controls menus is:
share/html/Elements/Tabs
and you'll find a Privileged and SelfService callback which gives you access to those two sets of menus. In those callbacks, you can add to or change the main menu, the page menu, or the page widgets.
You can look at the Tabs file itself for examples of adding menu items. The menu object is a RT::Interface::Web::Menu and you can find details on the available parameters in the documentation.
Here are some simple examples of what you might do in a callback:
<%init> # Add a brand new root menu item my $bps = Menu()->child( 'bps', # any unique identifier title => 'Corporate', path => '' ); #Add a submenu item to this root menu item $bps->child( 'wiki', title => 'Wiki', path => '', ); #Retrieve the 'actions' page menu item if (my $actions = PageMenu->child('actions')) { $actions->child( 'newitem', title => loc('New Action'), path => '/new/thing/here', ) } </%init>
Changes to RT
When writing an extension, the goal is to provide all of the new functionality in your extension code using standard interfaces into RT. However, sometimes when you're working on an extension, you'll find you really need a change in RT itself to make your extension work. Often this is something like adding a new callback or a method to a core module that would be helpful for everyone.
Since any change to RT will only be included in the next version and forward, you'll need to provide something for users on current or older versions of RT. An easy way to do this is to provide a patch in your extension distribution. In general, you should only provide patches if you know they will eventually be merged into RT. Otherwise, you may have to provide versions of your patches for each release of RT. You can read more about getting changes accepted into RT in the hacking document. We generally accept patches that add new callbacks.
Create a
patches directory in your extension distribution to hold your patch files. Name the patch files with the latest version of RT that needs the patch. For example, if the patch is needed for RT 4.0.7, name your patch
4.0.7-some-patch.diff. That tells users that if they are using RT 4.0.7 or earlier, they need to apply the patch. If your extension can be used for RT 3.8, you'll likely need to provide different patches using the same naming convention.
Also remember to update your install documentation to remind users to apply the patch.
Preparing for CPAN
When you have your extension ready and want to release it to the world, you can do so with a few simple steps.
Assuming you have run
perl Makefile.PL and you created the inc/.author directory as described above, a README file will be created for you. You can now type:
make manifest
and a MANIFEST file will be created. It should contain all of the needed to install and run your extension. If you followed the steps above, you'll have also have a inc directory which contains Module::Install code. Note that this code should also be included with your extension when you release it as it's part of the install process.
Next, check to see if everything is ready with:
make distcheck
If anything is missing, it will be reported and you can go fix it. When the check is clean, run:
make dist
and a new distribution will be created in the form of a tarred and gzipped file.
Now you can upload to cpan with the cpan-upload utility provided by CPAN::Uploader or your favorite method of uploading to CPAN.← Back to index | https://docs.bestpractical.com/rt/4.4.1/writing_extensions.html | 2017-06-22T18:20:18 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.bestpractical.com |
The website”.
This is a common misconception. If an affiliate would have brought you to our website they would have spent time and possibly even money to bring you here so they can earn that affiliate commission.
While if you got to our website through a non-affiliate channel, be it an organic channel or a paid channel, it was us (HetrixTools) who have spent that time and money to bring you here, so the affiliate commission is in no way an extra bonus for us if you are not referred by anyone, it simply goes into our own promotion funding and efforts.
So, you see, whether an affiliate works for that commission or we work for it ourselves to promote our own business, the commission cannot be simply discounted from your order price.
This being said, our prices are already incredibly low and we do offer further discounts for high volumes and custom packages, so be sure to contact our sales department if you wish to discuss a custom package that would fit your exact needs. | https://docs.hetrixtools.com/if-i-wasnt-referred-by-an-affiliate-can-i-get-the-commission-as-discount-for-my-order/ | 2017-06-22T18:29:22 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.hetrixtools.com |
When CKEditor is placed inside an HTML form, you can use the
button to submit the data to the server. By default the Save button is inactive; it becomes active when CKEditor is embedded in a form.
Note, though, that since this is a special use case, some system administrators might reprogram this button to perform another function, like saving the data in the database using Ajax. | http://docs.cksource.com/CKEditor_3.x/Users_Guide/Document/Save | 2017-06-22T18:35:02 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.cksource.com |
TilePic
The TilePic image file format
From the TilePic web site at the Digital Library Project, UC Berkeley:.
While indeed able to store any type of data, TilePic is most often used to store images as sets of fragments (or "tiles") at multiple resolutions. A TilePic-compatible image viewer can implement efficient pan-and-zoom display images of virtually any size by only requesting the tiles needed for display at the minimum resolution required for quality viewing.
CollectiveAccess can automatically convert uploaded images into TilePic files containing JPEG'ed tiles in a series of resolutions up to and including the resolution of the original uploaded image. CollectiveAccess's built-in image viewer implements a lightweight easy-to-use TilePic compatible viewer with continuous zoom and annotation capabilities.
You can learn more about the TilePic format at
Also, a few things about Tilepic encoding in CollectiveAccess:
- Very large images can use a whole lot of memory while converting. Generally ImageMagick will try to grab about 3 times the *uncompressed* size of the image while it's working. For a 200meg jPEG image, about 4.2gigs is grabbed by ImageMagick. ImageMagick should be able to process with less memory than the required memory if need be, but will generally take longer to complete if that is the case.
- Tilepic conversion is supported when running GD instead of ImageMagick, but that hasn't been tested extensively. From we've heard GD doesn't tolerate really large images very well. If you're planning to use CollectiveAccess with large images definitely stick with ImageMagick or CoreImage as the image processing back-end if at all possible.
(See also the open source TilePic reader for Windows.)
sphinx | http://docs.collectiveaccess.org/wiki/TilePic | 2017-06-22T18:23:55 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.collectiveaccess.org |
Decentralized Stake Pooling¶
One issue arising from previous PoS designs is how to perform pooling in PoS mining analogous to PoW mining pooling.
Decred solves this problem by allowing multiple inputs into a ticket purchase transaction and committing to the UTXO subsidy amount for each input proportionally, while also committing to a new output public key or script for these proportional rewards. The subsidy is then given to those generating the ticket in a trustless manner, and the ticket can be signed round robin before submission to the network. Importantly, control over the production of the vote itself is given to another public key or script which can not manipulate the subsidy given to the recipients. Production of the vote in a distributed manner can be achieved by using a script in the ticket that allows for multiple signers. | https://docs.decred.org/research/decentralized-stake-pooling/ | 2017-06-22T18:27:29 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.decred.org |
Recently Viewed Topics
Correcting Network Time Protocol Issues
If you are not receiving any AWS events, and the message below is found in the logs Network Time Protocol(NTP), it should be checked to ensure it is configured correctly.
Oct 28, 15 14:38:26.898556 (endpoint_0) INFO (webquery_endpoint.cpp:168,sendHealthStatus) - LCE Web Client Status: Alert: Endpoint Demo/CloudTrail-test-Cloud: CloudTrail query signature was invalid, and no further queries will be submitted. Check your system clock and timezone. To resume querying, update the system clock or restart the client.
Steps
Running the clock or date command will show the current time of the server.
# clock
Wed 04 Nov 2015 04:33:29 PM EST -0.266432 seconds
# date
Wed Nov 4 16:33:32 EST 2015
The following command can be run to re-sync the time with the configured NTP servers if the time is found to be incorrect.
# ntpd -qg
ntpd: time set -6.953726s
After the time is has been re-synced stop the LCE Web Query client using the command below.
# service lce_webquery stop
Remove the state.json file from the /opt/lce/webquery directory.
# rm –rf /opt/lce_webquery/state.json
Start the lce web_query client.
# service lce_webquery start | https://docs.tenable.com/lce/Content/LCE_WebQueryClient/WQC_CorrectingNetworkTimeProtocolIssues.htm | 2017-06-22T18:32:29 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.tenable.com |
The Hierarchical Clustering tool groups rows and/or columns in a data table and arranges them in a heat map visualization with a dendrogram (a tree graph) based on the distance or similarity between them. When using the hierarchical clustering tool, the input is a data table, and the result is a heat map with dendrograms. You can also initiate hierarchical clustering on an existing heat map from the Dendrograms page of the Heat Map Properties. See How to Use the Heat Map to learn more.
To perform a clustering with the Hierarchical Clustering tool:
Select Tools > Hierarchical Clustering....
Response: The Hierarchical Clustering dialog is displayed.
If the analysis contains more than one data table, select a Data table to perform the clustering calculation on.
Click Select Columns....
Response: The Select Columns dialog is displayed.
Select the columns you want to include in the clustering, and then click OK to close the dialog.
Select the Cluster rows check box if you want to create a row dendrogram.
Click the Settings... button to open the Edit Clustering Settings dialog.
Select a Clustering method.
Comment: For more information on clustering methods, see Clustering Methods Overview.
Select a Distance measure.
Comment: For more information on distance measures, see Distance Measures Overview. Distances exceeding 3.40282e+038 cannot be represented.
Select Ordering weight to use in the clustering calculation.
Comment: For more information see Ordering Weight.
Select an Empty value replacement Method from the drop-down list.
Comment: The available replacement methods are described in Details on Edit Clustering Settings.
Select a Normalization Method to use in the clustering calculation.
Comment: For more information, see Normalizing Columns.
Click OK.
Select the Cluster columns check box if you want to create a column dendrogram.
Go through steps 6 to 12 to define settings for the column dendrogram.
Click OK.
Response: The hierarchical clustering calculation is performed, and a heat map visualization with the specified dendrograms is created. A cluster column is also added to the data table and made available in the filters panel.
Comment: See Dendrograms and Clustering to learn more about dendrograms and cluster columns.
See also:
Overview of Hierarchical Clustering Theory
Details on Hierarchical Clustering
Dendrograms and Clustering
What is a Heat Map? | https://docs.tibco.com/pub/spotfire/6.5.0/doc/html/hc/hc_what_is_the_hierarchical_clustering_tool.htm | 2017-06-22T18:30:41 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.tibco.com |
Category:Users Guide
Documentation for end-users: Overview
This section provides guidance for cataloguers and administrators who are using CollectiveAccess to manage their collections. These pages will help you navigate the database, understand and differentiate between types, search and browse, apply settings and preferences, and manage metadata through the user interface. Scroll down to see an alphabetical list of pages in this User's Guide.
Contents
Access and Preferences
CollectiveAccess is a web-based system, and as a result can be accessed remotely with a proper login. Each login has a personal dashboard from which to navigate the database, and there are tools, such as the Inspector Window, to assist in tracking your work. To understand your navigational options, please visit: Accessing and Navigating the Database. Once you have accessed the database, you can also determine your Preferences, including languages used in menus, breadcrumb displays, etc.
Project Administrators, who have the power to set access controls, can define logins and privileges for other users. For information about the standard user roles as defined by Collective Access, please see Access Roles. To determine further access control settings, including creation of new logins, see Access Control Settings.
Primary Types and Record Creation
Although it is highly configurable, the essential structure of CollectiveAccess is composed of sixteen Primary Types or Basic Tables. To work effectively with the software, it is important to understand the nuances of these basic types of records. For a quick description of each type, please visit Basic Tables. Bear in mind that not all collections will have all of these types enabled, and some may exhibit variations on these types. However, understanding CollectiveAccess as a combination of these basic components will assist you in your cataloging.
Once you have developed a clear understanding of the Basic Tables, you will be ready to begin creating records. The particulars of record creation will vary from institution to institution, but all records are comprised of a set of common data types, and exhibit a similar User Interface. For guidance on how to begin creating records, see Creating Records.
Each record is comprised of at least a “Basic Info” screen, but many contain more screens according to the needs of the project. For an overview of basic screens in CollectiveAccess, please see Common Screens. For example, one common screen is “Media,” as many records incorporate images or other media. The media upload process is simple, and is explained in the page entitled Uploading Media.
Relationships
One of the key aspects of CollectiveAccess is its ability to create relationships between records. As you build a record, you will probably want to include important relationships. To accomplish this, choose "Relationships" from the side navigation after you have completed your Basic Info page in any type of record. Common relationships can include: Entity related to a Work, Objects related to an Event or Occurrence, and so on. For an overview of Relationship creation in CollectiveAccess, please see Relationships.
Searching and Browsing
As you populate your database, or after a data import, you will want to search and browse your data. CollectiveAccess provides a number of options to facilitate searching and browsing, with varying levels of complexity. For detailed instructions on how to use these features, please see Searching and Browsing. Please note that the Browse facets detailed here may vary from those configured for your institution, as they can be set-up to reflect the individual needs of your collection.
Displays and Sets
CollectiveAccess also provides tools that allow you to organize, view, and share your data in a clear and elegant fashion. For example, You can create custom displays for search results or record summaries, and then use those results to export tab or comma delimited files. Displays essentially allow you to select exactly which fields from any records you wish to see in a given environment. For more information on creating displays, please visit: Displays.
You can also use CollectiveAccess to create ordered groupings of any record type defined by users for a specific purpose, known as Sets. These are generally organized for temporary, practical purposes and can be shared between users. For more information on creating and using Sets, see Sets.
Administration
For end-users with administrative privileges, you may find that you need to understand the metadata-creation workflow. For example, You may need to add new values to a list, or even create an entirely new list for use in your database. To manage Lists and Vocabularies, you will need to navigate to Manage → Lists and Vocabularies in the User Interface. Consult Lists and Vocabularies for more information on managing lists.
In order to incorporate new lists into your cataloging, you will need to append them to metadata elements and then add them to the user interface. See User Interface Administration for help with those steps. The process described in “User Interface Administration” also applies to the creation of any new metadata element. It is important to understand that one does not create new metadata elements within the context of each individual screen; instead, using the administration options explained in the aforementioned pages, one chooses from a pool of metadata elements which can be added to the appropriate screens as needed.
Importing Data
For those who are switching to CollectiveAccess from a different system, there are tips and tools to assist in the data import process. If you are working with a dataset of a relatively manageable size, the user interface provides a tools for uploading import mappings and source data. For a quick overview of these tools, see Data Import. For a more in-depth look at the entire process, see Data Import: Creating and Running a Mapping.
Exporting Data
See Data_Exporter
For the pages mentioned above, as well as other useful guidelines, please see below.
Pages in category "Users Guide"
The following 48 pages are in this category, out of 48 total. | http://docs.collectiveaccess.org/wiki/Category:Users_Guide | 2017-06-22T18:18:48 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.collectiveaccess.org |
Using Eris outside of PHPUnit¶
Eris can be reused as a library for (reproducibly) generating random data, outside of PHPUnit test cases. For example, it may be useful in other testing frameworks or in scripts that run inside your testing infrastructure but not tied to a specific PHPUnit test suite.
Usage¶
<?php use Eris\Generator; require __DIR__.'/../vendor/autoload.php'; $eris = new Eris\Facade(); $eris ->forAll(Generator\int()) ->then(function ($integer) { var_dump($integer); });
This script instantiates a
Eris\Facade, which offers the same interface as
Eris\TestTrait.
forAll() is the main entry point and should be called over this object rather than
$this.
The Facade is automatically initialized, and is used here to dump 100 random integers. At this time, reproducibility can be obtained by explicitly setting the ERIS_SEED environment variable. | http://eris.readthedocs.io/en/latest/outside_phpunit.html | 2017-06-22T18:24:50 | CC-MAIN-2017-26 | 1498128319688.9 | [] | eris.readthedocs.io |
Rationale
Since).
How does it work?:
An example
-Scope creates a new scope and puts it on top of the stack
- scopeExits pops a scope from the stack
- a parent scope
- a map of custom data?
At the moment, no. The way the DSL is currently implemented disallows it.?). | http://docs.codehaus.org/pages/diffpages.action?originalId=233053586&pageId=236782077 | 2014-12-18T13:34:05 | CC-MAIN-2014-52 | 1418802766295.3 | [] | docs.codehaus.org |
Usage
Several listeners or panels might need access to OS-dependent shared libraries. They are divided into
- built-in libraries,
- 3rd party libraries.
Built-in libraries are available for inclusion at compile time out of the box.
To minimize the size of resulting installers, each native library must be explicitely included. This is done by the
<natives> element.
Example
For using the Shortcut Panel on Windows there are required the ShellLink native libraries to access some native APIs:
How to include native DLLs for ShortcutPanel
Both 32-bit and 64-bit versions are built-in to IzPack. | http://docs.codehaus.org/pages/viewpage.action?pageId=231736577 | 2014-12-18T13:37:53 | CC-MAIN-2014-52 | 1418802766295.3 | [] | docs.codehaus.org |
removeAttribute(String, int) method, any registered
destruction callback should be disabled as well, assuming that the
removed object will be reused or manually destroyed.
name- the name of the attribute to register the callback for
callback- the destruction callback to be executed
scope- the scope identifier
String getSessionId()
null
Object getSessionMutex()
null | http://docs.spring.io/spring/docs/2.5.6/api/org/springframework/web/context/request/RequestAttributes.html | 2014-12-18T13:30:31 | CC-MAIN-2014-52 | 1418802766295.3 | [] | docs.spring.io |
1 Movement Dashboard
To perform followup on the performance of Movements, an object has been designed to monitor all movements' parameters.
Below, the parameters of various tables and charts are described.
For this example, these parameters have been set to perform the report:
1.1 Movement Evolution
This window shows two charts separated into two tabs; these are designed to present the difference between Estimated and Confirmed item quantities, using various filters.
1.2 Movement Values by Family
The following table shows the user each item family's estimated item quantities, percentage of estimated items included in a family code (compared to the total of estimated items in all families), and confirmed item quantities.
The confirmed items will appear in green or red, depending on whether they match or exceed the estimates.
1.3 Movement by Type
This part of the report is used to observe the performance of estimated and confirmed item quantities for each movement type.
The confirmed item quantity will be presented in green if it matches or exceeds the estimate. If it does not meet the estimated quantity, the confirmed values will appear in red.
1.4 Movement by Terminal
This section represents the performance of all terminals in the central system. It contains estimated and confirmed item quantities for all movements.
1.5 Movement per User
This part of the report shows the percentage of item quantities which are confirmed or unconfirmed for each user's corresponding movements.
1.6 Movement Types per Family Type
This section shows the group of items in each item family, divided by movement type. It also shows the percentage of item quantities which are confirmed or unconfirmed, for each family.
1.7 Movement per Item
The table below shows data corresponding to all transactions performed per item family, followed by their estimated and confirmed quantities. The system also provides the ability to expand the information given; each item family will show its corresponding items and the number of transactions performed on them. Again, information on their estimated and confirmed quantities will follow.
1.8 Diagrams
This report presents two different matrix charts with information on transactions and items. | https://docs.deistercloud.com/content/Axional%20business%20products.5/Axional%20Mobile.4/Mobile%20Apps.8/POS.10/Store%20Warehouse.21/Reporting.6.xml?embedded=true | 2020-01-18T03:33:35 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.deistercloud.com |
RoutingSlip Object
Represents the routing slip for a workbook. The routing slip is used to send a workbook through the electronic mail system.
Remarks
The RoutingSlip object doesn’t exist and cannot be returned unless the HasRoutingSlip property for the workbook is True.
Example
Use the RoutingSlip property to return the RoutingSlip object. The following example sets the delivery style for the routing slip attached to the active workbook. For a more detailed example, see the RoutingSlip property.
See Also | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/bb178287(v=office.12)?redirectedfrom=MSDN | 2020-01-18T04:05:23 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.microsoft.com |
A mesh is an unstructured grid usually with temporal and other components. The spatial component contains a collection of vertices, edges and faces in 2D or 3D space:
Different mesh types:
Triangular grid with numbered vertices.
Possible visualisation of mesh data ()
To add a mesh layer to QGIS:
Mesh tab in Data Source Manager
Mesh Layer Properties
The Information tab is read-only and represents an interesting place to quickly grab summarized information and metadata on the current layer. Provided information are (based on the provider of the layer) uri, vertex count, face count and dataset groups count.
The Source tab displays basic information about the selected mesh, including:
Use the Assign Extra Dataset to Mesh button to add more groups to the current mesh layer.
Click the
Style button to activate the dialog
as shown in the following image:
Mesh Layer Style
Style properties are divided in several tabs:
The tab
presents the following items:
The slider
, combo box
and
buttons
allow to explore another dimension of the data, if available.
As the slider moves, the metadata is presented accordingly.
See the figure Mesh groups below as an example.
The map canvas will display the selected dataset group as well.
Dataset in Selected Group(s)
You can apply symbology to each group using the tabs.
Under Groups, click on
to show contours with
default visualization parameters.
In the tab
you can see and change the current visualization options of contours
for the selected group, as shown in the image Styling contours in a mesh below:
Styling Contours in a Mesh Layer.
The button
Add values manually adds a value
to the individual color table. The button
Remove selected row
deletes a value from the individual color table. Double clicking on the value column
lets you insert a specific value. Double clicking on the color column opens the dialog
Change color, where you can select a color to apply on that value.
In the tab
, click on
to display vectors if available.
The map canvas will display the vectors in the selected group with default parameters.
Click on the tab
to change the visualization parameters for vectors as shown in the image below:
Styling Vectors in a Mesh Layer:
In the tab
, QGIS offers two possibilities to display the grid,
as shown in the image Mesh rendering:
Native Mesh Renderingthat shows quadrants
Triangular Mesh Renderingthat display triangles
Mesh Rendering
The line width and color can be changed in this dialog, and both the grid renderings can be turned off. | https://docs.qgis.org/3.4/en/docs/user_manual/working_with_mesh/mesh_properties.html | 2020-01-18T04:13:59 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.qgis.org |
Third Party Theme Compatibility
For the most part, WP Club Manager templates will integrate nicely with most WordPress themes. Where you may run into problems is when the default WP Club Manager content wrappers do not match your chosen themes. This will manifest itself by breaking your layout on WP Club Manager pages and shifting your sidebars into incorrect positions.
This problem can potentially affect the single player, staff, match and club pages because WP Club Manager uses templates of its own to display these pages and its impossible for WP Club Manager to know exactly what markup your theme uses.
There are two ways to resolve this:
- Using hooks (for advanced users/developers)
- Using our catch-all wpclubmanager_content() function inside your theme
Using wpclubmanager_content()
This solution allows you to create a new template page within your theme that is used for all WP Club Manager post type displays. While an easy catch-all solution, it does have a drawback in that this template is used for all WPCM post types (players, staff, matches and clubs). Developers are encouraged to use the hooks instead.
To set up this template page:
Duplicate page.php
Duplicate your theme’s page.php file, and name it wpclubmanager.php.This file should be found like this: wp-content/themes/YOURTHEME/wpclubmanager.php.
Edit your page (wpclubmanager.php)
Open up your newly created wpclubmanager wpclubmanager_content(); ?>
This will make it use WP Club Manager loop instead. Save the file. You’re done.
Using Hooks
Using hooks to alter the layout of WP Club Manager gives you the flexibility to style your theme in many different ways. This is similar to the method we use when creating our themes. It’s also the method we use to integrate nicely with the default WordPress themes.
By inserting a few lines in your theme’s
functions.php file.
First unhook the WP Club Manager wrappers;
remove_action( 'wpclubmanager_before_main_content', 'wpclubmanager_output_content_wrapper', 10); remove_action( 'wpclubmanager_after_main_content', 'wpclubmanager_output_content_wrapper_end', 10);
Then hook in your own functions to display the wrappers your theme requires;
add_action('wpclubmanager_before_main_content', 'my_theme_wrapper_start', 10); add_action('wpclubmanager WP Club Manager Support
Once you’re happy that your theme fully supports WP Club Manager, you should declare it in the code to hide the “Your theme does not declare WP Club Manager support” message. To do so you should add the following to your theme support function;
add_theme_support( 'wpclubmanager' );
Themes by Clubpress
Save time creating your own templates or finding a suitable theme and get one of our purpose-built WordPress themes. Our themes are built specifically for WP Club Manager and will offer extra functionality and features to make your WordPress club website more comprehensive than ever before. | https://docs.wpclubmanager.com/article/64-third-party-theme-compatibility | 2020-01-18T04:06:26 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.wpclubmanager.com |
MiningStructurePermission Element (ASSL)
Defines the permissions that members of a Role element have on an individual MiningStructure element.
Syntax
<MiningStructurePermissions> <MiningStructurePermission xsi: <!-- This element has no children other than those inherited from Permission --> </MiningStructurePermission> </MiningStructurePermissions>
Element Characteristics
Element Relationships
Remarks
The corresponding element in the Analysis Management Objects (AMO) object model is MiningStructurePermission.
See Also
Reference
Help and Information
Getting SQL Server 2005 Assistance | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms126981%28v%3Dsql.90%29 | 2020-01-18T04:37:58 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.microsoft.com |
Introduction to Device Objects
The operating system represents devices by device objects. One or more device objects are associated with each device. Device objects serve as the target of all operations on the device.
Kernel-mode drivers must create at least one device object for each device, with the following exceptions:
Minidrivers that have an associated class or port driver do not have to create their own device objects. The class or port driver creates the device objects, and dispatches operations to the minidriver.
Drivers that are part of device type-specific subsystems, such as NDIS miniport drivers, have their device objects created by the subsystem.
See the documentation for your particular device type to determine if your driver creates its own device objects.
Some device objects do not represent physical devices. A software-only driver, which handles I/O requests but does not pass those requests to hardware, still must create a device object to represent the target of its operations.
For more information about how your driver can create device objects, see Creating a Device Object.
Devices are usually represented by multiple device objects, one for each driver in the driver stack that handles I/O requests for the device. The device objects for a device are organized into a device stack. Whenever an operation is performed on a device, the system passes an IRP data structure to the driver for the top device object in the device stack. Each driver either handles the IRP or passes it to the driver that is associated with the next-lower device object in the device stack. For more information about device stacks, see WDM Device Objects and Device Stacks. For more information about IRPs, see Handling IRPs.
Device objects are represented by DEVICE_OBJECT structures, which are managed by the object manager. The object manager provides the same capabilities for device objects that it does for other system objects. In particular, a device object can be named, and a named device object can have handles opened on it. For more information about named device objects, see Named Device Objects.
The system provides dedicated storage for each device object, called the device extension, which the driver can use for device-specific storage. The device extension is created and freed by the system along with the device object. For more information, see Device Extensions.
The following figure illustrates the relationship between device objects and the I/O manager.
The figure shows the members of the DEVICE_OBJECT structure that are of interest to a driver writer. For more information about these members, see Creating a Device Object, Initializing a Device Object, and Properties of Device Objects.
Feedback | https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/introduction-to-device-objects | 2020-01-18T04:34:02 | CC-MAIN-2020-05 | 1579250591763.20 | [array(['images/3devobj.png', 'diagram illustrating a device object'],
dtype=object) ] | docs.microsoft.com |
Introduction to Hooks: Actions & Filters
Hooks in WordPress essentially allow you to manipulate code without editing core files. They are used extensively throughout WordPress and WP Club Manager and are very useful for developers.
There are two types of hook: actions and filters.
Action Hooks allow you to insert custom code at various points (wherever the hook is ran).
Filter Hooks allow you to manipulate and return a variable which it passes.
There is an excellent article on hooks and filters here.
Using hooks
If you use a hook to add or manipulate code, you can add your custom code to your theme
functions.php file.
Using action hooks
To execute your own code, you hook in by using the action hook
do_action( ‘action_name’ );. See below for an example on where to place your code:
add_action( 'action_name', 'your_function_name'); function your_function_name() { // Your code }
Using filter hooks
Filter hooks are called throughout are code using
apply_filter( ‘filter docs section. Please note however, that we are unable to provide support for customisations. | https://docs.wpclubmanager.com/article/65-introduction-to-hooks-actions-filters | 2020-01-18T02:40:07 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.wpclubmanager.com |
. You can also replicate HDFS data to and from Amazon S3 or Microsoft AD
-.
Replication with Sentry Enabled
If the cluster has Sentry enabled and you are using BDR to replicate files or tables and their permissions, configuration changes to HDFS are required.
The configuration changes are required due to how HDFS manages ACLs. When a user reads ACLs, HDFS provides the ACLs configured in the External Authorization Provider, which is Sentry. If Sentry is not available or it does not manage authorization of the particular resource, such as the file or directory, then HDFS falls back to its own internal ACLs. But when ACLs are written to HDFS, HDFS always writes these internal ACLs even when Sentry is configured. This causes HDFS metadata to be polluted with Sentry ACLs. It can also cause a replication failure in replication when Sentry ACLs are not compatible with HDFS ACLs.
To prevent issues with HDFS and Sentry ACLs, complete the following steps:
- Create a user account that is only used for BDR jobs since Sentry ACLs will be bypassed for this user.
For example, create a user named bdr-only-user.
- Configure HDFS on the source cluster:
- In the Cloudera Manager Admin Console, select.
- Select Configuration and search for the following property: NameNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml.
- Add the following property:
Name: Use the following property name: dfs.namenode.inode.attributes.provider.bypass.users
Value: Provide the following information: <username>, <username>@<RealmName>
Replace <username> with the user you created in step 1 and <RealmName> with the name of the Kerberos realm.For example, the user bdr-only>
-dr.
-, BDR uses a complete copy to replicate data. If you select this option, the BDR.). (Not supported when replicating to S3 or ADLS.)
- Delete Permanently - Uses the least amount of space; use with caution.
-.. | https://docs.cloudera.com/documentation/enterprise/6/6.1/topics/cm_bdr_hdfs_replication.html | 2020-01-18T02:35:59 | CC-MAIN-2020-05 | 1579250591763.20 | [array(['../images/cm_bdr_replications.png', None], dtype=object)
array(['../images/cm_bdr_hdfs_repl_history.png', None], dtype=object)
array(['../images/cm_bdr_hive_repl_history.png', None], dtype=object)] | docs.cloudera.com |
Message-ID: <196215158.8889.1579320940497.JavaMail.javamailuser@localhost> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_8888_613886020.1579320940398" ------=_Part_8888_613886020.1579320940398 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This section explains how to configure your platform so that flexVDI= clients are able to connect and present a virtual desktop. Several co= nfigurations are possible, from simple ones where clients are able to reach= any host of the platform, to more complex ones where hosts are not directl= y accessible (e.g. they are behind a firewall or NAT router). Here, you wil= l understand these possibilities and their implications, and how to select = the configuration that best fits your corporate network.
This is the protocol that flexVDI clients and the Manager= em> follow to establish a VDI connection:
=20=20
As a result of this conversation, the client= must be able to contact the manager as well as every potential host where = the desktop may be running. This basic configuration may be suitable when c= lients are connecting from an internal network.
You can use the Hosts' VDI address to provide an alternative ad= dress for clients to connect to, to protect the Hosts and the = Manager behind a firewall. However this configuration still requires t= he clients to be able to reach the Manager, and this is a security= risk. So, the VDI address option has been deprecated in f= avor of flexVDI Gateway, and will be removed in future releases.= p>
The previous scenario has several drawbacks:
The flexVDI Gateway is a software component that overcomes this= limitations by encapsulating all the traffic, either to the manager or the= desktop, with WebSockets over TLS encryption at port 443:
=20=20
In this way, only TCP port 443 (or the port you configure) of the machin= e that runs the gateway must be exposed to the clients.
The flexVDI Gateway is available as an RPM package for CentOS 7 and RHEL= 7, and can be installed from the flexVDI 3.1 Repository. In fact, it is au= tomatically installed and enabled by default on every flexVDI host. It is a= lso part of the WebPortal virtual applia= nce, so you can import it into your flexVDI platform and use it as the = single entry point for all your clients.
Its configuration file,
/etc/flexvdi/flexvdi-gateway.conf, =
must contain a valid JSON object. These are the most common configuration p=
roperties (and their default values):
{ "ManagerIP": "", "SslPort": 443, "FirstSpicePort": 5900, "LastSpicePort": 25900, "CertFile": "/etc/ssl/certs/flexvdi-agent", "KeyFile": "/etc/ssl/certs/flexvdi-agent", "KeepAlive": 0, "Debug": false }=20
Usually, you should only need to set the manager's address. Other defaul= t values are just right most of the time. You should not need to modify the= range of valid Spice ports, as they match with the default range of ports = the manager generates when a new desktop is created. Also, the flexvdi-gate= way will use an auto-generated, self-signed certificate by default. Once yo= u modify the configuration file, restart the service with:
# systemctl restart flexvdi-gateway=20
Other options are:
It is possible to use a web load balancer to distribute the client conne= ctions among several flexVDI Gateway instances. However, in order = for this scenario to work, sequential connections of the same client must b= e assigned to the same gateway. This can be done, for instance, assigning t= he gateway by source address.
=20=20
Since the connection between the clients and the gateway are tunneled th= rough HTTPS with WebSockets, they can be managed by a reverse HTTPS proxy. = However, the proxy must be configured to open a WebSocket connection with t= he Gateway. For instance, an Nginx reverse proxy should be configured with = the following rule:
location =3D / { proxy_pass; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; }=20
For Apache, you need at least Apache 2.4 and the mod_proxy_wstunnel module. | https://docs.flexvdi.com/exportword?pageId=14844189 | 2020-01-18T04:17:14 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.flexvdi.com |
This tool combines several tools: Rotate, Scale, Shear and Perspective, performing one or several of these actions at once in one single operation. Combining two or more options gives you almost infinite possibilities of transformation.
As the other transformation tools, this tool works on the active layer (default).
After selecting the Unified Transform tool in the toolbox, click on the image window. Several elements appear on the image window:
Several kinds of handles, on the edges:
Diamonds for shearing
Squares for scaling.
Small diamonds for changing perspective, in large squares for
Scaling.
Click and drag a handle to perform the action of the concerned tool
A circle with a cross inside at the center of the image window for the pivot. Click and drag this circle to move the pivot. It can be placed out of the image window, and even where you want on screen (but you can no longer see it, unless you enlarge the image window).
The mouse pointer comes with small icons which vary according to position:
On the layer: Move icon,
Outside the layer: Rotate icon,
On handles: Shear or perspective or Scale icon,
On rotation center circle: Move icon and Rotation icon.
The status bar, at the bottom of the image window, displays the name of the current tool.
When you click on the image window, also appears the Matrix new window:
where you find the Reset (very useful when testing), Cancel and Transform buttons. This matrix window appears out of the image or, if the image window is large enough, inside the image window. You cannot edit the values of this matrix, but you can note them to reproduce the same transformation later.
Readjust button: with this button, new in GIMP-2.10.10, you can readjust transform handles based on the current zoom level.
There are different possibilities to activate the tool:
From the image-menu:→ →
The Tool can also be called by clicking the tool icon:
or by clicking on the Shift+L keyboard shortcut.
The available tool options can be accessed by double clicking the
Unified Transform tool icon.
Move: when this option is unchecked, moving the layer is smooth. Checking the option constrains movement to 45° from center.
Scale: when this option is checked, the aspect ratio is preserved.
Rotate: default rotation is smooth. When this option is checked, rotation goes by 15° steps.
Shear: normally, to shear the layer, you drag the corresponding icon along a layer edge. If this option is unchecked (default), you may move away from this edge. If this option is checked, shear handles remain on this edge.
Perspective: normally, to change perspective, you drag the corresponding icon along a layer edge. If this option is unchecked (default), you may move away from this edge. If this option is checked, perspective handles remain on this edge or on a diagonal.
Scale:
Shear: When this option is unchecked (default), the opposite edge is fixed and the pivot moves with shearing. When the option is checked, shearing is performed around a fixed pivot and the opposite side is sheared by the same amount, but in the opposite direction.
Perspective: when this option is checked, the position of pivot is maintained.
図14.148 Perspective from pivot
From pivot not checked, constrain checked
From pivot and constrain checked
Snap: if this option is checked, the pivot snaps to center or to corners when it comes close to them.
Lock: locks pivot.
Key modifiers are active when an action (move, scale, rotate...) is selected. Hold on:
Shift to check all Constrain unchecked options and uncheck already checked ones if a transformation handle is selected, or, if the pivot is selected, to snap pivot to center or corner,
Ctrl to check all 「From pivot」 unchecked options and uncheck already checked ones. | https://docs.gimp.org/ja/gimp-tool-unified-transform.html | 2020-01-18T02:41:58 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.gimp.org |
Set up your Azure Red Hat OpenShift dev environment
To build and run Microsoft Azure Red Hat OpenShift applications, you'll need to:
- Install version 2.0.65 (or higher) of the Azure CLI (or use the Azure Cloud Shell).
- Register for the
AROGAfeature and associated resource providers.
- Create an Azure Active Directory (Azure AD) tenant.
- Create an Azure AD application object.
- Create an Azure AD user.
The following instructions will walk you through all of these prerequisites.
Install the Azure CLI
Azure Red Hat OpenShift requires version 2.0.65 or higher of the Azure CLI. If you've already installed the Azure CLI, you can check which version you have by running:
az --version
The first line of output will have the CLI version, for example
azure-cli (2.0.65).
Here are instructions for installing the Azure CLI if you require a new installation or an upgrade.
Alternately, you can use the Azure Cloud Shell. When using the Azure Cloud Shell, be sure to select the Bash environment if you plan to follow along with the Create and manage an Azure Red Hat OpenShift cluster tutorial series.
Register providers and features
The
Microsoft.ContainerService AROGA feature,
Microsoft.Solutions,
Microsoft.Compute,
Microsoft.Storage,
Microsoft.KeyVault and
Microsoft.Network providers must be registered to your subscription manually before deploying your first Azure Red Hat OpenShift cluster.
To register these providers and features manually, use the following instructions from a Bash shell if you've installed the CLI, or from the Azure Cloud Shell (Bash) session in your Azure portal:
If you have multiple Azure subscriptions, specify the relevant subscription ID:
az account set --subscription <SUBSCRIPTION ID>
Register the Microsoft.ContainerService AROGA feature:
az feature register --namespace Microsoft.ContainerService -n AROGA
Register the Microsoft.Storage provider:
az provider register -n Microsoft.Storage --wait
Register the Microsoft.Compute provider:
az provider register -n Microsoft.Compute --wait
Register the Microsoft.Solutions provider:
az provider register -n Microsoft.Solutions --wait
Register the Microsoft.Network provider:
az provider register -n Microsoft.Network --wait
Register the Microsoft.KeyVault provider:
az provider register -n Microsoft.KeyVault --wait
Refresh the registration of the Microsoft.ContainerService resource provider:
az provider register -n Microsoft.ContainerService --wait
Create an Azure Active Directory (Azure AD) tenant
The Azure Red Hat OpenShift service requires an associated Azure Active Directory (Azure AD) tenant that represents your organization and its relationship to Microsoft. Your Azure AD tenant enables you to register, build, and manage apps, as well as use other Azure services.
If you don't have an Azure AD to use as the tenant for your Azure Red Hat OpenShift cluster, or you wish to create a tenant for testing, follow the instructions in Create an Azure AD tenant for your Azure Red Hat OpenShift cluster before continuing with this guide.
Create an Azure AD user, security group and application object
Azure Red Hat OpenShift requires permissions to perform tasks on your cluster, such as configuring storage. These permissions are represented through a service principal. You'll also want to create a new Active Directory user for testing apps running on your Azure Red Hat OpenShift cluster.
Follow the instructions in Create an Azure AD app object and user to create a service principal, generate a client secret and authentication callback URL for your app, and create a new Azure AD security group and user to access the cluster.
Next steps
You're now ready to use Azure Red Hat OpenShift!
Try the tutorial:
Feedback | https://docs.microsoft.com/en-us/azure/openshift/howto-setup-environment | 2020-01-18T03:28:48 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.microsoft.com |
Solution architecture for AWS cloud Edit on GitHub Request doc changes
The NetApp Data Availability Services (NDAS) solution for AWS cloud consists of three components: 1) an AWS cloud-resident app, which includes the orchestration user interface and a scalable catalog for intelligent searches, 2) an NDAS proxy service that is embedded within ONTAP on the target storage system, and 3) Copy to Cloud replication technology, an efficient and secure S3 data transport between ONTAP instances and the cloud S3 object store.
You register and initiate NetApp Data Availability Services from a NetApp web page, where the app is deployed as an Amazon Machine Instance (AMI) and launched on an AWS compute resource (EC2). During initial setup and configuration, the app catalog is automatically configured and deployed using two EC2 instances as a two-node Elasticsearch cluster. For resiliency, the two EC2 instances are deployed in different subnets, with the two subnets in separate availability zones, but all are in the same region.
After the initial setup and installation of the app, you create a new user ID and password on the NDAS login screen. After the initial login, a set of wizard screens helps you connect ONTAP target clusters to their own cloud object stores.
To establish secure data transfers across the hybrid cloud, the NDAS administrator generates a token in the NDAS app and provides it to an ONTAP administrator, who enters the encrypted token on the ONTAP target cluster in ONTAP System Manager. When the token is entered, the ONTAP target cluster initiates a request with the app to register the ONTAP target system. Once the token is received, the app approves the registration request and finalizes the configuration of secure and encrypted data pathways between the on-premises ONTAP target and the cloud app.
All secure data transfers and management messages between the ONTAP target cluster and NetApp Data Availability Services are encrypted over the wire using HTTPS with TLS encryption. | https://docs.netapp.com/us-en/netapp-data-availability-services/concept_solution_architecture.html | 2020-01-18T03:13:29 | CC-MAIN-2020-05 | 1579250591763.20 | [array(['./media/solution_architecture_v2.gif',
'AWS Solution Architecture'], dtype=object)] | docs.netapp.com |
Upgrades¶
C.
Concepts¶
Here are the key concepts you need to know before reading the section on the upgrade process:
RPC version pinning¶.
Graceful service shutdown¶.
Online Data Migrations¶.
API load balancer draining¶
When upgrading API nodes, you can make your load balancer only send new connections to the newer API nodes, allowing for a seamless update of your API nodes.
DB prune deleted rows¶
Currently resources are soft deleted in the database, so users are able to
track instances in the DB that are created and destroyed in production.
However, most people have a data retention policy, of say 30 days or 90 days
after which they will want to delete those entries. Not deleting those entries
affects DB performance as indices grow very large and data migrations take
longer as there is more data to migrate. To make pruning easier there’s a
cinder-manage db purge <age_in_days> command that permanently deletes
records older than specified age.
Versioned object backports¶.
Minimal Downtime Upgrade Procedure¶
Plan your upgrade¶
Read and ensure you understand the release notes for the next release.
Make a backup of your database. Cinder does not support downgrading of the database. Hence, in case of upgrade failure, restoring database from backup is the only choice.
Note that there’s an assumption that live upgrade can be performed only between subsequent releases. This means that you cannot upgrade Liberty directly into Newton, you need to upgrade to Mitaka first.
To avoid dependency hell it is advised to have your Cinder services deployed separately in containers or Python venvs.
Note that Cinder is basing version detection on what is reported in the
servicestable in the DB. Before upgrade make sure you don’t have any orphaned old records there, because these can block starting newer services. You can clean them up using
cinder-manage service remove <binary> <host>command.
Assumed service upgrade order is cinder-scheduler, cinder-volume, cinder-backup and finally cinder-api.
Rolling upgrade process¶
To reduce downtime, the services can be upgraded in a rolling fashion. It means upgrading a few services at a time. To minimise downtime you need to have HA Cinder deployment, so at the moment a service is upgraded, you’ll keep other service instances running.
Before maintenance window¶
First you should execute required DB schema migrations. To achieve that without interrupting your existing installation, install new Cinder code in new venv or a container and run the DB sync (
cinder-manage db sync). These schema change operations should have minimal or no effect on performance, and should not cause any operations to fail.
At this point, new columns and tables may exist in the database. These DB schema changes are done in a way that both the N and N+1 release can perform operations against the same schema.
During maintenance window¶
The first service is cinder-scheduler. It is load-balanced by the message queue, so the only thing you need to worry about is to shut it down gracefully (using
SIGTERMsignal)valueoptioncommand,.
After maintenance window¶
Once all services are running the new code, double check in the DB that there are no old orphaned records in.
Now all services are upgraded, we need to send the.
Now all the services are upgraded, the system is able to use the latest version of the RPC protocol and able to access all the features of the new release.
At this point, you must also ensure you update the configuration, to stop using any deprecated features or options, and perform any required work to transition to alternative features. All the deprecated options should be supported for one cycle, but should be removed before your next upgrade is performed.
Since Ocata, you also need to run
cinder-manage db online_data_migrationscommand to make sure data migrations are applied. The tool lets you limit the impact of the data migrations by using
--max_countoption to limit number of migrations executed in one run. If this option is used, the exit status will be 1 if any migrations were successful (even if others generated errors, which could be due to dependencies between migrations). The command should be rerun while the exit status is 1. If no further migrations are possible, the exit status will be 2 if some migrations are still generating errors, which requires intervention to resolve. The command should be considered completed successfully only when the exit status is 0.). | https://docs.openstack.org/cinder/train/upgrade.html | 2020-01-18T02:59:19 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.openstack.org |
Integration Services Connections
Microsoft SQL Server Integration Services tables and package configurations to SQL Server.
To make these connections, Integration Services uses connection managers, as described in the next section.
Connection Managers Web site. Web site.
Important
The connection managers listed in the following table work only with SQL Server 2008 Enterprise and SQL Server 2008 Developer.
Custom Connection Managers
You can also write custom connection managers. For more information, see Developing a Custom Connection Manager.
External Resources
Video, Leverage Microsoft Attunity Connector for Oracle to enhance Package Performance, on technet.microsoft.com
Wiki articles, SSIS Connectivity, on social.technet.microsoft.com
Blog entry, Connecting to MySQL from SSIS, on blogs.msdn.com.
Technical article, You get "DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER" error message when using Oracle connection manager in SSIS, on support.microsoft.com.
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms140203%28v%3Dsql.105%29 | 2020-01-18T03:25:59 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.microsoft.com |
Single Sign-on is not supported for Microsoft Dynamics. For manual sign-on, follow the instructions below:
Note: Multiple invalid login attempts may lock you out.
The Virtual Office for MS Dynamics interface consists of a navigation menu with the following menu items:
Under VO tab, you can find the following tabs:
Log out of the Virtual Office application.
Under the Search tab, you can search the contacts from Microsoft Dynamics or Virtual Office contact directory, assign contacts to active calls with unknown numbers, or with numbers that have multiple contact matches.
From the Contacts
tab, browse or search for contacts from your contact directory and/or MS Dynamics to make calls. Simply hover over the desired contact, and click the icon to call.
To log out the application: | https://docs.8x8.com/8x8WebHelp/virtual-office-for-microsoft-dynamics-agent/Content/virtual-office-for-microsoft-dynamics/microsoft-dynamics-get-started.htm | 2020-01-18T03:33:48 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.8x8.com |
TOPICS×
Filter Report Data
Filters allow you to narrow the report to include or exclude line items that match a filter.
Simple Filter
The simple filter appears on most reports to let you quickly find specific line items. Simple filters do not use any special characters, so -, ", ', + and other special characters match the literal value in the report. You can find line items that contain multiple terms using a space.
For example:
help search
Matches the following pages:
help:Search help:Paid Search Detection help:Configure paid search detection help:Search Keywords Report help:Internal Search Term
Advanced Filters
Advanced filters let you control the scope of your search using a collection of filters. You can select to match all filters, or any filters.
Contains
Matches if the term is found anywhere in the line item. This operates the same as the simple filter.
Spaces cannot be used in filters, because spaces are delimiters in searches
Does not contain
Matches if the term is not found anywhere in the line item. You can filter "unspecified", "none", "keyword unavailable" and other special values from reports using "does not contain".
Does not contain: none
For a more exact filter, you can use an Advanced (Special Characters) filter:
- Advanced (Special Character): -^none$
- Advanced (Special Character): -"keyword unavailable"
For example, the following line item is filtered by the "Does not contain" criteria, but is not filtered by the "Advanced (Special Character)" criteria:
help:Rename the None classification key
Contains One Of
Matches if any terms, separated by spaces, are found in the line item. The following filter shows all pages that contain "mens" or "sale":
Contains One Of: mens sale
Matches the following pages:
Womens Mens Mens:Desk & TravelJewelry & Accessories:Accessories:Hats:Mens Sale & Values
Equals
Matches if the entire line item, including spaces and other characters, match the specified phrase.
Equals: mens:desk & travel
Mens:Desk & Travel
Starts With
Matches if the line item, including spaces and other characters, starts with the specified phrase.
Starts With: mens
Matches the following pages:
Mens Mens:Desk & Travel Mens:Apparel Mens Perfume Spray Mens Hemp/Bamboo Flip Flops
Ends With
Matches if the line item, including spaces and other characters, ends with the specified phrase.
Ends With: jean
Matches the following pages:
Bell Bottom Jean Velvet Dream Skinny Leg Jean Dark Slimmer Jean Bling Belt High Waist Jean Ocean Blue Jean
Advanced (Special Character)
Advanced let you perform wildcard and other complex searches.
Create report-specific filters
Steps that describe how to create filters for reports.
Certain reports contain a filter that is specific to that report. For example, a Purchase Conversion Funnel Report lets you filter by web pages. A Geosegmentation Report lets you filter by geographical region. Additional reports have other filters specific to those reports.
When you access these filters, you can see report metrics for the items specified in the list.
To create report-specific filters
- Generate a report, such as a Purchase Report ( Site Metrics > Purchases > Purchase Conversion Funnel ).
- In the report header, click the Filter link.
- On the Filter Selector page, click Apply a Filter , then select a filter type.
- To search for an item, type a character string in the Search field.
- Click OK .
Add a correlation filter
Steps that describe how to add a correlation filter.
Certain reports let you add custom correlation filters. For example, if you are viewing the Pages Report for a report suite that has Site Sections correlated with a Women's page, you can create a filter rule that generates a report showing the most popular pages when Site Sections = Women.
You can filter the data shown in a correlation report using any available correlation. The example here shows how you add a search engine correlation filter.
To add a correlation filter
- Run a report that supports correlations. (See Running a Breakdown Report .)
- In the report header, click the Correlation Filter link.
- Under Filter Rule Creator, select a category to correlate with an item.
- Click OK. | https://docs.adobe.com/content/help/en/analytics/analyze/reports-analytics/customize-reports/t-reports-filter-options.html | 2020-01-18T04:20:16 | CC-MAIN-2020-05 | 1579250591763.20 | [array(['/content/dam/help/analytics.en/help/analyze/reports-analytics/reports-customize/assets/filter.png',
None], dtype=object)
array(['/content/dam/help/analytics.en/help/analyze/reports-analytics/reports-customize/assets/advanced_filter.png',
None], dtype=object) ] | docs.adobe.com |
Quick Start Guide 101¶
Introduction to Streaming Integration¶
The Streaming Integrator is one of the integrators in WSO2 Enterprise Integrator. It reads streaming data from files, cloud-based applications, streaming applications, and databases, and processes them. It also allows downstream applications to access streaming data by publishing information in a streaming manner. It can analyze streaming data, identify trends and patterns, and trigger integration flows.
The purpose of this guide if for you to understand the basic functions of the Streaming Integrator in 30 minutes.
To learn how to use the key functions of the Streaming Integrator, consider a laboratory that is observing the temperature of a range of rooms in a building via a sensor and needs to use the temperature readings as the input to derive other information.
Before you begin:
- Install Oracle Java SE Development Kit (JDK) version 1.8.
- Set the Java home environment variable.
- Download and install the following components of the Streaming Integrator:
- Streaming Integrator Tooling
- Streaming Integrator runtime
Creating your first Siddhi application¶
Create a basic Siddhi application for a simple use case., and a description via the
@App:descriptionannotation.
@App:name("TemperatureApp") @App:description("This application captures the room temperature and analyzes it, and presents the results as logs in the output console.")
The details to be captures include the room ID, device ID, and the temperature. To specify this, define an input stream with attributes to capture each of these details.
define stream TempStream(roomNo string, deviceNo long, temp double)
The technicians need to know the average temperature with each new temperature reading. To publish this information, define an output stream including these details as attributes in the schema.
define stream AverageTempStream(roomNo string, deviceNo long, avgTemp double)
The average temperature needs to be logged. Therefore, connect a sink of the
logtype to the output stream as shown below.
@sink(type = 'log', @map(type = 'passThrough')) define stream AverageTempStream (roomNo string, deviceID long, avgTemp double);
passThroughis specified as the mapping type because in this scenario, the attribute names are received as they are defined in the stream and do not need to be mapped.
To get the input events, calculate the average temperature and direct the results to the output stream, add a query below the stream definitions as follows:
To name the query, add the
@infoannotation and enter
CalculateAverageTemperatureas the query name.
@info(name = 'CalculateAvgTemp')
To indicate that the input is taken from the
TempStreamstream, add the
fromclause as follows:
from TempStream
Specify how the values for the output stream attributes are derived by adding a
selectclause as follows.
select roomNo, deviceNo, avg(temp)
To insert the results into the output stream, add the
insert intoclause as follows.
insert into AverageTempStream;
The completed Siddhi application is as follows:
@App:name('TemperatureApp') @App:description('This application captures the room temperature and analyzes it, and presents the results as logs in the output console.') define stream TempStream (roomNo string, deviceID long, temp double); @sink(type = 'log', @map(type = 'passThrough')) define stream AverageTempStream (roomNo string, deviceID long, avgTemp double); @info(name = 'CalculateAvgTemp') from TempStream select roomNo, deviceID, avg(temp) as avgTemp insert into AverageTempStream;
Testing your Siddhi application¶
The application you created needs to be tested before he uses it to process the actual data received. You can test it in the following methods:
Simulating events¶
To simulate events for the Siddhi application, you can use the event simulator available with in the Streaming Integration Tooling as explained in the procedure below.
In the Streaming Integrator Tooling, click the following icon for event simulation on the side panel.
The Simulation panel opens as shown below.
In the Single Simulation tab of the simulation panel, select TemperatureApp from the list for the Siddhi App Name field.
You need to send events to the input stream. Therefore, select TempStream" in the **Stream Name field. As a result, the attribute list appears in the simulation panel.
Then enter values for the attributes as follows:
Click Start and Send.
The output is logged in the console as follows:
Debugging¶
To debug your Siddhi application, you need to mark debug points, and then simulate events as you did in the previous section. The complete procedure is as follows:
- Open the
TemperatureAppSiddhi application.
To run the Siddhi application in the debug mode, click Run => Debug, or click the following icon for debugging.
As a result, the Debug console opens in a new tab below the Siddhi application as follows.
Apply debug points in the lines with the
fromand
insert intoclauses. To mark a debug point, you need to click on the left of the required line number so that it is marked with a dot as shown in the image below.
Info
You can only mark lines with from or insert into clauses as debug points.
- Now simulate a single event the same way you simulated it in the previous section, with the following values.
Click Send to send the event.
When a debug point is hit, the line marked as a debug point is highlighted as shown below. The status of the event and the query is displayed in the debug console below the Siddhi application.
Deploying Siddhi applications¶
After creating and testing the
TemperatureApp Siddhi application, you need to deploy it in the Streaming Integrator server, export it as a Docker image, or deploy in Kubernetes.
Deploying in Streaming Integrator server¶ the appropriate command based on your operating system:
- For Windows:
server.bat --run TemperatureApp.siddhi Siddhi application and the server you added as shown below.
Click Deploy.
As a result, the
TemperatureAppSiddhi application is saved in the
<SI_HOME>/deployment/siddhi-filesdirectory, and the following is message displayed in the dialog box.
Deploying in Docker¶
To export the
Temperature Temperature.
Extending the Streaming Integrator¶
The Streaming Integrator is by default shipped with most of the available Siddhi extensions by default. If a Siddhi extension you require is not shipped by default, you can download and install it.
In this scenario, let's assume that the laboratories require the siddhi-execution-extrema extension to carry out more advanced calculations for different types of time windows. To download and install it, follow the procedure below:
Open the Siddhi Extensions page. The available Siddhi extensions are displayed as follows.
Click on the V4.1.1 for this scenario. As a result, the following page opens.
To download the extension, click Download Extension. Then enter your email address in the dialog box that appears, and click Submit. As a result, a JAR fil is downloaded to a location in your machine (the location depends on your browser settings).
To install the siddhi-execution-extrema extension in your Streaming Integrator, place the JAR file you downloaded in the
<SI_HOME>/lib directory.
Further references¶
- For a quicker demonstration of the Streaming Integrator, see Getting Started with the Streaming Integrator in Five Minutes.
- For a quick guide on how the Streaming Integrator works with the Micro Integrator to trigger integration flows, see [Getting SI Running with MI in 5 Minutes].
- To get the Streaming Integrator running with Docker in five minutes, see Getting SI Running with Docker in 5 Minutes.
- To get the Streaming Integrator running in a Kubernetes cluster in five minutes, see Getting SI Running with Kubernetes in 5 Minutes. | https://ei.docs.wso2.com/en/latest/streaming-integrator/quick-start-guide/quick-start-guide-101/ | 2020-01-18T04:15:10 | CC-MAIN-2020-05 | 1579250591763.20 | [array(['../../images/quick-start-guide-101/Welcome-Page.png',
'Streaming Integrator Tooling Welcome Page'], dtype=object)
array(['../../images/quick-start-guide-101/Simulation-Panel.png',
'Simulation Panel'], dtype=object)
array(['../../images/quick-start-guide-101/output-console.png',
'Console Log'], dtype=object)
array(['../../images/quick-start-guide-101/debug-points.png',
'Debug Points'], dtype=object)
array(['../../images/quick-start-guide-101/debug-points-and-console.png',
'Debugging'], dtype=object) ] | ei.docs.wso2.com |
Documentation
Accepting payments with Paymentwall is simple. Regardless of whether you know how to write code or no - Paymentwall has solutions for you.
If you have a website using a popular engine, such as Shopify or Magento, no development is needed - simply follow the instructions in the list of Modules and Platforms.
If you have would like to build a custom integration - integrate one of our Payment APIs.
If you would like to get paid by simply invoicing your customers - use Paymentwall Invoicing.
Guidelines
APIs.
Direct API
Custom integration that allows merchants build and host their own payment forms and connect them to selected Paymentwall payment methods.
For highly custom integrations.
Supports client-side tokenization, iframe fields and server-to-server integrations.
Invoicing
Get paid for your services and products by simply creating an invoice, listing the items and emailing the invoices to your customers with a Pay Button. Automatically supports all Paymentwall payment methods to instantly accept payments. Invoices can be generated via your Merchant Area or automatically using the Invoicing API.
Widget API
[Deprecated] This API has been deprecated in favor of Terminal3 E-Commerce Shops. Terminal3 allows merchants create their E-Commerce Shops, manage inventory and virtual currency and connect to Paymentwall.
Paymentwall widget where users can select one of the multiple price points to pay.
Comes with Paymentwall product inventory system and Virtual Currency management system.
SDKs
Mobile SDK
Unified mobile payment solution for payment methods provided by Paymentwall.
SmartTV SDK
Accept payments on Smart TVs. | https://docs.paymentwall.com/ | 2020-01-18T04:03:00 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.paymentwall.com |
Setting up standards and workflows
When you are setting up your Cloud portal for the first time, you can customize some elements to reflect the work model for your organization. Configuring these settings will make it easier for your staff members to use Cloud, because it will be more familiar to them.
Customize terminology
You can change the labels of certain fields in Cloud to make them more appropriate to your organization's practice or to your region. For example, you could change the name of the address field for Province to State. You could also change the name for Client Entity to a more specific term for your organization's client base.
To customize Cloud terminology:
Ensure you have the Settings Admin role or equivalent privileges.
From the Cloud menu, select Settings.
Select Customization | Terminology.
Enter the appropriate options and select Save.
Create workflows
You can help organize your organization's information on Cloud by creating workflows. For example, if your organization has a standard process for completing engagements, you can create a workflow for Working Papers files with stages for each step of the engagement process. For client workspaces, you could create a workflow with stages that indicate the status of the client - for example, whether they are a Prospect, a New client, or a Long-term client.
If you create one or more workflows during initial setup, you or other staff will be able to use them later to help organize your Cloud content..
Create tags
Tags are keywords that you can assign to people, entities, or files in Cloud. Tags can be used as search terms, making it easy for users to locate relevant materials and personnel. You can create both system-wide tags that can be assigned to all categories and specialized tags that users can only assign to staff, contacts, files, or entities.
For example, if your organization has business clients that operate in different industries - agriculture, finance, manufacturing or oil and gas - you could create tags for these industries. Then your organization staff could tag client workspaces or files as you add them to Cloud, and partners and managers would be able to quickly view all engagement information for specific client industries..
Add custom properties to workspaces
You can add custom entity properties in Cloud so that staff members can add or view more detailed information when they are looking at an entity's details. This enables your organization to record and track information specifically relevant to your practice all in one place.
For example if your organization wanted to record banking information for its clients, you could create a custom property group named Banking Information. You could add text fields for this property group for bank name, branch number, and branch address. If you then assigned this property group to your client entities, you would be able to enter all of those details when creating or modifying a client entity.
To create custom entity properties:, choose the Type of input required from the user for that property, such as a checkbox, a URL, or an amount.
For any field in which user input is required for the entity to be saved, select the Required field checkbox.
Select Save to create the custom entity properties. | https://docs.caseware.com/2019/webapps/30/cn/Explore/Getting-Started/Setting-up-standards-and-workflows.htm | 2020-01-18T04:07:35 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.caseware.com |
LDAP Access Control¶
The Lightweight Directory Access Protocol is commonly used to implement access control policies in organizations. Various methods are available, from Mandatory Access Control (MAC) policy which can define directly what entities have access to which services, through the Role-Based Access Control (RBAC) scheme which can be used to grant different levels of access to different entities.
This document describes various mechanisms which are available in the DebOps LDAP environment supported by the debops.ldap and debops.slapd Ansible roles. These mechanisms can be used in different services to implement access control to a varying degree, based on the application.
Note
Not all rules defined here are implemented in various DebOps roles at the moment.
Controlling access to LDAP objects in the directory¶
The debops.slapd role implements a default LDAP Access Control List which can be used to define which LDAP objects have access to data and at what level. By default, read access is granted to almost entire LDAP directory by authorized users; role-based and group-based access control is used to limit read and/or write access to specific LDAP attributes.
Account-based access control¶
Applications can use the LDAP bind operation to check if a given username and
password combination is valid. To accomplish that, applications can utilize
either a Distinguished Name provided by the user, match the username to
a personal LDAP entry with the
uid attribute stored in
ou=People,dc=example,dc=org directory subtree, or use a search query to
find the LDAP entry of a person or a service account in the LDAP directory
using their username (in the
uid attribute) or the provided e-mail address
(in the
This access method is good for services and applications which should be available to all legitimate users in an organization. Anonymous and external users will not be granted access without authenticating first.
Various applications also require their own account objects in the LDAP
directory to access its contents. These accounts are usually stored under the
host objects in the
ou=Hosts,dc=example,dc=org LDAP subtree, or if the
applications are external to the organization or are implemented as a cluster,
under the
ou=Services,dc=example,dc=org LDAP subtree. Application accounts
are subject to the LDAP Access Control List rules defined by the OpenLDAP
service and may not have access to all of the LDAP entries and/or attributes.
This authorization type is global - any LDAP entry with
userPassword
attribute can be used to authorize access to a resource.
Examples of LDAP search queries¶
Directly check existence of a LDAP entry:
ldapsearch -Z -b "uid=$value,ou=People,dc=example,dc=org" uid
Search for personal Distinguished Name based on username or e-mail address. Esure that only one LDAP entry is returned, more entries result in an error code from LDAP which needs to be handled by the application:
ldapsearch -Z -z 1 -b ou=People,dc=example,dc=org \ "(& (objectClass=inetOrgPerson) (| (uid=$value) (mail=$value) ) )" dn
Search for service account Distinguished Name based on username and FQDN of the host. Only one LDAP entry is allowed, more entries should result in an error:
ldapsearch -Z -z 1 -b dc=example,dc=org \ "(& (objectClass=account) (uid=$username) (host=$fqdn) )" dn
Access control based on group membership¶
The group LDAP objects, defined under the
ou=Groups,dc=example,dc=org LDAP
subtree, can be used to control access to resources. These objects usually use
the
groupOfNames object class with the
member attribute which defines
the group members. Optionally, these objects can define a corresponding POSIX
group using the
posixGroup and
posixGroupId object classes which can
then be used to define access control in an UNIX environment.
The
groupOfNames object class enforces at least one group member at all
times. Groups can also have defined owners or managers using the
owner
attribute; in the default LDAP Access Control List configuration group owners have
the ability to add or remove group members from the groups they own.
Applications can check the
member attribute of one or more groups to
determine if a given user or application account belongs to a group and with
that information grant or revoke access to resources. Alternatively, the
memberOf attribute of the user or account LDAP object can be used to
determine group membership and control resource access based on that
information.
This authorization type can be either global, or scoped to a particular
application with group entries located under the
ou=Groups subtree under
the application LDAP entry.
Examples of LDAP search queries¶
Get the Distinguished Names of LDAP entries which are members of the UNIX Administrators group:
ldapsearch -Z -b "cn=UNIX Administrators,ou=Groups,dc=example,dc=org" member
Get the list of group Distinguished Names a given user belongs to:
ldapsearch -Z -b "uid=$username,ou=People,dc=example,dc=org" memberOf
Find all members of the UNIX Administrators group:
ldapsearch -Z "(memberOf=cn=UNIX Administrators,ou=Groups,dc=example,dc=org)" dn
Role-based access control¶
The role LDAP objects, defined under the
ou=Roles,dc=example,dc=org LDAP
subtree, are similar to the group objects described above. They are usually
defined using the
organizationalRole object class, and use the
roleOccupant attribute to determine the people and accounts which are
granted a given role.
The
organizationalRole object class does not require any particular members
to be present, unlike the
groupOfNames object class. This is a good choice
to create various roles which don't have existing role occupants - different
roles can then be granted to different people or accounts at a later date.
This authorization type can be either global, or scoped to a particular
application with role entries located under the
ou=Roles subtree under the
application LDAP entry.
Examples of LDAP search queries¶
Get the Distinguished Names of LDAP entries which are included in the LDAP Administrator role:
ldapsearch -Z -b "cn=LDAP Administrator,ou=Roles,dc=example,dc=org" roleOccupant
Attribute-based access control¶
LDAP entries can include the
authorizedServiceObject object class which
provides the
authorizedService attribute. This attribute is a multi-valued
string which can be used to define the access permissions to a particular
resource. Only "equal" match for this attribute is defined in the LDAP schema,
which limits its capabilities to a degree - searching for partial string
matches is not supported.
This authorization type is scoped to an LDAP entry, which results in less LDAP queries needed to find out particular access permissions. It can be used to implement Attribute-Based Access Control (ABAC) authorization scheme.
In DebOps, applications should standardize on a structured format of the
attribute values, either
all,
<service>,
<system>, or
<system>:<type>.
Global permissions¶
The
all value grants access to all services and systems and if present,
should be the only value of the
authorizedService attribute. Any additional
values present are nullified by it, therefore if more fine-grained access
control is desired, the
all value should be removed from the LDAP entry
entirely. Client applications are free to implement the meaning of the
all
value as they choose, however usually the usage in the LDAP search filter will
most likely be either
all or some specific set of values.
Service permissions¶
The
<service> value usually means a specific network service daemon, for
example
sshd,
slapd,
vsftpd and so on. Since web applications are
accessed via a web server, they should use their own separate service or system
names to allow more fine-grained access control to each web application. The
value grants blanket access to a particular service without fine-grained
control over capabilities of the user.
System permissions¶
The
<system> value is an agnostic name for a set of various services that
work together as a whole to accomplish a task. For example,
shell string would define access control
parameter for the SSH service, sudo access, NSS database service,
etc.
Similarly to the
<service> value, this value grants blanket access to
a particular system as a whole. It means that the system cannot define "global"
access and "partial" access at the same time (see below). It might be hard to
convert a "global" access permissions to "partial" access permissions,
therefore the choice of how to define the access should be selected early on
during development.
Partial system permissions¶
The
<system>:<type> value is a definition of a system access permissions
which are split into "parts" of the whole, each part defined by the permission
<type>. The partial permissions shouldn't overlap (two or more permissions
controlling the same resource access) or be additive (a permission type
implying presence of another permission type). There shouldn't be
a
<system>:all permission as well, since it would nullify partial
permissions for a given system.
Each system can define its own set of permission types, however the type names
should be as precise and descriptive as possible. A good example is the "mail"
system, with the
mail:receive permission allowing incoming messages to be
received by the e-mail account, the
mail:send permission allowing outgoing
messages to be sent by the e-mail account, and the
mail:access permission
granting read-write access to the e-mail account by its user.
It's easy to create additional permission types once the system is implemented, therefore in larger systems this should be a preferred method of access control. The partial permissions shouldn't be mixed with the "global" permission for a given system because that would nullify the partial permissions.
Examples of LDAP search queries¶
Get list of access control values of a given user account:
ldapsearch -Z -b 'uid=$username,ou=People,dc=example,dc=org' authorizedService
Find all personal accounts which have shell access or global access:
ldapsearch -Z -b "ou=People,dc=example,dc=org" \ "(& (objectClass=inetOrgPerson) (| (authorizedService=all) (authorizedService=shell) ) )" dn
Find all LDAP entries which can send e-mail messages or have global access:
ldapsearch -Z -b "dc=example,dc=org" \ "(| (authorizedService=all) (authorizedService=mail:send) )" dn
Host-based access control¶
The
hostObject LDAP object class gives LDAP entries access to the
host
attribute which is used to store hostnames and Fully Qualified Domain Names of
the LDAP entries. The attribute type supports substring (wildcard) matches and
can be used to create host-based access rules.
Various services and systems can check for the presence of the
host
attribute with specific value patterns. The preferred value format in this case
should be:
<service|system>:<host>, where the
<host> can be a FQDN
hostname, or a woldcard domain (
*.example.org), or just a hostname, or the
value
all for all hosts in the cluster.
Examples of LDAP search queries¶
Get list of POSIX accounts which should be present on a given host and have access to shell services:
ldapsearch -Z -b "dc=example,dc=org" \ "(& (objectClass=posixAccount) (| (host=shell:host.example.org) (host=shell:all) ) )"
Get list of POSIX accounts which should be present on any host in a specific domain. This uses the substring match to get all entries with a specific domain:
ldapsearch -Z -b "dc=example,dc=org" \ "(& (objectClass=posixAccount) (| (host=shell:*.example.org) (host=shell:all) ) )"
Get list of POSIX accounts which should be present on all hosts in a specific
domain. This query looks for all entries with a wildcard (
*.example.org)
domain defined as the value:
ldapsearch -Z -b "dc=example,dc=org" \ "(& (objectClass=posixAccount) (| (host=shell:\2a.example.org) (host=shell:all) ) )" | https://docs.debops.org/en/stable-1.0/ansible/roles/debops.ldap/ldap-access.html | 2020-01-18T03:23:53 | CC-MAIN-2020-05 | 1579250591763.20 | [] | docs.debops.org |
Open Source Gitify Updating Gitify
If you followed the procedure to install Gitify from Git, updating Gitify to a newer version is as simple as pulling in the recent changes.
Just pull in the latest changes, rerun composer in case anything changed with regards to the dependencies, and off you go.
cd /path/to/Gitify/ git pull composer install
Dealing with breaking changes
Especially in early releases, it's not uncommon for breaking changes to occur. These changes might be related to how the files get written - or read - and can break your workflow. Any potentially breaking changes are tagged with
[BC] in the changelog.
The easiest way to deal with changes like this is to make sure that, before updating, there are no differences between your data files and the database. So extract or build before updating.
After the update, extract again so files are written anew and are compatible with whatever changed in the new release. Otherwise you may not be able to build later.
Remember to update all your Gitify installs (local, staging and production) as close to each other as possible to be safe from changes between versions. It's also possible to install Gitify into your project files (as git submodule - tho we might add it to composer later), so you have a copy of Gitify local to the project, but then you lose the ability to run it anywhere. | http://docs.modmore.com/en/Open_Source/Gitify/Updating_Gitify.html | 2017-11-17T19:25:33 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.modmore.com |
Installation Guide
Full online documentation for the WP EasyCart eCommerce plugin!
Full online documentation for the WP EasyCart eCommerce plugin!
It is important to always upgrade your EasyCart plugin when updates become available. It is important to update all plugins, along with WordPress as they become available due to security patches, bug fixes, and often compatibility fixes.
**Note: WordPress updates is a crude process, which simply removes the entire plugin folder and replaces with the current new one off WordPress.org.
To insure you can upgrade the core plugin, we install all the core files into the plugin folder /wp-easycart. We put all data images, downloads, uploads, and custom design files into the alternative plugin directory /wp-easycart-data. WordPress and EasyCart should never remove this /wp-easycart-data folder, making it safe to upgrade the core plugin, while retaining any changes you make into the -data folder.
To upgrade, simply go to the WordPress admin -> Plugins section and look for the WP EasyCart plugin and update as necessary. This will update the core files, but touches nothing in the /wp-easycart-data folder.
Upgrading the professional admin plugin OR extensions is the same process. Just simply look for an update, or click the ‘Check for Update’ button on each plugin and then upgrade accordingly. No data is touched in your store, we simply update the plugin files. | http://docs.wpeasycart.com/wp-easycart-installation-guide/?section=reports-2 | 2017-11-17T19:28:58 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.wpeasycart.com |
Microsoft Security Bulletin MS15-071 - Important
Vulnerability in Netlogon Could Allow Elevation of Privilege (3068457)
Published: July 14, 2015
Version: 1.0
Executive Summary
This security update resolves a vulnerability in Microsoft Windows. The vulnerability could allow elevation of privilege if an attacker with access to a primary domain controller (PDC) on a target network runs a specially crafted application to establish a secure channel to the PDC as a replica domain controller.
This security update is rated Important for all supported editions of Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. For more information, see the AffectedSoftware section.
The update addresses the vulnerability by modifying how Netlogon handles establishing secure channels. For more information about the vulnerability, see the VulnerabilityInformation section.
For more information about this update, see Microsoft Knowledge Base Article 3068457..
Vulnerability Information
Elevation of Privilege Vulnerability in Netlogon - CVE-2015-2374
An elevation of privilege vulnerability exists in Netlogon that is caused when the service improperly establishes a secure communications channel to a primary domain controller (PDC). To successfully exploit this vulnerability, an attacker would first need to have access to a PDC on a target network. An attacker could then run a specially crafted application that could establish a secure channel to the PDC as a replica domain controller and may be able to disclose credentials. Servers configured as domain controllers are at risk from this vulnerability. The update addresses the vulnerability by modifying the way that Netlogon handles establishing secure channels., 2015): Bulletin published.
Page generated 2015-12-23 10:58-08:00. | https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2015/ms15-071 | 2017-11-17T20:38:44 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.microsoft.com |
Full.
as_rgb(r, g, b)¶
Returns an RGB image with
rin the red channel,
gin the green, and
bin the blue. The channels are contrast stretched.
If any of the channels is None, that channel is set to zero. The same can be achieved by passing
0as that channels value. In fact, passing a number as a channel value will set the whole channel to that value.
Examples
This shows a nice looking picture:
z1 = np.linspace(0, np.pi) X,Y = np.meshgrid(z1, z1) red = np.sin(X) green = np.cos(4*Y) blue = X*Y plt.imshow(mahotas.as_rgb(red, green, blue))
Notice that the scaling on the
bluechannel is so different from the other channels (from 0..2500 compared with 0..1), but
as_rgbstretches each channel independently.. This conversion may result in over/underflow when using small integer types or unsigned types (if the output is negative). Converting to a floating point representation avoids this issue:
c = convolve(f.astype(float), kernel).
croptobbox(img, border=0)¶
Returns a version of img cropped to the image’s bounding box
Notes
Note that the border is on the bounding box, not on the final image! This means that if the image has a positive pixel on its margin, it will still be on the margin.
This ensures that the result is always a sub-image of the input.
mahotas.
cwatershed(surface, markers, Bc=None, return_lines=False) W, WL = cwatershed(surface, markers, Bc=None, return_lines=True)¶
Seeded watershed in n-dimensions
This function computes the watershed transform on the input surface (which may actually be an n-dimensional volume).
This function requires initial seed points. A traditional way of initializing watershed is to use regional minima:
minima = mh.regmin(f) markers,nr_markers = mh.label(minima) W = cwatershed(f, minima)
mahotas.
daubechies(f, code, inline=False)¶
Daubechies wavelet transform
This function works best if the image sizes are powers of 2!
mahotas.
dilate(A, Bc=None, out=None, output=None)¶
Morphological dilation.
The type of operation depends on the
dtypeof
A! If boolean, then the dilation is binary, else it is greyscale dilation. In the case of greyscale dilation, the smallest value in the domain of
Bcis interpreted as +Inf.
mahotas.
disk(radius, dim=2)¶
Return a binary disk structuring element of radius
radiusand dimension
dim).
References
For 2-D images, the following algorithm is used:
Felzenszwalb P, Huttenlocher D. Distance transforms of sampled functions. Cornell Computing and Information. 2004.
Available at:.
For n-D images (with n > 2), a slower hand-craft method is used.
mahotas.
dog(img, sigma1 = 2, thresh= None, just_filter = False)¶
Compute edges using the Difference of Gaussian (DoG) operator.
edges is a binary image of edges..
References
The following algorithm is used:
A Fast Algorithm for Computing the Euler Number of an Image and its VLSI Implementation, doi: 10.1109/ICVD.2000.812628
mahotas.
find(f, template)¶
Match template to image exactly
coordinates = find(f, template)
The output is in the same format as the
np.wherefunction.
mahotas.
fullhistogram(img)¶
Return a histogram with bins 0, 1, ..., ``img.max()``.
After calling this function, it will be true that
hist[i] == (img == i).sum(), for all
i.
Notes
Only handles unsigned integer arrays.if, when
Bcis overlaid on
input, centered at that position, the
1values line up with
1s, while the
0s line up with
0s (
2s correspond to don’t care).
Examples
print(hitmiss(np.array([ [0,0,0,0,0], [0,1,1,1,1], [0,0,1,1,1]]), np.array([ [0,0,0], [2,1,1], [2,1,1]]))) prints:: [[0 0 0 0 0] [0 0 1 1 0] [0 0 0 0 0]]
mahotas.
ihaar(f, preserve_energy=True, inline=False)¶
Reverse Haar transform
ihaar(haar(f))is more or less equal to
f(equal, except for possible rounding issues).
mahotas.
imread(filename, as_grey=False)¶
Read an image into a ndarray from a file.
This function depends on PIL (or Pillow) being installed.
mahotas.
imresize(img, nsize, order=3)¶
Resizes image
This function works in two ways: if
nsizeis a tuple or list of integers, then the result will be of this size; otherwise, this function behaves the same as
mh.interpolate.zoom
See also
zoom
- Similar function
scipy.misc.pilutil.imresize
- Similar function
mahotas.
imsave(filename, array)¶
Writes array into file filename
This function depends on PIL (or Pillow) being installed., minlength=None)¶
Labeled sum. sum will be an array of size
labeled.max() + 1, where
sum[i]is equal to
np.sum(array[labeled == i]).
mahotas.
locmax(f, Bc={3x3 cross}, out={np.empty(f.shape, bool)})¶
Local maxima
See also
regmax
- function.
locmin
- function Local minima
mahotas.
majority_filter(img, N=3, out={np.empty(img.shape, np.bool)})¶
Majority filter
filtered[y,x] is positive if the majority of pixels in the squared of size N centred on (y,x) are positive.
mahotas.
mean_filter(f, Bc, mode='ignore', cval=0.0, out=None)¶
Mean filter. The value at
mean[i,j]will be the mean of the values in the neighbourhood defined by
Bc.
See also
median_filter
- An alternative filtering method.
Notes
It only works for 2-D images.
otsu(img, ignore_zeros=False)¶
Calculate a threshold according to the Otsu method.
Example:
import mahotas as mh import mahotas.demos im = mahotas.demos.nuclear_image() # im is stored as RGB, let's convert to single 2D format: im = im.max(2) #Now, we compute Otsu: t = mh.otsu(im) # finally, we use the value to form a binary image: bin = (im > t)
See Wikipedia for details on methods:’s_method
mahotas.
overlay(gray, red=None, green=None, blue=None, if_gray_dtype_not_uint8='stretch')¶
Create an image which is greyscale, but with possible boolean overlays.
mahotas.
rank_filter(f, Bc, rank, mode='reflect', cval=0.0, out=None)¶
Rank filter. The value at
ranked[i,j]will be the
rankth largest in the neighbourhood defined by
Bc.
See also
median_filter
- A special case of rank_filter
mahotas.
rc(img, ignore_zeros=False)¶
Calculate a threshold according to the Riddler-Calvard method.
Example:
import mahotas as mh import mahotas.demos im = mahotas.demos.nuclear_image() # im is stored as RGB, let's convert to single 2D format: im = im.max(2) #Now, we compute a threshold: t = mh.rc(im) # finally, we use the value to form a binary image: bin = (im > t)
mahotas.
regmax(f, Bc={3x3 cross}, out={np.empty(f.shape, bool)})¶.
mahotas.
regmin(f, Bc={3x3 cross}, out={np.empty(f.shape, bool)})¶
Regional minima. See the documentation for
regmaxfor more details.). The method is simple linear stretching according to the formula:
p' = max * (p - img.min())/img.ptp() + min
Notes
If max > 255, then it truncates the values if dtype is not specified.
mahotas.
stretch_rgb(img, arg0=None, arg1=None, dtype=<type 'numpy.uint8'>)¶
Variation of stretch() function that works per-channel on an RGB image
mahotas.
template_match(f, template, mode='reflect', cval=0.0, out=None, output=None)¶
Match template to image
match = template_match(f, template, mode=’reflect’, cval=0., out={np.empty_like(f)})
The value at
match[i,j]will be the difference (in squared euclidean terms), between template and a same sized window on f centered on that point.
Note that the computation is performed using the same dtype as
f. Thus is may overflow if the template is large.
mahotas.
wavelet_center(f, border=0, dtype=float, cval=0.0)¶
fcis a centered version of
fwith a shape that is composed of powers of 2.
See also
wavelet_decenter
- function Reverse function
mahotas.
wavelet_decenter(w, oshape, border=0)¶
Undoes the effect of
wavelet_center
See also
wavelet_center
- function Forward function
mahotas.features.
ellipse_axes(bwimage)¶
Parameters of the ‘image ellipse’
semimajor,semiminor = ellipse_axes(bwimage)
Returns the parameters of the constant intensity ellipse with the same mass and second order moments as the original image.
References
Prokop, RJ, and Reeves, AP. 1992. CVGIP: Graphical Models and Image Processing 54(5):438-460
mahotas.features.
haralick(f, ignore_zeros=False, preserve_haralick_bug=False, compute_14th_feature=False, return_mean=False, return_mean_ptp=False, use_x_minus_y_variance=False, distance=1)¶
Compute Haralick texture features
Computes the Haralick texture features for the four 2-D directions or thirteen 3-D directions (depending on the dimensions of f).
ignore_zeroscan be used to have the function ignore any zero-valued pixels (as background). If there are no-nonzero neighbour pairs in all directions, an exception is raised. Note that this can happen even with some non-zero pixels, e.g.:
0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0
would trigger an error when
ignore_zeros=Trueas there are no horizontal non-zero pairs!
Notes
Haralick’s paper has a typo in one of the equations. This function implements the correct feature unless preserve_haralick_bug is True. The only reason why you’d want the buggy behaviour is if you want to match another implementation.
References
Cite the following reference for these features:
@article{Haralick1973, author = {Haralick, Robert M. and Dinstein, Its'hak and Shanmugam, K.}, journal = {Ieee Transactions On Systems Man And Cybernetics}, number = {6}, pages = {610--621}, publisher = {IEEE}, title = {Textural features for image classification}, url = {}, volume = {3}, year = {1973} }
mahotas.features.
lbp(image, radius, points, ignore_zeros=False)¶
Compute Linear Binary Patterns
The return value is a histogram of feature counts, where position
icorresponds to the number of pixels that had code
i. The codes are compressed so that impossible codes are not used. Therefore, this is the
i``th feature, not just the feature with binary code ``i.
References
- Gray Scale and Rotation Invariant Texture Classification with Local Binary Patterns
- Ojala, T. Pietikainen, M. Maenpaa, T. Lecture Notes in Computer Science (Springer) 2000, ISSU 1842, pages 404-420
mahotas.features.
pftas(img, T={mahotas.threshold.otsu(img)})¶
Compute parameter free Threshold Adjacency Statistics
TAS were presented by Hamilton et al. in “Fast automated cell phenotype image classification” ()
The current version is an adapted version which is free of parameters. The thresholding is done by using Otsu’s algorithm (or can be pre-computed and passed in by setting T), the margin around the mean of pixels to be included is the standard deviation. This was first published by Coelho et al. in “Structured Literature Image Finder: Extracting Information from Text and Images in Biomedical Literature” ()
Also returns a version computed on the negative of the binarisation defined by Hamilton et al.
Use tas() to get the original version of the features.
mahotas.features.
tas(img)¶
Compute Threshold Adjacency Statistics
TAS were presented by Hamilton et al. in “Fast automated cell phenotype image classification” ()
Also returns a version computed on the negative of the binarisation defined by Hamilton et al.
See also pftas() for a variation without any hardcoded parameters.
mahotas.features.
zernike_moments(im, radius, degree=8, cm={center_of_mass(im)})¶
Zernike moments through
degree. These are computed on a circle of radius
radiuscentered around
cm(or the center of mass of the image, if the
cmargument is not used).
Returns a vector of absolute Zernike moments through
degreefor the image
im.
References
Teague, MR. (1980). Image Analysis via the General Theory of Moments. J. Opt. Soc. Am. 70(8):920-930.
mahotas.colors.
rgb2graygreylab(rgb, dtype={float})¶
Convert sRGB to L*a*b* coordinates
mahotas.colors.
rgb2xyz(rgb, dtype={float})¶
Convert RGB to XYZ coordinates
The input is interpreted as sRGB. See Wikipedia for more details:
mahotas.colors.
xyz2lab(xyz, dtype={float})¶
Convert CIE XYZ to L*a*b* coordinates
mahotas.colors.
xyz2rgb(xyz, dtype={float})¶
Convert XYZ to sRGB coordinates
The output should be interpreted as sRGB. See Wikipedia for more details: | http://mahotas.readthedocs.io/en/latest/api.html | 2017-11-17T19:17:25 | CC-MAIN-2017-47 | 1510934803906.12 | [] | mahotas.readthedocs.io |
Security Bulletin
Microsoft Security Bulletin MS01-042 - Critical
Windows Media Player .NSC Processor Contains Unchecked Buffer
Published: July 26, 2001 | Updated: June 13, 2003
Version: 1.1
Originally posted: July 26, 2001
Updated: June 13, 2003
Summary
Who should read this bulletin:
Customers using Microsoft® Windows Media™ Player 6.4, 7, or 7.1.
Impact of vulnerability:
Run code of attacker's choice.
Recommendation:
- Windows Media Player 6.4 customers should either install the patch or upgrade to Windows Media Player 7.1 and then install the patch.
- Windows Media Player 7.0 customers should upgrade to Windows Media Player 7.1 and install the patch.
- Windows Media Player 7.1 customers should apply the patch.
Affected Software:
- Microsoft Windows Media Player 6.4
- Microsoft Windows Media Player 7
- Microsoft Windows Media Player 7.1
General Information
Technical details
Technical description:
Windows Media Player provides support for audio and video streaming. Streaming media channels can be configured by using Windows Media Station (.NSC) files. An unchecked buffer exists in the functionality used to process Windows Media Station files. This unchecked buffer could potentially allow an attacker to run code of his choice on the machine of another user. The attacker could either send a specially malformed file to another user and entice her to run or preview it, or he could host such a file on a web site and cause it to launch automatically whenever a user visited the site. The code could take any action on the machine that the legitimate user himself could take.
Mitigating factors:
- Customers who have applied the Outlook E-mail Security Update (OESU) for Outlook 2000 or are running Outlook XP, which has the OESU functionality built-in, are automatically protected against HTML e-mail based attempts to exploit this vulnerability.
- For others not in the above categories, the attacker would have to entice the potential victim to visit a web site he controlled, or to open an HTML e-mail he had sent.
-.
Vulnerability identifier: CAN-2001-0541
Tested Versions:
Microsoft tested Windows Media Player 6.4, Windows Media Player 7 and Windows Media Player 7.1 to assess whether they are affected by this vulnerability. Previous versions are no longer supported and may or may not be affected by this vulnerability.
Frequently asked questions
What's the scope of the vulnerability?
This is a buffer overrun vulnerability. It could enable an attacker to run code of his choice on the machine of another user is he was able to convince the user to visit a web site he controlled or to open a specially crafted HTML e-mail. The program would be capable of taking any action on the user's machine that the user herself could take, including adding, creating or deleting files, communicating with web sites or potentially even reformatting the hard drive.
What causes the vulnerability?
The vulnerability results because there is an unchecked buffer in a section of Windows Media Player that handles .NSC files. By including a particular type of malformed entry in a .NSC file, an attacker could cause code of his choice to execute when a user played the file.
What's a .NSC file?
Windows Media Station files (.NSC) were first introduced in NetShow 2.0 as NetShow Channels. In Windows Media Player, .NSC files are called Windows Media Station Files. .NSC files are essentially playlists that contain information to allow Windows Media Player to connect to and play streaming media. Windows Media Player uses Windows Media Station (.nsc) files to get the information it needs to receive multicast content over the Internet. These files can contain information such as stream location and rollover URL, as well as descriptive information about the station. Where standard streaming multimedia sends a single media stream to a single recipient, multicasting allows a single media stream to be received by more than one person, much like a Television or Radiobroadcast. .NSC files contain the information necessary to allow multimedia multicast streams to be processed correctly by Windows Media.
What's wrong with how Windows Media Player handles .NSC files?
One of the buffers that read data from .NSC files doesn't perform proper input validation. As a result, it would be possible for an attacker to craft a specially formed .NSC file that can overrun the buffer and modify the executable Windows Media Player code that is running.
What could this enable an attacker to do?
When it runs, Windows Media Player runs in the security context of the currently-logged-on user. If an attacker were to successfully exploit this vulnerability, the malicious code then could do anything on the machine that the current user could do. This means that the actions an attacker could take will depend a great deal on what privileges the user has on the system when they run the attacker's code.
- an attacker maliciously exploit this vulnerability?
There are two likely scenarios that that an attacker might try to exploit this vulnerability.
- He could send an HTML e-mail that would launch the malicious .NSC file when opened. An attacker could target specific individuals with this approach.
- He could host an .NSC file on a web site and cause it to be launched automatically whenever someone visited the site. This approach would require that the attacker wait for the potential victims to come to his site.
I'm using the Outlook E-mail Security Update, does this help protect me?
Customers who have deployed the Outlook E-Mail Security Update or who are using Outlook 2002 are protected from HTML e-mail-based attempts to exploit this vulnerability by the default security settings. The OESU and Outlook 2002 both set the Security Zone for HTML e-mail to the Restricted Sites Zone which automatically disables ActiveX controls in HTML e-mail. This means that an HTML e-mail with a .NSC file embedded by a malicious user would not run in Outlook, rendering the attack harmless.
If the malicious user placed the .NSC file on a web site, would it run automatically in the browser?
When using Internet Explorer (IE), the default security settings for the Internet Zone make it possible for a web site to automatically open .NSC files when a user visits the web site. This is because ActiveX controls are enabled by default in the Internet Zone in IE. However, users can use change the settings in the Internet Zone to disable ActiveX controls. If users make this change, then .NSC files will not launch automatically.
You said previously that the attacker would need to overrun the buffer with carefully-chosen data in order to run code of his choice. What would happen if she just overran it with random data?
If the buffer were overrun with random data, it would cause Windows Media Player to fail. This wouldn't pose a security problem, and the user could simply restart it and resume normal operation.
You said previously that the attacker would need to know the specific operations system that the user was running. Why is that?
To mount an effective attack exploiting this vulnerability, an attacker would need to know the potential victim's specific operating system so that he could tailor the malformed file appropriately for his platform. If the file is not fashioned appropriately for the user's platform, the attack would fail, causing Windows Media Player to crash, but not execute the attacker's code.
What does the patch do?
The patch eliminates the vulnerability by implementing proper input validation for .NSC files.
Patch availability
Download locations for this patch
Windows Media Player 6.4:
The vulnerability can be eliminated by installing the patch or upgrading to Windows Media Player 7.1 and then installing the patch.
Windows Media Player 7.0:
The vulnerability can be eliminated by upgrading to Windows Media Player 7.1 and then installing the patch.
Windows Media Player 7.1:
The vulnerability can be eliminated by installing the patch.
Additional information about this patch
Installation platforms:
The patch can be installed on systems running Windows Media Player 6.4, and Windows Media Player 7.1 respectively. Customers running Windows Media Player 7 should upgrade to version 7.1 and then install the patch.
Inclusion in future service packs:
The fix for this issue will be included in the forthcoming Windows 2000 Service Pack 3.
Reboot needed: Yes
Superseded patches: None.
Verifying patch installation:
- To verify that the patch has been installed on the machine, confirm that the following registry key has been created:
HKLM\SOFTWARE\Microsoft\Updates\Windows Media Player\WMSU55362304404 26, 2001: Bulletin Created.
- V1.1 (June 13, 2003): Updated download links to Windows Update.
Built at 2014-04-18T13:49:36Z-07:00 | https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2001/ms01-042 | 2017-11-17T19:35:31 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.microsoft.com |
There are several ways that you can export your model from ZBrush for use in another program. How you do it will depend on your particular requirements.
You can export your model directly from ZBrush using the Export button in the Tool palette.
Alternatively you can use GoZ, or the 3D Printing Exporter.
Whichever method you use, you may want to export texture maps along with your model, so that all the color and detail that you created in ZBrush is taken along too. Explore these pages to learn how:
For a full list of file formats that ZBrush can import and export see the Import & Export page. | http://docs.pixologic.com/user-guide/3d-modeling/exporting-your-model/ | 2017-11-17T19:11:19 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.pixologic.com |
When you create a protection group for array-based replication, you specify array information and Site Recovery Manager computes the set of virtual machines to a datastore group. Datastore groups contain all the files of the protected virtual machines.
You add virtual machines to an array-based protection group by placing them in a datastore that belongs to a datastore group that Site Recovery Manager associates with a protection group. Site Recovery Manager recomputes the datastore groups when it detects a change in a protected virtual machine. For example, if you add a hard disk that is on another LUN to a protected virtual machine, Site Recovery Manager adds the LUN to the datastore group of that protection group. You must reconfigure the protection to protect the new LUN. Site Recovery Manager computes consistency groups when you configure an array pair or when you refresh the list of devices.
You can also add virtual machines to the protection group by using Storage vMotion to move their files to one of the datastores in the datastore group. You can remove a virtual machine from an array-based protection group by moving the virtual machine's files to another datast. | https://docs.vmware.com/en/Site-Recovery-Manager/6.0/com.vmware.srm.admin.doc/GUID-9652C847-C351-47C0-BD00-3CD74E0A5EBC.html | 2017-11-17T19:45:35 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.vmware.com |
You can update linked-clone virtual machines by creating a new base image on the parent virtual machine and using the recompose feature to distribute the updated image to the linked clones. Prepare a Parent Virtual Machine to Recompose Linked ClonesBefore you recompose a linked-clone desktop pool, you must update the parent virtual machine that you used as a base image for the linked clones. Recompose Linked-Clone Virtual MachinesMachine recomposition simultaneously updates all the linked-clone virtual machines anchored to a parent virtual machine. Updating Linked Clones with RecompositionIn a recomposition, you can provide operating system patches, install or update applications, or modify the virtual machine hardware settings in all the linked clones in a desktop pool. Correcting an Unsuccessful RecompositionYou can correct a recomposition that failed. You can also take action if you accidentally recompose linked clones using a different base image than the one you intended to use. Parent topic: Managing View Composer Linked-Clone Desktop Virtual Machines | https://docs.vmware.com/en/VMware-Horizon-7/7.1/com.vmware.horizon-view.administration.doc/GUID-74EE5875-51CD-45C1-8206-9CA27FB7856C.html | 2017-11-17T19:45:46 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.vmware.com |
This function allows you to return the URLs for one or more record dashboards or a record list view that can then be used in a link component.
urlforrecord( recordType, [recordIds] )
recordType: (RecordType) The record type constant of the record(s).
recordIds: (Any Type Array) The identifiers of the record(s) to return URLs for.
Text Array
The user executing the function must have at least viewer rights to the record type or the expression will fail and an error will occur. They do not need to have viewer rights to the record(s).
For a process model record type, the recordIds value is the process Id. For a data store entity record type, it is the primary key for the entity.
If the recordIds value is null or empty, the function returns the URL for the record list view of the recordType value record type.
If the record type for the recordType value does not exist, the expression will fail and an error will occur.
If a user does not have at least viewer rights to the record or record type in the returned URL, the user will see an error when trying to open the URL.: Use this data type to indicate the record type for the record(s) you want URLs for.
Link SAIL Component: Add the returned URL to this SAIL component to display a link to the record detail view or record list view.
Records Tutorial: Shows you how to create your first record.
On This Page | https://docs.appian.com/suite/help/17.4/fnc_scripting_urlforrecord.html | 2019-10-14T01:13:17 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.appian.com |
There two ways to list your Items on Amazon:
- if Product is already available in Amazon catalog, you need to assign your Offer to the existing ASIN/ISBN;
- if Product is not in Amazon Catalog yet, you have to create a new ASIN/ISBN.
In this latter case, the informative and concise Product description is required. However, even if you match your Offer to the existing catalog page, you may submit your supplements to the Product details to Amazon for approval.
To send Product information to Amazon via M2E Pro, you have to create the Description Policy. Navigate to Amazon > Configuration > Policies, use Add Policy button.
Note
Due to the technical restrictions of Amazon API, the limited set of Product details can be submitted via M2E Pro.
M2E Pro Team constantly improves the Module integration with Amazon Channel and extends the list of supported Product Types.
Note: Description Policy cannot be assigned to Simple with Custom Options, Bundle and Downloadable with Separated Links Magento Products due to these Product type particularities.
- Title - specify the meaningful title for your description template.
- Marketplace - select the Marketplace on which you are going to submit this Product description. The list of available Catagories depends on Marketplace you select.
- Category - select Amazon Category in which your Offer can be found on Amazon after you list it. The details can be found here.
Note
M2E Pro shows the Category data which is received from Amazon API. Some Category names available in Description Policy may differ from those you see in your Seller Central Account. Select the Category which best suits your Product.
- Product Type - select the most appropriate Product Type. The list of available Specifics depends on the Product Type you select.
- New ASIN/ISBN Creation - enable if you are going to add a new Product to Amazon catalog. Select Magento Attribute where the related UPC/EAN values are stored. If your Product has no UPC/EAN, you can select one of the Product ID Override options.
Once you complete the required fields under the General tab, switch to the Definition tab to provide the main information about your Product. | https://docs.m2epro.com/display/AmazonMagentoV6X/Description+Policy | 2019-10-14T01:25:12 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.m2epro.com |
Quickstart: Animations for Windows Phone
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
In Windows Phone, animations can enhance your apps by adding movement and interactivity. By animating a background color or applying an animated transform, you can create dramatic screen transitions or provide helpful visual cues. This Quickstart shows you how to create basic animations by changing property values and by using key frames.
This topic contains the following sections.
- Animating a double property
- Animating a color property
- Starting, stopping, pausing, and resuming
- Animating by using key-frames
- Animating by using easing functions
- Related Topics
Animating a double property
Windows Phone animations are created by changing property values of objects. For example, you can animate the Width of a Rectangle, the angle of a RotateTransform, or the color value of a Button.
In the following example, the Opacity property is animated.
<StackPanel> <StackPanel.Resources> <!-- Animates the rectangle's opacity. --> <Storyboard x: <DoubleAnimation Storyboard. </Storyboard> </StackPanel.Resources> <Rectangle MouseLeftButtonUp="Rectangle_Tapped" x: <_0<<
The following sections discuss the steps for animating the Opacity property and examine the XAML that is used for each step.
1. Identifying the property to animate
In this example, you're animating the Opacity property of a Rectangle. You don't have to declare the property you want to animate on the object itself. However, you typically name the object that you want to animate. Naming the object makes it easier to specify which object is being targeted by the animation. The following XAML shows how to name the RectangleMyAnimatedRectangle.
<Rectangle x:Name="MyAnimatedRectangle" ...
2. Creating a storyboard and making it a resource
A Storyboard is the container that you put animation objects into. You have to make the Storyboard a resource that is available to the object that you want to animate. The following XAML shows how to make the Storyboard a resource of the root element, which is a StackPanel.
<StackPanel x: <StackPanel.Resources> <!-- Animates the rectangle's opacity. --> <Storyboard x: <!-- Animation objects go here. --> </Storyboard> </StackPanel.Resources> </StackPanel>
3. Adding an animation object to the storyboard
Because the value of the property you are animating (Opacity) uses a double, this example uses the DoubleAnimation object. An animation object specifies what is animated and how that animation behaves. The following XAML shows how the DoubleAnimation is added to the Storyboard.
<Storyboard x: <DoubleAnimationStoryboard. </Storyboard>
This DoubleAnimation object specifies the following animation:
Storyboard.TargetProperty="Opacity" specifies that the Opacity property is animated.
Storyboard.TargetName="MyAnimatedRectangle" specifies which object this property is animating (the Rectangle).
From="1.0" To="0.0" specifies that the Opacity property starts at a value of 1 and animates to 0 (starts opaque and then fades).
Duration="0:0:1" specifies how long the animation lasts (how fast the Rectangle fades). Because the Duration property is specified in the form of "hours:minutes:seconds", the duration used in this example is one second.
AutoReverse="True" specifies that when the animation ends, it goes in reverse. In the case of this example, it fades and then reverses to full opacity.
RepeatBehavior="Forever" specifies that when the animation starts, it continues indefinitely. In this example, the Rectangle fades in and out continuously.
4. Starting the animation
A common way to start an animation is in response to an event. In this example, the MouseLeftButtonUp event is used to begin the animation when the user taps the Rectangle.
<Rectangle MouseLeftButtonUp="Rectangle_Tapped" x:
The Storyboard is started by using the Begin method.
myStoryboard.Begin();
Note
You can use C# or Visual Basic instead of XAML to set up an animation.
Animating a color property
The previous example showed how to animate a property that used a value of Double. What if you want to animate a Color? Windows Phone provides animation objects that are used to animate other types of values. The following basic animation objects animate properties of Double, Color, and Point, respectively:
Note
You can also animate properties that use objects.
The following example shows how to create a ColorAnimation.
<StackPanel MouseLeftButtonUp="Rectangle_Tapped"> <StackPanel.Resources> <Storyboard x: <!-- Animate the background color of the canvas from red to green over 4 seconds. --> <ColorAnimation Storyboard. </Storyboard> </StackPanel.Resources> <StackPanel.Background> <SolidColorBrush x: </StackPanel.Background> <_1<<
Starting, stopping, pausing, and resuming
The previous example showed how to start an animation by using the Begin method. Storyboard also has Stop, Pause, and Resume methods that can be used to control an animation. The following example creates four Button objects that enable the user to control the animation of an Ellipse across the screen.
<Canvas> <Canvas.Resources> <Storyboard x: <!-- Animate the center point of the ellipse. --> <PointAnimation Storyboard. </Storyboard> </Canvas.Resources> <Path Fill="Blue"> <Path.Data> <!-- Describe an ellipse. --> <EllipseGeometry x: </Path.Data> </Path> <StackPanel Orientation="Vertical" Canvas. <StackPanel Orientation="Horizontal"> <!-- Button that begins animation. --> <Button Click="Animation_Begin" Width="165" Height="130" Margin="2" Content="Begin" /> <!-- Button that pauses animation. --> <Button Click="Animation_Pause" Width="165" Height="130" Margin="2" Content="Pause" /> </StackPanel> <StackPanel Orientation="Horizontal"> <!-- Button that resumes animation. --> <Button Click="Animation_Resume" Width="165" Height="130" Margin="2" Content="Resume" /> <!-- Button that stops animation. Stopping the animation returns the ellipse to its original location. --> <Button Click="Animation_Stop" Width="165" Height="130" Margin="2" Content="Stop" /> </StackPanel> </StackPanel> </Canvas>(); }
Private Sub Animation_Begin(sender As Object, e As RoutedEventArgs) myStoryboard.Begin() End Sub Private Sub Animation_Pause(sender As Object, e As RoutedEventArgs) myStoryboard.Pause() End SubPrivate Sub Animation_Resume(sender As Object, e As RoutedEventArgs) myStoryboard.[Resume]() End Sub Private Sub Animation_Stop(sender As Object, e As RoutedEventArgs) myStoryboard.[Stop]() End Sub
Animating by using key-frames
Up to now, the examples in this Quickstart have shown animating between two values. (These are called From/To/By animations.) Key-frame animations let you use more than two target values and control an animation's interpolation method. By specifying multiple values to animate, you can make more complex animations. By specifying the animation's interpolation (specifically, by using the KeySpline property), you can control the acceleration of an animation.
The following example shows how to use a key-frame animation to animate the Height of a Rectangle. <_3<<
The XAML includes the following three key frames. Each key frame specifies a value to animate to at a certain time. The entire animation takes 1.5 seconds.
The first key frame in this example is a LinearDoubleKeyFrame. LinearTypeKeyFrame objects such as LinearDoubleKeyFrame create a smooth, linear transition between values. However, in this example, it is just used to specify that the animation is at value 30 at time 0.
The second key frame in this example is a SplineDoubleKeyFrame, which specifies that the Height of the Rectangle is 300 at time 0.8 seconds after the animation begins. SplineTypeKeyFrame objects such as SplineDoubleKeyFrame create a variable transition between values according to the value of the KeySpline property. In this example, the Rectangle begins by moving slowly and then speeds up toward the end of the time segment.
The third key frame in this example is a SplineDoubleKeyFrame, which specifies that the Height of the Rectangle is 250 at time 1.5 seconds after the animation begins (0.7 seconds after the last SplineDoubleKeyFrame ended). In contrast to the previous SplineDoubleKeyFrame, this key frame makes the animation start off fast and slow down toward the end.
Perhaps the trickiest property used by the SplineDoubleKeyFrame is the KeySpline property. This property specifies the first and second control points of a Bezier curve, which describes the acceleration of the animation.
Animating by using easing following example demonstrates the easing functions that come with the runtime. Select an easing function from the dropdown, set its properties, and then run the animation. The XAML for the animation is shown on the lower-right of the example.
Below is a list of the easing functions demonstrated in the example above along with a quick summary of what it does.
You can apply these easing functions to Key-Frame animations using either EasingDoubleKeyFrame, EasingPointKeyFrame, or EasingColorKeyFrame. The following example shows how to use key frames with easing functions associated with them to create an animation of a Rectangle that contracts upward, slows down, then expands downward (as though falling) and then bounces to a stop.
<StackPanel Background="White"> <StackPanel.Resources> <Storyboard x: <_4<<
In addition to using the easing functions included in the run-time, you can create your own custom easing functions by inheriting from EasingFunctionBase.
See Also
Other Resources
Animations, motion, and output for Windows Phone | https://docs.microsoft.com/en-us/previous-versions/windows/apps/jj206955%28v%3Dvs.105%29 | 2019-10-14T02:34:41 | CC-MAIN-2019-43 | 1570986648481.7 | [array(['images/jj206955.qs_animations_double%28en-us%2cvs.105%29.png',
None], dtype=object)
array(['images/jj206955.qs_animations_color%28en-us%2cvs.105%29.png',
None], dtype=object)
array(['images/jj206955.qs_animations_starting%28en-us%2cvs.105%29.png',
None], dtype=object)
array(['images/jj206955.qs_animations_keyframes%28en-us%2cvs.105%29.png',
None], dtype=object)
array(['images/jj206955.qs_animations_easing%28en-us%2cvs.105%29.png',
None], dtype=object) ] | docs.microsoft.com |
git2-rs
libgit2 bindings for Rust
[dependencies] git2 = "0.3"
Building git2-rs
First, you'll need to install CMake. Afterwards, just run:
$ git clone $ cd git2-rs $ cargo build
License
git2-rs is primarily distributed under the terms of both the MIT license and
the Apache License (Version 2.0), with portions covered by various BSD-like
licenses.
See LICENSE-APACHE, and LICENSE-MIT for details. | https://docs.rs/crate/git2/0.4.3 | 2019-10-14T00:38:17 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.rs |
economies of scale, defined as the reduction in cost per unit resulting from increased production realized through operational efficiencies, constitute one of the key goals for firms to improve their competitiveness. But this productive model seems to be in the same time contradictory to the individual and specific customer relationships that firms try to follow. Indeed, involved in an industrial network, they have to take up the challenge and find a balance between economies and standardization on one hand and adapted products and customized relationships on the other hand. How could they achieve this goal and what are the implications of this opposition on their activity links? The following development, based on Håkansson and Snehota's model in "Developing Relationships in Business Networks" (1995) and on Richardson's analyze in "The organization of industry" (1972), tries first to demonstrate how activity links have consequences on the division of labor and how firms have to deal with a large sense of activity.
[...] This view is particularly developed in the Håkansson and Snehota's model, which presents activity links as continuum. The long and short term consequences resulted from changes on activity links and the role of the pattern in the efficiency of links lead firms to accept that the challenge between differentiation and standardisation is an always renewed goal. The movement toward efficiency implies constant adaptation in the relationships and new types of activity links. Consequently, this view of activity links as ?unstable? organization also raises the question of position in the network for the firms. [...]
[...] Indeed, firms, focused on their final products, are involved in various activity links with their suppliers and customers that they have to design in an efficient division of labor to protect their business and generate economies of scale. Thus, firms have to well analyze their division of labor and to build up adapted links to reach economies of scale and differentiation in the same time. Consequently, activity links imply an exchange of know how between the two partners and an implication of both sides to reach efficiency. This last point raises also the question of cooperation and highlights on the fact that activity links can not be simplified in business transactions. [...]
[...] According to him, firms have to specialize activities for which capabilities offer comparative advantages?. Those activities correspond largely to similar activities that is to say activities which require the same capabilities (resources). Beside this kind of activity links, Richardson also define the complementary activities as activities that need to be coordinated and corresponding to different phases of a process of production and the closely complementary activities that are based on a planning in advance. Firms deal with those three kinds of activities and focus on the first type (similar activity) to implement economies of scale. [...]
[...] Finally, the standardisation-differentiation challenge imposes firms to work continuously on their relationships. Industries improve their suppliers and customers relationships to offer always better services and products. Thus, activity links oriented on the final product, appear as a continuous movement. The relationships have to be monitored and adapted continuously to fit with the wider network, the environment and the customer demands. Step by step toward the best but not optimal activity links, firms mold their relationships to reach economies of scale. [...]
Enter the password to open this PDF file:
-
Docs.school utilise des cookies sur son site. En poursuivant votre navigation sur Docs.school ou en cliquant sur OK, vous en acceptez l'utilisation. Privacy Policy | https://docs.school/business-comptabilite-gestion-management/management-et-organisation/dissertation/reseau-industriel-24514.html | 2019-10-14T01:35:16 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.school |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Step 2: Verifying Your Data in QLDB
Amazon QLDB provides an API to request a proof for a specified document ID and its associated block. You must also provide the tip address of a digest that you previously saved, as described in Step 1: Requesting a Digest in QLDB.
Then, you can use the proof returned by QLDB to verify the document revision against the saved digest, using a client-side API. This gives you control over the algorithm that you use to verify your data.
AWS Management Console
This section describes the steps to verify a document revision against a previously saved digest using the Amazon QLDB console.
Before you start, make sure that you follow the steps in Step 1: Requesting a Digest in QLDB. Verification requires a previously saved digest that covers the document revision you want to verify.
To verify a document revision (console)
Open the Amazon QLDB console at.
First, query your ledger for the
idand
blockAddressof the document revision that you want to verify. These fields are included in the document's metadata, which you can query in the committed view.
The document
idis a system-assigned unique identifier. The
blockAddressspecifies the block location where the revision was committed.
In the navigation pane, choose Query editor.
Choose the ledger name in which you want to verify a document revision.
In the query editor window, enter a
SELECTstatement in the following syntax, and then choose Run.
SELECT metadata.id, blockAddress FROM _ql_committed_
tableWHERE
criteria
For example, the following query returns a document from the
vehicle-registrationsample ledger created in Getting Started with the Amazon QLDB Console.
SELECT r.metadata.id, r.blockAddress FROM _ql_committed_VehicleRegistration AS r WHERE r.data.VIN = 'KM8SRDHF6EU074761'
Copy and save the
idand
blockAddressvalues that your query returns. Be sure to omit the double quotes for the
idfield. In Amazon Ion, string data types are delimited with double quotes. For example, you must copy only the red italicized text in the following snippet.
"
LtMNJYNjSwzBLgf7sLifrG"
Now that you have a document revision selected, you can start the process of verifying it.
In the navigation pane, choose Verification.
On the Verify document form, under Specify the document that you want to verify, enter the following input parameters:
Ledger—The ledger in which you want to verify a document revision.
Block address—The
blockAddressvalue returned by your query in Step 4.
Document ID—The
idvalue returned by your query in Step 4.
Under Specify the digest to use for verification, select the digest that you previously saved by choosing Choose digest. If the file is valid, this auto-populates all the digest fields on your console. Or, you can manually copy and paste the following values directly from your digest file:
Digest—The
digestvalue from your digest file.
Digest tip address—The
digestTipAddressvalue from your digest file.
Review your document and digest input parameters, and then choose Verify.
The console automates two steps for you:
Request a proof from QLDB for your specified document.
Use the proof returned by QLDB to call a client-side API, which verifies your document revision against the provided digest. To examine this verification algorithm, see the following section QLDB Driver to download the example code.
The console displays the results of your request in the Verification results card. For more information, see Verification Results.
QLDB Driver
You can also verify a document revision using the Amazon QLDB API with the QLDB
driver. To learn more, see Running the Java Sample Application in Amazon QLDB, and follow the steps to download and install
the sample application. For an example of requesting a proof for a document revision
and then verifying that revision, you can try running the demo code in class
GetRevision. This class runs the following steps:
Requests a new digest from the sample ledger
vehicle-registration.
Requests a proof for a sample document revision from the
VehicleRegistrationtable in the
vehicle-registrationledger.
Verifies the sample revision using the returned digest and proof.
Before trying a verification, make sure that you follow at least Steps 1–3
in the Getting Started with the
Driver tutorial for Java to create a ledger named
vehicle-registration and
load it with sample data. | https://docs.aws.amazon.com/qldb/latest/developerguide/verification.verify.html | 2019-10-14T01:38:35 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.aws.amazon.com |
Management¶
A cloud deployment is a living system. Machines age and fail, software becomes outdated, vulnerabilities are discovered. When errors or omissions are made in configuration, or when software fixes must be applied, these changes must be made in a secure, but convenient, fashion. These changes are typically solved through configuration management.
It is important to protect the cloud deployment from being configured or manipulated by malicious entities. With many systems in a cloud employing compute and networking virtualization, there are distinct challenges applicable to OpenStack which must be addressed through integrity lifecycle management.
Administrators must perform command and control over the cloud for various operational functions. It is important these command and control facilities are understood and secured.
- Continuous systems management
- Integrity life-cycle
- Management interfaces | https://docs.openstack.org/security-guide/management.html | 2019-10-14T00:47:51 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.openstack.org |
We're constantly improving the Path API. Unfortunately, sometimes these improvements are not always backwards compatible with your existing custom Path checkers. The following table describes the changes we've made to the Path API for 11.2 as well as a description of the changes that you should make to your own custom checkers:
Currently, the use of deprecated functions causes compiler warnings to be generated. At the beginning of 2017, a future release of Klocwork will generate compiler errors instead of compiler warnings. We plan to fully retire the C version of the Path API by mid-2017, so if you're using deprecated functions, we recommend you start working on migrating to supported functions now. For more information, see the Klocwork C/C++ Path API Reference. | https://docs.roguewave.com/en/klocwork/current/importantchangestothepathapiinversion112 | 2019-10-14T01:27:28 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.roguewave.com |
git2-rs
libgit2 bindings for Rust
[dependencies] git2 = "0.3"
To get this library to pick them up the standard
rust-openssl
instructions can be used to transitively inform libssh2-sys about where
the header files are:
export OPENSSL_INCLUDE_DIR=`brew --prefix openssl`/include export OPENSSL_LIB_DIR=`brew --prefix openssl`/lib
License
git2-rs is primarily distributed under the terms of both the MIT license and
the Apache License (Version 2.0), with portions covered by various BSD-like
licenses.
See LICENSE-APACHE, and LICENSE-MIT for details. | https://docs.rs/crate/git2/0.4.4 | 2019-10-14T01:37:28 | CC-MAIN-2019-43 | 1570986648481.7 | [] | docs.rs |
Difference between revisions of "User Guide for Multiple Depth Sensors Configuration"
Revision as of 09:53, 2 November 2014
Contents
- 1 System Requirements
- 2 Software Installation
- 3 Recording Video from Multiple Depth Sensors
- 4 Calibration
- 5 Recording Actor's Performance
- 6 Processing Video from Two Depth Sensors
- 7 Clean-up
- 8 Export and Motion Transfer
- 9 Troubleshooting
- | http://docs.ipisoft.com/index.php?title=User_Guide_for_Multiple_Depth_Sensors_Configuration&diff=prev&oldid=367 | 2019-08-17T14:40:05 | CC-MAIN-2019-35 | 1566027313428.28 | [array(['/images/thumb/c/cc/Note.png/24px-Note.png', 'Note.png'],
dtype=object)
array(['/images/thumb/c/cc/Note.png/24px-Note.png', 'Note.png'],
dtype=object)
array(['/images/8/83/GPUz_example.gif', 'GPUz example.gif'], dtype=object)
array(['/images/thumb/f/ff/Important.png/24px-Important.png',
'Important.png'], dtype=object)
array(['/images/3/33/iPi_Recorder_3_Setup.png',
'iPi Recorder 3 Setup.png'], dtype=object)
array(['/images/thumb/c/cc/Note.png/24px-Note.png', 'Note.png'],
dtype=object)
array(['/images/thumb/f/ff/Important.png/24px-Important.png',
'Important.png'], dtype=object)
array(['/images/a/ad/iPi_Mocap_Studio_3_Setup.png',
'iPi Mocap Studio 3 Setup.png'], dtype=object)
array(['/images/thumb/c/cc/Note.png/24px-Note.png', 'Note.png'],
dtype=object)
array(['/images/thumb/c/cc/Note.png/24px-Note.png', 'Note.png'],
dtype=object)
array(['/images/thumb/f/ff/Important.png/24px-Important.png',
'Important.png'], dtype=object)
array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object)
array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object)
array(['/images/thumb/4/45/Tip.png/24px-Tip.png', 'Tip.png'], dtype=object)] | docs.ipisoft.com |
:
You, <[email protected. | https://docs.huihoo.com/linux/man/20100621/ | 2019-08-17T15:55:43 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.huihoo.com |
Reporting Bugs - Parole Reports:
Here's a list (updated daily) of open bugreports with the date that bug was reported in brackets. To see the full list via bugzilla, go here.
- [Bug 15780] Parole is playing big_buck_bunny_720p_h264.mov as a slide show (2019/08/07 12:05)
- [Bug 15778] Video playback freezes every couple minutes for 4-5 seconds (2019/08/03 12:46)
- [Bug 15751] Appstream metadata does not validate (2019/07/27 14:59)
- [Bug 15550] autoimagesink not working as expected (2019/07/22 19:00)
- [Bug 15496] I cannot slide along a song to a different instant (2019/05/28 23:41)
- [Bug 15443] Change systray/system tray to notification area (2019/05/22 00:07)
- [Bug 15230] Pulse audio cut the bass! (2019/03/26 22:08)
- [Bug 12335] playlist initial order (2019/03/03 10:02)
- [Bug 13749] Previous Track button behaves incorrectly when in shuffle mode (2019/02/04 00:21)
- [Bug 15015] Attempt to add support of org.xfce.ScreenSaver service (2019/01/02 23:04)
- [Bug 14881] se cierra (2018/11/18 04:19)
- [Bug 13548] [Request] Optional clear recent files on exit (2018/10/25 16:58)
- [Bug 14713] During playback the tracks names disappear. (2018/09/23 20:40)
- [Bug 14669] Open File dialog does not remember size (2018/09/04 11:20)
- [Bug 14668] Missing Feature: change playback speed (2018/09/04 11:16)
- [Bug 14569] Feature Request - Allow changing play speed (2018/07/31 19:33)
- [Bug 14006] Unable to play anything with any user but root (2018/07/31 19:28)
- [Bug 14227] Rapidly cycling through tracks bungles up playlist titles (2018/02/17 22:27)
- [Bug 14222] Mistakes files loaded over network (2018/02/16 01:36)
- [Bug 14175] parole won't start (2018/01/23 01:21). | https://docs.xfce.org/apps/parole/bugs | 2019-08-17T15:23:41 | CC-MAIN-2019-35 | 1566027313428.28 | [array(['/_media/apps/parole/bugs.png', None], dtype=object)] | docs.xfce.org |
This chapter provides an overview of the MySQL server and of the programs that are used to start and manage it. For information about how to specify options for these programs, see section 4.3 Specifying Program Options.
The following list briefly describes the MySQL server and server-related programs:
mysqld
The SQL daemon (that is, the MySQL server). Client programs access databases by connecting to this server.
mysqld-max
A version of the server that includes additional features. See the section The mysqld-max Extended MySQL Server below.
mysqld_safe
mysqld_safe attempts to start mysqld-max if it exists, and mysqld otherwise. See section 5.1.3 The mysqld_safe Server Startup Script.
mysql.server
A startup script that uses mysqld_safe to start the MySQL server. See section 5.1.4 The mysql.server Server Startup Script.
mysqld_multi
A script that can start or stop multiple servers installed on the system. See the section The mysqld_multi Program for Managing Multiple MySQL Servers below.
mysql_install_db
This script creates the MySQL grant tables with default privileges. It is usually executed only once, when first installing MySQL on a system.
mysql_fix_privilege_tables
This script is used after a MySQL upgrade to update the grant tables with any changes that have been made in newer versions of MySQL.
There are several other programs that also are run on the server host:
myisamchk
A utility to describe, check, optimize, and repair MyISAM tables. myisamchk is described in section 5.7.2 Table Maintenance and Crash Recovery.
make_binary_distribution
This program makes a binary release of a compiled MySQL, which can be shared for the convenience of other MySQL users.
mysqlbug
The MySQL bug reporting script. It can be used to send a bug report to the MySQL mailing list.
The mysqld-max Extended MySQL Server
A MySQL-Max server is a version of the
mysqld MySQL server that
has been built to include additional features.
The distribution to use depends on your platform:
Windows binary distributions include both the standard server (mysqld.exe) and the MySQL-Max server (
mysqld-max.exe), so you need not get a special distribution. Just use a regular Windows distribution. See section 2.3 Installing MySQL on Windows.
If you use a Linux RPM distribution, use the MySQL-server RPM first to install a standard server named
mysqld. Then use the
MySQL-Max RPM to install a server named
mysqld-max. The
MySQL-Max RPM presupposes that you have already installed the regular server RPM. See section 2.4 Installing MySQL on Linux for more information on the Linux RPM packages.
On other platforms, the MySQL-Max binary distribution includes a server that works like mysqld but that has the additional features included.
You can find the MySQL-Max binaries on the MySQL AB Web site.
MySQL AB builds the MySQL-Max servers by using the following
configure options:
--with-server-suffix=-max
This option adds a -max suffix to the
mysqld version string.
--with-innodb
This option enables support for the InnoDB storage engine. MySQL-Max servers always include
InnoDB support, but this option actually is needed only for MySQL 3.23. From MySQL 4 on,
InnoDB is included by default in binary distributions, so you do not need a MySQL-Max server to obtain
InnoDB support.
--with-bdb
This option enables support for the Berkeley DB (BDB) storage engine.
CFLAGS=-DUSE_SYMDIR
MySQL-Max binary distributions are a convenience for those who wish to install precompiled programs. If you build MySQL using a source distribution, you can build your own Max-like server by enabling the same features at configuration time that the MySQL-Max binary distributions are built with.
MySQL-Max servers include the BerkeleyDB (
BDB) storage engine
whenever possible, but not all platforms support
BDB. The following
table shows:
A value of
NO means that the server was compiled without support
for the feature, so it cannot be activated at runtime.
A value of
DISABLED occurs either because the server was
started with an option that disables the feature, or because not
all options required to enable it were given. In the latter case, the
host_name.err error log file should contain a reason indicating why
the option is disabled.
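A convenient way to see these values on a running server is to examine the have_* system variables; the exact set of variables displayed depends on your server version and build:

mysql> SHOW VARIABLES LIKE 'have_%';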
One situation in which you might see
DISABLED occurs with MySQL 3.23
when the
InnoDB storage engine is compiled in. In MySQL 3.23, you
must supply at least the
innodb_data_file_path option at runtime to
set up the
InnoDB tablespace. Without this option,
InnoDB
disables itself.
See section 15.3
InnoDB in MySQL 3.23.
You can specify configuration options for the
BDB storage engine, too,
but
BDB will not disable itself if you do not provide them.
See section 14.
The mysqld_safe Server Startup Script

mysqld_safe is the recommended way to start a mysqld server on Unix. It adds some safety features, such as restarting the server when an error occurs and logging runtime information to an error log file. By default, mysqld_safe tries to start an executable named mysqld-max if it exists, or mysqld otherwise.
The MySQL-Max RPM relies on this
mysqld_safe behavior. The RPM installs an executable named
mysqld-max, which causes
mysqld_safe to automatically use that executable from that point on.
To override the default behavior and specify explicitly which server you
want to run, specify a
--mysqld or
--mysqld-version option to
mysqld_safe.
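For example, either of the following commands explicitly starts the mysqld-max server; the trailing & simply runs the script in the background, and the commands assume the binaries are installed where mysqld_safe can find them:

shell> mysqld_safe --mysqld=mysqld-max &
shell> mysqld_safe --mysqld-version=max &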
Many of the options to
mysqld_safe are the same as the options to
mysqld. See section 5.2.1
mysqld Command-Line Options. mysqld_safe supports the following options:
--basedir=path
The path to the MySQL installation directory.
--core-file-size=size
The size of the core file that mysqld should be able to create. The option value is passed to
ulimit -c.
--datadir=path
The path to the data directory.
--defaults-extra-file=path
The name of an option file to be read in addition to the usual option files.
--defaults-file=path
The name of an option file to be read instead of the usual option files.
--err-log=path
Old form of the --log-error option, to be used before MySQL 4.0.
--ledir=path
The path of the directory in which the mysqld program is located. Use this option to explicitly indicate the location of the server.
--log-error=path
Write the error log to the given file.
--mysqld=prog_name
The name of the server program (in the ledir directory) that you want to start. This option is needed if you use the MySQL binary distribution but have the data directory outside of the binary distribution.
--mysqld-version=suffix
This option is similar to the --mysqld option, but you specify only the suffix for the server program name. The basename is assumed to be mysqld. For example, if you use --mysqld-version=max, mysqld_safe will start the mysqld-max program in the ledir directory. If the argument to --mysqld-version is empty, mysqld_safe uses mysqld in the ledir directory.
--nice=priority
Use the nice program to set the server's scheduling priority to the given value. This option was added in MySQL 4.0.14.
--no-defaults
Do not read any option files.
--open-files-limit=count
The number of files that mysqld should be able to open. The option value is passed to ulimit -n. Note that you need to start mysqld_safe as root for this to work properly!
--pid-file=path
The path of the process ID file.
--port=port_num
The port number that the server should use when listening for TCP/IP connections.
--socket=path
The Unix socket file that the server should use when listening for local connections.
--timezone=zone
Set the TZ time zone environment variable to the given option value. Consult your operating system documentation for legal time zone specification formats.
--user={user_name | user_id}
Run the mysqld server as the user having the name user_name or the numeric user ID user_id. (``User'' in this context refers to a system login account, not a MySQL user listed in the grant tables.)
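These options can also be listed in an option file rather than on the command line. For instance, a group along the following lines sets the error log location and the login account used to run the server; the group name is [mysqld_safe] as of MySQL 4.0 (older versions read [safe_mysqld]), and the path and user name shown are only examples:

[mysqld_safe]
log-error=/var/log/mysqld.log
user=mysql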
mysqld_safeis invoked. For binary distributions,
mysqld_safelooks under its working directory for `bin' and `data' directories. For source distributions, it looks for `libexec' and `var' directories. This condition should be met if you execute
mysqld_safefrom your MySQL installation directory (for example, `/usr/local/mysql' for a binary distribution).
mysqld_safe does the following:
MyISAMand
ISAMtables.
mysqld, monitors it, and restarts it if it terminates in error.
mysqldto the `host_name.err' file in the data directory.
mysqld_safescreen output to the `host_name.safe' file in the data directory.
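A typical invocation, assuming a binary installation under `/usr/local/mysql' (the paths and options shown are illustrative only), looks like this:

shell> cd /usr/local/mysql
shell> bin/mysqld_safe --user=mysql &

The trailing `&' runs the script in the background so that it can keep monitoring the server.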
mysql.serverServer section 2.9.2.2 Starting and Stopping MySQL Automatically.
mysql.server reads options from the
[mysql.server] and
[mysqld] sections of option files. (For backward compatibility,
it also reads
[mysql_server] sections, although you should rename such
sections to
[mysql.server] when you begin using MySQL 4.0 or later.)
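For instance, on a system where the script has been installed as an init script, you would typically invoke it like this (the installed name and path vary between distributions; `/etc/init.d/mysql' is also common):

shell> /etc/init.d/mysql.server start
shell> /etc/init.d/mysql.server stop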
mysqld_multiProgram).
For example, the command mysqld_multi stop 8,10-13 stops the server instances with option-group numbers 8, 10, 11, 12, and 13.
For an example of how you might set up an option file, use this command:
shell> mysqld_multi --example
mysqld_multi supports the following options:
--config-file=name
mysqld_multilooks for
[mysqld#]option groups. Without this option, all options are read from the usual `my.cnf' file. The option does not affect where
mysqld_multireads its own options, which are always taken from the
[mysqld_multi]group in the usual `my.cnf' file.
--example
--log=name
--mysqladmin=prog_name
mysqladminbinary to be used to stop servers.
--mysqld=prog_name
mysqldbinary to be used. Note that you can specify
mysqld_safeas the value for this option also. The options are passed to
mysqld. Just make sure that you have the directory where
mysqldis located in your
PATHenvironment variable setting or fix
mysqld_safe.
--no-log
--password=password
mysqladmin. Note that the password value is not optional for this option, unlike for other MySQL programs.
--silent
--tcp-ip
stopand
reportoperations.
--user=user_name
mysqladmin.
--verbose
--version
Some notes about
mysqld_multi:
mysqldservers (with the
mysqladminprogram) has the same username and password for each server. Also, make sure that the account has the
SHUTDOWNprivilege. If the servers that you want to manage have many different usernames or passwords for the administrative accounts, you might want to create an account on each server that has the same username and password. For example, you might set up a common
multi_adminaccount by executing the following commands for each server:
shell> mysql -u root -S /tmp/mysql.sock -proot_password
mysql> GRANT SHUTDOWN ON *.*
    ->     TO 'multi_admin'@'localhost' IDENTIFIED BY 'multipass';

See section 5.5.2 How the Privilege System Works. You will have to do this for each
mysqldserver. Change the connection parameters appropriately when connecting to each one. Note that the host part of the account name must allow you to connect as
multi_adminfrom the host where you want to run
mysqld_multi.
--pid-fileoption is very important if you are using
mysqld_safeto start
mysqld(for example,
--mysqld=mysqld_safe) Every
mysqldshould have its own process ID file. The advantage of using
mysqld_safeinstead of
mysqldis that
mysqld_safe``guards'' its
mysqldprocess and will restart it if the process terminates due to a signal sent using
kill -9, or for other reasons, such as a segmentation fault. Please note that the
mysqld_safescript might require that you start it from a certain place. This means that you might have to change location to a certain directory before running
mysqld_multi. If you have problems starting, please see the
mysqld_safescript. Check especially the lines:
----------------------------------------------------------------
MY_PWD=`pwd`
# Check if we are starting this relative (for the binary release)
if test -d $MY_PWD/data/mysql -a -f ./share/mysql/english/errmsg.sys -a \
-x ./bin/mysqld
----------------------------------------------------------------

See section 5.1.3 The
mysqld_safeServer Startup Script. The test performed by these lines should be successful, or you might encounter problems.
mysqld.
--useroption for
mysqld, but in order to do this you need to run the
mysqld_multiscript as the Unix
rootuser. Having the option in the option file doesn't matter; you will just get a warning, if you are not the superuser and the
mysqldprocesses are started under your own Unix account.
mysqldprocess is started as. Do not use the Unix root account for this, unless you know what you are doing.
mysqld_multibe sure that you understand the meanings of the options that are passed to the
mysqldservers and why you would want to have separate
mysqldprocesses. Beware of the dangers of using multiple
mysqldservers with the same data directory. Use separate data directories, unless you know what you are doing. Starting multiple servers with the same data directory will not give you extra performance in a threaded system. See section 5.
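The following sketch of a `my.cnf' shows one possible layout for two additional server instances; all paths, port numbers, user names, and passwords are placeholders that you would replace with your own values:

[mysqld_multi]
mysqld     = /usr/local/mysql/bin/mysqld_safe
mysqladmin = /usr/local/mysql/bin/mysqladmin
user       = multi_admin
password   = multipass

[mysqld2]
socket     = /tmp/mysql.sock2
port       = 3307
pid-file   = /usr/local/mysql/data2/hostname.pid2
datadir    = /usr/local/mysql/data2
user       = mysql

[mysqld3]
socket     = /tmp/mysql.sock3
port       = 3308
pid-file   = /usr/local/mysql/data3/hostname.pid3
datadir    = /usr/local/mysql/data3
user       = mysql

With such a file in place, mysqld_multi start 2,3 would start both instances and mysqld_multi report would show their status.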
This section discusses MySQL server configuration topics:
mysqldCommand-Line Options
When you start the
mysqld server, you can specify program options
using any of the methods described in section 4,
prints the full help message. As of 4.1.1, it prints a brief message; to see
the full list, use
mysqld --verbose --help.
The following list shows some of the most common server options. Additional options are described elsewhere:
mysqldConcerning Security.
MyISAMStartup Options, section 14.4.3
BDBStartup Options, section 15.5
InnoDBStartup Options.
You can also set the value of a server system variable by using the variable name as an option, as described later in this section.
--helpdisplays the full help message. As of 4.1.1, it displays an abbreviated message only. Use both the
--verboseand
--helpoptions to see the full message.
--ansi
--sql-modeoption instead.
--basedir=path, -b path
--big-tables
--bind-address=IP
--console
--log-erroris specified. On Windows,
mysqldwill not close the console screen if this option is used.
--character-sets-dir=path
--chroot=path
mysqldserverand
SELECT ... INTO OUTFILE.
--character-set-server=charset
--core-file
mysqlddies. For some systems, you must also specify the
--core-file-sizeoption to
mysqld_safe. See section 5.1.3 The
mysqld_safeServer Startup Script. Note that on some systems, such as Solaris, you will not get a core file if you are also using the
--useroption.
--collation-server=collation
--datadir=path, -h path
--debug[=debug_options], -# [debug_options]
--with-debug, you can use this option to get a trace file of what
mysqldis doing. The debug_options string often is
'd:t:o,file_name'. See section E.1.2 Creating Trace Files.
--default-character-set=charset
--character-set-serveras of MySQL 4.1.3. See section 5.8.1 The Character Set Used for Data and Sorting.
--default-collation=collation
--collation-serveras of MySQL 4.1.3. See section 5.8.1 The Character Set Used for Data and Sorting.
--default-storage-engine=type
--default-table-type. It is available as of MySQL 4.1.2.
--default-table-type=type
--default-time-zone=type
time_zonesystem variable. If this option is not given, the default time zone will be the same as the system time zone (given by the value of the
system_time_zonesystem variable. This option is available as of MySQL 4.1.3.
--delay-key-write[= OFF | ON | ALL]
DELAYED KEYSoption should be used. Delayed key writing causes key buffers not to be flushed between writes for
MyISAMtables.
OFFdisables delayed key writes.
ONenables delayed key writes for those tables that were created with the
DELAYED KEYSoption.
ALLdelays key writes for all
MyISAMtables. Available as of MySQL 4.0.3. See section 7.5.2 Tuning Server Parameters. See section 14.1.1
MyISAMStartup Options. Note: If you set this variable to
ALL, you should not use
MyISAMtables from within another program (such as from another MySQL server or with
myisamchk) when the table is in use. Doing so will lead to index corruption.
--delay-key-write-for-all-tables
--delay-key-write=ALLfor use prior to MySQL 4.0.3. As of 4.0.3, use
--delay-key-writeinstead.
--des-key-file=file_name
DES_ENCRYPT()and
DES_DECRYPT()from this file.
--enable-named-pipe
mysqld-ntand
mysqld-max-ntservers that support named pipe connections.
--exit-info[=flags], -T [flags]
mysqldserver. Do not use this option unless you know exactly what it does!
--external-locking
lockddoes not fully work (as on Linux), you will easily get
mysqldto deadlock. This option previously was named
--enable-locking. Note: If you use this option to enable updates to
MyISAMtables from many MySQL processes, you have to ensure that these conditions are satisfied:
--delay-key-write=ALLor
DELAY_KEY_WRITE=1on any shared tables.
--external-lockingtogether with
--delay-key-write=OFF --query-cache-size=0. (This is not done by default because in many setups it's useful to have a mixture of the above options.)
--flush
--init-file=file
--innodb-safe-binlog
InnoDBtables and the binary log. See section 5.9.4 The Binary Log.
--language=lang_name, -L lang_name
--log[=file], -l [file]
host_name.logas the filename.
--log-bin=[file]
host_name-binas the log file basename.
--log-bin-index[=file]
host_name-bin.indexas the filename.
--log-error[=file]
host_name.erras the filename.
--log-isam[=file]
ISAM/
MyISAMchanges to this file (used only when debugging
ISAM/
MyISAM).
--log-long-format
--log-slow-queriesand
--log-long-format, queries that are not using indexes also are logged to the slow query log. Note that
--log-long-formatis deprecated as of MySQL version 4.1, when
--log-short-formatwas introduced (the long log format is the default setting since version 4.1). Also note that starting with MySQL 4.1, the
--log-queries-not-using-indexesoption is available for the purpose of logging queries that do not use indexes to the slow query log.
--log-queries-not-using-indexes
--log-slow-queries, then queries that are not using indexes also are logged to the slow query log. This option is available as of MySQL 4.1. See section 5.9.5 The Slow Query Log.
--log-short-format
--log-slow-queries[=file]
long_query_timeseconds to execute to this file. See section 5.9.5 The Slow Query Log. Note that the default for the amount of information logged has changed in MySQL 4.1. See the
--log-long-formatand
--log-short-formatoptions for details.
--log-update[=file]
--log-bin). See section 5.9.4 The Binary Log. Starting from version 5.0.0, using
--log-updatewill just turn on the binary log instead (see section D.1.4 Changes in release 5.0.0 (22 Dec 2003: Alpha)).
--log-warnings, -W
Aborted connection... to the error log. Enabling this option is recommended, for example, if you use replication (you get more information about what is happening, such as messages about network failures and reconnections). See section A.2.10 Communication Errors and Aborted Connections. This option was named
--warningsbefore MySQL 4.0.
--low-priority-updates
INSERT,
REPLACE,
DELETE,
UPDATE) will have lower priority than selects. This can also be done via
{INSERT | REPLACE | DELETE | UPDATE} LOW_PRIORITY ...to lower the priority of only one query, or by
SET LOW_PRIORITY_UPDATES=1to change the priority in one thread. See section 7.3.2 Table Locking Issues.
--memlock
mysqldprocess in memory. This works on systems such as Solaris that support the
mlockall()system call. This might help if you have a problem where the operating system is causing
mysqldto swap on disk. Note that use of this option requires that you run the server as
root, which is normally not a good idea for security reasons.
--myisam-recover [=option[,option...]]]
MyISAMstorage engine recovery mode. The option value is any combination of the values of
DEFAULT,
BACKUP,
FORCE, or
QUICK. If you specify multiple values, separate them by commas. You can also use a value of
""to disable this option. If this option is used,
mysqldwill, when it opens a
MyISAMtable, open check whether the table is marked as crashed or wasn't closed properly. (The last option works only if you are running with
--skip-external-locking.) If this is the case,
mysqldwill run a check on the table. If the table was corrupted,
mysqld will attempt to repair it.
--ndb-connectstring=connect_string
NDBstorage engine, it is possible to point out the management server that distributes the cluster configuration by setting the connect string option. See section 16.3.4.2 The MySQL Cluster
connectstringfor syntax.
--ndbcluster
NDB Clusterstorage engine (from version 4.1.3, the MySQL-Max binaries are built with
NDB Clusterenabled) the default disabling of support for the
NDB Clusterstorage engine can be overruled by using this option. Using the
NDB Clusterstorage engine is necessary for using MySQL Cluster. See section 16 MySQL Cluster.
--new
--newoption can be used to make the server behave as 4.1 in certain respects, easing a 4.0 to 4.1 upgrade:
0xFFare treated as strings by default rather than as numbers. (Works in 4.0.12 and up.)
TIMESTAMPis returned as a string with the format
'YYYY-MM-DD HH:MM:SS'. (Works in 4.0.13 and up.) See section 11 Column Types.
--pid-file=path
mysqld_safe.
--port=port_num, -P port_num
--old-protocol, -o
--one-thread
--open-files-limit=count
mysqld. If this is not set or set to 0, then
mysqldwill use this value to reserve file descriptors to use with
setrlimit(). If this value is 0 then
mysqldwill reserve
max_connections*5or
max_connections + table_cache*2(whichever is larger) number of files. You should try increasing this if
mysqldgives you the error "Too many open files."
--safe-mode
--safe-show-database
SHOW DATABASESstatement displays only the names of 5.5.3 Privileges Provided by MySQL.
--safe-user-create
GRANTstatement, if the user doesn't have the
INSERTprivilege for the
mysql.usertable or any column in the table.
--secure-auth
--shared-memory
--shared-memory-base-name=name
--skip-bdb
BDBstorage engine. This saves memory and might speed up some operations. Do not use this option if you require
BDBtables.
--skip-concurrent-insert
MyISAMtables. (This is to be used only if you think you have found a bug in this feature.)
--skip-delay-key-write
DELAY_KEY_WRITEoption for all tables. As of MySQL 4.0.3, you should use
--delay-key-write=OFFinstead. See section 7.5.2 Tuning Server Parameters.
--skip-external-locking
isamchkor
myisamchk, you must shut down the server. See section 1.2.3 MySQL Stability. In MySQL 3.23, you can use
CHECK TABLEand
REPAIR TABLEto check and repair
MyISAMtables. This option previously was named
--skip-locking.
--skip-grant-tables
mysqladmin flush-privilegesor
mysqladmin reloadcommand, or by issuing a
FLUSH PRIVILEGESstatement.)
--skip-host-cache
--skip-innodb
InnoDBstorage engine. This saves memory and disk space and might speed up some operations. Do not use this option if you require
InnoDBtables.
--skip-isam
ISAMstorage engine. As of MySQL 4.1,
ISAMis disabled by default, so this option applies only if the server was configured with support for
ISAM. This option was added in MySQL 4.1.1.
--skip-name-resolve
Hostcolumn values in the grant tables must be IP numbers or
localhost. See section 7.5.6 How MySQL Uses DNS.
--skip-ndbcluster
NDB Clusterstorage engine. This is the default for binaries that were built with
NDB Clusterstorage engine support, this means that the system will only allocate memory and other resources for this storage engine if it is explicitly enabled.
--skip-networking
mysqldmust be made via named pipes or shared memory (on Windows) or Unix socket files (on Unix). This option is highly recommended for systems where only local clients are allowed. See section 7.5.6 How MySQL Uses DNS.
--skip-new
--skip-symlink
--skip-symbolic-links, for use before MySQL 4.0.13.
--symbolic-links, --skip-symbolic-links
directory.symfile that contains the path to the real directory. See section 7.6.1.3 Using Symbolic Links for Databases on Windows.
MyISAMindex file or data file to another directory with the
INDEX DIRECTORYor
DATA DIRECTORYoptions of the
CREATE TABLEstatement. If you delete or rename the table, the files that its symbolic links point to also are deleted or renamed. See section 13.2.6
CREATE TABLESyntax.
--skip-safemalloc
--with-debug=full, all MySQL programs check for memory overruns during each memory allocation and memory freeing operation. This checking is very slow, so for the server you can avoid it when you don't need it by using the
--skip-safemallocoption.
--skip-show-database
With this option, the SHOW DATABASES statement is permitted only to users who have the SHOW DATABASES privilege, and the statement displays all database names. Without the option, SHOW DATABASES is permitted to all users, but displays each database name only if the user has some privilege for the database.
--skip-stack-trace
mysqldunder a debugger. On some systems, you also must use this option to get a core file. See section E.1 Debugging a MySQL Server.
--skip-thread-priority
--socket=path
MySQL.
--sql-mode=value[,value[,value...]]
--temp-pool
--transaction-isolation=level
READ-UNCOMMITTED,
READ-COMMITTED,
REPEATABLE-READ, or
SERIALIZABLE. See section 13.4.6
SET TRANSACTIONSyntax.
--tmpdir=path, -t path
/tmp directory resides on a partition that is too small to hold temporary tables. Starting from MySQL 4.1, this option accepts several paths that are used in round-robin fashion; paths should be separated by colon characters (':') on Unix and semicolon characters (';') on Windows, NetWare, and OS/2.
--user={user_name | user_id}, -u {user_name | user_id}
mysqldserver as the user having the name user_name or the numeric user ID user_id. (``User'' in this context refers to a system login account, not a MySQL user listed in the grant tables.) This option is mandatory when starting
mysqldas
root. The server will change its user ID during its startup sequence, causing it to run as that particular user rather than as
root. See section 5.4.1 General Security Guidelines. Starting from MySQL 3.23.56 and 4.0.12: To avoid a possible security hole where a user adds a
--user=rootoption to some `my.cnf' file (thus causing the server to run as
root),
mysqlduses only the first
--useroption specified and produces a warning if there are multiple
--useroptions. Options in `/etc/my.cnf' and `datadir/my.cnf' are processed before command-line options, so it is recommended that you put a
--useroption in `/etc/my.cnf' and specify a value other than
root. The option in `/etc/my.cnf' will be found before any other
--useroptions, which ensures that the server runs as a user other than
root, and that a warning results if any other
--useroption is found.
--version, -V
-O var_name=value
Sets a server system variable to a value using this syntax. However, this syntax is deprecated as of MySQL 4.0; use --var_name=value instead.
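For example, the following two command lines request the same key buffer size; the second form is the one to prefer from MySQL 4.0 on (the value is arbitrary, and older servers may know this variable under a slightly different name):

shell> mysqld -O key_buffer_size=32M
shell> mysqld --key_buffer_size=32M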
You can find a full description for all variables in section 5.2.3 Server System Variables. The section on tuning server parameters includes information on how to optimize them. See section 7.5.2 Tuning Server Parameters.
You can change the values of most system variables for a running server with the
SET statement. See section 13
STRICT_TRANS_TABLES
TRADITIONAL
INSERT/
UPDATEwill
DATEand
DATETIMEcolumns. It does not apply
TIMESTAMPcolumns,as well.
ANSI_QUOTES
ANSI_QUOTESenabled, you cannot use double quotes to quote a literal string, because it will be interpreted as an identifier. (New in MySQL 4.0.0)
ERROR_FOR_DIVISION_BY_ZERO
MOD(X,0)) during an
INSERT/
UPDATE. If this mode is not given, MySQL instead returns
NULLfor divisions by zero. If used with
IGNORE, MySQL generates a warning for divisions by zero, but the result of the operation is
NULL. (New in MySQL 5.0.2)
HIGH_NOT_PRECEDENCE
NOToperator precedence is handled so that expressions such as
NOT a BETWEEN b AND care parsed as
NOT (a BETWEEN b AND c). Before MySQL 5.0.2, the expression is parsed as
(NOT a) BETWEEN b AND c. The old higher-precedence behavior can be obtained by enabling the
HIGH_NOT_PRECEDENCESQL mode. (New in MySQL 5.0.2)
mysql> SET sql_mode = '';
mysql> SELECT NOT 1 BETWEEN -5 AND 5;
        -> 0
mysql> SET sql_mode = 'broken_not';
mysql> SELECT NOT 1 BETWEEN -5 AND 5;
        -> 1
IGNORE_SPACE
USER()function, the name of the
usertable in the
mysqldatabase and the
Usercolumn in that table become reserved, so you must quote them:
SELECT "User" FROM mysql."user";(New in MySQL 4.0.0)
NO_AUTO_CREATE_USER
GRANTfrom automatically creating new users if it would otherwise do so, unless a password also is specified. (New in MySQL 5.0.2)
NO_AUTO_VALUE_ON_ZERO
NO_AUTO_VALUE_ON_ZEROaffects handling of
AUTO_INCREMENTcolumns. Normally, you generate the next sequence number for the column by inserting either
NULLor
0into it.
NO_AUTO_VALUE_ON_ZEROsuppresses this behavior for
0so that only
NULLgenerates the next sequence number. (New in MySQL 4.1.1) This mode can be useful if
0has been stored in a table's
AUTO_INCREMENTcolumn. (This is not a recommended practice, by the way.) For example, if you dump the table with
mysqldumpand then reload it, MySQL normally generates new sequence numbers when it encounters the
0values, resulting in a table with different contents than the one that was dumped. Enabling
NO_AUTO_VALUE_ON_ZERObefore reloading the dump file solves this problem. As of MySQL 4.1.1,
mysqldumpautomatically includes a statement in the dump output to enable
NO_AUTO_VALUE_ON_ZERO.
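A sketch of the reload scenario, assuming a dump file `dump.sql' produced by an older mysqldump that lacks the automatic setting (the file name is a placeholder):

mysql> SET SESSION sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
mysql> SOURCE dump.sql;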
NO_DIR_IN_CREATE
INDEX DIRECTORYand
DATA DIRECTORYdirectives. This option is useful on slave replication servers. (New in MySQL 4.0.15)
NO_FIELD_OPTIONS
SHOW CREATE TABLE. This mode is used by
mysqldumpin portability mode. (New in MySQL 4.1.1)
NO_KEY_OPTIONS
SHOW CREATE TABLE. This mode is used by
mysqldumpin portability mode. (New in MySQL 4.1.1)
NO_TABLE_OPTIONS
ENGINE) in the output of
SHOW CREATE TABLE. This mode is used by
mysqldumpin portability mode. (New in MySQL 4.1.1)
NO_UNSIGNED_SUBTRACTION
UNSIGNEDif one of the operands is unsigned. Note that this makes
UNSIGNED BIGINTnot 100% usable in all contexts. See section 12.7 Cast Functions and Operators. (New in MySQL 4.0.2)
NO_ZERO_DATE
'0000-00-00'as a valid date. You can still insert zero dates with the
IGNOREoption. (New in MySQL 5.0.2)
NO_ZERO_IN_DATE
IGNOREoption, we insert a
'0000-00-00'date for any such date. (New in MySQL 5.0.2)
ONLY_FULL_GROUP_BY
GROUP BYpart refer to a not selected column. (New in MySQL 4.0.0)
PIPES_AS_CONCAT
||as a string concatenation operator (same as
CONCAT()) rather than as a synonym for
OR. (New in MySQL 4.0.0)
REAL_AS_FLOAT
REALas a synonym for
FLOATrather than as a synonym for
DOUBLE. (New in MySQL 4.0.0)
STRICT_ALL_TABLES
STRICT_TRANS_TABLES:
STRICT_ALL_TABLES, MySQL returns an error and ignores the rest of the rows. However, in this case, the earlier rows will already have been inserted or updated. This means that you might get a partial update, which might not be what you want. To avoid this, it's best to use single-row statements, because these can be aborted without changing the table. See section 13.2.6 CREATE TABLE Syntax.
See section 13.5.4.20
SHOW WARNINGS Syntax.
REAL_AS_FLOAT,
PIPES_AS_CONCAT,
ANSI_QUOTES,
IGNORE_SPACE,
ONLY_FULL_GROUP_BY. See section 1.5.3 Running MySQL in ANSI Mode.
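Setting a combination mode and then reading back sql_mode shows the individual modes it expands to; for example (output abridged and version-dependent):

mysql> SET GLOBAL sql_mode = 'ANSI';
mysql> SELECT @@global.sql_mode;
        -> 'REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ONLY_FULL_GROUP_BY,ANSI'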
DB2
PIPES_AS_CONCAT,
ANSI_QUOTES,
IGNORE_SPACE,
NO_KEY_OPTIONS,
NO_TABLE_OPTIONS,
NO_FIELD_OPTIONS.
MAXDB
PIPES_AS_CONCAT,
ANSI_QUOTES,
IGNORE_SPACE,
NO_KEY_OPTIONS,
NO_TABLE_OPTIONS,
NO_FIELD_OPTIONS,
NO_AUTO_CREATE_USER.
MSSQL
PIPES_AS_CONCAT,
ANSI_QUOTES,
IGNORE_SPACE,
NO_KEY_OPTIONS,
NO_TABLE_OPTIONS,
NO_FIELD_OPTIONS.
MYSQL323
NO_FIELD_OPTIONS,
HIGH_NOT_PRECEDENCE.
MYSQL40
NO_FIELD_OPTIONS,
HIGH_NOT_PRECEDENCE.
ORACLE
PIPES_AS_CONCAT,
ANSI_QUOTES,
IGNORE_SPACE,
NO_KEY_OPTIONS,
NO_TABLE_OPTIONS,
NO_FIELD_OPTIONS,
NO_AUTO_CREATE_USER.
POSTGRESQL
PIPES_AS_CONCAT,
ANSI_QUOTES,
IGNORE_SPACE,
NO_KEY_OPTIONS,
NO_TABLE_OPTIONS,
NO_FIELD_OPTIONS.
TRADITIONAL
STRICT_TRANS_TABLES,
STRICT_ALL_TABLES,
NO_ZERO_IN_DATE,
NO_ZERO_DATE,
ERROR_FOR_DIVISION_BY_ZERO,
NO_AUTO_CREATE_USER. already section 5.2.3.1 9.4 | | bdb_shared_data | OFF | | bdb_tmpdir | /tmp/ | | bdb_version | Sleepycat Software: ... | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set | latin1 | | character_sets | latin1 big5 czech euc_kr | | concurrent_insert | ON | | connect_timeout | 5 | | convert_character_set | | | datadir | /usr/local/mysql/data/ | |_lock_wait_timeout | 50 | | innodb_log_arch_dir | | | innodb_log_archive | OFF | | innodb_log_buffer_size | 1048576 | | innodb_log_file_size | 5242880 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | ./ | | innodb_mirrored_log_groups | 1 | | | 1 | | at
section 15.5
InnoDB Startup Options.
Values for buffer sizes, lengths, and stack sizes are given in bytes unless otherwise specified.
Information on tuning these variables can be found in section 7.5.2 Tuning Server Parameters.
ansi_mode
ONif
mysqldwas started with
--ansi. See section 1.5.3 Running MySQL in ANSI Mode. This variable was added in MySQL 3.23.6 and removed in 3.23.41. See the description for
sql_mode.
back_log
The number of outstanding connection requests MySQL can have. Setting back_log higher than your operating system limit will be ineffective.
basedir
--basediroption.
bdb_cache_size
BDBtables. If you don't use
BDBtables, you should start
mysqldwith
--skip-bdbto not waste memory for this cache. This variable was added in MySQL 3.23.14.
bdb_home
BDBtables. This should be assigned the same value as the
datadirvariable. This variable was added in MySQL 3.23.14.
bdb_log_buffer_size
BDBtables. If you don't use
BDBtables, you should set this to 0 or start
mysqldwith
--skip-bdbto not waste memory for this cache. This variable was added in MySQL 3.23.31.
bdb_logdir
BDBstorage engine writes its log files. This variable can be set with the
--bdb-logdiroption. This variable was added in MySQL 3.23.14.
bdb_max_lock
BDBtable (10,000 by default). You should increase this if errors such as the following occur when you perform long transactions or when
mysqldhas to examine many rows to calculate a query:
bdb: Lock table is out of available locks Got error 12 from ...This variable was added in MySQL 3.23.29.
bdb_shared_data
ONif you are using
--bdb-shared-data. This variable was added in MySQL 3.23.29.
bdb_tmpdir
--bdb-tmpdiroption. This variable was added in MySQL 3.23.14.
bdb_version
version_bdb.
binlog_cache_size
--log-binoption). If you often use big, multiple-statement transactions, you can increase this to get more performance. The
Binlog_cache_useand
Binlog_cache_disk_usestatus variables can be useful for tuning the size of this variable. This variable was added in MySQL 3.23.29. See section 5.9.4 The Binary Log.
bulk_insert_buffer_size
MyISAMuses.
character_set
character_set_xxxvariables.
character_set_client
character_set_connection
character_set_database
character_set_server. This variable was added in MySQL 4.1.1.
character_set_results
character_set_server
character_set_system
utf8. This variable was added in MySQL 4.1.1.
character_sets
SHOW CHARACTER SETfor a list of character sets.)
character_sets_dir
collation_connection
collation_database
collation_server. This variable was added in MySQL 4.1.1.
collation_server
concurrent_insert
ON(the default), MySQL allows
INSERTand
SELECTstatements to run concurrently for
MyISAMtables that have no free blocks in the middle. You can turn this option off by starting
mysqldwith
--safeor
--skip-new. This variable was added in MySQL 3.23.7.
connect_timeout
mysqldserver waits for a connect packet before responding with
Bad handshake.
convert_character_set
SET CHARACTER SET. This variable was removed in MySQL 4.1.
datadir
--datadiroption.
default_week_format
WEEK()function. This variable is available as of MySQL 4.0.14.
delay_key_write
MyISAMtables. It can have one of the following values to affect handling of the
DELAY_KEY_WRITEtable option that can be used in
CREATE TABLEstatements. If
DELAY_KEY_WRITEtables by starting the server with the
--myisam-recoveroption (for example,
--myisam-recover=BACKUP,FORCE). See section 5.2.1
mysqldCommand-Line Options and section 14.1.1
MyISAMStartup Options. Note that
--external-lockingdoesn't offer any protection against index corruption for tables that use delayed key writes. This variable was added in MySQL 3.23.8.
delayed_insert_limit
delayed_insert_limitdelayed rows, the
INSERT DELAYEDhandler thread checks whether there are any
SELECTstatements pending. If so, it allows them to execute before continuing to insert delayed rows.
delayed_insert_timeout
INSERT DELAYEDhandler thread should wait for
INSERTstatements before terminating.
delayed_queue_size
INSERT DELAYEDstatements. If the queue becomes full, any client that issues an
INSERT DELAYEDstatement will wait until there is room in the queue again.
expire_logs_days
flush
ONif you have started
mysqldwith the
--flushoption. This variable was added in MySQL 3.22.9.
flush_time
flush_timeseconds to free up resources and sync unflushed data to disk. We recommend this option only on Windows 9x or Me, or on systems with minimal resources available. This variable was added in MySQL 3.22.18.
ft_boolean_syntax
IN BOOLEAN MODE. This variable was added in MySQL 4.0.1. See section 12.6.1 Boolean Full-Text Searches. The default variable value is
'+ -><()~*:""&|'. The rules for changing the value are as follows:
ft_max_word_len
FULLTEXTindex. This variable was added in MySQL 4.0.0. Note:
FULLTEXTindexes must be rebuilt after changing this variable. Use
REPAIR TABLE tbl_name QUICK.
ft_min_word_len
FULLTEXTindex. This variable was added in MySQL 4.0.0. Note:
FULLTEXTindexes must be rebuilt after changing this variable. Use
REPAIR TABLE tbl_name QUICK.
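For example, to index three-character words you might put something like this in the `[mysqld]' group of an option file and then rebuild each affected index (the table name below is a placeholder):

[mysqld]
ft_min_word_len=3

mysql> REPAIR TABLE articles QUICK;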
ft_query_expansion_limit
WITH QUERY EXPANSION. This variable was added in MySQL 4.1.1.
ft_stopword_file
'') disables stopword filtering. This variable was added in MySQL 4.0.10. Note:
FULLTEXTindexes must be rebuilt after changing this variable. Use
REPAIR TABLE tbl_name QUICK.
group_concat_max_len
GROUP_CONCAT()function. This variable was added in MySQL 4.1.0.
have_archive
YESif
mysqldsupports
ARCHIVEtables,
NOif not. This variable was added in MySQL 4.1.3.
have_bdb
YESif
mysqldsupports
BDBtables.
DISABLEDif
--skip-bdbis used. This variable was added in MySQL 3.23.30.
have_compress
zlibcompression library is available to the server. If not, the
COMPRESS()and
UNCOMPRESS()functions cannot be used. This variable was added in MySQL 4.1.1.
have_crypt
crypt()system call is available to the server. If not, the
CRYPT()function cannot be used. This variable was added in MySQL 4.0.10.
have_csv
YESif
mysqldsupports
ARCHIVEtables,
NOif not. This variable was added in MySQL 4.1.4.
have_example_engine
YESif
mysqldsupports
EXAMPLEtables,
NOif not. This variable was added in MySQL 4.1.4.
have_geometry
have_innodb
YESif
mysqldsupports
InnoDBtables.
DISABLEDif
--skip-innodbis used. This variable was added in MySQL 3.23.37.
have_isam
YESif
mysqldsupports
ISAMtables.
DISABLEDif
--skip-isamis used. This variable was added in MySQL 3.23.30.
have_ndbcluster
YESif
mysqldsupports
NDB Clustertables.
DISABLEDif
--skip-ndbclusteris used. This variable was added in MySQL 4.1.2.
have_openssl
YESif
mysqldsupports SSL (encryption) of the client/server protocol. This variable was added in MySQL 3.23.43.
have_query_cache
YESif
mysqldsupports the query cache. This variable was added in MySQL 4.0.2.
have_raid
YESif
mysqldsupports the
RAIDoption. This variable was added in MySQL 3.23.30.
have_rtree_keys
RTREE indexes are available. (These are used for spatial indexes in
MyISAMtables.) This variable was added in MySQL 4.1.3.
have_symlink
DATA DIRECTORYand
INDEX DIRECTORYtable options. This variable was added in MySQL 4.0.0.
init_connect
A string containing one or more SQL statements to be executed by the server for each client that connects. Note that the content of init_connect is not executed for users having the
SUPERprivilege; this is in case that content has been wrongly set (contains a wrong query, for example with a syntax error), thus making all connections fail. Not executing it for
SUPERusers enables those to open a connection and fix
init_connect. This variable was added in MySQL 4.1.2.
init_file
--init-fileoption when you start the server. This is a file containing SQL statements that you want the server to execute when it starts. Each statement must be on a single line and should not include comments. This variable was added in MySQL 3.23.2.
init_slave
init_connect, but is a string to be executed by a slave server each time the SQL thread starts. The format of the string is the same as for the
init_connectvariable. This variable was added in MySQL 4.1.2.
innodb_xxx
InnoDBsystem variables are listed at section 15.5
InnoDBStartup Options.
interactive_timeout
CLIENT_INTERACTIVEoption to
mysql_real_connect(). See also
wait_timeout.
join_buffer_size
The size of the buffer used for full joins (joins that do not use indexes). Normally, the best way to get fast joins is to add indexes; increase the value of join_buffer_size to get a faster full join when adding indexes is not possible. One join buffer is allocated for each full join between two tables. For a complex join between several tables for which indexes are not used, multiple join buffers might be necessary.
key_buffer_size
Index blocks for MyISAM and
ISAMtables are buffered and are shared by all threads.
key_buffer_sizeis the size of the buffer used for index blocks. The key buffer is also known as the key cache. The maximum allowable setting for
key_buffer_size 13.4.5
LOCK TABLESand
UNLOCK TABLESSyntax. You can check the performance of the key buffer by issuing a
SHOW STATUSstatement and examining the
Key_read_requests,
Key_reads,
Key_write_requests, and
Key_writesstatus variables. See section 13.5. From MySQL 4.1.1 on, the buffer block size is available from the
key_cache_block_size system variable. Before MySQL 4.1.1, the Key_blocks_unused status variable is unavailable. The
Key_blocks_used variable can be used as follows to determine the fraction of the key buffer in use:
(Key_blocks_used * 1024) / key_buffer_size
However, Key_blocks_used indicates the maximum number of blocks that have ever been in use at once, so this formula does not necessarily represent the current fraction of the buffer that is in use. See section 7.4.6 The
MyISAMKey Cache.
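A rough way to gauge the miss rate mentioned above from a client session (status variable names and availability vary slightly between versions):

mysql> SHOW STATUS LIKE 'Key_read%';

Divide Key_reads by Key_read_requests to estimate the miss rate; a very small ratio (commonly cited as below 0.01) usually indicates that the key buffer is large enough for your workload.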
key_cache_age_threshold
MyISAMKey Cache.
key_cache_block_size
MyISAMKey Cache.
key_cache_division_limit
MyISAMKey Cache.
language
large_file_support
mysqldwas compiled with options for large file support. This variable was added in MySQL 3.23.28.
license
local_infile
LOCALis supported for
LOAD DATA INFILEstatements. This variable was added in MySQL 4.0.3.
locked_in_memory
mysqldwas locked in memory with
--memlock. This variable was added in MySQL 3.23.25.
log
log_bin
log_error
log_slave_updates
log_slow_queries
long_query_timevariable. This variable was added in MySQL 4.0.2. See section 5.9.5 The Slow Query Log.
log_update
log_warnings
long_query_time
Slow_queriesstatus variable is incremented. If you are using the
--log-slow-queriesoption, the query is logged to the slow query log file. This value is measured in real time, not CPU time, so a query that is under the threshold on a lightly loaded system might be above the threshold on a heavily loaded one. See section 5.9.5 The Slow Query Log.
low_priority_updates
1, all
INSERT,
UPDATE,
DELETE, and
LOCK TABLE WRITEstatements wait until there is no pending
SELECTor
LOCK TABLE READon the affected table. This variable previously was named
sql_low_priority_updates. It was added in MySQL 3.22.5.
lower_case_file_system
ONmeans filenames are case insensitive,
OFFmeans they are case sensitive. This variable was added in MySQL 4.0.19.
lower_case_table_names
lower_case_table_namesto 2.
max_allowed_packet
net_buffer_lengthbytes, but can grow up to
max_allowed_packetbytes when needed. This value by default is small, to catch big (possibly wrong) packets. You must increase this value if you are using big
BLOBcolumns or long strings. It should be as big as the biggest
BLOByou want to use. The protocol limit for
max_allowed_packetis 16MB before MySQL 4.0 and 1GB thereafter.
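For example, to allow packets up to 16MB you might use either of the following; the option-file form takes effect at the next server restart, and the SET GLOBAL form is a sketch that assumes a server recent enough (4.0.3 and up) to set the variable dynamically and an account with the SUPER privilege:

[mysqld]
max_allowed_packet=16M

mysql> SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;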
max_binlog_cache_size
Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. This variable was added in MySQL 3.23.29.
max_binlog_size
max_binlog_size. If
max_relay_log_sizeis 0, the value of
max_binlog_sizeapplies to relay logs as well.
max_relay_log_sizewas added in MySQL 4.0.14.
max_connect_errors
FLUSH HOSTSstatement.
max_connections
mysqldrequires. See section 7.4.8 How MySQL Opens and Closes Tables for comments on file descriptor limits. Also see section A.2.6
Too many connections.
max_delayed_threads
INSERT DELAYEDstatements. If you try to insert data into a new table after all
INSERT DELAYEDthreads are in use, the row will be inserted as if the
DELAYEDattribute wasn't specified. If you set this to 0, MySQL never creates a thread to handle
DELAYEDrows; in effect, this disables
DELAYEDentirely. This variable was added in MySQL 3.23.0.
max_error_count
SHOW ERRORSor
SHOW WARNINGS. This variable was added in MySQL 4.1.0.
max_heap_table_size
MEMORY(
HEAP) tables are allowed to grow. The value of the variable is used to calculate
MEMORYtable
MAX_ROWSvalues. Setting this variable has no effect on any existing
MEMORYtable, unless the table is re-created with a statement such as
CREATE TABLEor
TRUNCATE TABLE, or altered with
ALTER TABLE. This variable was added in MySQL 3.23.0.
max_insert_delayed_threads
max_delayed_threads. It was added in MySQL 4.0.19.
max_join_size
SELECTstatements that probably will need to examine more than
max_join_sizerow combinations
SQL_BIG_SELECTSvalue to
0. If you set the
SQL_BIG_SELECTSvalue again, the
max_join_sizevariable is ignored. If a query result already is in the query cache, no result size check is performed, because the result has already been computed and it does not burden the server to send it to the client. This variable previously was named
sql_max_join_size.
max_length_for_sort_data
filesortalgorithm to use. See section 7.2.10 How MySQL Optimizes
ORDER BY. This variable was added in MySQL 4.1.1
max_relay_log_size
max_binlog_sizefor both binary logs and relay logs. You must set
max_relay_log_sizeto between 4096 bytes and 1GB (inclusive), or to 0. The default value is 0. This variable was added in MySQL 4.0.14. See section 6.3 Replication Implementation Details.
max_seeks_for_key
SHOW INDEXSyntax). By setting this to a low value (100?), you can force MySQL to prefer keys instead of table scans. This variable was added in MySQL 4.0.14.
max_sort_length
BLOBor
TEXTvalues. Only the first
max_sort_lengthbytes of each value are used; the rest are ignored.
max_tmp_tables
max_user_connections
max_write_lock_count
myisam_data_pointer_size
CREATE TABLEfor
MyISAMtables when no
MAX_ROWSoption is specified. This variable cannot be less than 2 or larger than 8. The default value is 4. This variable was added in MySQL 4.1.2. See section A.2.11
The table is full.
myisam_max_extra_sort_file_size
MyISAMindex.
myisam_max_sort_file_size
MyISAMindex
--myisam-recoveroption. This variable was added in MySQL 3.23.36.
myisam_repair_threads
MyISAMtable indexes are created in parallel (each index in its own thread) during the
Repair by sortingprocess. The default value is 1. Note: Multi-threaded repair is still alpha quality code. This variable was added in MySQL 4.0.13.
myisam_sort_buffer_size
MyISAMindexes during a
REPAIR TABLEor when creating indexes with
CREATE INDEXor
ALTER TABLE. This variable was added in MySQL 3.23.16.
named_pipe
net_buffer_length
max_allowed_packetbytes.
net_read_timeout
net_read_timeoutis the timeout value controlling when to abort. When the server is writing to the client,
net_write_timeoutis the timeout value controlling when to abort. See also
slave_net_timeout. This variable was added in MySQL 3.23.20.
net_retry_count
net_write_timeout
net_read_timeout. This variable was added in MySQL 3.23.20.
new
old_passwords
open_files_limit
mysqldto open. This is the real value allowed by the system and might be different from the value you gave
mysqldas a startup option. The value is 0 on systems where MySQL can't change the number of open files. This variable was added in MySQL 3.23.20.
optimizer_prune_level
optimizer_search_depth
pid_file
--pid-fileoption. This variable was added in MySQL 3.23.23.
port
--portoption.
preload_buffer_size
protocol_version
query_alloc_block_size
query_cache_limit
query_cache_min_res_unit
query_cache_size
query_cache_typeis set to 0. This variable was added in MySQL 4.0.1.
query_cache_type
GLOBALvalue sets the type for all clients that connect thereafter. Individual clients can set the
SESSIONvalue to affect their own use of the query cache. This variable was added in MySQL 4.0.3.
query_cache_wlock_invalidate
WRITElock on a
MyISAMtable, other clients are not blocked from issuing queries. This variable was added in MySQL 4.0.19.
query_prealloc_size
query_prealloc_sizevalue might be helpful in improving performance, because it can reduce the need for the server to perform memory allocation during query execution operations. This variable was added in MySQL 4.0.16.
range_alloc_block_size
read_buffer_size
record_buffer.
read_only
ONfor a replication slave server, it causes the slave to allow no updates except from slave threads or from users with the
SUPERprivilege. This can be useful to ensure that a slave server accepts no updates from clients. This variable was added in MySQL 4.0.14.
relay_log_purge
read_rnd_buffer_size
ORDER BY performance considerably.
safe_show_database
skip_show_database. This variable was removed in MySQL 4.0.5. Instead, use the
SHOW DATABASESprivilege to control access by MySQL accounts to database names.
secure_auth
--secure-authoption, it blocks connections from all accounts that have passwords stored in the old (pre-4.1) format. In that case, the value of this variable is
ON, otherwise it is
OFF. You should enable this option if you want to prevent all use of passwords employing the old (pre-4.1) format.
server_id
The value of the --server-id option. It is used for master and slave replication servers. This variable was added in MySQL 3.23.26.
shared_memory
shared_memory_base_name
skip_external_locking
OFFif
mysqlduses external locking. This variable was added in MySQL 4.0.3. Previously, it was named
skip_locking.
skip_networking
ON if the server permits only local (non-TCP/IP) connections.
skip_show_database
SHOW DATABASESstatement if they don't have the
SHOW DATABASESprivilege. This can improve security if you're concerned about people being able to see what databases other users have. See also
safe_show_database. This variable was added in MySQL 3.23.4. As of MySQL 4.0.2, its effect also depends on the
SHOW DATABASESprivilege: If the variable value is
ON, the
SHOW DATABASESstatement is allowed only to users who have the
SHOW DATABASESprivilege, and the statement displays all database names. If the value is
OFF,
SHOW DATABASESis allowed to all users, but displays each database name only if the user has the
SHOW DATABASESprivilege or some privilege for the database.
slave_compressed_protocol
slave_net_timeout
slow_launch_time
Slow_launch_threadsstatus variable. This variable was added in MySQL 3.23.15.
socket
sort_buffer_size
ORDER BYor
GROUP BYoperations. See section A.4.4 Where MySQL Stores Temporary Files.
sql_mode
sql_slave_skip_counter
storage_engine
table_type. It was added in MySQL 4.1.2.
sync_binlog
If positive, the MySQL server synchronizes its binary log to disk (using fdatasync()) after every sync_binlog writes to the binary log. A value of 1 is the safest choice, because in the event of a crash you lose at most one statement/transaction from the binary log; but it is also the slowest choice (unless the disk has a battery-backed cache, which makes sync'ing very fast). This variable was added in MySQL 4.1.3.
sync_frm
fdatasync()); this is slower but safer in case of crash. Default is 1.
system_time_zone
system_time_zone. Typically the time zone is specified by the
TZenvironment variable. It also can be specified using the
--timezoneoption of the
mysqld_safescript. This variable was added in MySQL 4.1.3.
table_cache
mysqldrequires. You can check whether you need to increase the table cache by checking the
Opened_tablesstatus variable. See section 5.2.4 Server Status Variables. If the value of
Opened_tablesis large and you don't do
FLUSH TABLESa lot (which just forces all tables to be closed and reopened), then you should increase the value of the
table_cachevariable. For more information about the table cache, see section 7.4.8 How MySQL Opens and Closes Tables.
table_type
--default-table-typeoption. This variable was added in MySQL 3.23.0. See section 5.2.1
mysqldCommand-Line Options.
thread_cache_size
How many threads the server should cache for reuse. When a client disconnects, the client's threads are put in the cache if there are fewer than thread_cache_size threads there; new connections reuse cached threads when possible, and a new thread is created only when the cache is empty. (Normally this doesn't give a notable performance improvement if you have a good thread implementation.) By examining the difference between the
Connectionsand
Threads_createdstatus variables (see section 5.2.4 Server Status Variables for details) you can see how efficient the thread cache is. This variable was added in MySQL 3.23.16.
thread_concurrency
mysqldcalls
thr_setconcurrency()with this value. This function allows applications to give the threads system a hint about the desired number of threads that should be run at the same time. This variable was added in MySQL 3.23.7.
thread_stack
crash-metest are dependent on this value. The default is large enough for normal operation. See section 7.1.4 The MySQL Benchmark Suite.
time_zone
'SYSTEM'(use the value of
system_time_zone), but can be specified explicitly at server startup time with the
--default-time-zoneoption. This variable was added in MySQL 4.1.3.
timezone
TZenvironment variable when
mysqldis started. The time zone also can be set by giving a
--timezoneargument to
mysqld_safe. This variable was added in MySQL 3.23.15. As of MySQL 4.1.3, it is obsolete and has been replaced by the
system_time_zonevariable. See section A.4.6 Time Zone Problems.
tmp_table_size
MyISAMtable. Increase the value of
tmp_table_sizeif you do many advanced
GROUP BYqueries and you have lots of memory.
tmpdir
The directory used for temporary files and temporary tables. This variable was added in MySQL 3.22.4.
transaction_alloc_block_size
transaction_prealloc_size
transaction_alloc_blocksthat is not freed between queries. By making this big enough to fit all queries in a common transaction, you can avoid a lot of
malloc()calls. This variable was added in MySQL 4.0.16.
tx_isolation
updatable_views_with_limit.
version
version_bdb
BDBstorage engine version. This variable was added in MySQL 3.23.31 with the name
bdb_versionand renamed to
version_bdbin MySQL 4.1.1.
version_comment
configurescript has a
--with-commentoption that allows a comment to be specified when building MySQL. This variable contains the value of that comment. This variable was added in MySQL 4.0.17.
version_compile_machine
version_compile_os
wait_timeout
wait_timeoutvalue is initialized from the global
wait_timeoutvalue or from the global
interactive_timeoutvalue, depending on the type of client (as defined by the
CLIENT_INTERACTIVEconnect option to
mysql_real_connect()). See also
interactive_timeout.
Beginning with MySQL 4.0.3, many server system variables are dynamic and can
be set at runtime using
SET GLOBAL or
SET SESSION. You can also
select their values using
See section 9.4 System Variables.
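As a brief illustration (the variable and value here are arbitrary):

mysql> SET GLOBAL sort_buffer_size = 1000000;
mysql> SET SESSION sort_buffer_size = 1000000;
mysql> SELECT @@global.sort_buffer_size, @@session.sort_buffer_size;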
The following table shows the full list of all dynamic system variables.
The last column indicates for each variable whether
GLOBAL or
SESSION (or both) apply. | | Created_tmp_disk_tables | 0 | | Created_tmp_files | 60 | | Created_tmp_tables | 8340 | | Delayed_errors | 0 | | Delayed_insert_threads | 0 | | Delayed_writes | 0 | | Flush_commands | 1 | | Handler_delete | 462604 | | Handler_read_first | 105881 | | Handler_read_key | 27820558 | | Handler_read_next | 390681754 | | Handler_read_prev | 6022500 | | Handler_read_rnd | 30546748 | | Handler_read_rnd_next | 246216530 | | Handler_update | 16945404 | | Handler_write | 60356676 | | Key_blocks_used | 14955 | | Key_read_requests | 96854827 | | Key_reads | 162040 | | Key_write_requests | 7589728 | | Key_writes | 3813196 | | Max_used_connections | 0 | | Not_flushed
Aborted_connects
Binlog_cache_disk_use
binlog_cache_sizeand used a temporary file to store statements from the transaction. This variable was added in MySQL 4.1.2.
Binlog_cache_use
Bytes_received
Bytes_sent
Com_xxx
Com_deleteand
Com_insertcount
DELETEand
INSERTstatements.
Connections
Created_tmp_disk_tables
Created_tmp_files
mysqldhas created. This variable was added in MySQL 3.23.28.
Created_tmp_tables
Created_tmp_disk_tablesis big, you may want to increase the
tmp_table_sizevalue to cause temporary tables to be memory-based instead of disk-based.
Delayed_errors
INSERT DELAYEDfor which some error occurred (probably
duplicate key).
Delayed_insert_threads
INSERT DELAYEDhandler threads in use.
Delayed_writes
INSERT DELAYEDrows written.
Flush_commands
FLUSHstatements.
Handler_commit
COMMITstatements. This variable was added in MySQL 4.0.2.
Handler_discover
NDB Clusterstorage engine if it knows about a table with a given name. This is called discovery.
Handler_discoverindicates the number of time tables have been discovered. This variable was added in MySQL 4.1.2.
Handler_delete
Handler_read_first
SELECT col1 FROM foo, assuming that
col1is indexed.
Handler_read_key
Handler_read_next
Handler_read_prev
ORDER BY ... DESC. This variable was added in MySQL 3.23.6.
Handler_read_rnd
Handler_read_rnd_next
Handler_rollback
ROLLBACKstatements. This variable was added in MySQL 4.0.2.
Handler_update
Handler_write
Innodb_buffer_pool_pages_data
Innodb_buffer_pool_pages_dirty
Innodb_buffer_pool_pages_flushed
Innodb_buffer_pool_pages_free
Innodb_buffer_pool_pages_latched
InnoDBbuffer pool. These are pages currently being read or written or that can't be flushed or removed now for some other reason. Added in MySQL 5.0.2.
Innodb_buffer_pool_pages_misc
Innodb_buffer_pool_pages_total-
Innodb_buffer_pool_pages_free-
Innodb_buffer_pool_pages_data. Added in MySQL 5.0.2.
Innodb_buffer_pool_pages_total
Innodb_buffer_pool_read_ahead_rnd
InnoDBinitiated. This happens when a query is to scan a large portion of a table but in random order. Added in MySQL 5.0.2.
Innodb_buffer_pool_read_ahead_seq
InnoDBinitiated. This happens when
InnoDBdoes a sequential full table scan. Added in MySQL 5.0.2.
Innodb_buffer_pool_read_requests
InnoDBhas done. Added in MySQL 5.0.2.
Innodb_buffer_pool_reads
InnoDBcould not satisfy from buffer pool and had to do a single-page read. Added in MySQL 5.0.2.
Innodb_buffer_pool_wait_free
InnoDBbuffer pool happen in the background. However, if it's necessary to read or create a page and no clean pages are available, it's necessary to wait for pages to be flushed first. This counter counts instances of these waits. If the buffer pool size was set properly, this value should be small. Added in MySQL 5.0.2.
Innodb_buffer_pool_write_requests
InnoDBbuffer pool. Added in MySQL 5.0.2.
Innodb_data_fsyncs
fsync()operations so far. Added in MySQL 5.0.2.
Innodb_data_pending_fsyncs
fsync()operations. Added in MySQL 5.0.2.
Innodb_data_pending_reads
Innodb_data_pending_writes
Innodb_data_read
Innodb_data_reads
Innodb_data_writes
Innodb_data_written
Innodb_dblwr_writes
Innodb_dblwr_pages_written
Innodb_log_waits
Innodb_log_write_requests
Innodb_log_writes
Innodb_os_log_fsyncs
Innodb_os_log_pending_fsyncs
Innodb_os_log_pending_writes
Innodb_os_log_written
Innodb_page_size
InnoDBpage size (default 16KB). Many values are counted in pages; the page size allows them to be easily converted to bytes. Added in MySQL 5.0.2.
Innodb_pages_created
Innodb_pages_read
Innodb_pages_written
Innodb_row_lock_current_waits
Innodb_row_lock_time
Innodb_row_lock_time_avg
Innodb_row_lock_time_max
Innodb_row_lock_waits
Innodb_rows_deleted
InnoDBtables. Added in MySQL 5.0.2.
Innodb_rows_inserted
InnoDBtables. Added in MySQL 5.0.2.
Innodb_rows_read
InnoDBtables. Added in MySQL 5.0.2.
Innodb_rows_updated
InnoDBtables. Added in MySQL 5.0.2.
Key_blocks_not_flushed
Not_flushed_key_blocks.
Key_blocks_unused
key_buffer_size in section 5.2.3 Server System Variables. This variable was added in MySQL 4.1.2.
Key_blocks_used
Key_read_requests
Key_reads
Key_readsis big, then your
key_buffer_sizevalue is probably too small. The cache miss rate can be calculated as
Key_reads/
Key_read_requests.
Key_write_requests
Key_writes
Last_query_cost
Max_used_connections
Not_flushed_delayed_rows
INSERT DELAYqueues.
Not_flushed_key_blocks
Key_blocks_not_flushedbefore MySQL 4.1.1.
Open_files
Open_streams
Open_tables
Opened_tables
Opened_tablesis big, your
table_cachevalue is probably too small.
Qcache_free_blocks
Qcache_free_memory
Qcache_hits
Qcache_inserts
Qcache_lowmem_prunes
Qcache_not_cached
query_cache_type).
Qcache_queries_in_cache
Qcache_total_blocks
Questions
Rpl_status
Select_full_join
Select_full_range_join
Select_range
Select_range_check
Select_scan
Slave_open_temp_tables
Slave_running
ONif this server is a slave that is connected to a master. This variable was added in MySQL 3.23.16.
Slow_launch_threads
slow_launch_timeseconds to create. This variable was added in MySQL 3.23.15.
Slow_queries
long_query_timeseconds. See section 5.9.5 The Slow Query Log.
Sort_merge_passes
sort_buffer_sizesystem variable. This variable was added in MySQL 3.23.28.
Sort_range
Sort_rows
Sort_scan
Ssl_xxx
Table_locks_immediate
Table_locks_waited
Threads_cached
Threads_connected
Threads_created
Threads_createdis big, you may want to increase the
thread_cache_sizevalue. The cache hit rate can be calculated as
Threads_created/
Connections. This variable was added in MySQL 3.23.31.
Threads_running
Uptime
The server shutdown process can be summarized like this:
A more detailed description of the process follows:
SHUTDOWNprivilege can execute a
mysqladmin shutdowncommand.
mysqladmin can be used on any platform supported by MySQL. Other operating system-specific shutdown initiation methods are possible as well: The server shuts down on Unix when it receives a
SIGTERMsignal. A server running as a service on Windows shuts down when the services manager tells it to.
SIGTERMsignal, the signal thread might handle shutdown itself, or it might create a separate thread to do so. If the server tries to create a shutdown thread and cannot (for example, if memory is exhausted), it issues a diagnostic message that will appear in the error log:
Error: Can't create thread to kill server
KILLSyntax, in particular for the instructions about killed
REPAIR TABLEor
OPTIMIZE TABLEoperations on
MyISAM tables. For threads that have an open transaction, the transaction is rolled back. Note that if a thread is updating a non-transactional table, an operation such as a multiple-row
UPDATEor
INSERT may be only partially complete when the server stops. If the server is a replication slave, its slave SQL thread is allowed to finish processing its current event (if any) and then stops. If the SQL thread was in the middle of a transaction at this point, the transaction is rolled back. See section 5.5 The MySQL Access Privilege System. MySQL also includes some support for SSL-encrypted connections between MySQL clients and servers. Many of the concepts discussed here are not specific to MySQL at all; the same general ideas apply to almost all applications.
When running MySQL, follow these guidelines whenever possible:
rootaccounts) access to the
usertable in the
mysqldatabase! This is critical. The encrypted password is the real password in MySQL. Anyone who knows the password that is listed in the
usertable and has access to the host listed for the account can easily log in as that user.
GRANTand
REVOKEstatements are used for controlling access to MySQL. Do not grant any more privileges than necessary. Never grant privileges to all hosts. Checklist:
mysql -u root. If you are able to connect successfully to the server without being asked for a password, you have problems. Anyone can connect to your MySQL server as the MySQL
rootuser with full privileges! Review the MySQL installation instructions, paying particular attention to the information about setting a
rootpassword. See section 2.9.3 Securing the Initial MySQL Accounts.
SHOW GRANTSstatement and check to see who has access to what. Then use the
REVOKEstatement to remove those privileges that are not necessary.
MD5(),
SHA1(), or some other one-way hashing function.
nmap. MySQL uses port 3306 by default. This port should not be accessible from untrusted hosts. Another simple way to check whether or not your MySQL port is open is to try the following command from some remote machine, where
server_hostis the host on which your MySQL server runs:
shell> telnet server_host 3306If you get a connection and some garbage characters, the port is open, and should be closed on your firewall or router, unless you really have a good reason to keep it open. If
telnetjust hangs or the connection is refused, everything is OK; the port is blocked.
;when a user enters the value
234, the user can enter the value
234 OR 1=1to cause the application to generate the query
SELECT * FROM table WHERE ID=234 OR 1=1. As a result, the server retrieves every row in the table. This exposes every row and causes excessive server load. To protect against this type of attack, keep the following checklist in mind:
%22(`"'),
%23(`#'), and
%27(`'') in the URL.
mysql_real_escape_string()API call.
escapeand
quotemodifiers for query streams.
mysql_escape_string()function, which is based on the function of the same name in the MySQL C API. Prior to PHP 4.0.3, use
addslashes()instead.
quote()method or use placeholders.
PreparedStatementobject and placeholders.
tcpdump and
strings utilities. Even if the data does not look like plaintext, this doesn't always mean that the information actually is encrypted. If you need high security, you should consult with a security expert. (See section 5.5.9 Password Hashing in MySQL 4.1 for a discussion of the different password handling methods.) If the connection between the client and the server goes through an untrusted network, you should use an SSH tunnel to encrypt the communication.
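One common way to set up such a tunnel from a client machine, assuming you have an SSH login on the database host (the host name and the local port 3307 are placeholders):

shell> ssh -f -N -L 3307:127.0.0.1:3306 user@db_host
shell> mysql --host=127.0.0.1 --port=3307 -u some_user -p

The client then connects to the local forwarded port, and SSH carries the traffic to the server encrypted.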
All other information is transferred as text that can be read by anyone who is able to watch the connection. If you are concerned about this, you can use the compressed protocol (in MySQL 3.22 and above) to make traffic much more difficult to decipher, or, as of MySQL 4.0, use internal OpenSSL support. See section 5.6.7 Using Secure Connections.
To make a MySQL system secure, you should strongly consider the following suggestions:
mysqlprogram to connect as any other person simply by invoking it as
mysql -u other_user db_nameif other_user has no password. If all users have a password, connecting using another user's account becomes much more difficult. To change the password for a user, use the
SET PASSWORDstatement. It is also possible to update the
usertable in the
mysqldatabase directly. For example, to change the password of all MySQL accounts that have a username of
root, do this:
shell> mysql -u root mysql> UPDATE mysql.user SET Password=PASSWORD('newpwd') -> WHERE User='root'; mysql> FLUSH PRIVILEGES;
Never run the MySQL server as the Unix root user. This is very dangerous, because any user with the FILE privilege would be able to create files as root (for example, ~root/.bashrc). To prevent this, mysqld refuses to run as root unless that is specified explicitly using a --user=root option.
mysqldcan be run as an ordinary unprivileged user instead. You can also create a separate Unix account named
mysqlto make everything even more secure. Use the account only for administering MySQL. To start
mysqldas another Unix user, add a
useroption that specifies the username to the
[mysqld]group of the `/etc/my.cnf' option file or the `my.cnf' option file in the server's data directory. For example:
[mysqld] user=mysqlThis causes the server to start as the designated user whether you start it manually or by using
mysqld_safeor
mysql.server. For more details, see section A.3.2 How to Run MySQL as a Normal User. Running
mysqldas a Unix user other than
rootdoes not mean that you need to change the
rootusername in the
usertable. Usernames for MySQL accounts have nothing to do with usernames for Unix accounts.
Do not allow the use of symlinks to tables unless you need them (this capability can be disabled with the --skip-symbolic-links option). This is especially important if you run mysqld as root, because anyone that has write access to the server's data directory then could delete any file in the system! See section 7.6.1.2 Using Symbolic Links for Tables on Unix.
Make sure that the only Unix user with read or write access to the database directories is the user that mysqld runs as.
Do not grant the PROCESS or SUPER privilege to non-administrative users. The output of mysqladmin processlist shows the text of currently executing queries, so a user who is allowed to run that command might see a query issued by someone else, such as one that sets a password. mysqld reserves an extra connection for users who have the SUPER privilege (PROCESS before MySQL 4.0.2), so that a MySQL root user can log in and check server activity even if all normal connections are in use. The SUPER privilege can be used to terminate client connections, change server operation by changing the value of system variables, and control replication servers.
Do not grant the FILE privilege to non-administrative users. Any user that has this privilege can write a file anywhere in the filesystem with the privileges of the mysqld daemon! To make this a bit safer, files generated with SELECT ... INTO OUTFILE will not overwrite existing files and are writable by everyone. The FILE privilege may also be abused to read any world-readable file (or any file the server process can access) into a table, which then can be displayed with SELECT.
If you want to restrict the number of connections permitted to a single account, you can do so by setting the max_user_connections variable in mysqld. The GRANT statement also supports resource control options for limiting the extent of server use allowed to an account.
Startup Options for mysqld Concerning Security
The following
mysqld options affect security:
--local-infile[={0|1}]
--local-infile=0, clients cannot use
LOCALin
LOAD DATAstatements. See section 5.4.4 Security Issues with
LOAD DATA LOCAL.
--safe-show-database
With this option, the SHOW DATABASES statement displays the names of only those databases for which the user has some kind of privilege. As of MySQL 4.0.2 this option is deprecated and does nothing, because there is now a SHOW DATABASES privilege that can be used to control access to database names on a per-account basis. See section 13.5.1.2 GRANT and REVOKE Syntax.
--safe-user-create
With this option, a user cannot create new users with the GRANT statement unless the user has the INSERT privilege for the mysql.user table. If you want a user to be able to create new accounts, grant that user the INSERT privilege for the mysql.user table; the user can then use the GRANT statement to give privileges to other users.
--secure-auth
Disallow authentication for accounts that have old (pre-4.1) password hashes.
--skip-grant-tables
This option causes the server not to use the privilege system at all, which gives everyone full access to all databases! (You can tell a running server to start using the grant tables again by executing a mysqladmin flush-privileges or mysqladmin reload command, or by issuing a FLUSH PRIVILEGES statement.)
--skip-name-resolve
Hostnames are not resolved. All Host column values in the grant tables must be IP numbers or localhost.
--skip-networking
Don't allow TCP/IP connections over the network; all connections to mysqld must be made via Unix socket files. This option is unsuitable when using a MySQL version prior to 3.23.27 with the MIT-pthreads package, because Unix socket files were not supported by MIT-pthreads at that time.
Security Issues with LOAD DATA LOCAL
The
LOAD DATA statement can load a file that is located on the
server host, or it can load a file that is located on the client host when
the
LOCAL keyword is specified.
There are two potential security issues with supporting the
LOCAL
version of
LOAD DATA statements:
LOAD DATAstatement. Such a server could access any file on the client host to which the client user has read access.
LOAD DATA LOCALto read any files that the Web server process has read access to (assuming that a user could run any command against the SQL server). In this environment, the client with respect to the MySQL server actually is the Web server, not the program being run by the user connecting to the Web server.
To deal with these problems, we changed how
LOAD DATA
LOCAL is handled as of MySQL 3.23.49 and MySQL 4.0.2 (4.0.13 on Windows):
--enable-local-infileoption, to be compatible with MySQL 3.23.48 and before.
--enable-local-infileoption to
configure,
LOAD DATA LOCALcannot be used by any client unless it is written explicitly to invoke
mysql_options(... MYSQL_OPT_LOCAL_INFILE, 0). See section 21.2.3.41
mysql_options().
LOAD DATA LOCALcommands from the server side by starting
mysqldwith the
--local-infile=0option.
mysqlcommand-line client,
LOAD DATA LOCALcan be enabled by specifying the
--local-infile[=1]option, or disabled with the
--local-infile=0option. Similarly, for
mysqlimport, the
--localor
-L option enables local data file loading. In any case, successful use of a local loading operation requires that the server is enabled to allow it. (When these options are placed in option files, the loose- prefix can be used as of MySQL 4.0.2 so that programs that do not recognize the option will not fail.)
LOAD DATA LOCAL INFILEis disabled, either in the server or the client, a client that attempts to issue such a statement receives the following error message:
ERROR 1148: The used command is not allowed with this MySQL version
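Conversely, when local loading is enabled on both the client and the server, a statement such as the following works; the file and table names here are placeholders only:
shell> mysql --local-infile=1 db_name
mysql> LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE mytable;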
MySQL has an advanced but non-standard security/privilege system. This section describes how it works.
The primary function of the MySQL privilege system is to
authenticate a user connecting from a given host, and to associate that user
with privileges on a database such as
INSERT,
UPDATE, and
DELETE.
Additional functionality includes the ability to have an anonymous user and
to grant privileges for MySQL-specific functions such as
LOAD
DATA INFILE and administrative operations. MySQL identifies a connecting client by both the host from which the client connects and the username the client supplies, because there is little reason to assume that a given username belongs to the same person everywhere on the Internet. For example, the user
joe who connects from
office.com need not be the same
person as the user
joe who connects from
elsewhere.com.
MySQL handles this by allowing you to distinguish users on different
hosts that happen to have the same name: You can grant
joe one set
of privileges for connections from
office.com, and a different set
of privileges for connections from
elsewhere.com.
MySQL access control involves two stages:
SELECTprivilege for the table or the
DROPprivilege for the database.
If your privileges are changed (either by yourself or someone else) while you are connected, those changes will not necessarily take effect immediately for the next statement you issue. See section 5.5.7 When Privilege Changes Take Effect for details.
The server stores privilege information in the grant tables of the
mysql database (that is, in the database named
mysql).
The MySQL server reads the contents of these tables into memory when it
starts and re-reads them under the circumstances indicated in section 5.5.7 When Privilege Changes Take Effect. Access-control decisions are based on the in-memory copies of the
grant tables.
Normally, you manipulate the contents of the grant tables indirectly by using
the
GRANT and
REVOKE statements to set up accounts and control
the privileges available to each one.
See section 13.5.1.2
GRANT and
REVOKE Syntax..
The
Create_view_priv and
Show_view_priv columns were added in
MySQL 5.0.1. Each grant table contains scope columns, which determine the scope of each entry (that is, the context in which it applies), and privilege columns, which indicate what the entry allows. For example, a
usertable entry with
Hostand
Uservalues of
'thomas.loc.gov'and
'bob'would be used for authenticating connections made to the server from the host
thomas.loc.govby a client that specifies a username of
bob. Similarly, a
dbtable entry with
Host,
User, and
Dbcolumn values of
'thomas.loc.gov',
'bob'and
'reports'would be used when
bobconnects from the host
thomas.loc.govto access the
reportsdatabase. The
tables_privand
columns_privtables contain scope columns indicating tables or table/column combinations to which each entry applies.
Scope columns contain strings. They are declared as shown here; the default value for each is the empty string:
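As a rough guide (exact column widths differ somewhat between versions, so treat this listing as an approximation rather than a copy of any particular release), the scope columns are declared along these lines:
Column name     Type
Host            CHAR(60)
User            CHAR(16)
Db              CHAR(64)
Table_name      CHAR(64)
Column_name     CHAR(64)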
Before MySQL 3.23, the
Db column is
CHAR(32) in some tables
and
CHAR(60) in others.
For access-checking purposes, comparisons of
Host values are
case-insensitive.
User,
Db, and
Table_name values are case sensitive.
Column_name values are case insensitive in MySQL
3.22.12 or later.
In the
user,
db, and
host tables, each privilege
is listed in a separate column that is declared as
ENUM('N','Y') DEFAULT 'N'. In other words, each privilege can be disabled
or enabled, with the default being disabled.
In the
tables_priv and
columns_priv tables, the privilege
columns are declared as
SET columns. Values in these columns can
contain any combination of the privileges controlled by the table:
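One way to see exactly which privilege names the SET columns on your own server accept is to examine the column definitions directly, for example:
mysql> SHOW COLUMNS FROM mysql.tables_priv LIKE '%_priv';
mysql> SHOW COLUMNS FROM mysql.columns_priv LIKE '%_priv';
The Type value displayed for each column lists the full set of privilege names it can hold.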
Briefly, the server uses the grant tables as follows:
usertable scope columns determine whether to reject or allow incoming connections. For allowed connections, any privileges granted in the
usertable indicate the user's global (superuser) privileges. These privileges apply to all databases on the server.
dbtable scope columns determine which users can access which databases from which hosts. The privilege columns determine which operations are allowed. A privilege granted at the database level applies to the database and to all its tables.
The host table is used in conjunction with the db table when you want a given db table entry to apply to several hosts; its use is described further in section 5.5.6 Access Control, Stage 2: Request Verification. Note: The host table is not affected by the GRANT and REVOKE statements. Most MySQL installations need not use this table at all.
Administrative privileges (such as
RELOAD or
SHUTDOWN)
are specified only in the
user table. This is because
administrative operations are operations on the server itself and are not
database-specific, so there is no reason to list these privileges in the other grant tables. Consequently, to determine whether you can perform an administrative operation, the server need consult only the user table. The server reads the grant tables into memory when it starts; you can tell it to re-read them by issuing a FLUSH PRIVILEGES statement or executing a mysqladmin flush-privileges or mysqladmin reload command.
Changes to the grant tables take effect as indicated in
section 5.5.7 When Privilege Changes Take Effect.
A useful
diagnostic tool is the
mysqlaccess script, which Yves Carlier has
provided for the MySQL distribution. Invoke
mysqlaccess with
the
--help option to find out how it works.
Note that
mysqlaccess checks access using only the
user,
db, and
host tables. It does not check table or column
privileges specified in the
tables_priv or
columns_priv tables.
For additional help in diagnosing privilege-related problems, see
section 5.5.8 Causes of
Access denied Errors. For general advice on security issues, see
section 5.4 General Security Issues.
Information about account privileges is stored in the
user,
db,
host,
tables_priv, and
columns_priv tables in the
mysql database. The MySQL server reads the contents of these
tables into memory when it starts and re-reads them under the circumstances
indicated in section 5.5.7 When Privilege Changes Take Effect. The names used in the GRANT and REVOKE statements to refer to the available privileges are listed in section 13.5.1.2 GRANT and REVOKE Syntax.
The
CREATE TEMPORARY TABLES,
EXECUTE,
LOCK TABLES,
REPLICATION CLIENT,
REPLICATION SLAVE,
SHOW DATABASES,
and
SUPER privileges were added in MySQL 4.0.2.
The
EXECUTE and
REFERENCES privileges currently are unused.
The
FILE privilege gives you permission to read and write files on
the server host using the
LOAD DATA INFILE and
SELECT ... INTO
OUTFILE statements. A user who has the
FILE privilege can read any
file on the server host that is either world-readable or readable by the MySQL
server. (This implies the user can read any file in any database
directory, because the server can access any of those files.)
The FILE privilege also can be used to create new files in any directory where the MySQL server has write access; existing files cannot be overwritten.
See section 13.5.5.3 KILL Syntax. The REPLICATION SLAVE privilege should be granted to accounts that are used by slave servers when they connect to the current server as their master.
Without this privilege, the slave cannot request updates that have been made
to databases on the master server.
The
SHOW DATABASES privilege allows the account to see database names
by issuing the
SHOW DATABASE statement. Accounts that do not have this
privilege see only databases for which they have some privileges, and cannot
use the statement at all if the server was started with the
--skip-show-database option.
It is a good idea in general to grant privileges to only those accounts that need them, but you should exercise particular caution in granting administrative privileges:
GRANTprivilege allows users to give their privileges to other users. Two users with different privileges and with the
GRANTprivilege are able to combine privileges.
ALTERprivilege may be used to subvert the privilege system by renaming tables.
The FILE privilege can be abused to read into a database table any files that the MySQL server can read on the server host.
SHUTDOWNprivilege can be abused to deny service to other users entirely by terminating the server.
PROCESSprivilege can be used to view the plain text of currently executing queries, including queries that set or change passwords.
SUPERprivilege can be used to terminate other clients or change how the server operates.
mysqldatabase itself can be used to change passwords and other access privilege information. Passwords are stored encrypted, so a malicious user cannot simply read them to know the plain text password. However, a user with write access to the
usertable
Passwordcolumn can change an account's password, and then connect to the MySQL server using that account.
There are some things that you cannot do with the MySQL privilege system:
MySQL client programs generally expect you to specify connection parameters when you want to access a MySQL server:
For example, the
mysql client can be started as follows from a command-line prompt
(indicated here by
shell>):
shell> mysql -h host_name -u user_name -p db_name
When you give the -p or --password option with no following password value, the client program prompts you to enter the password.
The password is not displayed as you enter it.
This is more secure than giving the password on the command line.
Any user on your system may be able to see a password specified on the command
line by executing a command such as
ps auxww.
See section 5.6.6 Keeping Your Password Secure.
MySQL client programs use default values for any connection parameter option that you do not specify:
localhost.
ODBCon Windows and your Unix login name on Unix.
-pis missing.
Thus, for a Unix user with a login name of
joe, all of the following
commands are equivalent:
shell> mysql -h localhost -u joe shell> mysql -h localhost shell> mysql -u joe shell> mysql
Other MySQL clients behave similarly.
You can specify different default values to be used when you make a connection so that you need not enter them on the command line each time you invoke a client program. This can be done in a couple of ways:
[client]section of an option file. The relevant section of the file might look like this:
[client] host=host_name user=user_name password=your_passOption files are discussed further in section 4.3.2 Using Option Files.
mysqlusing
MYSQL_HOST. The MySQL username can be specified using
USER(this is for Windows and NetWare only). The password can be specified using
MYSQL_PWD, although this is insecure; see section 5.6.6 Keeping Your Password Secure. For a list of variables, see section F Environment Variables.
When you attempt to connect to a MySQL server, the server accepts or rejects the connection based on your identity and whether you can verify your identity by supplying the correct password. If not, the server denies access to you completely. Otherwise, the server accepts the connection, then enters Stage 2 and waits for requests. Your identity is based on the client host from which you connect and the MySQL username you specify; the server accepts the connection only if a user table entry matches them and you supply the password listed in that entry. Values in the user table scope columns can be specified as follows:
Hostvalue may be a hostname or an IP number, or
'localhost'to indicate the local host.
Hostcolumn values. These have the same meaning as for pattern-matching operations performed with the
LIKEoperator. For example, a
Hostvalue of
'%'matches any hostname, whereas a value of
'%.mysql.com'matches any host in the
mysql.comdomain.
Hostvalues specified as IP numbers, you can specify a netmask indicating how many address bits to use for the network number. For example:
mysql> GRANT ALL PRIVILEGES ON db.* -> TO [email protected]'192.58.197.0/255.255.255.0';This allows
davidto connect from any client host having an IP number
client_ipfor which the following condition is true:
client_ip & netmask = host_ipThat is, for the
GRANTstatement just shown:
client_ip & 255.255.255.0 = 192.58.197.0IP numbers that satisfy this condition and can connect to the MySQL server are those that lie in the range from
192.58.197.0to
192.58.197.255.
Hostvalue in a
dbtable record means that its privileges should be combined with those in the entry in the
hosttable that matches the client hostname. The privileges are combined using an AND (intersection) operation, not OR (union). You can find more information about the
hosttable in section 5.5.6 Access Control, Stage 2: Request Verification. A blank
Hostvalue in the other grant tables is the same as
'%'.
Because you can use IP wildcard values in the
Host column
(for example,
'144.155.166.%' to match every host on a
subnet), someone could try to exploit this capability by naming a host
144.155.166.somewhere.com. To foil such attempts, MySQL disallows
matching on hostnames that start with digits and a dot. Thus, if you have
a host named something like
1.2.foo.com, its name will never match
the
Host column of the grant tables. An IP wildcard value can
match only IP numbers, not hostnames.
In the
User column, wildcard characters are not allowed, but you can
specify a blank value, which matches any name. If the
user table
entry that matches an incoming connection has a blank username, the user is
considered to be an anonymous user with no name, not a user with the
name that the client actually specified. This means that a blank username
is used for all further access checking for the duration of the connection
(that is, during Stage 2).
The
Password column can be blank. This is not a wildcard and does
not mean that any password matches. It means that the user must connect
without specifying a password.
Non-blank
Password values in the
user table represent
encrypted passwords. MySQL does not store passwords in plaintext form for
anyone to see. Rather, the password supplied by a user who is attempting to
connect is encrypted (using the
PASSWORD() function). The encrypted
password then is used during the connection process when checking whether
the password is correct. (This is done without the encrypted password ever
traveling over the connection.) From MySQL's point of view, the
encrypted password is the REAL password, so you should never give anyone access to it. For more about password handling, see section 5.5.9 Password Hashing in MySQL 4.1.
The following examples show how various combinations of
Host and
User values in the
user table apply to incoming
connections:
It is possible for the client hostname and username of
an incoming connection to match more than one entry in the
user table. The preceding set of examples demonstrates this:
Several of the entries shown match a connection from
thomas.loc.gov by
fred.
When multiple matches are possible, the server must determine which of them to use. It resolves this issue as follows:
Whenever the server reads the user table into memory, it sorts the entries. Entries with the most-specific Host values come first ('%' means ``any host'' and is least specific). Entries with the same Host value are ordered with the most-specific User values first (a blank User value means
``any user'' and is least specific). For the
user table just shown,
the result after sorting looks like this:
+-----------+----------+- | Host | User | ... +-----------+----------+- | localhost | root | ... | localhost | | ... | % | jeffrey | ... | % | root | ... +-----------+----------+-
When a client attempts to connect, the server looks through the sorted entries and uses the first match found. Where more than one entry matches the connecting host and username, the entry with the more specific Host value appears first in sorted order, so that is the one
the server uses.
Here is another example. Suppose that the
user table looks like this:
+----------------+----------+- | Host | User | ... +----------------+----------+- | % | jeffrey | ... | thomas.loc.gov | | ... +----------------+----------+-
The sorted table looks like this:
+----------------+----------+- | Host | User | ... +----------------+----------+- | thomas.loc.gov | | ... | % | jeffrey | ... +----------------+----------+-
A connection by
jeffrey from
thomas.loc.gov is matched by the
first entry, whereas a connection by
jeffrey from
whitehouse.gov
is matched by the second.
It is a common misconception to think that, for a given username, all entries
that explicitly name that user will be used first when the server attempts to
find a match for the connection. This is simply not true. The previous
example illustrates this, where a connection from
thomas.loc.gov by
jeffrey is first matched not by the entry containing
'jeffrey'
as the
User column value, but by the entry with no username!
As a result,
jeffrey will be authenticated as an anonymous user, even
though he specified a username when connecting.
If you are able to connect to the server, but your privileges are not
what you expect, you probably are being authenticated as some other
account. To find out what account the server used to authenticate
you, use the
CURRENT_USER() function. It returns a value in user_name@host_name format that indicates the User and Host values from the matching user table entry.
See section 12.8.3 Information Functions.
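A quick check is to compare the identity the client supplied with the account the server actually used; the output below is only an illustration of what a session matched by an anonymous-user entry might look like:
mysql> SELECT USER(), CURRENT_USER();
+-------------------+----------------+
| USER()            | CURRENT_USER() |
+-------------------+----------------+
| jeffrey@localhost | @localhost     |
+-------------------+----------------+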
Another thing you can do to diagnose authentication problems is to print
out the
user table and sort it by hand to see where the first
match is being made. After the connection stage, the server consults the db and host tables. In those tables, the wildcard characters `%' and `_' can be used in the Host and
Dbcolumns of either table. These have the same meaning as for pattern-matching operations performed with the
LIKEoperator. If you want to use either character literally when granting privileges, you must escape it with a backslash. For example, to include `_' character as part of a database name, specify it as `\_' in the
GRANTstatement.
'%'
Hostvalue in the
dbtable means ``any host.'' A blank
Hostvalue in the
dbtable means ``consult the
hosttable for further information'' (a process that is described later in this section).
'%'or blank
Hostvalue in the
hosttable means ``any host.''
'%'or blank
Dbvalue in either table means ``any database.''
Uservalue in either table matches the anonymous user.
The server reads in and sorts the db and host tables at the same time that it reads the user table. The tables_priv and columns_priv tables grant table-specific and column-specific privileges. Values in the scope columns of these tables can
take the following form:
Hostcolumn of either table. These have the same meaning as for pattern-matching operations performed with the
LIKEoperator.
'%'or blank
Hostvalue in either table means ``any host.''
Db,
Table_name, and
Column_namecolumns cannot contain wildcards or be blank in either table.
The server sorts the
tables_priv and
columns_priv tables based
on the
Host,
Db, and
User columns. This is similar to
db table sorting, but simpler because only the
Host column can
contain wildcards.
The request verification process is described here. (If you are familiar with the access-checking source code, you will notice that the description here differs slightly from the algorithm used in the code. The description is equivalent to what the code actually does; it differs only to make the explanation simpler.)
For requests that require administrative privileges such as
SHUTDOWN or
RELOAD, the
server checks only the
user table entry because that is the only table
that specifies administrative privileges. Access is granted if the entry
allows the requested operation and denied otherwise. For example, if you
want to execute
mysqladmin shutdown but your
user table entry. If the entry allows the requested operation,
access is granted. If the global privileges in the
user table are
insufficient, the server determines the user's database-specific privileges
by checking the
db and
host tables:
dbtable for a match on the
Host,
Db, and
Usercolumns. The
Hostand
Usercolumns are matched to the connecting user's hostname and MySQL username. The
Dbcolumn is matched to the database that the user wants to access. If there is no entry for the
Hostand
User, access is denied.
dbtable entry and its
Hostcolumn is not blank, that entry defines the user's database-specific privileges.
dbtable entry's
Hostcolumn is blank, it signifies that the
hosttable enumerates which hosts should be allowed access to the database. In this case, a further lookup is done in the
hosttable to find a match on the
Hostand
Dbcolumns. If no
hosttable entry matches, access is denied. If there is a match, the user's database-specific privileges are computed as the intersection (not the union!) of the privileges in the
dband
hosttable entries; that is, the privileges that are
'Y'in both entries. (This way you can grant general privileges in the
dbtable entry and then selectively restrict them on a host-by-host basis using the
host table.)
Expressed in boolean terms, the preceding description of how a user's privileges are calculated may be summarized like this:
global privileges OR (database privileges AND host privileges) OR table privileges OR column privileges
It may not be apparent why, if the global
user entry grants one
privilege and the db table entry grants another, the two are combined as just described. The host table can be used to maintain a list of secure servers. For example, at TcX, the
host
table contains a list of all machines on the local network. These are
granted all privileges.
You can also use the
host table to indicate hosts that are not
secure. Suppose that you have a machine
public.your.domain that is located
in a public area that you do not consider secure. You can allow access to
all hosts on your network except that machine by using
host table
entries
like this:
+--------------------+----+- | Host | Db | ... +--------------------+----+- | public.your.domain | % | ... (all privileges set to 'N') | %.your.domain | % | ... (all privileges set to 'Y') +--------------------+----+-
Naturally, you should always test your entries in the grant tables (for
example, by using
SHOW GRANTS or
mysqlaccess) to make sure that your access privileges are
actually set up the way you think they are.
When
mysqld starts, all grant table contents are read into memory and
become effective for access control at that point.
When the server reloads the grant tables, privileges for existing client connections are affected as follows:
Table and column privilege changes take effect with the client's next request. Database privilege changes take effect the next time the client executes a USE db_name statement. Global privilege changes and password changes take effect the next time the client connects.
If you modify the grant tables using
GRANT,
REVOKE, or
SET PASSWORD, the server notices these changes and reloads the
grant tables into memory again immediately.
If you modify the grant tables directly using statements such as
INSERT,
UPDATE, or
DELETE, your changes have
no effect on privilege checking until you either restart the server
or tell it to reload the tables. To reload the grant tables manually,
issue a
FLUSH PRIVILEGES statement or execute a
mysqladmin
flush-privileges or
mysqladmin reload command.
If you change the grant tables directly but forget to reload them, your changes will have no effect until you restart the server. This may leave you wondering why your changes don't seem to make any difference!
Causes of Access denied Errors
If you encounter problems when you try to connect to the MySQL server, the following items describe some courses of action you can take to correct the problem.
shell> mysql ERROR 2003: Can't connect to MySQL server on 'host_name' (111) shell> mysql ERROR 2002: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (111)It might also be that the server is running, but you are trying to connect using a TCP/IP port, named pipe, or Unix socket file different from those on which the server is listening. To correct this when you invoke a client program, specify a
--portoption to indicate the proper port, or a
--socketoption to indicate the proper named pipe or Unix socket file. To find out what port is used, and where the socket is, you can do:
shell> netstat -l | grep mysql
mysqldatabase containing the grant tables. For distributions that do not do this, you should initialize the grant tables manually by running the
mysql_install_dbscript. For details, see section 2.9.2 Unix Post-Installation Procedures. One way to determine whether you need to initialize the grant tables is to look for a `mysql' directory under the data directory. (The data directory normally is named `data' or `var' and is located under your MySQL installation directory.) Make sure that you have a file named `user.MYD' in the `mysql' database directory. If you do not, execute the
mysql_install_dbscript. After running this script and starting the server, test the initial privileges by executing this command:
shell> mysql -u root testThe server should let you connect without error.
shell> mysql -u root mysqlThe server should let you connect because the MySQL
rootuser has no password initially. That is also a security risk, so setting the password for the
rootaccounts is something you should do while you're setting up your other MySQL users. For instructions on setting the initial passwords, see section 2.9.3 Securing the Initial MySQL Accounts.
mysql_fix_privilege_tablesscript? If not, do so. The structure of the grant tables changes occasionally when new capabilities are added, so after an upgrade you should always make sure that your tables have the current structure. For instructions, see section 2.10.7 Upgrading the Grant Tables.
shell> mysql Client does not support authentication protocol requested by server; consider upgrading MySQL clientFor information on how to deal with this, see section 5.5.9 Password Hashing in MySQL 4.1 and section A.2.3
Client does not support authentication protocol.
rootand get the following error, it means that you don't have an entry in the
usertable with a
Usercolumn value of
'root'and that
mysqldcannot resolve the hostname for your client:
Access denied for user ''@'unknown' to database mysqlIn this case, you must restart the server with the
--skip-grant-tablesoption and edit your `/etc/hosts' or `\windows\hosts' file to add an entry for your host.
Access deniedwhen you run a client without any options, make sure that you haven't specified an old password in any of your option files! You can suppress the use of option files by a client program by invoking it with the
--no-defaultsoption. For example:
shell> mysqladmin --no-defaults -u root versionThe option files that clients use are listed in section 4.3.2 Using Option Files. Environment variables are listed in section F Environment Variables.
rootpassword:option as described in the previous item. For information on changing passwords, see section 5.6.5 Assigning Account Passwords. If you have lost or forgotten the
rootpassword, you can restart
mysqldwith
--skip-grant-tablesto change the password. See section A.4.1 How to Reset the Root Password.
SET PASSWORD,
INSERT, or
UPDATE, you must encrypt the password using the
PASSWORD()function. If you do not use
PASSWORD()for these statements, the password will not work. For example, the following statement sets a password, but fails to encrypt it, so the user will not be able to connect afterward:
mysql> SET PASSWORD FOR 'abe'@'host_name' = 'eagle';Instead, set the password like this:
mysql> SET PASSWORD FOR 'abe'@'host_name' = PASSWORD('eagle');The
PASSWORD()function is unnecessary when you specify a password using the
GRANTstatement or the
mysqladmin passwordcommand, both of which automatically use
PASSWORD()to encrypt the password. See section 5.6.5 Assigning Account Passwords.
localhostis a synonym for your local hostname, and is also the default host to which clients try to connect if you specify no host explicitly. However, connections to
localhoston Unix systems do not work if you are using a MySQL version older than 3.23.27 that uses MIT-pthreads:
localhostconnections are made using Unix socket files, which were not supported by MIT-pthreads at that time. To avoid this problem on such systems, you can use a
--host=127.0.0.1option to name the server host explicitly. This will make a TCP/IP connection to the local
mysqldserver. You can also use TCP/IP by specifying a
--hostoption that uses the actual hostname of the local host. In this case, the hostname must be specified in a
usertable entry on the server host, even though you are running the client program on the same host as the server.
Access deniederror when trying to connect to the database with
mysql -u user username.
Access deniederror message will tell you who you are trying to log in as, the client host from which you are trying to connect, and whether or not you were using a password. Normally, you should have one entry in the
usertable that exactly matches the hostname and username that were given in the error message. For example, if you get an error message that contains
using password: NO, it means that you tried to log in without a password.
usertable with a
Hostvalue that matches the client host:
Host ... is not allowed to connect to this MySQL server
You can fix this by setting up an account for the combination of client hostname and username that you are using when trying to connect. If you do not know the hostname from which you are connecting, you can temporarily use '%' as the Host value; after testing, change the '%' in the user table entry to the actual hostname that shows up in the log. Otherwise, you'll have a system that is insecure because it allows connections from any host for the given username.
shell> mysqladmin -u root -pxxxx -h some-hostname ver Access denied for user 'root'@'' (using password: YES)This indicates a DNS problem. To fix it, execute
mysqladmin flush-hoststo reset the internal DNS hostname cache. See section 7.5.6 How MySQL Uses DNS. Some permanent solutions are:
/etc/hosts.
mysqldwith the
--skip-name-resolveoption.
mysqldwith the
--skip-host-cacheoption.
localhost. Unix connections to
localhostuse a Unix socket file rather than TCP/IP.
.(period). Connections to
.use a named pipe rather than TCP/IP.
mysql -u root testworks but
mysql -h your_hostname -u root testresults in
Access denied (where your_hostname is the actual hostname of the local host), there may be no user table entry whose Host value matches that hostname. If you can connect to the server but get Access denied for a particular database, you have not granted database access for other_db_name to the given user. If
mysql -u user_nameworks when executed on the server host, but
mysql -h host_name -u user_name doesn't work when executed on a remote client host, you have not enabled access to the server for the given username from the remote host. Another common reason that a new account doesn't work is that the default privileges include an anonymous-user entry with Host = 'localhost' and User = ''. Because that entry has a Host value of 'localhost' that is more specific than '%', it takes precedence when you connect from the local host, and you are authenticated as the anonymous user rather than under the name you specified. The fix is either to create an entry with Host = 'localhost' for your username or to delete the anonymous-user entry; in the latter case, remember to issue a FLUSH PRIVILEGES statement to reload the grant tables. If you get the following error, there may be a problem with the
dbor
hosttable:
Access to database deniedIf the entry selected from the
dbtable has an empty value in the
Hostcolumn, make sure that there are one or more corresponding entries in the
hosttable specifying which hosts the
dbtable entry applies to.
Access deniedmessage whenever you issue a
SELECT ... INTO OUTFILEor
LOAD DATA INFILEstatement, your entry in the
usertable doesn't have the
FILEprivilege enabled.
INSERT,
UPDATE, or
DELETEstatements) and your changes seem to be ignored, remember that you must execute a
FLUSH PRIVILEGESstatement or a
mysqladmin flush-privilegescommand to cause the server to re-read the privilege tables. Otherwise, your changes have no effect until the next time the server is restarted. Remember that after you change the
rootpassword with an
UPDATEcommand, you won't need to specify the new password until after you flush the privileges, because the server won't know you've changed the password yet!
mysql -u user_name db_nameor
mysql -u user_name -pyour_pass db_name. If you are able to connect using the
mysqlclient, the problem lies with your program, not with the access privileges. (There is no space between
-pand the password; you can also use the
--password=your_passsyntax to specify the password. If you use the
-poption alone, MySQL will prompt you for the password.)
mysqldserver with the
--skip-grant-tablesoption. Then you can change the MySQL grant tables and use the
mysqlaccessscript to check whether your modifications have the desired effect. When you are satisfied with your changes, execute
mysqladmin flush-privilegesto tell the
mysqldserver to start using the new grant tables. (Reloading the grant tables overrides the
--skip-grant-tablesoption. This allows you to tell the server to begin using the grant tables again without stopping and restarting it.)
mysqldserver with a debugging option (for example,
--debug=d,general,query). This will print host and user information about attempted connections, as well as information about each command issued. See section E.1.2 Creating Trace Files.
mysqldump mysqlcommand. As always, post your problem using the
mysqlbugscript. See section 1.4.1.3 How to Report Bugs or Problems. In some cases, you may need to restart
mysqldwith
--skip-grant-tablesto run
mysqldump.
MySQL user accounts are listed in the
user table of the
mysql
database. Each MySQL account is assigned a password, although
what is stored in the
Password column of the
user table is not the
plaintext version of the password, but a hash value computed from
it. Password hash values are computed by the
PASSWORD() function.
MySQL uses passwords in two phases of client/server communication:
usertable for the account that the client wants to use.
usertable. The client can do this by using the
PASSWORD()function to generate a password hash, or by using the
GRANTor
SET PASSWORDstatements.
In other words, the server uses hash values during authentication when
a client first attempts to connect. The server generates hash values
if a connected client invokes the
PASSWORD() function or uses a
GRANT
or
SET PASSWORD statement to set or change a password.
The password hashing mechanism was updated in MySQL 4.1 to provide
better security and to reduce the risk of passwords being intercepted.
However, this new mechanism is understood only by the 4.1 server and by 4.1 clients, so older clients may run into compatibility problems. Note: This discussion contrasts 4.1 behavior with pre-4.1 behavior, but the 4.1 behavior described here actually begins with 4.1.1. MySQL 4.1.0 is an ``odd'' release because it has a slightly different mechanism than that implemented in 4.1.1 and up. Differences between 4.1.0 and more recent versions are described further in section 5.5.9.2 Password Hashing in MySQL 4.1.0.
Prior to MySQL 4.1, password hashes computed by the
PASSWORD() function
are 16 bytes long. Such hashes look like this:
mysql> SELECT PASSWORD('mypass'); +--------------------+ | PASSWORD('mypass') | +--------------------+ | 6f8c114b58f2ce9e | +--------------------+
The
Password column of the
user table (in which these hashes are stored)
also is 16 bytes long before MySQL 4.1.
As of MySQL 4.1, the PASSWORD() function produces a longer 41-byte hash value, and how the server handles passwords depends on the width of the Password column:
Passwordcolumn will be made 41 bytes long automatically.
mysql_fix_privilege_tablesscript to increase the length of the
Passwordcolumn from 16 to 41 bytes. (The script does not change existing password values, which remain 16 bytes long.)
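A quick way to see which formats are involved is to compare hash lengths; OLD_PASSWORD(), available from 4.1, always produces the short format, so on a 4.1 server with a wide Password column and without --old-passwords you would expect the values 41 and 16:
mysql> SELECT LENGTH(PASSWORD('mypass')), LENGTH(OLD_PASSWORD('mypass'));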
A widened
Password column can store password hashes in both the old and
new formats. The format of any given password hash value can be
determined two ways: from its length (16 bytes for the old format, 41 bytes for the new one), and from its leading character (hashes in the 4.1 format always begin with a `*' character, whereas pre-4.1 hashes never do). The longer password hash format has better cryptographic properties, and client authentication based on long hashes is more secure than that based on the older short hashes.
For short-hash accounts, the authentication process is actually a bit more secure for 4.1 clients than for older clients. In terms of security, the gradient from least to most secure is:
The way in which the server generates password hashes for connected
clients is affected by the width of the
Password column and by the
--old-passwords option. A 4.1 server generates long hashes only if certain
conditions are met:
The
Password column must be wide enough to hold long
values and the
--old-passwords option must not be given.
These conditions apply as follows:
Passwordcolumn
GRANT, or
SET PASSWORD. This is the behavior that occurs if you have upgraded to 4.1 but have not yet run the
mysql_fix_privilege_tablesscript to widen the
Passwordcolumn.
Passwordcolumn is wide, it can store either short or long password hashes. In this case,
GRANT, and
SET PASSWORDgenerate long hashes unless the server was started with the
--old-passwords option. The purpose of --old-passwords is to keep accounts usable by pre-4.1 clients. It does not prevent newer clients from authenticating (4.1 clients can still use accounts that have
long password hashes), but it does prevent creation of a long
password hash in the
user table as the result of a password-changing
operation. Were that to occur, the account no longer could be used
by pre-4.1 clients. Without the
--old-passwords option, the following
undesirable scenario is possible:
An older client connects to an account that has a short password hash and then changes that account's password. Without --old-passwords, this results in the account having a long password hash, and from that point on the older client can no longer authenticate to the account.
This scenario illustrates that, if you must support older pre-4.1 clients,
it is dangerous to run a 4.1 server without using the
--old-passwords
option. By running the server with
--old-passwords, password-changing
operations will not generate long password hashes and thus do not cause
accounts to become inaccessible to older clients. (Those clients cannot
inadvertently lock themselves out by changing their password and ending
up with a long password hash.)
The downside of the
--old-passwords option is that any passwords you
create or change will use short hashes, even for 4.1 clients. Thus, you
lose the additional security provided by long password hashes. If you want
to create an account that has a long hash (for example, for use by 4.1
clients), you must do so while running the server without
--old-passwords.
The following scenarios are possible for running a 4.1 server:
Scenario 1: Short
Password column in user table:
Only short hashes can be stored in the Password column. Password hashes generated by PASSWORD(), GRANT, or SET PASSWORD use short hashes exclusively. Any change to an account's password results in that account having a short password hash. The
--old-passwordsoption can be used but is superfluous because with a short
Passwordcolumn, the server will generate only short password hashes anyway.
Scenario 2: Long
Password column; server not started with
--old-passwords option:
Short or long hashes can be stored in the Password column. Password hashes generated by PASSWORD(), GRANT, or SET PASSWORD use long hashes exclusively, so any change to an account's password results in a long hash. A danger in this scenario is that accounts with short password hashes become inaccessible to pre-4.1 clients: a change to such an account's password via
GRANT,
PASSWORD(), or
SET PASSWORD results in the account being
given a long password hash. From that point on, no pre-4.1 client can
authenticate to that account until the client upgrades to 4.1.
To deal with this problem, you can change a password in a special way.
For example, normally you use
SET PASSWORD as follows to change
an account:
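The statements involved look like this (the account name and password are placeholders): the first form produces whatever hash format the server is currently generating, while the second explicitly produces a short hash so that pre-4.1 clients can keep using the account:
mysql> SET PASSWORD FOR 'some_user'@'some_host' = PASSWORD('mypass');
mysql> SET PASSWORD FOR 'some_user'@'some_host' = OLD_PASSWORD('mypass');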
Scenario 3: Long Password column; server started with the --old-passwords option: Short or long hashes can be stored in the Password column. Password hashes generated by PASSWORD(),
GRANT, or
SET PASSWORDuse short hashes exclusively. Any change to an account's password results in that account having a short password hash.
In this scenario, you cannot create accounts that have long password
hashes, because the
--old-passwords option prevents generation of long hashes. Also,
if you create an account with a long hash before using the
--old-passwords
option, changing the account's password while
--old-passwords is in
effect results in the account being given a short password, causing it
to lose the security benefits of a longer hash.
The disadvantages for these scenarios may be summarized as follows:
In scenario 1, you cannot take advantage of longer hashes that provide more secure authentication.
In scenario 2, accounts with short hashes become inaccessible to pre-4.1
clients if you change their passwords without explicitly using
OLD_PASSWORD().
In scenario 3,
--old-passwords prevents accounts with short hashes from
becoming inaccessible, but password-changing operations cause accounts
with long hashes to revert to short hashes, and you cannot change them
back to long hashes while
--old-passwords is in effect.
An upgrade to MySQL 4.1 can cause a compatibility issue for
applications that use
PASSWORD() to generate passwords for their own
purposes. Applications really should not do this, because
PASSWORD() should be used only to manage passwords for MySQL accounts. But some
applications use
PASSWORD() for their own purposes anyway.
If you upgrade to 4.1 and run the server under conditions where it
generates long password hashes, an application that uses PASSWORD() for its own purposes will break; such applications should be changed to use another function, such as MD5() or SHA1(). Note also that password hashing in MySQL 4.1.0 differs from hashing in 4.1.1 and up; the 4.1.0 differences are described in section 5.5.9.2 Password Hashing in MySQL 4.1.0.
This section describes how to set up accounts for clients of your MySQL server. It discusses how MySQL usernames and passwords work, how to set up new accounts and remove existing ones, how to change passwords, guidelines for keeping passwords secure, and how to use secure (SSL) connections.
-uor
--useroption..
PASSWORD()SQL function. Unix password encryption is the same as that implemented by the
ENCRYPT()SQL function. See the descriptions of the
PASSWORD()and
ENCRYPT()functions in section 12.8.2 Encryption.)
When you install MySQL, the grant tables are populated with an initial set of
accounts. These accounts have names and access privileges that are described
in section 2.9.3 Securing the Initial MySQL Accounts, which also discusses how to assign passwords
to them. Thereafter, you normally set up, modify, and remove MySQL accounts
using the
GRANT and
REVOKE statements.
See section 13.5.1.2
GRANT and
REVOKE Syntax..
You can create MySQL accounts in two ways:
GRANTstatements
The preferred method is to use
GRANT statements, because they are
more concise and less error-prone.
GRANT is available as of MySQL
3.22.11; its syntax is described in
section 13.5.1.2
GRANT and
REVOKE Syntax. The following examples show how to use the mysql client to set up new accounts. They assume that privileges are set up according to the defaults described in section 2.9.3 Securing the Initial MySQL Accounts, which means that to make changes you must connect to the MySQL server as the MySQL root user.
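GRANT statements along the following lines create the accounts described next; 'some_pass' is only a placeholder password:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'localhost'
    ->     IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'%'
    ->     IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
mysql> GRANT RELOAD,PROCESS ON *.* TO 'admin'@'localhost';
mysql> GRANT USAGE ON *.* TO 'dummy'@'localhost';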
Two superuser accounts with the username monty and full privileges: 'monty'@'localhost' can be used only when connecting from the local host, while 'monty'@'%' uses the '%' wildcard in the host part and can be used when connecting from any other host. Both accounts are needed for monty to be able to connect from anywhere as monty. Without the
localhostaccount, the anonymous-user account for
localhostthat is created by
mysql_install_dbwould take precedence when
montyconnects from the local host. As a result,
montywould be treated as an anonymous user. The reason for this is that the anonymous-user account has a more specific
Hostcolumn value than the
'monty'@'%'account and thus comes earlier in the
usertable sort order. (
usertable sorting is discussed in section 5.5.5 Access Control, Stage 1: Connection Verification.)
An account with the username admin and no password. This account can be used only by connecting from the local host. It is granted the
RELOADand
PROCESSadministrative privileges. These privileges allow the
adminuser to execute the
mysqladmin reload,
mysqladmin refresh, and
mysqladmin flush-xxxcommands, as well as
mysqladmin processlist. No privileges are granted for accessing any databases. You could add such privileges later by issuing additional
GRANTstatements.
An account with the username dummy and no password. This account can be used only by connecting from the local host. No privileges are granted; the USAGE privilege in the GRANT statement lets you create an account without giving it any privileges, on the assumption that specific privileges will be granted later. The next examples create accounts that each have access to one specific database:
One account can access the bankaccount database, but only from the local host.
One can access the expenses database, but only from the host whitehouse.gov.
One can access the customer database, but only from one designated remote host (see the statements below).
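GRANT statements along these lines set up such accounts; the username 'custom', the password, and the third host name are placeholders:
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
    ->     ON bankaccount.* TO 'custom'@'localhost' IDENTIFIED BY 'obscure';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
    ->     ON expenses.* TO 'custom'@'whitehouse.gov' IDENTIFIED BY 'obscure';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
    ->     ON customer.* TO 'custom'@'server.domain' IDENTIFIED BY 'obscure';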
To remove an account, use the
DROP USER statement, which was added in
MySQL 4.1.1. For older versions of MySQL, use
DELETE instead.
The account removal procedure is described in section 13.5.1.1
DROP USER Syntax.
Limiting the resources that individual accounts may consume is of interest to many MySQL administrators, particularly those at Internet Service Providers.
Starting from MySQL 4.0.2, you can limit the following server resources for individual accounts: the number of queries an account can issue per hour, the number of updates it can issue per hour, and the number of times it can connect to the server per hour (see the example following this list). As a prerequisite, the user table in the mysql database must contain the resource-related columns; grant tables from older versions can be updated with the script described in section 2.10.7 Upgrading the Grant Tables.
Current per-hour counts can be reset to zero for all accounts by issuing a FLUSH USER_RESOURCES statement. The counts also can be reset by reloading the grant tables (for example, with a
FLUSH PRIVILEGESstatement or a
mysqladmin reloadcommand).
GRANT USAGE as described earlier and specify a limit value equal to the value that the account already has.
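For example, a GRANT statement with a WITH clause along these lines sets all three limits at once for one account (the account, password, and values are placeholders); the available options are MAX_QUERIES_PER_HOUR, MAX_UPDATES_PER_HOUR, and MAX_CONNECTIONS_PER_HOUR:
mysql> GRANT ALL ON customer.* TO 'francis'@'localhost'
    ->     IDENTIFIED BY 'frank'
    ->     WITH MAX_QUERIES_PER_HOUR 20
    ->          MAX_UPDATES_PER_HOUR 10
    ->          MAX_CONNECTIONS_PER_HOUR 5;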
Passwordcolumn:
shell> mysql -u root mysql mysql> INSERT INTO user (Host,User,Password) -> VALUES('%','jeffrey',PASSWORD('biscuit')); mysql> FLUSH PRIVILEGES;
UPDATE to set the Password column of an existing account row. Note that PASSWORD() encryption differs from Unix password encryption; see section 5.6.1 MySQL Usernames and Passwords.
-pyour_passor
--password=your_passoption on the command line. For example:
shell> mysql -u francis -pfrank db_nameThis is convenient but insecure, because your password becomes visible to system status programs such as
psthat may be invoked by other users to display command lines. MySQL clients typically overwrite the command-line password argument with zeros during their initialization sequence, but there is still a brief interval during which the value is visible.
-por
--passwordoption with no password value specified. In this case, the client program solicits the password from the terminal:
shell> mysql -u francis -p db_name Enter password: *******!
[client]section of the `.my.cnf' file in your home directory:
[client] password=your_passIf you store your password in `.my.cnf', the file should not be accessible to anyone but yourself. To ensure this, set the file access mode to
400or
600. For example:
shell> chmod 600 .my.cnfsection 4.3.2 Using Option Files discusses option files in more detail.
MYSQL_PWDenvironment variable. This method of specifying your MySQL password must be considered extremely insecure and should not be used. Some versions of
psinclude section F Environment Variables.
All in all, the safest methods are to have the client program prompt for the password or to specify the password in a properly protected option file. the requirements of individual applications..
To use SSL connections between the MySQL server and client programs, your system must be able to support OpenSSL and your version of MySQL must be 4.0.0 or newer.
To get secure connections to work with MySQL, you must do the following:
Build MySQL with SSL support: install the OpenSSL library and configure the MySQL source distribution by running the configure script with the --with-vio and --with-openssl options.
Make sure that your grant tables include the SSL-related columns (ssl_type, ssl_cipher, x509_issuer, and x509_subject) in the mysql.user table. This is necessary if your grant tables date from a version prior to MySQL 4.0.0. The upgrade procedure is described in section 2.10.7 Upgrading the Grant Tables.
mysqldserver supports OpenSSL, examine the value of the
have_opensslsystem variable:
mysql> SHOW VARIABLES LIKE 'have_openssl'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | have_openssl | YES | +---------------+-------+If the value is
YES, the server supports OpenSSL connections.
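To have the server offer SSL connections, start it with options that identify its certificate and key files; the file names below are placeholders, and the same options can also be placed in the [mysqld] group of an option file:
shell> mysqld --ssl-ca=cacert.pem \
           --ssl-cert=server-cert.pem \
           --ssl-key=server-key.pem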
SSL GRANT Options
MySQL can check X509 certificate attributes in addition to the usual
authentication that is based on the username and password. To specify
SSL-related options for a MySQL account, use the
REQUIRE clause of
the
GRANT statement.
See section 13.5.1.2
GRANT and
REVOKE Syntax.
There are different possibilities for limiting connection types for an account:
REQUIRE SSLoption limits the server to allow only SSL encrypted connections for the account. Note that this option can be omitted if there are any ACL records that allow non-SSL connections.
mysql> GRANT ALL PRIVILEGES ON test.* TO 'root'@'localhost' -> IDENTIFIED BY 'goodsecret' REQUIRE SSL;
REQUIRE X509means that the client must have a valid certificate but that the exact certificate, issuer, and subject do not matter. The only requirement is that it should be possible to verify its signature with one of the CA certificates.
mysql> GRANT ALL PRIVILEGES ON test.* TO 'root'@'localhost' -> IDENTIFIED BY 'goodsecret' REQUIRE X509;
REQUIRE ISSUER 'issuer' places the restriction on connection attempts that the client must present a valid X509 certificate issued by CA 'issuer'. If the client presents a certificate that is valid but has a different issuer, the server rejects the connection. Note that the ISSUER value should be entered as a single string.
REQUIRE SUBJECT 'subject'places the restriction on connection attempts that the client must present a valid X509 certificate with subject
'subject'on it. If the client presents a certificate that is valid but has a different subject, the server rejects the connection.
mysql> GRANT ALL PRIVILEGES ON test.* TO 'root'@'localhost' -> IDENTIFIED BY 'goodsecret' -> REQUIRE SUBJECT '/C=EE/ST=Some-State/L=Tallinn/ O=MySQL demo client certificate/ CN=Tonu Samuel/[email protected]';Note that the
SUBJECTvalue should be entered as a single string.
REQUIRE CIPHER 'cipher'is needed to ensure that strong enough ciphers and key lengths
CIPHER options can be
combined in the
REQUIRE clause like this:
mysql> GRANT ALL PRIVILEGES ON test.* TO 'root'@'localhost' -> IDENTIFIED BY 'goodsecret' -> REQUIRE SUBJECT '/C=EE/ST=Some-State/L=Tallinn/ O=MySQL demo client certificate/ CN=Tonu Samuel/[email protected]' -> AND ISSUER '/C=FI/ST=Some-State/L=Helsinki/ O=MySQL Finland AB/CN=Tonu Samuel/[email protected]' -> AND CIPHER 'EDH-RSA-DES-CBC3-SHA';
Note that the
SUBJECT and
ISSUER values each should be entered
as a single string.
Starting from MySQL 4.0.4, the
AND keyword is optional between
REQUIRE options.
The order of the options does not matter, but no option can be specified twice.
The following list describes options that are used for specifying the use of SSL, certificate files, and key files. These options are available beginning with MySQL 4.0. They may be given on the command line or in an option file.
--ssl
--ssl-ca,
--ssl-cert, and
--ssl-keyoptions. This option is more often used in its opposite form to indicate that SSL should not be used. To do this, specify the option as
--skip-sslor
--ssl=0. Note that use of
--ssl does not by itself require or enforce a secure connection; the reliable way to require SSL for a given account is to use a REQUIRE SSL clause in the GRANT statement. Then use this account to connect to the server, with both a server and client that have SSL support enabled.
--ssl-ca=file_name
--ssl-capath=directory_name
--ssl-cert=file_name
--ssl-cipher=cipher_list
openssl cipherscommand. Example:
--ssl-cipher=ALL:-AES:-EXP
--ssl-key=file_name
Here is a note about how to connect to get a secure connection to remote MySQL server with SSH (by David Carlson [email protected]):
Install an SSH client on your Windows machine, for example SecureCRT or F-Secure SSH; free SSH clients are also available. Then:
Start your SSH client and set Host_Name = yourmysqlserver_URL_or_IP. Set userid = your_userid to log in to your server (this userid value may not be the same as the username of your MySQL account). Set up port forwarding from a local port to port 3306 on the remote host, save the settings, and log in to the server. Local applications can then connect to the forwarded port on your own machine and reach MySQL through the encrypted tunnel.
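With a command-line OpenSSH client, an equivalent local port forwarding can be set up in one command (the userid and host are the placeholders used above); clients on the local machine then connect to 127.0.0.1 port 3306 and the traffic travels inside the SSH tunnel:
shell> ssh -L 3306:127.0.0.1:3306 your_userid@yourmysqlserver_URL_or_IP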
This section discusses how to make database backups (full and incremental)
and how to perform
table maintenance. The syntax of the SQL statements described here is
given in section 13.5 Database Administration Statements.
Much of the information here pertains primarily to
MyISAM tables.
InnoDB backup procedures are given in section 15.2.
To make an SQL-level backup of a table, you can use SELECT ... INTO OUTFILE or BACKUP TABLE. See section 13.1.7 SELECT Syntax and section 13.5.2.2 BACKUP TABLE Syntax.
Another way to back up a database is to use the
mysqldump program or
the
mysqlhotcopy script.
See section 8.8 The
mysqldump Database Backup Program and
section 8.9 The
mysqlhotcopy Database Backup Program.
shell> mysqldump --tab=/path/to/some/dir --opt db_nameOr:
shell> mysqlhotcopy db_name /path/to/some/dirYou can also simply copy all table files (`*.frm', `*.MYD', and `*.MYI' files) as long as the server isn't updating anything. The
mysqlhotcopyscript uses this method. (But note that these methods will not work if your database contains
InnoDBtables.
InnoDBdoes not store table contents in database directories, and
mysqlhotcopyworks only for
MyISAMand
ISAMtables.)
mysqldif it's running, then start it with the
--log-bin[=file_name]option. See section 5.9.4 The Binary Log. The binary log files provide you with the information you need to replicate changes to the database that are made subsequent to the point at which you executed
mysqldump.
For
InnoDB tables, it's possible to perform an online backup that
takes no locks on tables; see section 8.8 The
mysqldump Database Backup Program
MySQL supports incremental backups: You need to start the server with the
--log-bin option to enable binary logging; see section 5.9.4 The Binary Log.s. See section 8.8 The
mysqldump Database Backup Program and section 8.9 The
mysqlhotcopy Database Backup Program. For the incremental part of a backup you also need the binary log files; see section 5.9.4 The Binary Log. When you need to restore data, first restore the original mysqldump backup, or binary backup. Then re-apply the updates recorded in the binary logs:
shell> mysqlbinlog hostname-bin.[0-9]* | mysqlIn your case, you may want to re-run only certain binary logs, from certain positions (usually you want to re-run all binary logs from the date of the restored backup, excepting possibly some incorrect queries). See section 8.5 The
mysqlbinlogBinary Log Utility for more information on the
mysqlbinlogutility and how to use it. If you are using the update logs instead (which is a deprecated feature removed in MySQL 5.0), you can process their contents like this:
shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql
lsis used to sort the update log filenames into the right order.
You can also do selective backups of individual files:
SELECT * INTO OUTFILE 'file_name' FROM tbl_name.
To restore the data, use LOAD DATA INFILE 'file_name' REPLACE ... To avoid duplicate records, the table must have a PRIMARY KEY or a UNIQUE index; the REPLACE keyword causes old records to be replaced with new ones when a new record duplicates an old record on a unique key value. If your filesystem supports snapshots (Veritas, for example), you can make a backup like this:
FLUSH TABLES WITH READ LOCK.
mount vxfs snapshot.
UNLOCK TABLES. section 14 5.7.2.2 General Options for
myisamchk.
In many cases, you may find it simpler to do
MyISAM table maintenance
using the SQL statements that perform operations that
myisamchk can
do:
To check or repair MyISAM tables, use CHECK TABLE or REPAIR TABLE.
To optimize MyISAM tables, use OPTIMIZE TABLE.
To analyze MyISAM tables, use ANALYZE TABLE.
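For example, run from the mysql client against a table name of your choice:
mysql> CHECK TABLE tbl_name;
mysql> ANALYZE TABLE tbl_name;
mysql> OPTIMIZE TABLE tbl_name;
mysql> REPAIR TABLE tbl_name;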
These statements were introduced in different versions, but all are available
from MySQL 3.23.14 on.
See
section 13.5.2.1
ANALYZE TABLE Syntax,
section 13.5.2.3
CHECK TABLE Syntax,
section 13.5.2.5
OPTIMIZE TABLE Syntax,
and section 13.5.2.6
REPAIR TABLE Syntax..
General Options for myisamchk
The options described in this section can be used for any
type of table maintenance operation performed by
myisamchk.
The sections following this one describe options that pertain only to specific
operations, such as table checking or repairing.
--debug=debug_options, -# debug_options
'd:t:o,file_name'.
--silent, -s
-stwice (
-ss) to make
myisamchkvery silent.
--verbose, -v
-dand
-e. Use
-vmultiple times (
-vv,
-vvv) for even more output.
--version, -V
--wait, -w
mysqldwith the
--skip-external-lockingoption, the table can be locked only by another
myisamchkcommand.:
--safe-recover.
CHAR,
VARCHAR, or
TEXTcolumns, because the sort operation needs to store the complete key values as it proceeds. If you have lots of temporary space and you can force
myisamchkto repair by sorting, you can use the
--sort-recoveroption.
myisamchk supports the following options for table checking operations:
--check, -c
--check-only-changed, -C
--extend-check, -e
myisamchkor
myisamchk --medium-checkshould be able to determine whether there are any errors in the table. If you are using
--extend-checkand have plenty of memory, setting the
key_buffer_sizevariable to a large value will help the repair operation run faster.
--fast, -F
--force, -f
myisamchkfinds any errors in the table. The repair type is the same as that specified with the
--repairor
-roption.
--information, -i
--medium-check, -m
--extend-checkoperation. This finds only 99.99% of all errors, which should be good enough in most cases.
--read-only, -T
myisamchkto check a table that is in use by some other application that doesn't use locking, such as
mysqldwhen run with the
--skip-external-lockingoption.
--update-state, -U
--check-only-changedoption, but you shouldn't use this option if the
mysqldserver is using the table and you are running it with the
--skip-external-lockingoption.
Repair Options for myisamchk
myisamchk supports the following options for table repair operations:
--backup, -B
--character-sets-dir=path
--correct-checksum
--data-file-length=#, -D #
--extend-check, -e
--force, -f
--keys-used=#, or (
isamchk -r).
--no-symlinks, -l
myisamchkrepairs the table that a symlink points to. This option doesn't exist as of MySQL 4.0, because versions from 4.0 on will not remove symlinks during repair operations.
--parallel-recover, -p
-rand
-n, but creates all the keys in parallel, using different threads. This option was added in MySQL 4.0.2. This is alpha code. Use at your own risk!
--quick, -q
myisamchkto modify the original data file in case of duplicate keys.
--recover, -r
ISAM/
MyISAMtables). If you want to recover a table, this is the option to try first. You should try
-oonly if
myisamchkreports that the table can't be recovered by
-r. (In the unlikely case that
-rfails, the data file is still intact.) If you have lots of memory, you should increase the value of
sort_buffer_size.
--safe-recover, -o
-r, but can handle a couple of very unlikely cases that
-rcannot. This recovery method also uses much less disk space than
-r. Normally, you should repair first with
-r, and then with
-oonly if
-rfails. If you have lots of memory, you should increase the value of
key_buffer_size.
--set-character-set=name
--sort-recover, -n
myisamchkto use sorting to resolve the keys even if the temporary files should be very big.
--tmpdir=path, -t path
myisamchkuses the value of the
TMPDIRenvironment variable. Starting from MySQL 4.1,
tmpdircan be set to a list of directory paths that will be used successively in round-robin fashion for creating temporary files. The separator character between directory names should be colon (`:') on Unix and semicolon (`;') on Windows, NetWare, and OS/2.
--unpack, -u
myisampack.
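For example, a typical repair attempt (the table name and path are placeholders) might first try the quick method and then fall back to the slower, safer one:
shell> myisamchk --recover --quick /path/to/datadir/db_name/tbl_name
shell> myisamchk --recover /path/to/datadir/db_name/tbl_name
shell> myisamchk --safe-recover /path/to/datadir/db_name/tbl_name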
Other Options for myisamchk
myisamchk supports the following options for actions other than
table checks and repairs:
--analyze, -a
myisamchk --description --verbose tbl_namecommand or the
SHOW KEYS FROM tbl_namestatement.
--description, -d
--set-auto-increment[=value], -A[value]
AUTO_INCREMENTnumbering for new records to start at the given value (or higher, if there are already records with
AUTO_INCREMENTvalues this large). If value is not specified,
AUTO_INCREMENTnumber for new records begins with the largest value currently in the table, plus one.
--sort-index, -S
--sort-records=#, -R #
SELECTand
ORDER BYoperations that use this index. (The first time you use this option to sort a table, it may be very slow.) To determine a table's index numbers, use
SHOW KEYS, which displays a table's indexes in the same order that
myisamchksees them. Indexes are numbered beginning with 1.
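For example, to sort the index blocks and then sort the data file according to the first index (the path is a placeholder):
shell> myisamchk --sort-index --sort-records=1 /path/to/datadir/db_name/tbl_name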
myisamchk Memory Usage
When repairing a table, myisamchk also needs disk space, as follows:
Space for a copy of the original data file, unless you use --quick; in this case, only the index file is re-created. This space is needed on the same filesystem as the original data file! (The copy is created in the same directory as the original.)
When repairing with --recover or --sort-recover (but not when using --safe-recover), you will need space for a sort buffer. The amount of space required is:
(largest_key + row_pointer_length) * number_of_rows * 2
You can check the length of the keys and the row_pointer_length with myisamchk -dv tbl_name. This space is allocated in the temporary directory (specified by TMPDIR or --tmpdir=path).
If you have a problem with disk space during repair, you can try to use
--safe-recover instead of
--recover.
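As a purely illustrative calculation (the figures are assumptions): for a table with 1,000,000 rows, a largest key of 30 bytes, and 4-byte row pointers, the sort buffer space needed would be roughly (30 + 4) * 1,000,000 * 2 = 68,000,000 bytes, or about 65MB, in the temporary directory.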
See elsewhere in this manual for reasons why a table could become corrupted.
How to Check MyISAM Tables for Errors
To check a
MyISAM table, use the following commands:
myisamchk tbl_name
This finds 99.99% of all errors. What it can't find is corruption that involves only the data file (which is very unusual). If you want to check a table, you should normally run myisamchk without options or with either the -s or --silent option.
myisamchk -m tbl_name
This finds 99.999% of all errors. It first checks all index entries for errors and then reads through all rows. It calculates a checksum for all key values in the rows and verifies that the checksum matches the checksum for the keys in the index tree.
myisamchk -e tbl_name
This does a complete and thorough check of all data (-e means ``extended check''). It does a check-read of every key for each row to verify that they indeed point to the correct row. This may take a long time for a large table that has many indexes. Normally, myisamchk stops after the first error it finds. If you want to obtain more information, you can add the --verbose (-v) option. This causes myisamchk to keep going, up through a maximum of 20 errors.
myisamchk -e -i tbl_name
This is like the previous command, but the -i option tells myisamchk to print some informational statistics, too.
In most cases, a simple
myisamchk with no arguments other than the
table name is sufficient to check a table..
See section 13.5.2.3
CHECK TABLE Syntax
and section 13.5.2.6
REPAIR TABLE Syntax.
The symptoms of a corrupted table include queries that abort unexpectedly and observable errors such as `tbl_name.frm is locked against change', `Can't find file tbl_name.MYI (Errcode: nnn)', `Unexpected end of file', `Record file is crashed', or `Got error nnn from table handler'.
myisamchk -r tbl_name
(-r means ``recovery mode''). This will remove incorrect records and deleted records from the data file and reconstruct the index file.
If you do not need to keep the table's contents, you can instead re-create it as an empty table like this:
shell> mysql db_name
mysql> SET AUTOCOMMIT=1;
mysql> TRUNCATE TABLE tbl_name;
mysql> quit
If your version of MySQL doesn't have TRUNCATE TABLE, use DELETE FROM tbl_name instead.
To coalesce fragmented records and eliminate wasted space that results from deleting or updating records, run myisamchk in recovery mode again or use OPTIMIZE TABLE. See section 13.5.2.5 OPTIMIZE TABLE Syntax.
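For example (the table name and path are placeholders):
shell> myisamchk -r /path/to/datadir/db_name/tbl_name
or, with the server running:
mysql> OPTIMIZE TABLE tbl_name;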
myisamchk also has a number of other options you can use to improve
the performance of a table:
-S,
--sort-index
-R index_num,
--sort-records=index_num
-a,
--analyze
For a full description of the options, see section 5.7.2.1
myisamchk Invocation Syntax.
It is a good idea to perform table checks on a regular basis rather than
waiting for problems to occur.
One way to check and repair
MyISAM tables is
with the
CHECK TABLE and
REPAIR TABLE statements.
These are available starting with MySQL 3.23.16.
See section 13.5.2.3
CHECK TABLE Syntax and
section 13.5.2.6
REPAIR TABLE Syntax.
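One possible approach for regular checks (a sketch, not the only way) is to run the mysqlcheck client periodically so that only changed tables are examined:
shell> mysqlcheck --check-only-changed --all-databases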
To obtain a description of a table or statistics about it, use the commands shown here. We explain some of the information in more detail later:
myisamchk -d tbl_name
Runs myisamchk in ``describe mode'' to produce a description of your table. If you start the MySQL server with the --skip-external-locking option, myisamchk may report an error for a table that is updated while it runs. However, because myisamchk doesn't change the table in describe mode, there is no risk of destroying data.
myisamchk -d -v tbl_name
Adding -v runs myisamchk in verbose mode so that it produces more information about what it is doing.
myisamchk -eis tbl_name
Shows only the most important information from a table. This operation is slow because it must read the entire table.
myisamchk -eiv tbl_name
This is like -eis, but tells you what is being done.
The fields of the output have the following meanings:
File
Name of the MyISAM (index) file.
File-versionVersion of
MyISAMformat. Currently always 2.
Creation timeWhen the data file was created.
Recover timeWhen the index/data file was last reconstructed.
Data recordsHow many records are in the table.
Deleted blocksHow many deleted blocks still have reserved space. You can optimize your table to minimize this space. See section 5.7.2.10 Table Optimization.
Datafile partsFor dynamic record format, this indicates how many data blocks there are. For an optimized table without fragmented records, this is the same as
Data records.
Deleted dataHow many bytes of unreclaimed deleted data there are. You can optimize your table to minimize this space. See section 5.7.2.10 Table Optimization.
Datafile pointer
The size of the data file pointer, in bytes. It is usually 2, 3, 4, or 5 bytes. Most tables manage with 2 bytes, but this cannot be controlled from MySQL yet. For fixed tables, this is a record address. For dynamic tables, this is a byte address.
Keyfile pointer
The size of the index file pointer, in bytes. It is usually 1, 2, or 3 bytes. Most tables manage with 2 bytes, but this is calculated automatically by MySQL. It is always a block address.
Max datafile lengthHow long the table data file can become, in bytes.
Max keyfile lengthHow long the table index file can become, in bytes.
RecordlengthHow much space each record takes, in bytes.
Record formatThe format used to store table rows. The preceding examples use
Fixed length. Other possible values are
Compressedand
Packed.
table descriptionA list of all keys in the table. For each key,
myisamchkdisplays some low-level information:
KeyThis key's number.
StartWhere in the record this index part starts.
LenHow long this index part is. For packed numbers, this should always be the full length of the column. For strings, it may be shorter than the full length of the indexed column, because you can index a prefix of a string column.
IndexWhether a key value can exist multiple times in the index. Values are
uniqueor
multip.(multiple).
TypeWhat data type this index part has. This is a
MyISAMdata type with the options
packed,
stripped, or
empty.
RootAddress of the root index block.
BlocksizeThe size of each index block. By default this is 1024, but the value may be changed at compile time when MySQL is built from source.
Rec/key
This is the statistical value used by the optimizer. It tells how many records there are per value for this key. A unique key always has a value of 1. This may be updated after a table is loaded (or greatly changed) with myisamchk -a. If it is not updated at all, a default value of 30 is given.
For the table shown in the example, there are two table description lines for the ninth index. This indicates that it is a multiple-part index with two parts.
Keyblocks usedWhat percentage of the keyblocks are used. When a table has just been reorganized with
myisamchk, as for the table in the examples, the values are very high (very near the theoretical maximum).
PackedMySQL tries to pack keys with a common suffix. This can only be used for indexes on
CHAR,
VARCHAR, or
DECIMALcolumns. For long indexed strings that have similar leftmost parts, this can significantly reduce the space used. In the third example above, the fourth key is 10 characters long and a 60% reduction in space is achieved.
Max levelsHow deep the B-tree for this key is. Large tables with long key values get high values.
RecordsHow many rows are in the table.
M.recordlengthThe average record length. This is the exact record length for tables with fixed-length records, because all records have the same length.
PackedMySQL strips spaces from the end of strings. The
Packedvalue indicates the percentage of savings achieved by doing this.
Recordspace usedWhat percentage of the data file is used.
Empty spaceWhat percentage of the data file is unused.
Blocks/RecordAverage number of blocks per record (that is, how many links a fragmented record is composed of). This is always 1.0 for fixed-format tables. This value should stay as close to 1.0 as possible. If it gets too big, you can reorganize the table. See section 5.7.2.10 Table Optimization.
RecordblocksHow many blocks (links) are used. For fixed format, this is the same as the number of records.
DeleteblocksHow many blocks (links) are deleted.
RecorddataHow many bytes in the data file are used.
Deleted dataHow many bytes in the data file are deleted (unused).
Lost spaceIf a record is updated to a shorter length, some space is lost. This is the sum of all such losses, in bytes.
LinkdataWhen the dynamic table format is used, record fragments are linked with pointers (4 to 7 bytes each).
Linkdatais the sum of the amount of storage used by all such pointers.
If a table has been compressed with
myisampack,
myisamchk
-d prints additional information about each table column. See
section 8.2
myisampack, the MySQL Compressed Read-only Table Generator, for an example of this
information and a description of what it means.
This section describes how to configure the server to use different character sets. It also discusses how to set the server's time zone and enable per-connection time zone support.
You can make client programs use a specific character set as follows:
[client] default-character-set=charset
This is normally unnecessary, however.
In MySQL 4.0, to get German sorting order, you should start
mysqld
with a
--default-character-set=latin1_de option. This affects server
behavior in several ways:
ä -> ae ö -> oe ü -> ue ß -> ss
LIKE, the one-character to two-character mapping is not done. All letters are converted to uppercase. Accents are removed from all letters except
Ü,
ü,
Ö,
ö,
Ä, and
ä.
In MySQL 4.1 and up, character set and collation are specified separately.
You should select the
latin1 character set and either the
latin1_german1_ci or
latin1_german2_ci collation. For
example, to start the server with the
latin1_german1_ci collation,
use the
--character-set-server=latin1 and
--collation-server=latin1_german1_ci options.
For information on the differences between these two collations, see section 10.11.2 West European Character Sets..
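A minimal option-file sketch for MySQL 4.1 and up, using the settings just described, might be:
[mysqld]
character-set-server=latin1
collation-server=latin1_german1_ci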
This section discusses the procedure for adding add another character set to MySQL. You must have a MySQL source distribution to use these instructions.
To choose the proper procedure, decide whether the character set is simple or complex:
For example,
latin1 and
danish are simple character sets,
whereas
big5 and
czech are complex character sets.
In the following procedures, the name of your character set is represented by MYSET.
For a simple character set, do the following:
ctypearray takes up the first 257 words. The
to_lower[],
to_upper[]and
sort_order[]arrays take up 256 words each after that.
CHARSETS_AVAILABLEand
COMPILED_CHARSETSlists in
configure.in.
For a complex character set, do the following:
ctype_MYSET,
to_lower_MYSET, and so on. These correspond to the arrays for a simple character set. See section 5.8.4 The Character Definition Arrays.
/* * This comment is parsed by configure to create ctype.c, * so don't change it unless you know what you are doing. * * .configure. number_MYSET=MYNUMBER * .configure. strxfrm_multiply_MYSET=N * .configure. mbmaxlen_MYSET=N */The
configureprogram uses this comment to include the character set into the MySQL library automatically. The
strxfrm_multiplyand
mbmaxlenlines are explained in the following sections. You need include them only if you need the string collating functions or the multi-byte character set functions, respectively.
my_strncoll_MYSET()
my_strcoll_MYSET()
my_strxfrm_MYSET()
my_like_range_MYSET()
CHARSETS_AVAILABLEand
COMPILED_CHARSETSlists in
configure.in.
The `sql/share/charsets/README' file includes additional instructions.
If you want to have the character set included in the MySQL
distribution, mail a patch to the MySQL
internals mailing list.
See section 1.4.1.1 The MySQL Mailing Lists.
If you try to use a character set that is not compiled into your binary, you might run into the following problems:
--character-sets-diroption when you run the program in question.
ERROR 1105: File '/usr/local/share/mysql/charsets/?.conf' not found (Errcode: 2)In this case, you should either get a new
Indexfile or manually add the name of any missing character sets to the current file.
For
MyISAM tables, you can check the character set name and number for a
table with
myisamchk -dvv tbl_name.
Before MySQL 4.1.3, the server uses the time zone of the host system and provides no per-connection time zone support.
Beginning with MySQL 4.1.3, the server maintains several time zone settings:
system_time_zonesystem variable.
time_zonesystem variable indicates the time zone the server currently is operating in. The initial value is
'SYSTEM', which indicates that the server time zone is the same as the system time zone. The initial value can be specified explicitly with the
--default-time-zone=timezoneoption. If you have the
SUPERprivilege, you can set the global value at runtime with this statement:
mysql> SET GLOBAL time_zone = timezone;
time_zonevariable. Initially this is the same as the global
time_zonevariable, but can be reset with this statement:
mysql> SET time_zone = timezone;
The current values of the global and per-connection time zones can be retrieved like this:
mysql> SELECT @@global.time_zone, @@session.time_zone;
timezone values can be given as strings indicating an offset
from UTC, such as
'+10:00' or
'-6:00'. If the time zone-related
tables in the
mysql database have been created and populated, you
can also used named time zones, such as
'Europe/Helsinki',
'US/Eastern', or
'MET'. The value
'SYSTEM' indicates
that the time zone should be the same as the system time zone. Time zone
names are not case sensitive.
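For example (the zone names below assume that the named time zone tables have been loaded as described in the following paragraphs):
mysql> SET time_zone = 'US/Eastern';
mysql> SELECT @@session.time_zone, NOW();
mysql> SELECT CONVERT_TZ('2004-01-01 12:00:00','UTC','MET');
(The CONVERT_TZ() function is available from MySQL 4.1.3 on.)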
The MySQL installation procedure creates the time zone tables in the
mysql database, but does not load them. You must do so manually.
(If you are upgrading to MySQL 4.1.3 or later from an earlier version, you
should create the tables by upgrading your
mysql database. Use the
instructions in section 2.10.7 Upgrading the Grant Tables.) If your system has its own zoneinfo database (the set of files describing time zones), use the mysql_tzinfo_to_sql program to populate the time zone tables from it.
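A minimal invocation (the zoneinfo path shown is the common Linux location and may differ on your system) is:
shell> mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql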
If your time zone needs to account for leap seconds, initialize the leap second information like this, where tz_file is the name of your time zone file:
shell> mysql_tzinfo_to_sql --leap tz_file | mysql -u root mysql
If your system doesn't have a zoneinfo database (for example, Windows or
HP-UX), you can use the package of pre-built time zone tables that is
available for download at. This package contains
`.frm', `.MYD', and `.MYI' files for the
MyISAM time
zone tables. These tables should belong to the
mysql database, so
you should place the files in the `mysql' subdirectory of your MySQL
server's data directory. The server should be shut down while you do this.
Warning! Please don't use the downloadable package if your system
has a zoneinfo database. Use the
mysql_tzinfo_to_sql utility
instead! Otherwise, you may cause a difference in datetime handling between
MySQL and other applications on your system. 13.5.5.2
FLUSH Syntax.
If you are using MySQL replication capabilities, slave replication servers maintain additional log files called relay logs. These are discussed in section 6 Replication in MySQL.. See section E.1.4 Using a Stack Trace.=file_name 5.
(The query log also contains all statements, whereas the update and binary
logs do not contain statements that only select data.).
Note: The update log has been deprecated and replaced by the binary log, which is described next.
The binary log has replaced the old update log, which is unavailable starting from MySQL 5.0. The binary log contains all information that is available in the update log in a more efficient format and in a manner that is transactionally safe.
The binary log contains all statements which updated data or (starting from
MySQL 4.1.3) could potentially have updated it (for example, a
DELETE
which matched no rows).
See section 13.5.5.5
RESET Syntax and section 13.6.1 SQL Statements for Controlling Master Servers.
The binary log format has some known limitations which can affect recovery from
backups, especially in old versions. These caveats which also affect
replication are listed at section 6.7 Replication Features and Known Problems..
You can use the following options to
mysqld to affect what is logged
to the binary log. See also the discussion that follows this option list.
--binlog-do-db=db_name
Tell the server to log updates to the binary log only when the current database (that is, the one selected by USE) is db_name. All other databases that are not explicitly mentioned are ignored. If you use this, you should ensure that you only do updates in the current database. There is an exception for the CREATE/ALTER/DROP DATABASE statements, which use the database being manipulated, rather than the current database, to decide whether the statement should be logged.
--binlog-ignore-db=db_name
Tell the server to not log updates to the binary log when the current database is db_name. Similar to the case for --binlog-do-db, there is an exception for the CREATE/ALTER/DROP DATABASE statements, which use the database being manipulated, rather than the current database, to decide whether the statement should be logged.
To log or ignore multiple databases, specify the appropriate option multiple times, once for each database.
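For example, a server option file that logs updates for only two hypothetical databases might contain:
[mysqld]
log-bin
binlog-do-db=sales
binlog-do-db=marketing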
Updates are written to or kept out of the binary log according to the following rules. Observe that there is an exception for CREATE/ALTER/DROP DATABASE statements. In those cases, the database being created/altered/dropped replaces the current database in the rules below.
1. Are there any binlog-do-db or binlog-ignore-db rules? If not, write the statement to the binary log and exit. Otherwise, go to the next step.
2. There are some rules (binlog-do-db, binlog-ignore-db, or both). Is there a current database (has any database been selected by USE?)? If not, do not write the statement, and exit. Otherwise, go to the next step.
3. There is a current database. Are there any binlog-do-db rules? If so, does the current database match any of the binlog-do-db rules? If it matches, write the statement and exit; if not, do not write it, and exit. If there are no binlog-do-db rules, go to the next step.
4. There are some binlog-ignore-db rules. Does the current database match any of the binlog-ignore-db rules? If so, do not write the statement, and exit; otherwise, write the statement and exit.
To delete old binary log files safely, use the PURGE MASTER LOGS statement (see section 13.6.1 SQL Statements for Controlling Master Servers), which will also safely update the binary log index file for you (and which can take a date argument since MySQL 4.1).
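For example (the log file name and the date are placeholders):
mysql> PURGE MASTER LOGS TO 'hostname-bin.010';
mysql> PURGE MASTER LOGS BEFORE '2004-10-01 00:00:00';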
A client with the
SUPER privilege can disable binary
logging of its own statements by using a
SET
SQL_LOG_BIN=0 statement. See section 13.5.3
SET Syntax.
The Binlog_cache_use and Binlog_cache_disk_use status variables show how many transactions used the binary log cache and how many of them did have to use a temporary file. These two variables can be used for tuning binlog_cache_size to a large enough value that avoids the use of temporary files. (For notes on binary log format compatibility between versions, see section 6.5 Replication Compatibility Between MySQL Versions.) You can use the sync_binlog=1 setting to synchronize the binary log to disk after every write (see section 5.2.3 Server System Variables). Even with this set to 1, there is still the chance of an
inconsistency between the tables content and the binary log content in
case of crash. For example, if using
InnoDB tables, and the MySQL
server processes a
COMMIT statement, it writes the whole transaction
to the binary log and then commits this transaction into
InnoDB. If
it crashes between those two operations, at restart the transaction will be
rolled back by InnoDB but still exist in the binary log.. The effect of this option is that at restart
after a crash,
after doing a rollback of transactions, the MySQL server will cut rolled
back
InnoDB transactions from the binary log. This ensures that the
binary log reflects the exact data of
InnoDB tables, and so, that
the slave remains in sync with the master (not receiving a statement which
has been rolled back). Note that
--innodb-safe-binlog can be used
even if the MySQL server updates other storage engines than InnoDB. Only
statements/transactions affecting
InnoDB tables are subject to
being removed from the binary log at
InnoDB's crash recovery. If at
crash recovery the MySQL server discovers that the binary log is shorter
than it should have been (that is, it lacks at least one successfully committed
InnoDB transaction), which should not happen if
sync_binlog=1
and the disk/filesystem do an actual sync when they are requested to (some
don't), it will print an error message ("The binary log <name> is shorter
than its expected size"). In this case, this binary log is not correct,
replication should be restarted from a fresh master's data snapshot.
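As a sketch, a 4.1.x master that favors binary log safety over raw write speed might use option-file settings like these (whether the cost is acceptable depends on your disks and workload):
[mysqld]
log-bin
sync_binlog=1
innodb-safe-binlog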
Before MySQL 4.1.9, a write to a binary log file or binary log index file
that failed due to a full disk or an exceeded quota resulted in corruption
of the file. Starting from MySQL 4.1.9, writes to the binary log file and
binary log index file are handled the same way as writes to
MyISAM
tables.
See section A.4.3 How MySQL Handles a Full Disk. 5.2.1
mysqld Command-Line Options.
The MySQL Server can create a number of different log files that make it easy to see what is going on. See section 5 5:
--log) or slow query logging (
--log-slow-queries) is used, closes and reopens the log file (`mysql.log' and ``hostname`-slow.log' as default).
--log-update) or binary logging (
--log-bin) is used, closes the log and opens a new log file with a higher sequence number.
If you are using only an update log, you only have to rename the log file and then flush the logs before making a backup. For example, you can do something like this:
shell> cd mysql-data-directory shell> mv mysql.log mysql.old shell> mysqladmin flush-logs
Then make a backup and remove `mysql.old'..3 Specifying Program Options.
At least the following options must be different for each server:
--port=port_num
--portcontrols the port number for TCP/IP connections.
--socket=path
--socketcontrols the Unix socket file path on Unix and the name of the named pipe on Windows. On Windows, it's necessary to specify distinct pipe names only for those servers that support named pipe connections.
--shared-memory-base-name=name
--pid-file=path
If you use the following log file options, they must be different for each server:
--log=path
--log-bin=path
--log-update=path
--log-error=path
--log-isam=path
--bdb-logdir=path
Log file options are described in section 5!
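On Unix, for example, a second server might be started with all of its per-server values overridden on the command line (every path and port below is a placeholder; in older 3.23/4.0 installations the startup script is named safe_mysqld):
shell> mysqld_safe --port=3307 --socket=/tmp/mysql.sock2 \
    --pid-file=/usr/local/mysql/data2/hostname.pid2 \
    --datadir=/usr/local/mysql/data2 \
    --log-error=/usr/local/mysql/data2/hostname.err2 &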
This section describes how to make sure that you start each server with different values for those startup options that must be unique per server, such as the data directory. These options are described in section 5.10 Running Multiple MySQL Servers on the Same Machine.
For example, to shut down the second server and then start it with its options read from a file, you might use commands like these:
C:\> C:\mysql\bin\mysqladmin --port=3307 shutdown
C:\> C:\mysql\bin\mysqld-nt --defaults-file=C:\my-opts1.cnf
Modify `C:\my-opts2.cnf' similarly for use by the second server.
On NT-based systems, a MySQL server can be run as a Windows service. The procedures for installing, controlling, and removing a single MySQL service are described in section 2.3.12.
For example, install the first mysqld-nt using the service name of mysqld1 and the 4.0.17 mysqld-nt using the service name of mysqld2. To start the services, use the services manager, or use NET START with the appropriate service names:
C:\> NET START mysqld1
C:\> NET START mysqld2
To stop the services, use the services manager, or use NET STOP with the appropriate service names:
C:\> NET STOP mysqld1
C:\> NET STOP mysqld2
Another approach is to specify the options in a file named by a --defaults-file option when you install each service. For the 4.0.17 mysqld-nt, create a file `C:\my-opts2.cnf' that looks like this:
[mysqld]
basedir = C:/mysql-4.0.17
port = 3308
enable-named-pipe
socket = mypipe2
Install the service, naming the option file with a --defaults-file option:
C:\> C:\mysql-4.0.17\bin\mysqld-nt --install mysqld2 --defaults-file=C:\my-opts2.cnf
On Unix, the easiest way to manage multiple servers is to use the mysqld_multi script; see section 5.1.5 The mysqld_multi Program for Managing Multiple MySQL Servers.
When you want to connect with a client program to a MySQL server that is listening to different network interfaces than those compiled into your client, you can use one of the following methods:
--host=host_name --port=port_numberto connect via TCP/IP to a remote server, with
--host=127.0.0.1 --port=port_numberto connect via TCP/IP to a local server, or with
--host=localhost --socket=file_nameto connect to a local server via a Unix socket file or a Windows named pipe.
--protocol=tcpto connect via TCP/IP,
--protocol=socketto connect via a Unix socket file,
--protocol=pipeto connect via a named pipe, or
--protocol=memoryto connect via shared memory. For TCP/IP connections, you may also need to specify
--hostand
--portoptions. For the other types of connections, you may need to specify a
--socketoption to specify a Unix socket file or named pipe name, or a
--shared-memory-base-nameoption to specify the shared memory name. Shared memory connections are supported only on Windows.
MYSQL_UNIX_PORTand
MYSQL_TCP_PORTenvironment F Environment Variables.
[client]group of an option file. For example, you can use `C:\my.cnf' on Windows, or the `.my.cnf' file in your home directory on Unix. See section 4.3.2 Using Option Files.
mysql_real_connect()call. You can also have the program read option files by calling
mysql_options(). See section 21.2.3 C API Function Descriptions.
DBD::mysqlmodule, you can read options from MySQL option files. For example:
$dsn = "DBI:mysql:test;mysql_read_default_group=client;" . "mysql_read_default_file=/usr/local/mysql/data/my.cnf"; $dbh = DBI->connect($dsn, $user, $password);See section 21.4 MySQL Perl API. Other programming interfaces may provide similar capabilities for reading option files..
This section describes how the query cache works when it is operational. Section 5.11.3 Query Cache Configuration describes how to control whether or not it is operational.
A query is not cached under conditions such as the following:
It refers to the mysql system database.
It is of any of the following forms:
SELECT ... IN SHARE MODE SELECT ... INTO OUTFILE ... SELECT ... INTO DUMPFILE ... SELECT * FROM ... WHERE autoincrement_col IS NULLThe last form is not cached because it is used as the ODBC workaround for obtaining the last insert ID value. See section 22.1.14.1 How to Get the Value of an
AUTO_INCREMENTColumn in ODBC.
TEMPORARYtables.
SELECTprivilege for all the involved databases and tables. If this is not the case, the cached result is not used.
SELECTOptions
There are two query cache-related options that may be
specified in a
SELECT statement:
SQL_CACHE
query_cache_typesystem variable is
ONor
DEMAND.
SQL_NO_CACHE
Examples:
SELECT SQL_CACHE id, name FROM customer; SELECT SQL_NO_CACHE id, name FROM customer;:
0or
OFFprevents caching or retrieval of cached results.
1or
ONallows caching except of those statements that begin with
SELECT SQL_NO_CACHE.
2 or DEMAND causes caching of only those statements that begin with SELECT SQL_CACHE.
Here are some tips for tuning the query cache:
The default value of
query_cache_min_res_unitis 4KB. This should be adequate for most cases.
query_cache_min_res_unit. The number of free blocks and queries removed due to pruning are given by the values of the
Qcache_free_blocksand
Qcache_lowmem_prunesstatus variables.
Qcache_total_blocksand
Qcache_queries_in_cachestatus variables), you can increase performance by increasing
query_cache_min_res_unit. However, be careful to not make it too large (see the previous item).
query_cache_min_res_unit is present from MySQL 4.1. For information about how to set the cache size and related variables, see section 5.11.3 Query Cache Configuration.
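For example, to give the cache 16MB and then inspect its effectiveness (the size is illustrative only):
mysql> SET GLOBAL query_cache_size = 16777216;
mysql> SHOW VARIABLES LIKE 'query_cache%';
mysql> SHOW STATUS LIKE 'Qcache%';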
This article is intended for individuals who administer CAB for Google G Suite. It is structured in such a way that you easily find the information you need to set up and manage the respective environment.
Scope and prerequisites
CAB for G Suite covers the following:
Data backup (emails, chats, calendars, contacts, tasks, and documents)
Backup can be activated for all or selected user accounts.
Backup data restore (note: the CAB restore process is non-destructive; that is, the restored data does not overwrite the existing one)
Backup data export
To administer CAB for G Suite, the following are required:
G Suite subscription
G Suite account with Super Admin role
Setting up admin account
In order to properly activate the backup task and automatically discover all user accounts, your G Suite admin account must have API access enabled. Also, you have to turn POP and IMAP access on for all user accounts in your organization.
Backup task activation
Once you have set up your G Suite admin account, you can activate G Suite backup task as follows:
In the CAB Management Portal, click BACKUPS in the sidebar menu.
Click Add Backup Task, and then click G Suite.
Click Integrate with Google, G Suite, or Edit.
Change the backup settings as needed (see below), and then click Save.
Admin details
Here, you can change the admin account used to activate the backup task. For this, click Integrate with Google, and then sign in to a different G Suite admin account.
User account backup management
To manage backups at a user account level and view the relevant details, click BACKUPS in the sidebar menu, and then click G Suite or Edit.
Select Automatically start a backup when a new mailbox is added to automatically start backup for new G Suite user accounts.
Backup data restore and export
To restore or export the backup data, follow these steps:
In the CAB Management Portal, click RECOVERY in the sidebar menu, click G Suite, and then click Restore or Download.
Via Item Search
Select one of the available categories (Email, Documents, Contacts, Tasks, or Calendars) to search in.
Enter the search query, and then click Continue. Tip: for Email, Documents, and Calendars, click More to show the advanced search options. Also, you can choose the export format for the archived files, that is, Standard .EML (emails in .eml, contacts in .vcf, tasks in .ics, and calendars in .ical) or Outlook-compatible .PST.
Once you have initiated the restore or the download process, you can view its current status in the Restore & download status section.
You can manage the process using the special actions available in the Action column.
Authentication
The API Supports two authentication methods: the first one is to authenticate using your user credentials (used by the UI, not recommended for programmatic use of the API) and the second one uses Authentication Tokens that can be created and revoked for any reason.
Authenticating using user credentials
To Authenticate using your credentials, simply send two headers:
x-api-email: YOUR_EMAIL_HERE
x-api-password: YOUR_PASSWORD_HERE
Note that authenticating using this method is not recommended due to security reasons. Use this method only to create Authentication Tokens.
Creating an Authentication Token
To create an Authentication Token, login and click on your user name to open the user menu, then choose "Api Tokens" from the menu. Click on Generate to create a new Authentication Token. After generating a new token, you'll receive the token, which you'll need to save as a secret somewhere and use it for API calls to Upsolver. You will not be able to receive the token again.
Alternatively, send a POST request to
/api-tokens with the name and description of the Api Token, for example:
curl --request POST \ --url \ --header 'content-type: application/json' \ --header 'x-api-email: YOUR_EMAIL_HERE' \ --header 'x-api-password: YOUR_PASSWORD_HERE' \ --data '{ "displayData": { "name": "Data Sources Bot API Token", "description": "API Token used by the bot to create Data Sources" } }'
Fill your email and password in the headers.
The response should look like this:
{ "id": "12345678-1234-1234-1234-1234567890ab", "organizationId": "12345678-1234-1234-1234-1234567890ab", "displayData": { "name": "Data Sources Bot API Token", "description": "API Token used by the bot to create Data Sources", "statusMessage": "ok", "statusType": "ok", "creationTime": "2018-11-01T12:09:27.971Z", "createdBy": "Documentation Bot ([email protected])" }, "apiToken": "abc12345678901234567890123456789", "createdBy": "[email protected]" }
Copy the
apiToken field and save it in a secret place. Authentication Tokens can't be restored, so if you lost it you will need to create a new one.
Then every request should have a header "Authorization" with your token, for example, in my case :
curl --request GET --url --header 'authorization: abc12345678901234567890123456789'
Updating & Deleting Zones
Follow this guide to update basic information about your zones, or to delete them.
Everything related to updating/deleting zones is on the "Settings" tab.
- Click DNS in the nav menu on the left hand side.
- Select the zone you want to modify from the list.
- Click the Settings tab underneath the zone name at the top.
Updating a Zone
You can't change the root domain of a zone after it has been created.
Deleting a Zone
Locate the delete form on the right-hand side. Enter the root domain of the zone into the box, and click "Delete".