SHOW PACKAGE BODY STATUS
========================
**MariaDB starting with [10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)**

Oracle-style packages were introduced in [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/).
Syntax
------
```
SHOW PACKAGE BODY STATUS
[LIKE 'pattern' | WHERE expr]
```
Description
-----------
The `SHOW PACKAGE BODY STATUS` statement returns characteristics of stored package bodies (implementations), such as the database, name, type, creator, creation and modification dates, and character set information. A similar statement, `[SHOW PACKAGE STATUS](../show-package-status/index)`, displays information about stored package specifications.
The `LIKE` clause, if present, indicates which package names to match. The `WHERE` and `LIKE` clauses can be given to select rows using more general conditions, as discussed in [Extended SHOW](../extended-show/index).
The [ROUTINES table](../information-schema-routines-table/index) in the INFORMATION\_SCHEMA database contains more detailed information.
Examples
--------
```
SHOW PACKAGE BODY STATUS LIKE 'pkg1'\G
*************************** 1. row ***************************
Db: test
Name: pkg1
Type: PACKAGE BODY
Definer: root@localhost
Modified: 2018-02-27 14:44:14
Created: 2018-02-27 14:44:14
Security_type: DEFINER
Comment: This is my first package body
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
```
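The `WHERE` form accepts the same column names that appear in the output; for example (a sketch — `test` is simply the database from the example above):

```
SHOW PACKAGE BODY STATUS WHERE Db = 'test';
```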
See Also
--------
* [SHOW PACKAGE STATUS](../show-package-status/index)
* [SHOW CREATE PACKAGE BODY](../show-create-package-body/index)
* [CREATE PACKAGE BODY](../create-package-body/index)
* [DROP PACKAGE BODY](../drop-package-body/index)
* [Oracle SQL\_MODE](../sql_modeoracle-from-mariadb-103/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
ISNULL
======
Syntax
------
```
ISNULL(expr)
```
Description
-----------
If *`expr`* is NULL, ISNULL() returns 1, otherwise it returns 0.
See also [NULL Values in MariaDB](../null-values-in-mariadb/index).
Examples
--------
```
SELECT ISNULL(1+1);
+-------------+
| ISNULL(1+1) |
+-------------+
|           0 |
+-------------+
SELECT ISNULL(1/0);
+-------------+
| ISNULL(1/0) |
+-------------+
|           1 |
+-------------+
```
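`ISNULL()` gives the same result as the `IS NULL` comparison operator; a minimal illustration:

```
SELECT ISNULL(NULL), NULL IS NULL;
+--------------+--------------+
| ISNULL(NULL) | NULL IS NULL |
+--------------+--------------+
|            1 |            1 |
+--------------+--------------+
```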
POINT
=====
Syntax
------
```
Point(x,y)
```
Description
-----------
Constructs a [WKB](../wkb/index) Point using the given coordinates.
Examples
--------
```
SET @g = ST_GEOMFROMTEXT('Point(1 1)');
CREATE TABLE gis_point (g POINT);
INSERT INTO gis_point VALUES
(PointFromText('POINT(10 10)')),
(PointFromText('POINT(20 10)')),
(PointFromText('POINT(20 20)')),
(PointFromWKB(AsWKB(PointFromText('POINT(10 20)'))));
```
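To inspect a constructed point, it can be converted back to WKT; a quick sketch:

```
SELECT ST_AsText(Point(1, 2));
+-------------------------+
| ST_AsText(Point(1, 2)) |
+-------------------------+
| POINT(1 2)              |
+-------------------------+
```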
Comparison Operators
=====================
The comparison operators include `!=`, `<`, `<=`, `<=>`, `=`, `>`, and `>=`, among others.
| Title | Description |
| --- | --- |
| [!=](../not-equal/index) | Not equal operator. |
| [<](../less-than/index) | Less than operator. |
| [<=](../less-than-or-equal/index) | Less than or equal operator. |
| [<=>](../null-safe-equal/index) | NULL-safe equal operator. |
| [=](../equal/index) | Equal operator. |
| [>](../greater-than/index) | Greater than operator. |
| [>=](../greater-than-or-equal/index) | Greater than or equal operator. |
| [BETWEEN AND](../between-and/index) | True if expression between two values. |
| [COALESCE](../coalesce/index) | Returns the first non-NULL parameter |
| [GREATEST](../greatest/index) | Returns the largest argument. |
| [IN](../in/index) | True if expression equals any of the values in the list. |
| [INTERVAL](../interval/index) | Index of the argument that is less than the first argument |
| [IS](../is/index) | Tests whether a boolean is TRUE, FALSE, or UNKNOWN. |
| [IS NOT](../is-not/index) | Tests whether a boolean value is not TRUE, FALSE, or UNKNOWN |
| [IS NOT NULL](../is-not-null/index) | Tests whether a value is not NULL |
| [IS NULL](../is-null/index) | Tests whether a value is NULL |
| [ISNULL](../isnull/index) | Checks if an expression is NULL |
| [LEAST](../least/index) | Returns the smallest argument. |
| [NOT BETWEEN](../not-between/index) | Same as NOT (expr BETWEEN min AND max) |
| [NOT IN](../not-in/index) | Same as NOT (expr IN (value,...)) |
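The difference between `=` and the NULL-safe `<=>` is easiest to see with NULL operands; a small illustration:

```
SELECT NULL = NULL, NULL <=> NULL;
+-------------+---------------+
| NULL = NULL | NULL <=> NULL |
+-------------+---------------+
|        NULL |             1 |
+-------------+---------------+
```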
Modulo Operator (%)
===================
Syntax
------
```
N % M
```
Description
-----------
Modulo operator. Returns the remainder of `N` divided by `M`. See also [MOD](../mod/index).
Examples
--------
```
SELECT 1042 % 50;
+-----------+
| 1042 % 50 |
+-----------+
|        42 |
+-----------+
```
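With negative operands, the sign of the result follows the dividend `N`; a brief illustration:

```
SELECT -13 % 5, 13 % -5;
+---------+---------+
| -13 % 5 | 13 % -5 |
+---------+---------+
|      -3 |       3 |
+---------+---------+
```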
MariaDB ColumnStore software upgrade 1.0.11 to 1.0.12
=====================================================
MariaDB ColumnStore software upgrade 1.0.11 to 1.0.12
-----------------------------------------------------
Note: any modifications you made manually to Columnstore.xml are not automatically carried forward on an upgrade. These modifications will need to be incorporated back into Columnstore.xml once the upgrade has occurred.
The previous configuration file will be saved as /usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave.
If you have specified a root database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user home directory with 600 file permissions with the following content (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
### Choosing the type of upgrade
As noted in the Preparing guide, you can install MariaDB ColumnStore with the use of soft links. If the soft links are set up at the data directory level, such as mariadb/columnstore/data and mariadb/columnstore/dataX, then your upgrade will happen without any issues. If you have a soft link at a top-level directory, such as /usr/local/mariadb, you will need to upgrade using the binary package. If you upgrade using the RPM package and tool, this soft link will be deleted when you perform the upgrade process and the upgrade will fail.
#### Root User Installs
#### Upgrading MariaDB ColumnStore using RPMs
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.0.12-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
```
# tar -zxf mariadb-columnstore-1.0.12-1-centos#.x86_64.rpm.tar.gz
```
* Uninstall the old RPMs, then install the new RPMs. The MariaDB ColumnStore software will be installed in /usr/local/.
```
# rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
# rpm -ivh mariadb-columnstore-*1.0.12*rpm
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
For an RPM upgrade, the previous configuration file will be saved as:
/usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave
### Initial download/install of MariaDB ColumnStore binary package
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.0.12-1.x86\_64.bin.tar.gz (Binary 64-BIT) into the /usr/local directory on the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# /usr/local/mariadb/columnstore/bin/pre-uninstall
```
* Unpack the tarball, in the /usr/local/ directory.
```
# tar -zxvf mariadb-columnstore-1.0.12-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# /usr/local/mariadb/columnstore/bin/post-install
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
### Upgrading MariaDB ColumnStore using the DEB package
A DEB upgrade would be done on a system that supports DEBs like Debian or Ubuntu systems.
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /root directory
mariadb-columnstore-1.0.12-1.amd64.deb.tar.gz
(DEB 64-BIT) to the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate DEBs.
```
# tar -zxf mariadb-columnstore-1.0.12-1.amd64.deb.tar.gz
```
* Remove, purge and install all MariaDB ColumnStore debs
```
# cd /root/
# dpkg -r $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg -P $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg --install mariadb-columnstore-*1.0.12-1*deb
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
#### Non-Root User Installs
### Initial download/install of MariaDB ColumnStore binary package
Upgrade MariaDB ColumnStore as the non-root user on the server designated as PM1:
* Download the package mariadb-columnstore-1.0.12-1.x86\_64.bin.tar.gz (Binary 64-BIT) into the /home/'non-root-user' directory on the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# $HOME/mariadb/columnstore/bin/pre-uninstall --installdir=/home/guest/mariadb/columnstore
```
* Unpack the tarball in the $HOME/ directory.
```
# tar -zxvf mariadb-columnstore-1.0.12-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# $HOME/mariadb/columnstore/bin/post-install --installdir=/home/guest/mariadb/columnstore
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# $HOME/mariadb/columnstore/bin/postConfigure -u -i /home/guest/mariadb/columnstore
```
Aria Clients and Utilities
===========================
Clients and utilities for working with the [Aria](../aria/index) storage engine
| Title | Description |
| --- | --- |
| [aria\_chk](../aria_chk/index) | Used for checking, repairing, optimizing and sorting Aria tables. |
| [aria\_pack](../aria_pack/index) | Tool for compressing Aria tables. |
| [aria\_read\_log](../aria_read_log/index) | Tool for displaying and applying log records from an Aria transaction log. |
| [aria\_s3\_copy](../aria_s3_copy/index) | Copies an Aria table to and from S3. |
DBT3 Benchmark Queries
======================
Known things about the DBT-3 benchmark and its queries.
Q1
--
A simple, one-table query.
```
select
l_returnflag,
l_linestatus,
sum(l_quantity) as sum_qty,
sum(l_extendedprice) as sum_base_price,
sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
avg(l_quantity) as avg_qty,
avg(l_extendedprice) as avg_price,
avg(l_discount) as avg_disc,
count(*) as count_order
from
lineitem
where
l_shipdate <= date_sub('1998-12-01', interval 79 day)
group by
l_returnflag,
l_linestatus
order by
l_returnflag,
l_linestatus;
```
Query plan:
```
+------+-------------+----------+------+---------------+------+---------+------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+------+---------------+------+---------+------+----------+----------------------------------------------+
| 1 | SIMPLE | lineitem | ALL | i_l_shipdate | NULL | NULL | NULL | 59711977 | Using where; Using temporary; Using filesort |
+------+-------------+----------+------+---------------+------+---------+------+----------+----------------------------------------------+
```
* l\_shipdate <= date\_sub('1998-12-01', interval 79 day) is satisfied by 59,334,576 rows.
* The table has 59,986,052 rows in total.
* There are a total of 4 different values of `(l_returnflag, l_linestatus)`. This means sorting doesn't matter, and the temporary table is a very small heap table.
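The cardinality claim above can be checked directly; a sketch, assuming a loaded DBT-3 `lineitem` table:

```
select count(distinct l_returnflag, l_linestatus) from lineitem;
```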
Q2
--
plans starting with table "part" ... - 8sec on cold cache. scale=10 (scale=30 ETA 1min)
Q3
--
```
select
l_orderkey, sum(l_extendedprice*(1-l_discount)) as revenue,
o_orderdate, o_shippriority
from
customer,
orders,
lineitem
where
c_mktsegment = 'BUILDING' and c_custkey = o_custkey
and l_orderkey = o_orderkey and o_orderdate < date '1995-03-15'
and l_shipdate > date '1995-03-15'
group by l_orderkey, o_orderdate, o_shippriority
order by revenue desc, o_orderdate
limit 10;
```
There seems to be an improvement in mysql-5.6: <http://jorgenloland.blogspot.ru/2013/02/dbt-3-q3-6-x-performance-in-mysql-5610.html>
(The speedup can be observed only when the query is in a form like the above. TODO: figure out where the different forms of the queries come from.)
EXPLAINs (scale=10):
```
+------+-------------+----------+--------+---------------------------------------------------------+---------+---------+----------------------------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+--------+---------------------------------------------------------+---------+---------+----------------------------+----------+----------------------------------------------+
| 1 | SIMPLE | orders | ALL | NULL | NULL | NULL | NULL | 15115145 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | customer | eq_ref | PRIMARY | PRIMARY | 4 | dbt3sf10.orders.o_custkey | 1 | Using where |
| 1 | SIMPLE | lineitem | ref | PRIMARY,i_l_shipdate,i_l_orderkey,i_l_orderkey_quantity | PRIMARY | 4 | dbt3sf10.orders.o_orderkey | 2 | Using where |
+------+-------------+----------+--------+---------------------------------------------------------+---------+---------+----------------------------+----------+----------------------------------------------+
```
```
+------+-------------+----------+--------+---------------------------------------------------------+----------------+---------+----------------------------+---------+---------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+--------+---------------------------------------------------------+----------------+---------+----------------------------+---------+---------------------------------------------------------------------+
| 1 | SIMPLE | orders | range | PRIMARY,i_o_date_clerk | i_o_date_clerk | 4 | NULL | 7557572 | Using index condition; Using where; Using temporary; Using filesort |
| 1 | SIMPLE | customer | eq_ref | PRIMARY | PRIMARY | 4 | dbt3sf10.orders.o_custkey | 1 | Using where |
| 1 | SIMPLE | lineitem | ref | PRIMARY,i_l_shipdate,i_l_orderkey,i_l_orderkey_quantity | PRIMARY | 4 | dbt3sf10.orders.o_orderkey | 2 | Using where |
+------+-------------+----------+--------+---------------------------------------------------------+----------------+---------+----------------------------+---------+---------------------------------------------------------------------+
```
With statistics on 'building', we get a plan of customer, orders, lineitem. It is 5% worse. There seem to be no other possibilities.
Q5
--
Nothing good so far.
Watch the "c\_nationkey = s\_nationkey" condition. It is a "side" condition (i.e. it is not from the "natural" relationships between tables). It is not clear whether accounting for its selectivity will give anything.
Q8
--
```
Timour is analyzing this query.
```
Q9
--
```
Timour is analyzing this query.
```
```
+------+-------------+----------+--------+----------------------------------------------------------------------------------------+---------------------+---------+--------------------------------------------------+------+----------+---------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+----------+--------+----------------------------------------------------------------------------------------+---------------------+---------+--------------------------------------------------+------+----------+---------------------------------------------------------------------------+
| 1 | SIMPLE | nation | ALL | PRIMARY | NULL | NULL | NULL | 25 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | supplier | ref | PRIMARY,i_s_nationkey | i_s_nationkey | 5 | dbt3.nation.n_nationkey | 4077 | 100.00 | Using index |
| 1 | SIMPLE | partsupp | ref | PRIMARY,i_ps_partkey,i_ps_suppkey | i_ps_suppkey | 4 | dbt3.supplier.s_suppkey | 38 | 100.00 | Using join buffer (flat, BKA join); Key-ordered Rowid-ordered scan |
| 1 | SIMPLE | part | eq_ref | PRIMARY | PRIMARY | 4 | dbt3.partsupp.ps_partkey | 1 | 100.00 | Using where; Using join buffer (incremental, BKA join); Key-ordered scan |
| 1 | SIMPLE | lineitem | ref | PRIMARY,i_l_suppkey_partkey,i_l_partkey,i_l_suppkey,i_l_orderkey,i_l_orderkey_quantity | i_l_suppkey_partkey | 10 | dbt3.partsupp.ps_partkey,dbt3.supplier.s_suppkey | 3 | 100.00 | Using join buffer (incremental, BKA join); Key-ordered Rowid-ordered scan |
| 1 | SIMPLE | orders | eq_ref | PRIMARY | PRIMARY | 4 | dbt3.lineitem.l_orderkey | 1 | 100.00 | Using join buffer (incremental, BKA join); Key-ordered scan |
+------+-------------+----------+--------+----------------------------------------------------------------------------------------+---------------------+---------+--------------------------------------------------+------+----------+---------------------------------------------------------------------------+
```
Watch for "p\_name like ..." condition. What if we force table part to be the 1st.
Q10
---
Q13
---
```
SergeyP is analyzing this query.
```
```
select
c_count,
count(*) as custdist
from
(
select
c_custkey,
count(o_orderkey) as c_count
from
customer left outer join orders on
c_custkey = o_custkey
and o_comment not like '%express%requests%'
group by
c_custkey
) as c_orders
group by
c_count
order by
custdist desc,
c_count desc;
```
Q15
---
```
SergeyP is analyzing this query.
```
Q17
---
```
Timour is analyzing this query.
```
* Q17 cannot benefit from [MDEV-89](https://jira.mariadb.org/browse/MDEV-89) because the '<' predicate depends on both query tables. No matter what is the join order, the subquery has to be attached to the last table in the plan.
* There are no sargable conditions for 'lineitem', and it is bigger than 'part', and selectivity(p\_brand = 'Brand#43' and p\_container = 'WRAP DRUM') = 0.1%. Therefore table part should be first in the join plan.
* selectivity(p\_partkey = l\_partkey) << selectivity(l\_quantity < (select ...)), therefore the expensive subquery will be placed correctly after the cheap join condition.
* One possible improvement for this query is to stop the execution of the subquery as soon as the selected expression becomes false. In this case stop when (l\_quantity >= 0.2 \* avg(l\_quantity)).
```
select sum(l_extendedprice) / 7.0 as avg_yearly
from lineitem, part
where
p_partkey = l_partkey
and p_brand = 'Brand#43'
and p_container = 'WRAP DRUM'
and l_quantity < (select 0.2 * avg(l_quantity) from lineitem where l_partkey = p_partkey);
```
```
+------+--------------------+----------+------+---------------------------------+---------------------+---------+---------------------+---------+----------+------------------------------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+--------------------+----------+------+---------------------------------+---------------------+---------+---------------------+---------+----------+------------------------------------------------------------------------------------------------+
| 1 | PRIMARY | part | ALL | PRIMARY | NULL | NULL | NULL | 6000000 | 1.39 | Using where |
| 1 | PRIMARY | lineitem | ref | i_l_suppkey_partkey,i_l_partkey | i_l_suppkey_partkey | 5 | dbt3.part.p_partkey | 29 | 100.00 | Using where; Subqueries: 2; Using join buffer (flat, BKA join); Key-ordered Rowid-ordered scan |
| 2 | DEPENDENT SUBQUERY | lineitem | ref | i_l_suppkey_partkey,i_l_partkey | i_l_suppkey_partkey | 5 | dbt3.part.p_partkey | 29 | 100.00 | |
+------+--------------------+----------+------+---------------------------------+---------------------+---------+---------------------+---------+----------+------------------------------------------------------------------------------------------------+
```
Q20
---
* This query will benefit from [MWL#253](http://askmonty.org/worklog/?tid=253) (Exact index stats), and [MDEV-83](https://jira.mariadb.org/browse/MDEV-83) (Cost-based choice for the pushdown of subqueries to joined tables).
```
explain extended
select sql_calc_found_rows
s_name, s_address
from
supplier, nation
where s_suppkey in (select ps_suppkey from partsupp
where ps_partkey in (select p_partkey from part where p_name like 'g%')
and ps_availqty >
(select 0.5 * sum(l_quantity)
from lineitem
where l_partkey = ps_partkey and l_suppkey = ps_suppkey
and l_shipdate >= date('1993-01-01') and l_shipdate < date('1993-01-01') + interval '1' year ))
and s_nationkey = n_nationkey
and n_name = 'UNITED STATES'
order by s_name
limit 10;
```
MariaDB ColumnStore software upgrade 1.1.6 GA to 1.2.1 Beta
===========================================================
MariaDB ColumnStore software upgrade 1.1.6 GA to 1.2.1 Beta
-----------------------------------------------------------
This upgrade also applies to 1.2.0 Alpha to 1.2.1 Beta upgrades.
### Changes in 1.2.1
#### Non-distributed is the default distribution mode in postConfigure
The default distribution mode has changed from 'distributed' to 'non-distributed'. During an upgrade, however, the default is to use the distribution mode used in the original installation. The options '-d' and '-n' can always be used to override the default.
#### Non-root user sudo setup
Root-level permissions are no longer required to install or upgrade ColumnStore for some types of installations. Installations requiring some level of sudo access, and the instructions, are listed here: [https://mariadb.com/kb/en/library/preparing-for-columnstore-installation-121/#update-sudo-configuration-if-needed-by-root-user](../library/preparing-for-columnstore-installation-121/index#update-sudo-configuration-if-needed-by-root-user)
#### Running the mysql\_upgrade script
As part of the upgrade process to 1.2.1, the user is required to run the mysql\_upgrade script on all of the following nodes.
* User Modules on a system configured with separate User and Performance Modules
* Performance Modules on a system configured with separate User and Performance Modules and Local Query Feature is enabled
* Performance Modules on a system configured with combined User and Performance Modules
mysql\_upgrade should be run once the upgrade has been completed.
This is an example of how it is run on a root user install:
```
/usr/local/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=/usr/local/mariadb/columnstore/mysql/my.cnf --force
```
This is an example of how it is run on a non-root user install, assuming ColumnStore is installed under the user's home directory:
```
$HOME/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=$HOME/mariadb/columnstore/mysql/my.cnf --force
```
### Setup
In this section, we will refer to the directory ColumnStore is installed in as <CSROOT>. If you installed the RPM or DEB package, then your <CSROOT> will be /usr/local. If you installed it from the tarball, <CSROOT> will be where you unpacked it.
#### Columnstore.xml / my.cnf
Configuration changes made manually are not automatically carried forward during the upgrade. These modifications will need to be made again manually after the upgrade is complete.
After the upgrade process the configuration files will be saved at:
* <CSROOT>/mariadb/columnstore/etc/Columnstore.xml.rpmsave
* <CSROOT>/mariadb/columnstore/mysql/my.cnf.rpmsave
#### MariaDB root user database password
If you have specified a root user database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user home directory with 600 file permissions with the following content (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
### Choosing the type of upgrade
Note: soft links may cause a problem during the upgrade if you use the RPM or DEB packages. If you have linked a directory above /usr/local/mariadb/columnstore, the soft links will be deleted and the upgrade will fail. In that case you will need to upgrade using the binary tarball instead. If you have only linked the data directories (i.e. /usr/local/mariadb/columnstore/data\*), the RPM/DEB package upgrade will work.
#### Root User Installs
##### Upgrading MariaDB ColumnStore using the tarball of RPMs (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.2.1-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
```
# tar -zxf mariadb-columnstore-1.2.1-1-centos#.x86_64.rpm.tar.gz
```
* Uninstall the old packages, then install the new packages. The MariaDB ColumnStore software will be installed in /usr/local/.
```
# rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
# rpm -ivh mariadb-columnstore-*1.2.1*rpm
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using RPM Package Repositories (non-distributed mode)
The system can be upgraded when it was previously installed from the Package Repositories. This will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'yum' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# yum remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# yum --enablerepo=mariadb-columnstore clean metadata
# yum install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /usr/local directory mariadb-columnstore-1.2.1-1.x86\_64.bin.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# /usr/local/mariadb/columnstore/bin/pre-uninstall
```
* Unpack the tarball in the /usr/local/ directory.
```
# tar -zxvf mariadb-columnstore-1.2.1-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# /usr/local/mariadb/columnstore/bin/post-install
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the DEB tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /root directory mariadb-columnstore-1.2.1-1.amd64.deb.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which contains DEBs.
```
# tar -zxf mariadb-columnstore-1.2.1-1.amd64.deb.tar.gz
```
* Remove and install all MariaDB ColumnStore debs
```
# cd /root/
# dpkg -r $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg --install mariadb-columnstore-*1.2.1-1*deb
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using DEB Package Repositories (non-distributed mode)
The system can be upgraded this way when it was previously installed from the package repositories. These steps need to be run on each module in the system.
Additional information on how to set up and install using the 'apt-get' package repository command can be found in this document:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# apt-get remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# apt-get update
# apt-get install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/index#running-the-mysql_upgrade-script)
#### Non-Root User Installs
##### Upgrade MariaDB ColumnStore from the binary tarball without sudo access (non-distributed mode)
This upgrade method applies when root/sudo access is not an option.
The uninstall script for 1.1.6 requires root access to perform some operations. These operations are the following:
* removing /etc/profile.d/columnstore{Alias,Env}.sh to remove aliases and environment variables from all users.
* running '<CSROOT>/mysql/columnstore/bin/syslogSetup.sh uninstall' to remove ColumnStore from the logging system
* removing the columnstore startup script
* removing /etc/ld.so.conf.d/columnstore.conf to remove the ColumnStore directories from the ld library search path
Because you are upgrading ColumnStore and not uninstalling it, these operations are not necessary. If at some point you wish to uninstall it, you (or your sysadmin) will have to perform them by hand.
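Should you later uninstall by hand, the root-only operations listed above can be sketched as a small shell script. This is a hypothetical sketch, not part of the official procedure: the startup script path varies by distribution, and CSROOT is assumed to point at your installation prefix.

```shell
#!/bin/sh
# Hypothetical hand-uninstall cleanup sketch (paths assumed; adjust as needed).
CSROOT=${CSROOT:-/usr/local}

# Aliases/environment files, startup script, and ld search path entry:
for f in /etc/profile.d/columnstoreAlias.sh /etc/profile.d/columnstoreEnv.sh \
         /etc/init.d/columnstore /etc/ld.so.conf.d/columnstore.conf; do
  if [ -e "$f" ]; then
    sudo rm -f "$f" && echo "removed $f"
  else
    echo "skipping $f (not present)"
  fi
done

# Remove ColumnStore from the logging system, if the helper is still installed:
if [ -x "$CSROOT/mariadb/columnstore/bin/syslogSetup.sh" ]; then
  sudo "$CSROOT/mariadb/columnstore/bin/syslogSetup.sh" uninstall
fi

# Refresh the ld cache after removing columnstore.conf:
sudo ldconfig 2>/dev/null || true
```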
The upgrade instructions:
* Download the binary tarball to the current installation location on all nodes. See <https://downloads.mariadb.com/ColumnStore/>
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Copy Columnstore.xml to Columnstore.xml.rpmsave, and my.cnf to my.cnf.rpmsave
```
$ cp <CSROOT>/mariadb/columnstore/etc/Columnstore{.xml,.xml.rpmsave}
$ cp <CSROOT>/mariadb/columnstore/mysql/my{.cnf,.cnf.rpmsave}
```
* On all nodes, untar the new files in the same location as the old ones
```
$ tar zxf columnstore-1.2.1-1.x86_64.bin.tar.gz
```
* On all nodes, run post-install, specifying where ColumnStore is installed
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* On all nodes except for PM1, start the columnstore service
```
$ <CSROOT>/mariadb/columnstore/bin/columnstore start
```
* On PM1 only, run postConfigure, specifying the upgrade, non-distributed installation mode, and the location of the installation
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -n -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/index#running-the-mysql_upgrade-script)
##### Upgrade MariaDB ColumnStore from the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user USER on the server designated as PM1:
* Download the package mariadb-columnstore-1.2.1-1.x86\_64.bin.tar.gz into the user's home directory
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Run the pre-uninstall script; this will require sudo access as you are running a script from 1.1.6.
```
$ <CSROOT>/mariadb/columnstore/bin/pre-uninstall --installdir=<CSROOT>/mariadb/columnstore
```
* Make the sudo changes as noted at the beginning of this document
* Unpack the tarball in the same place as the original installation
```
$ tar -zxvf mariadb-columnstore-1.2.1-1.x86_64.bin.tar.gz
```
* Run the post-install script
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* Run postConfigure using the upgrade option
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-121-beta/index#running-the-mysql_upgrade-script)
CONNECT - Using the TBL and MYSQL Table Types Together
======================================================
Used together, these types lift all the limitations of the [FEDERATED](../federated-storage-engine/index) and [MERGE](../merge/index) engines.
**MERGE:** Its limitation is obvious: the merged tables must be identical [MyISAM](../myisam-storage-engine/index) tables, and MyISAM is not even the default engine for MariaDB. In contrast, [TBL](../connect-table-types-tbl-table-type-table-list/index) accesses a collection of CONNECT tables, and because these tables can be user-specified or internally created [MYSQL](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index) tables, there is no limitation on the type of the tables that can be merged.
TBL is also much more flexible. The merged tables need not be identical; they just need to have the columns defined in the TBL table. If the type of a column in a merged table differs from that of the corresponding column of the TBL table, the column value will be converted. As we have seen, if a column of the TBL table does not exist in one of the merged tables, the corresponding value will be set to NULL. If columns in a sub-table have a different name, they can be accessed by position using the FLAG column option of CONNECT.
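For instance, positional matching might look like this (a sketch with hypothetical table and column names; FLAG here gives the position of the matching column in the sub-tables):

```sql
-- Sub-tables t1 and t2 have the same column layout but different column names.
-- FLAG=1 and FLAG=2 map each TBL column to a sub-table column by position.
create table tall (
  id int flag=1,
  name char(10) flag=2
) engine=connect table_type=TBL table_list='t1,t2';
```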
However, one limitation of the TBL type compared with MERGE is that TBL tables are currently read-only; INSERT is not supported by TBL. Also, keep using MERGE to access a list of identical MyISAM tables, because it will be faster, as it does not go through the MySQL API.
**FEDERATED(X):** The main limitation of FEDERATED is that it can access only MySQL/MariaDB tables. The MYSQL table type of CONNECT has the same limitation, but CONNECT provides the [ODBC table type](../connect-table-types-odbc-table-type-accessing-tables-from-other-dbms/index) and [JDBC table type](../connect-jdbc-table-type-accessing-tables-from-other-dbms/index) that can access tables of any RDBMS providing an ODBC or JDBC driver (including MySQL, even though this is not really useful!)
Another major limitation of FEDERATED is that it can access only one table. By combining TBL and MYSQL tables, CONNECT makes it possible to access a collection of local or remote tables as one table. Of course, the sub-tables can be on different servers. With one SELECT statement, a company manager is able to query results coming from all of the subsidiaries' computers. This is great for distribution, banking, and many other industries.
Remotely executing complex queries
----------------------------------
Many companies or administrations must deal with distributed information. CONNECT makes it possible to deal with it efficiently without having to copy it to a centralized database. Let us suppose that some remote network machines *m1, m2, … mn* hold information contained in two tables *t1* and *t2*.
Suppose we want to execute on all servers a query such as:
```
select c1, sum(c2) from t1 a, t2 b where a.id = b.id group by c1;
```
This raises many problems. Returning the column values of the *t1* and *t2* tables from all servers can generate a lot of network traffic. The group by on the possibly huge resulting tables can be a long process. In addition, the join on the *t1* and *t2* tables may be relevant only if the joined tuples belong to the same machine, which would oblige adding a condition on an additional tabid or servid special column.
All this can be avoided and optimized by forcing the query to be locally executed on each server and retrieving only the small results of the group by queries. Here is how to do it. For each remote machine, create a table that will retrieve the locally executed query. For instance for m1:
```
create table rt1 engine=connect option_list='host=m1'
srcdef='select c1, sum(c2) as sc2 from t1 a, t2 b where a.id = b.id group by c1';
```
Note the alias for the functional column. An alias would also be required for the c1 column if its name were different on some machines. The t1 and t2 table names can also possibly differ on the remote machines. The true names must be used in the `SRCDEF` parameter. This will create a set of tables with two columns named c1 and sc2[[1](#_note-0)].
Then create the table that will retrieve the result of all these tables:
```
create table rtall engine=connect table_type=tbl
table_list='rt1,rt2,…,rtn' option_list='thread=yes';
```
Now you can retrieve the desired result by:
```
select c1, sum(sc2) from rtall;
```
Almost all the work will be done on the remote machines, simultaneously thanks to the thread option, making this query super-fast even on big tables placed on many remote machines.
Thread is currently experimental. Use it only for testing and report any malfunctions on [JIRA](../jira/index).
Providing a list of servers
---------------------------
An interesting case is when the query to run on the remote machines is the same for all of them. It is then possible to avoid declaring all the sub-tables. In this case, the table list option is used to specify the list of servers the `SRCDEF` query must be sent to. This will be a list of URLs and/or FEDERATED server names.
For instance, supposing that federated servers srv1, srv2, … srv*n* were created for all remote servers, it is possible to create a tbl table that gets the result of a query executed on all of them by:
```
create table qall [column definition]
engine=connect table_type=TBL srcdef='a query'
table_list='srv1,srv2,…,srvn' [option_list='thread=yes'];
```
For instance:
```
create table verall engine=connect table_type=TBL srcdef='select @@version' table_list=',server_one';
select * from verall;
```
The reply:
| @@version |
| --- |
| 10.0.3-MariaDB-debug |
| 10.0.2-MariaDB |
Here the server list specifies an empty server entry, corresponding to the locally running MariaDB, and a federated server named *server\_one*.
---
1. [↑](#_ref-0) To generate the columns from the `SRCDEF` query, CONNECT must execute it. This makes sure it is valid. However, if the remote server is not connected yet, or the remote table does not exist yet, you can alternatively specify the columns in the create table statement.
EXPLAIN Analyzer
================
The [EXPLAIN Analyzer](https://mariadb.org/explain_analyzer/analyze/) is an online tool for analyzing and optionally sharing the output of both `[EXPLAIN](../explain/index)` and `EXPLAIN EXTENDED`.
Using the Analyzer
------------------
Using the analyzer is very simple.
1. In the mysql client, run `EXPLAIN` on a query and copy the output. For example:
```
EXPLAIN SELECT * FROM t1 INNER JOIN t2 INNER JOIN t3 WHERE t1.a=t2.a AND t2.a=t3.a;
+------+-------------+-------+------+---------------+------+---------+------+------+--------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+--------------------------------------------------------+
| 1 | SIMPLE | t1 | ALL | NULL | NULL | NULL | NULL | 3 | |
| 1 | SIMPLE | t2 | ALL | NULL | NULL | NULL | NULL | 3 | Using where; Using join buffer (flat, BNL join) |
| 1 | SIMPLE | t3 | ALL | NULL | NULL | NULL | NULL | 3 | Using where; Using join buffer (incremental, BNL join) |
+------+-------------+-------+------+---------------+------+---------+------+------+--------------------------------------------------------+
3 rows in set (0.00 sec)
```
2. Paste the output into the [`EXPLAIN` Analyzer input box](https://mariadb.org/explain_analyzer/analyze/) and click the "Analyze Explain" button.
3. The formatted `EXPLAIN` will be shown. You can now click on various parts to get more information about them.
### Some Notes:
* As you can see in the example above, you don't need to chop off the query line or the command prompt.
* To save the EXPLAIN, so you can share it, or just for future reference, click the "Save Explain for analysis and sharing" button and then click the "Analyze Explain" button. You will be given a link which leads to your saved `EXPLAIN`. For example, the above explain can be viewed here: <https://mariadb.org/explain_analyzer/analyze/>
* Some of the elements in the formatted `EXPLAIN` are clickable. Clicking on them will show pop-up help related to that element.
Clients which integrate with the Explain Analyzer
-------------------------------------------------
The Analyzer has an API that client programs can use to send EXPLAINs. If you are a client application developer, see the [EXPLAIN Analyzer API](../explain-analyzer-api/index) page for details.
The following clients have support for the EXPLAIN Analyzer built in:
### HeidiSQL
[HeidiSQL](https://www.heidisql.com/) has a button when viewing a query that sends the query to the explain analyzer.
Creating a Trace File
=====================
If `mysqld` is crashing, creating a trace file is a good way to find the issue.
A `mysqld` binary that has been compiled with debugging support can create trace files using the DBUG package created by Fred Fish. To find out if your `mysqld` binary has debugging support, run `mysqld -V` on the command line. If the version number ends in `-debug` then your `mysqld` binary was compiled with debugging support.
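As a quick scripted version of that check (a sketch: the binary may be named `mariadbd` on newer installs, and the exact version string format is an assumption):

```shell
# Report whether the mysqld on PATH appears to be a debug build.
# The "-debug" suffix check follows the rule described above.
DEBUG_BUILD=no
if mysqld -V 2>/dev/null | grep -q -- '-debug'; then
  DEBUG_BUILD=yes
fi
echo "debug build: $DEBUG_BUILD"
```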
See [Compiling MariaDB for debugging](../compiling-mariadb-for-debugging/index) for instructions on how to create your own `mysqld` binary with debugging enabled.
To create the trace log, start `mysqld` like so:
```
mysqld --debug
```
Without extra options for --debug, the trace file will be named `/tmp/mysqld.trace` in MySQL and in MariaDB versions before 10.5, and `/tmp/mariadbd.trace` starting from [MariaDB 10.5](../what-is-mariadb-105/index).
On Windows, the debug `mysqld` is called `mysqld-debug` and you should also use the `--standalone` option. So the command on Windows will look like:
```
mysqld-debug --debug --standalone
```
Once the server is started, use the regular `mysql` command-line client (or another client) to connect and work with the server.
After you are finished debugging, stop the server with:
```
mysqladmin shutdown
```
DBUG Options
------------
Trace files can grow to a significant size. You can reduce their size by telling the server to only log certain items.
The `--debug` flag can take extra options in the form of a colon (:) delimited string of options. Individual options can have comma-separated sub-options.
For example:
```
mysqld --debug=d,info,error,query:o,/tmp/mariadbd.trace
```
The '`d`' option limits the output to the named DBUG\_<N> macros. In the above example, the `/tmp/mariadbd.trace` tracefile will contain output from the info, error, and query DBUG macros. A '`d`' by itself (with no sub-options) will select all DBUG\_<N> macros.
The '`o`' option redirects the output to a file (`/tmp/mariadbd.trace` in the example above) and overwrites the file if it exists.
See Also
--------
* [Options for --debug](../mysql_debug/index)
MyRocks
========
MyRocks is a storage engine that adds the RocksDB database to MariaDB. RocksDB is an LSM database with a great compression ratio that is optimized for flash storage.
| MyRocks Version | Introduced | Maturity |
| --- | --- | --- |
| MyRocks 1.0 | [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/), [MariaDB 10.2.16](https://mariadb.com/kb/en/mariadb-10216-release-notes/) | Stable |
| MyRocks 1.0 | [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/), [MariaDB 10.2.14](https://mariadb.com/kb/en/mariadb-10214-release-notes/) | Gamma |
| MyRocks 1.0 | [MariaDB 10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/), [MariaDB 10.2.13](https://mariadb.com/kb/en/mariadb-10213-release-notes/) | Beta |
| MyRocks 1.0 | [MariaDB 10.2.5](https://mariadb.com/kb/en/mariadb-1025-release-notes/) | Alpha |
| Title | Description |
| --- | --- |
| [About MyRocks for MariaDB](../about-myrocks-for-mariadb/index) | Enables greater compression than InnoDB, and less write amplification. |
| [Getting Started with MyRocks](../getting-started-with-myrocks/index) | Installing and getting started with MyRocks. |
| [Building MyRocks in MariaDB](../building-myrocks-in-mariadb/index) | MariaDB compile process for MyRocks. |
| [Loading Data Into MyRocks](../loading-data-into-myrocks/index) | MyRocks has ways to load data much faster than normal INSERTs |
| [MyRocks Status Variables](../myrocks-status-variables/index) | MyRocks-related status variables. |
| [MyRocks System Variables](../myrocks-system-variables/index) | MyRocks server system variables. |
| [MyRocks Transactional Isolation](../myrocks-transactional-isolation/index) | MyRocks uses snapshot isolation and supports the READ COMMITTED and REPEATABLE READ isolation levels. |
| [MyRocks and Replication](../myrocks-and-replication/index) | Details about how MyRocks works with replication. |
| [MyRocks and Group Commit with Binary log](../myrocks-and-group-commit-with-binary-log/index) | MyRocks supports group commit with the binary log |
| [Optimizer Statistics in MyRocks](../optimizer-statistics-in-myrocks/index) | How MyRocks provides statistics to the query optimizer |
| [Differences Between MyRocks Variants](../differences-between-myrocks-variants/index) | Differences between Facebook's, MariaDB's and Percona Server's MyRocks. |
| [MyRocks and Bloom Filters](../myrocks-and-bloom-filters/index) | Bloom filters are used to reduce read amplification. |
| [MyRocks and CHECK TABLE](../myrocks-and-check-table/index) | MyRocks supports the CHECK TABLE command. |
| [MyRocks and Data Compression](../myrocks-and-data-compression/index) | MyRocks supports several compression algorithms. |
| [MyRocks and Index-Only Scans](../myrocks-and-index-only-scans/index) | MyRocks and index-only scans on secondary indexes. |
| [MyRocks and START TRANSACTION WITH CONSISTENT SNAPSHOT](../myrocks-and-start-transaction-with-consistent-snapshot/index) | FB/MySQL has added new syntax which returns the binlog coordinates pointing at the snapshot. |
| [MyRocks Column Families](../myrocks-column-families/index) | MyRocks stores data in column families, which are similar to tablespaces. |
| [MyRocks in MariaDB 10.2 vs MariaDB 10.3](../myrocks-in-mariadb-102-vs-mariadb-103/index) | MyRocks storage engine in MariaDB 10.2 and MariaDB 10.3. |
| [MyRocks Performance Troubleshooting](../myrocks-performance-troubleshooting/index) | MyRocks exposes its performance metrics through several interfaces. |
OQGRAPH
========
The Open Query GRAPH computation engine, or OQGRAPH as the engine itself is called, allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).
| OQGRAPH Version | Introduced | Maturity |
| --- | --- | --- |
| 3.0 | [MariaDB 10.0.25](https://mariadb.com/kb/en/mariadb-10025-release-notes/) | Gamma |
| 3.0 | [MariaDB 10.0.7](https://mariadb.com/kb/en/mariadb-1007-release-notes/) | Beta |
| 2.0 | [MariaDB 5.2.1](https://mariadb.com/kb/en/mariadb-521-release-notes/) | |
| Title | Description |
| --- | --- |
| [Installing OQGRAPH](../installing-oqgraph/index) | Installing OQGRAPH. |
| [OQGRAPH Overview](../oqgraph-overview/index) | Overview of the OQGRAPH storage engine. |
| [OQGRAPH Examples](../oqgraph-examples/index) | OQGRAPH examples. |
| [Compiling OQGRAPH](../compiling-oqgraph/index) | How to compile OQGRAPH. |
| [Building OQGRAPH Under Windows](../building-oqgraph-under-windows/index) | OQGRAPH build instructions for Windows. |
| [OQGRAPH System and Status Variables](../oqgraph-system-and-status-variables/index) | List and description of OQGRAPH system and status variables. |
Buildbot Setup for Virtual Machines - Debian 5 i386
===================================================
Base install
------------
Download netinst CD image debian-503-i386-netinst.iso and install:
```
cd /kvm
qemu-img create -f qcow2 vms/vm-debian5-i386-base.qcow2 8G
kvm -m 2047 -hda /kvm/vms/vm-debian5-i386-base.qcow2 -cdrom /kvm/debian-503-i386-netinst.iso -redir 'tcp:2226::22' -boot d -smp 2 -cpu qemu32,-nx -net nic,model=virtio -net user
```
Serial port console
-------------------
From base install, setup for serial port, and setup accounts for passwordless ssh login and sudo:
```
qemu-img create -b vm-debian5-i386-base.qcow2 -f qcow2 vm-debian5-i386-serial.qcow2
kvm -m 2047 -hda /kvm/vms/vm-debian5-i386-serial.qcow2 -redir 'tcp:2226::22' -boot c -smp 2 -cpu qemu32,-nx -net nic,model=virtio -net user -nographic
su
apt-get install sudo openssh-server emacs22-nox
visudo
# uncomment %sudo ALL=NOPASSWD: ALL
# add user account to group sudo.
# Copy in public ssh key.
# Add in /etc/inittab:
S0:2345:respawn:/sbin/agetty -h -L ttyS0 19200 vt100
```
Add to /boot/grub/menu.lst:
```
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=3 serial console
```
also add in menu.lst to the kernel line (after removing 'quiet splash'):
```
console=tty0 console=ttyS0,115200n8
```
Add user buildbot, with disabled password. Add as sudo, and add ssh key.
```
sudo adduser --disabled-password buildbot
sudo adduser buildbot sudo
sudo su - buildbot
mkdir .ssh
# Paste all necessary keys.
cat >.ssh/authorized_keys
chmod -R go-rwx .ssh
```
Image for .deb build
--------------------
```
qemu-img create -b vm-debian5-i386-serial.qcow2 -f qcow2 vm-debian5-i386-build.qcow2
kvm -m 2047 -hda /kvm/vms/vm-debian5-i386-build.qcow2 -redir 'tcp:2226::22' -boot c -smp 2 -cpu qemu32,-nx -net nic,model=virtio -net user -nographic
sudo apt-get build-dep mysql-server-5.0
sudo apt-get install devscripts hardening-wrapper doxygen texlive-latex-base ghostscript libevent-dev libssl-dev zlib1g-dev libreadline5-dev
```
Image for install testing
-------------------------
```
qemu-img create -b vm-debian5-i386-serial.qcow2 -f qcow2 vm-debian5-i386-install.qcow2
kvm -m 2047 -hda /kvm/vms/vm-debian5-i386-install.qcow2 -redir 'tcp:2226::22' -boot c -smp 2 -cpu qemu32,-nx -net nic,model=virtio -net user -nographic
# No packages mostly!
sudo apt-get install debconf-utils
cat >>/etc/apt/sources.list <<END
deb file:///home/buildbot/buildbot/debs binary/
deb-src file:///home/buildbot/buildbot/debs source/
END
sudo debconf-set-selections /tmp/my.seed
```
See the [General Principles](../buildbot-setup-for-virtual-machines-general-principles/index) article how to obtain the `my.seed` file.
Image for upgrade testing
-------------------------
```
qemu-img create -b vm-debian5-i386-install.qcow2 -f qcow2 vm-debian5-i386-upgrade.qcow2
kvm -m 2047 -hda /kvm/vms/vm-debian5-i386-upgrade.qcow2 -redir 'tcp:2226::22' -boot c -smp 2 -cpu qemu32,-nx -net nic,model=virtio -net user -nographic
sudo apt-get install mysql-server-5.0
mysql -uroot -prootpass -e "create database mytest; use mytest; create table t(a int primary key); insert into t values (1); select * from t"
```
MariaDB ColumnStore with Spark
==============================
Introduction
============
Apache Spark (<http://spark.apache.org/>) is a popular open source data processing engine. It can be integrated with MariaDB ColumnStore utilizing the Spark SQL feature.
There are currently two possibilities for interacting with ColumnStore from Spark. The first is to use the ColumnStoreExporter, which is part of the Bulk Data Adapters. ColumnStoreExporter can be used to export dataframes into existing tables in ColumnStore, which is orders of magnitude faster than injecting dataframes through JDBC. The second is to use the MariaDB Java Connector and connect through JDBC. This is especially useful for reading data from ColumnStore into Spark and for applying changes to ColumnStore's database structure through DDL.
MariaDB ColumnStore Exporter
============================
Connects Spark and ColumnStore through ColumnStore's bulk write API.
Configuration
-------------
The following steps outline installing and configuring the MariaDB ColumnStoreExporter to be available in the Spark runtime:
* The latest version of the MariaDB Bulk Data Adapters need to be installed. See additional [documentation](../columnstore-bulk-write-sdk/index).
* The configuration file */usr/local/spark/conf/spark-defaults.conf* should be created or updated to point to the BulkWriteAPI and ColumnStoreExporter libraries. Their paths depend on the OS you are using.
For Debian 8, 9 and Ubuntu 16.04:
```
spark.driver.extraClassPath /usr/lib/javamcsapi.jar:/usr/lib/spark-scala-mcsapi-connector.jar
spark.executor.extraClassPath /usr/lib/javamcsapi.jar:/usr/lib/spark-scala-mcsapi-connector.jar
```
For CentOS 7:
```
spark.driver.extraClassPath /usr/lib64/javamcsapi.jar:/usr/lib64/spark-scala-mcsapi-connector.jar
spark.executor.extraClassPath /usr/lib64/javamcsapi.jar:/usr/lib64/spark-scala-mcsapi-connector.jar
```
#### Troubleshooting
* Depending on your Java environment you might have to manually link the C++ library libjavamcsapi.so to your java.library.path.
* Depending on your Python environment you might have to manually link the Python modules columnStoreExporter.py and pymcsapi.py, and the C++ library \_pymcsapi.so to the Python packages directory used by Spark.
For Python 2.7 they can be found in:
```
/usr/lib/python2.7/dist-packages, for Debian 8, 9 and Ubuntu 16.04, and in
/usr/lib/python2.7/site-packages, for CentOS 7.
```
For Python 3 they can be found in:
```
/usr/lib/python3/dist-packages, for Debian 8, 9 and Ubuntu 16.04, and in
/usr/lib/python3.4/site-packages for CentOS 7.
```
Usage
-----
ColumnStoreExporter is compatible with Python 2.7, Python 3 and Scala.
It has a fairly simple notation: ColumnStoreExporter.export(database, table, dataframe), but requires that dataframe and table have the same structure.
Here is a simple demonstration exporting a dataframe containing numbers from 0 to 127 and their ASCII representation using ColumnStoreExporter into an existing table created with following DDL:
```
CREATE TABLE test.spark (ascii_representation CHAR(1), number INT) ENGINE=COLUMNSTORE;
```
Python 2.7 / 3
```
# necessary imports
from pyspark import SparkContext
from pyspark.sql import SQLContext, Row
import columnStoreExporter
# get the spark session
sc = SparkContext("local", "MariaDB Spark ColumnStore Example")
sqlContext = SQLContext(sc)
# create the test dataframe
asciiDF = sqlContext.createDataFrame(sc.parallelize(range(0, 128)).map(lambda i: Row(number=i, ascii_representation=chr(i))))
# export the dataframe
columnStoreExporter.export("test","spark",asciiDF)
```
Scala
```
// necessary imports
import org.apache.spark.sql.{SparkSession,DataFrame}
import com.mariadb.columnstore.api.connector.ColumnStoreExporter
// get the spark session
val spark: SparkSession = SparkSession.builder.master("local").appName("MariaDB Spark ColumnStore Example").getOrCreate
import spark.implicits._
val sc = spark.sparkContext
// create the test dataframe
val asciiDF = sc.makeRDD(0 until 128).map(i => (i.toChar.toString, i)).toDF("ascii_representation", "number")
// export the dataframe
ColumnStoreExporter.export("test", "spark", asciiDF)
```
### Documentation
The following documents provide SDK documentation:
* Usage documentation for Spark ([PDF](https://mariadb.com/kb/en/mariadb-columnstore-with-spark/+attachment/spark_mcsapi_usage_1_2_3 "PDF"), [HTML](https://mariadb.com/kb/en/mariadb-columnstore-with-spark/+attachment/spark_mcsapi_usage_html_1_2_3 "HTML")) and PySpark ([PDF](https://mariadb.com/kb/en/mariadb-columnstore-with-spark/+attachment/pyspark_mcsapi_usage_1_2_3 "PDF"), [HTML](https://mariadb.com/kb/en/mariadb-columnstore-with-spark/+attachment/pyspark_mcsapi_usage_html_1_2_3 "HTML")) for 1.2.3 GA.
Limitations
-----------
* ColumnStoreExporter currently can't handle Blob data types.
* The table needs to exist and have the same structure as the dataframe to export.
MariaDB Java Connector
======================
Connects Spark and ColumnStore through JDBC.
Configuration
-------------
The following steps outline installing and configuring the MariaDB Java Connector to be available to the spark runtime:
* The latest version of the MariaDB Java Connector should be downloaded from <https://mariadb.com/downloads/connector> and copied to the master node, e.g. under /usr/share/java.
* The configuration file */usr/local/spark/conf/spark-defaults.conf* should be created or updated to point to the JDBC directory:
```
spark.driver.extraClassPath /usr/share/java/mariadb-java-client-1.5.7.jar
spark.executor.extraClassPath /usr/share/java/mariadb-java-client-1.5.7.jar
```
Usage
-----
Currently Spark does not correctly recognize MariaDB-specific JDBC connection strings, so the *jdbc:mysql* syntax must be used. The following shows a simple pyspark script to query the results from ColumnStore UM server columnstore\_1 into a Spark dataframe:
```
from pyspark import SparkContext
from pyspark.sql import DataFrameReader, SQLContext
url = 'jdbc:mysql://columnstore_1:3306/test'
properties = {'user': 'root', 'driver': 'org.mariadb.jdbc.Driver'}
sc = SparkContext("local", "ColumnStore Simple Query Demo")
sqlContext = SQLContext(sc)
df = DataFrameReader(sqlContext).jdbc(url='%s' % url, table='results', properties=properties)
df.show()
```
Spark SQL currently offers very limited push-down capabilities, so to take advantage of ColumnStore's ability to perform an efficient group by, an inline table must be used, for example:
```
df = DataFrameReader(sqlContext).jdbc(url='%s' % url,
table='( select year, sum(closed_roll_assess_land_value) sum_land_value from property_tax group by year) pt',
properties=properties)
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Configuring MariaDB Galera Cluster Configuring MariaDB Galera Cluster
==================================
A number of options need to be set in order for Galera Cluster to work when using MariaDB. These should be set in the [MariaDB option file](../configuring-mariadb-with-option-files/index).
Mandatory Options
-----------------
Several options are mandatory, which means that they *must* be set in order for Galera Cluster to be enabled or to work properly with MariaDB. The mandatory options are:
* `[wsrep\_provider](../galera-cluster-system-variables/index#wsrep_provider)` — Path to the Galera library
* `[wsrep\_cluster\_address](../galera-cluster-system-variables/index#wsrep_cluster_address)` — See [Galera Cluster address format and usage](../galera-cluster-address/index)
* `[binlog\_format=ROW](../replication-and-binary-log-server-system-variables/index#binlog_format)` — See [Binary Log Formats](../binary-log-formats/index)
* `[default\_storage\_engine=InnoDB](../server-system-variables/index#default_storage_engine)`
* `[innodb\_autoinc\_lock\_mode=2](../xtradbinnodb-server-system-variables/index#innodb_autoinc_lock_mode)`
* `[innodb\_doublewrite=1](../xtradbinnodb-server-system-variables/index#innodb_doublewrite)` — This is the default value, but it should not be changed when using Galera provider version >= 2.0.
* `[query\_cache\_size=0](../server-system-variables/index#query_cache_size)` — Only mandatory for MariaDB versions prior to MariaDB Galera Cluster 5.5.40, MariaDB Galera Cluster 10.0.14, and [MariaDB 10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/).
* `[wsrep\_on=ON](../galera-cluster-system-variables/index#wsrep_on)` — Enables wsrep replication (mandatory starting with MariaDB 10.1.1)
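Putting the mandatory options together, a minimal option file fragment might look like the following. The library path and cluster addresses are placeholders that depend on your installation:

```
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.1,192.168.1.2,192.168.1.3
```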
Performance-related Options
---------------------------
These are optional optimizations that can be made to improve performance.
* `[innodb\_flush\_log\_at\_trx\_commit=0](../xtradbinnodb-server-system-variables/index#innodb_flush_log_at_trx_commit)` — This is not usually recommended in the case of standard MariaDB. However, it is a bit safer with Galera Cluster, since inconsistencies can always be fixed by recovering from another node.
Writing Replicated Write Sets to the Binary Log
-----------------------------------------------
Like with [MariaDB replication](../high-availability-performance-tuning-mariadb-replication/index), write sets that are received by a node with [Galera Cluster's certification-based replication](../about-galera-replication/index) are not written to the [binary log](../binary-log/index) by default. If you would like a node to write its replicated write sets to the [binary log](../binary-log/index), then you will have to set `[log\_slave\_updates=ON](../replication-and-binary-log-system-variables/index#log_slave_updates)`. This is especially helpful if the node is a replication master. See [Using MariaDB Replication with MariaDB Galera Cluster: Configuring a Cluster Node as a Replication Master](../library/using-mariadb-replication-with-mariadb-galera-cluster-using-mariadb-replica/index#configuring-a-cluster-node-as-a-replication-master).
Replication Filters
-------------------
Like with [MariaDB replication](../high-availability-performance-tuning-mariadb-replication/index), [replication filters](../replication-filters/index) can be used to filter write sets from being replicated by [Galera Cluster's certification-based replication](../about-galera-replication/index). However, they should be used with caution because they may not work as you'd expect.
The following replication filters are honored for [InnoDB](../innodb/index) DML, but not DDL:
* `[binlog\_do\_db](../mysqld-options/index#-binlog-do-db)`
* `[binlog\_ignore\_db](../mysqld-options/index#-binlog-ignore-db)`
* `[replicate\_wild\_do\_table](../replication-and-binary-log-server-system-variables/index#replicate_wild_do_table)`
* `[replicate\_wild\_ignore\_table](../replication-and-binary-log-server-system-variables/index#replicate_wild_ignore_table)`
The following replication filters are honored for DML and DDL for tables that use both the [InnoDB](../innodb/index) and [MyISAM](../myisam-storage-engine/index) storage engines:
* `[replicate\_do\_table](../replication-and-binary-log-server-system-variables/index#replicate_do_table)`
* `[replicate\_ignore\_table](../replication-and-binary-log-server-system-variables/index#replicate_ignore_table)`
However, it should be kept in mind that if replication filters cause inconsistencies that lead to replication errors, then nodes may abort.
See also [MDEV-421](https://jira.mariadb.org/browse/MDEV-421) and [MDEV-6229](https://jira.mariadb.org/browse/MDEV-6229).
Network Ports
-------------
Galera Cluster needs access to the following ports:
* **Standard MariaDB Port** (default: 3306) - For MySQL client connections and [State Snapshot Transfers](../introduction-to-state-snapshot-transfers-ssts/index) that use the `mysqldump` method. This can be changed by setting `[port](../server-system-variables/index#port)`.
* **Galera Replication Port** (default: 4567) - For Galera Cluster replication traffic; multicast replication uses both UDP transport and TCP on this port. Can be changed by setting `[wsrep\_node\_address](../galera-cluster-system-variables/index#wsrep_node_address)`.
* **Galera Replication Listening Interface** (default: `0.0.0.0:4567`) needs to be set using [gmcast.listen\_addr](../wsrep_provider_options/index#gmcastlisten_addr), either
+ in [wsrep\_provider\_options](../galera-cluster-system-variables/index#wsrep_provider_options): `wsrep_provider_options='gmcast.listen_addr=tcp://<IP_ADDR>:<PORT>;'`
+ or in [wsrep\_cluster\_address](../galera-cluster-system-variables/index#wsrep_cluster_address)
* **IST Port** (default: 4568) - For Incremental State Transfers. Can be changed by setting `[ist.recv\_addr](http://galeracluster.com/library/documentation/galera-parameters.html#ist-recv-addr)` in `[wsrep\_provider\_options](../galera-cluster-system-variables/index#wsrep_provider_options)`.
* **SST Port** (default: 4444) - For all [State Snapshot Transfer](../introduction-to-state-snapshot-transfers-ssts/index) methods other than `mysqldump`. Can be changed by setting `[wsrep\_sst\_receive\_address](../galera-cluster-system-variables/index#wsrep_sst_receive_address)`.
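For example, on a system using firewalld, the default ports above could be opened with commands along these lines (a sketch only; adapt zones and port numbers to your configuration):

```
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=4567/tcp --add-port=4567/udp
firewall-cmd --permanent --add-port=4568/tcp
firewall-cmd --permanent --add-port=4444/tcp
firewall-cmd --reload
```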
Multiple Galera Cluster Instances on One Server
-----------------------------------------------
If you want to run multiple Galera Cluster instances on one server, then you can do so by starting each instance with `[mysqld\_multi](../mysqld_multi/index)`, or if you are using [systemd](../systemd/index), then you can use the relevant [systemd method for interacting with multiple MariaDB instances](../systemd/index#interacting-with-multiple-mariadb-server-processes).
You need to ensure that each instance is configured with a different `[datadir](../server-system-variables/index#datadir)`.
You also need to ensure that each instance is configured with different [network ports](#network-ports).
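For example, with `mysqld_multi`, each instance gets its own option group; the data directories and port numbers below are placeholders:

```
[mysqld1]
datadir=/var/lib/mysql-node1
port=3306
wsrep_provider_options='gmcast.listen_addr=tcp://0.0.0.0:4567'

[mysqld2]
datadir=/var/lib/mysql-node2
port=3307
wsrep_provider_options='gmcast.listen_addr=tcp://0.0.0.0:4577'
```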
mariadb TO_SECONDS TO\_SECONDS
===========
Syntax
------
```
TO_SECONDS(expr)
```
Description
-----------
Returns the number of seconds from year 0 to `expr`, or NULL if `expr` is not a valid date or [datetime](../datetime/index).
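The counting can be reproduced outside the server. The following Python sketch (not MariaDB's implementation) mirrors the values in the examples below, using the observation that MariaDB's day count from year 0 exceeds Python's proleptic-Gregorian ordinal by a constant 365 days:

```python
from datetime import datetime

def to_seconds(dt):
    # MariaDB counts days from year 0; Python ordinals count from 0001-01-01.
    # The two day scales differ by a constant 365 days.
    days_from_year_0 = dt.toordinal() + 365
    seconds_of_day = dt.hour * 3600 + dt.minute * 60 + dt.second
    return days_from_year_0 * 86400 + seconds_of_day

print(to_seconds(datetime(2013, 6, 13)))             # 63538300800
print(to_seconds(datetime(2013, 6, 13, 21, 45, 13))) # 63538379113
```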
Examples
--------
```
SELECT TO_SECONDS('2013-06-13');
+--------------------------+
| TO_SECONDS('2013-06-13') |
+--------------------------+
| 63538300800 |
+--------------------------+
SELECT TO_SECONDS('2013-06-13 21:45:13');
+-----------------------------------+
| TO_SECONDS('2013-06-13 21:45:13') |
+-----------------------------------+
| 63538379113 |
+-----------------------------------+
SELECT TO_SECONDS(NOW());
+-------------------+
| TO_SECONDS(NOW()) |
+-------------------+
| 63543530875 |
+-------------------+
SELECT TO_SECONDS(20130513);
+----------------------+
| TO_SECONDS(20130513) |
+----------------------+
| 63535622400 |
+----------------------+
1 row in set (0.00 sec)
SELECT TO_SECONDS(130513);
+--------------------+
| TO_SECONDS(130513) |
+--------------------+
| 63535622400 |
+--------------------+
```
mariadb Connect Memory Usage Connect Memory Usage
====================
When creating a connection, a THD object is created for that connection. This contains all connection information and also caches to speed up queries and avoid frequent malloc() calls.
When creating a new connection, the following malloc() calls are done for the THD.
The figures below reflect the state in [MariaDB 10.6.1](https://mariadb.com/kb/en/mariadb-1061-release-notes/) when compiled without debugging.
Local Thread Memory
-------------------
This is part of `select memory_used from information_schema.processlist`.
| Amount allocated | Where allocated | Description |
| --- | --- | --- |
| 26646 | THD::THD | Allocation of THD object |
| 256 | Statement\_map::Statement\_map(), my\_hash\_init(key\_memory\_prepared\_statement\_map, &st\_hash | Prepared statements |
| 256 | my\_hash\_init(key\_memory\_prepared\_statement\_map, &names\_hash | Names of used prepared statements |
| 128 | wsrep\_wfc(), Opt\_trace\_context(), dynamic\_array() | |
| 1024 | Diagnostics\_area::init(),init\_sql\_alloc(PSI\_INSTRUMENT\_ME, &m\_warn\_root | |
| 120 | Session\_sysvars\_tracker, global\_system\_variables.session\_track\_system\_variables | Tracking of changed session variables |
| 280 | THD::THD,my\_hash\_init(key\_memory\_user\_var\_entry,&user\_vars | |
| 280 | THD::THD,my\_hash\_init(PSI\_INSTRUMENT\_ME, &sequences | Cache of used sequences |
| 1048 | THD::THD, m\_token\_array= my\_malloc(PSI\_INSTRUMENT\_ME, max\_digest\_length | |
| 16416 | CONNECT::create\_thd(), my\_net\_init(), net\_allocate\_new\_packet() | This is for reading data from the connected user |
| 16416 | check\_connection(), thd->packet.alloc() | This is for sending data to connected user |
Objects Stored in THD->memroot During Connect
---------------------------------------------
| Amount allocated | Where allocated | Description |
| --- | --- | --- |
| 72 | send\_server\_handshake\_packet, mpvio->cached\_server\_packet.pkt= | |
| 64 | parse\_client\_handshake\_packet, thd->copy\_with\_error(...db,db\_len) | |
| 32 | parse\_client\_handshake\_packet, sctx->user= | |
| 368 | ACL\_USER::copy(), root= | Allocation of ACL\_USER object |
| 56 | ACL\_USER::copy(), dst->user= safe\_lexcstrdup\_root(root, user) | |
| 56 | ACL\_USER::copy() | Allocation of other connect attributes |
| 56 | ACL\_USER::copy() | |
| 64 | ACL\_USER::copy() | |
| 64 | ACL\_USER::copy() | |
| 32 | mysql\_change\_db() | Store current db in THD |
| 48 | dbname\_cache->insert(db\_name) | Store db name in db name cache |
| 40 | mysql\_change\_db(), my\_register\_filename(db.opt) | Store filename db.opt |
| 8216 | load\_db\_opt(), init\_io\_cache() | Disk cache for reading db.opt |
| 1112 | load\_db\_opts(), put\_dbopts() | Cache default database parameters |
State at First Call to mysql\_execute\_command
----------------------------------------------
```
(gdb) p thd->status_var.local_memory_used
$24 = 75496
(gdb) p thd->status_var.global_memory_used
$25 = 17544
(gdb) p thd->variables.query_prealloc_size
$30 = 24576
(gdb) p thd->variables.trans_prealloc_size
$37 = 4096
```
mariadb ST_ASWKT ST\_ASWKT
=========
A synonym for [ST\_ASTEXT()](../st_astext/index).
mariadb DROP EVENT DROP EVENT
==========
Syntax
------
```
DROP EVENT [IF EXISTS] event_name
```
Description
-----------
This statement drops the [event](../events/index) named `event_name`. The event immediately ceases being active, and is deleted completely from the server.
If the event does not exist, the error `ERROR 1517 (HY000): Unknown event 'event_name'` results. You can override this and cause the statement to generate a `NOTE` for non-existent events instead by using `IF EXISTS`. See `[SHOW WARNINGS](../show-warnings/index)`.
This statement requires the `[EVENT](../grant/index#database-privileges)` privilege. In MySQL 5.1.11 and earlier, an event could be dropped only by its definer, or by a user having the `[SUPER](../grant/index#global-privileges)` privilege.
Examples
--------
```
DROP EVENT myevent3;
```
Using the IF EXISTS clause:
```
DROP EVENT IF EXISTS myevent3;
Query OK, 0 rows affected, 1 warning (0.01 sec)
SHOW WARNINGS;
+-------+------+-------------------------------+
| Level | Code | Message |
+-------+------+-------------------------------+
| Note | 1305 | Event myevent3 does not exist |
+-------+------+-------------------------------+
```
See also
--------
* [Events Overview](../events-overview/index)
* [CREATE EVENT](../create-event/index)
* [SHOW CREATE EVENT](../show-create-event/index)
* [ALTER EVENT](../alter-event/index)
mariadb Information Schema ALL_PLUGINS Table Information Schema ALL\_PLUGINS Table
=====================================
Description
-----------
The [Information Schema](../information_schema/index) `ALL_PLUGINS` table contains information about [server plugins](../mariadb-plugins/index), whether installed or not.
It contains the following columns:
| Column | Description |
| --- | --- |
| `PLUGIN_NAME` | Name of the plugin. |
| `PLUGIN_VERSION` | Version from the plugin's general type descriptor. |
| `PLUGIN_STATUS` | Plugin status, one of `ACTIVE`, `INACTIVE`, `DISABLED`, `DELETED` or `NOT INSTALLED`. |
| `PLUGIN_TYPE` | Plugin type; `STORAGE ENGINE`, `INFORMATION_SCHEMA`, `AUTHENTICATION`, `REPLICATION`, `DAEMON` or `AUDIT`. |
| `PLUGIN_TYPE_VERSION` | Version from the plugin's type-specific descriptor. |
| `PLUGIN_LIBRARY` | Plugin's shared object file name, located in the directory specified by the `[plugin\_dir](../server-system-variables/index#plugin_dir)` system variable, and used by the `[INSTALL PLUGIN](../install-plugin/index)` and `[UNINSTALL PLUGIN](../uninstall-plugin/index)` statements. `NULL` if the plugin is compiled in and cannot be uninstalled. |
| `PLUGIN_LIBRARY_VERSION` | Version from the plugin's API interface. |
| `PLUGIN_AUTHOR` | Author of the plugin. |
| `PLUGIN_DESCRIPTION` | Description. |
| `PLUGIN_LICENSE` | Plugin's licence. |
| `LOAD_OPTION` | How the plugin was loaded; one of `OFF`, `ON`, `FORCE` or `FORCE_PLUS_PERMANENT`. See [Installing Plugins](../plugin-overview/index#installing-plugins). |
| `PLUGIN_MATURITY` | Plugin's maturity level; one of `Unknown`, `Experimental`, `Alpha`, `Beta`, `Gamma`, and `Stable`. |
| `PLUGIN_AUTH_VERSION` | Plugin's version as determined by the plugin author. An example would be '0.99 beta 1'. |
It provides a superset of the information shown by the `[SHOW PLUGINS SONAME](../show-plugins-soname/index)` statement, as well as the `[information\_schema.PLUGINS](../information-schema-plugins-table/index)` table. For specific information about storage engines (a particular type of plugin), see the [Information Schema ENGINES table](../information-schema-engines-table/index) and the `[SHOW ENGINES](../show-engines/index)` statement.
The table is not a standard Information Schema table, and is a MariaDB extension.
Example
-------
```
SELECT * FROM information_schema.all_plugins\G
*************************** 1. row ***************************
PLUGIN_NAME: binlog
PLUGIN_VERSION: 1.0
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: STORAGE ENGINE
PLUGIN_TYPE_VERSION: 100314.0
PLUGIN_LIBRARY: NULL
PLUGIN_LIBRARY_VERSION: NULL
PLUGIN_AUTHOR: MySQL AB
PLUGIN_DESCRIPTION: This is a pseudo storage engine to represent the binlog in a transaction
PLUGIN_LICENSE: GPL
LOAD_OPTION: FORCE
PLUGIN_MATURITY: Stable
PLUGIN_AUTH_VERSION: 1.0
*************************** 2. row ***************************
PLUGIN_NAME: mysql_native_password
PLUGIN_VERSION: 1.0
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: AUTHENTICATION
PLUGIN_TYPE_VERSION: 2.1
PLUGIN_LIBRARY: NULL
PLUGIN_LIBRARY_VERSION: NULL
PLUGIN_AUTHOR: R.J.Silk, Sergei Golubchik
PLUGIN_DESCRIPTION: Native MySQL authentication
PLUGIN_LICENSE: GPL
LOAD_OPTION: FORCE
PLUGIN_MATURITY: Stable
PLUGIN_AUTH_VERSION: 1.0
*************************** 3. row ***************************
PLUGIN_NAME: mysql_old_password
PLUGIN_VERSION: 1.0
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: AUTHENTICATION
PLUGIN_TYPE_VERSION: 2.1
PLUGIN_LIBRARY: NULL
PLUGIN_LIBRARY_VERSION: NULL
PLUGIN_AUTHOR: R.J.Silk, Sergei Golubchik
PLUGIN_DESCRIPTION: Old MySQL-4.0 authentication
PLUGIN_LICENSE: GPL
LOAD_OPTION: FORCE
PLUGIN_MATURITY: Stable
PLUGIN_AUTH_VERSION: 1.0
...
*************************** 104. row ***************************
PLUGIN_NAME: WSREP_MEMBERSHIP
PLUGIN_VERSION: 1.0
PLUGIN_STATUS: NOT INSTALLED
PLUGIN_TYPE: INFORMATION SCHEMA
PLUGIN_TYPE_VERSION: 100314.0
PLUGIN_LIBRARY: wsrep_info.so
PLUGIN_LIBRARY_VERSION: 1.13
PLUGIN_AUTHOR: Nirbhay Choubey
PLUGIN_DESCRIPTION: Information about group members
PLUGIN_LICENSE: GPL
LOAD_OPTION: OFF
PLUGIN_MATURITY: Stable
PLUGIN_AUTH_VERSION: 1.0
*************************** 105. row ***************************
PLUGIN_NAME: WSREP_STATUS
PLUGIN_VERSION: 1.0
PLUGIN_STATUS: NOT INSTALLED
PLUGIN_TYPE: INFORMATION SCHEMA
PLUGIN_TYPE_VERSION: 100314.0
PLUGIN_LIBRARY: wsrep_info.so
PLUGIN_LIBRARY_VERSION: 1.13
PLUGIN_AUTHOR: Nirbhay Choubey
PLUGIN_DESCRIPTION: Group view information
PLUGIN_LICENSE: GPL
LOAD_OPTION: OFF
PLUGIN_MATURITY: Stable
```
mariadb format_path format\_path
============
Syntax
------
```
sys.format_path(path)
```
Description
-----------
`format_path` is a [stored function](../stored-functions/index) available with the [Sys Schema](../sys-schema/index) that, given a path, returns a modified path after replacing subpaths matching the values of various system variables with the variable name.
The system variables that are matched are, in order:
* [datadir](../server-system-variables/index#datadir)
* [tmpdir](../server-system-variables/index#tmpdir)
* [slave\_load\_tmpdir](../replication-and-binary-log-system-variables/index#slave_load_tmpdir)
* [innodb\_data\_home\_dir](../innodb-system-variables/index#innodb_data_home_dir)
* [innodb\_log\_group\_home\_dir](../innodb-system-variables/index#innodb_log_group_home_dir)
* [innodb\_undo\_directory](../innodb-system-variables/index#innodb_undo_directory)
* [basedir](../server-system-variables/index#basedir)
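Conceptually, the function performs a prefix substitution against those variables in order. A minimal Python sketch of the idea (the real function works on server-side system variables; the dictionary here is a stand-in):

```python
def format_path(path, sysvars):
    # sysvars: mapping of system-variable name -> directory value,
    # checked in the order listed above (dicts preserve insertion order).
    for name, value in sysvars.items():
        if value:
            prefix = value.rstrip('/')
            if path.startswith(prefix + '/'):
                # Replace the matching subpath with the variable name.
                return '@@' + name + path[len(prefix):]
    return path

print(format_path('/home/ian/sandboxes/msb_10_8_2/tmp/testdb.ibd',
                  {'tmpdir': '/home/ian/sandboxes/msb_10_8_2/tmp'}))
# @@tmpdir/testdb.ibd
```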
Examples
--------
```
SELECT @@tmpdir;
+------------------------------------+
| @@tmpdir |
+------------------------------------+
| /home/ian/sandboxes/msb_10_8_2/tmp |
+------------------------------------+
SELECT sys.format_path('/home/ian/sandboxes/msb_10_8_2/tmp/testdb.ibd');
+------------------------------------------------------------------+
| sys.format_path('/home/ian/sandboxes/msb_10_8_2/tmp/testdb.ibd') |
+------------------------------------------------------------------+
| @@tmpdir/testdb.ibd |
+------------------------------------------------------------------+
```
mariadb RESET RESET
=====
Syntax
------
```
RESET reset_option [, reset_option] ...
```
Description
-----------
The `RESET` statement is used to clear the state of various server operations. You must have the `[RELOAD privilege](../grant/index)` to execute `RESET`.
`RESET` acts as a stronger version of the [FLUSH](../flush/index) statement.
The different `RESET` options are:
| Option | Description |
| --- | --- |
| [SLAVE ["connection\_name"] [ALL](../reset-slave/index)] | Deletes all [relay logs](../relay-log/index) from the slave and resets the replication position in the master [binary log](../binary-log/index). |
| [MASTER](../reset-master/index) | Deletes all old binary logs, makes the binary index file ([--log-bin-index](../mysqld-options-full-list/index)) empty and creates a new binary log file. This is useful when you want to reset the master to an initial state. If you just want to delete old, unused binary logs, you should use the [PURGE BINARY LOGS](../sql-commands-purge-logs/index) command. |
| QUERY CACHE | Removes all queries from [the query cache](../the-query-cache/index). See also [FLUSH QUERY CACHE](../flush-query-cache/index). |
mariadb osmdb06.sql osmdb06.sql
===========
Below is the schema described in the [OpenStreetMap Dataset Use](../openstreetmap-dataset-use/index) article. To use, copy everything in the box below into a file called '`osmdb06.sql`', then continue with the instructions.
```
-- phpMyAdmin SQL Dump
-- version 2.11.9.3
-- http://www.phpmyadmin.net
--
-- Host: mysql.leonenko.info
-- Generation time: Mar 16, 2009, 15:12
-- Server version: 5.0.67
-- PHP version: 5.2.6
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
--
-- Database: `osmapper_belarus`
--
-- --------------------------------------------------------
--
-- Table structure for table `changesets`
--
DROP TABLE IF EXISTS `changesets`;
CREATE TABLE IF NOT EXISTS `changesets` (
`id` bigint(20) NOT NULL auto_increment,
`user_id` bigint(20) NOT NULL,
`created_at` datetime NOT NULL,
`min_lat` int(11) default NULL,
`max_lat` int(11) default NULL,
`min_lon` int(11) default NULL,
`max_lon` int(11) default NULL,
`closed_at` datetime NOT NULL,
`num_changes` int(11) NOT NULL default '0',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=20103 ;
-- --------------------------------------------------------
--
-- Table structure for table `changeset_tags`
--
DROP TABLE IF EXISTS `changeset_tags`;
CREATE TABLE IF NOT EXISTS `changeset_tags` (
`id` bigint(64) NOT NULL,
`k` varchar(255) NOT NULL default '',
`v` varchar(255) NOT NULL default '',
KEY `changeset_tags_id_idx` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `current_nodes`
--
DROP TABLE IF EXISTS `current_nodes`;
CREATE TABLE IF NOT EXISTS `current_nodes` (
`id` bigint(64) NOT NULL auto_increment,
`latitude` int(11) NOT NULL,
`longitude` int(11) NOT NULL,
`changeset_id` bigint(20) NOT NULL,
`visible` tinyint(1) NOT NULL,
`timestamp` datetime NOT NULL,
`tile` int(10) unsigned NOT NULL,
`version` bigint(20) NOT NULL,
PRIMARY KEY (`id`),
KEY `current_nodes_timestamp_idx` (`timestamp`),
KEY `current_nodes_tile_idx` (`tile`),
KEY `changeset_id` (`changeset_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=348816842 ;
-- --------------------------------------------------------
--
-- Table structure for table `current_node_tags`
--
DROP TABLE IF EXISTS `current_node_tags`;
CREATE TABLE IF NOT EXISTS `current_node_tags` (
`id` bigint(64) NOT NULL,
`k` varchar(255) NOT NULL default '',
`v` varchar(255) NOT NULL default '',
PRIMARY KEY (`id`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `current_relations`
--
DROP TABLE IF EXISTS `current_relations`;
CREATE TABLE IF NOT EXISTS `current_relations` (
`id` bigint(64) NOT NULL auto_increment,
`changeset_id` bigint(20) NOT NULL,
`timestamp` datetime NOT NULL,
`visible` tinyint(1) NOT NULL,
`version` bigint(20) NOT NULL,
PRIMARY KEY (`id`),
KEY `current_relations_timestamp_idx` (`timestamp`),
KEY `changeset_id` (`changeset_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=80283 ;
-- --------------------------------------------------------
--
-- Table structure for table `current_relation_members`
--
DROP TABLE IF EXISTS `current_relation_members`;
CREATE TABLE IF NOT EXISTS `current_relation_members` (
`id` bigint(64) NOT NULL,
`member_type` enum('node','way','relation') NOT NULL default 'node',
`member_id` bigint(11) NOT NULL,
`member_role` varchar(255) NOT NULL default '',
`sequence_id` int(11) NOT NULL default '0',
PRIMARY KEY (`id`,`member_type`,`member_id`,`member_role`,`sequence_id`),
KEY `current_relation_members_member_idx` (`member_type`,`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `current_relation_tags`
--
DROP TABLE IF EXISTS `current_relation_tags`;
CREATE TABLE IF NOT EXISTS `current_relation_tags` (
`id` bigint(64) NOT NULL,
`k` varchar(255) NOT NULL default '',
`v` varchar(255) NOT NULL default '',
PRIMARY KEY (`id`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `current_ways`
--
DROP TABLE IF EXISTS `current_ways`;
CREATE TABLE IF NOT EXISTS `current_ways` (
`id` bigint(64) NOT NULL auto_increment,
`changeset_id` bigint(20) NOT NULL,
`timestamp` datetime NOT NULL,
`visible` tinyint(1) NOT NULL,
`version` bigint(20) NOT NULL,
PRIMARY KEY (`id`),
KEY `current_ways_timestamp_idx` (`timestamp`),
KEY `changeset_id` (`changeset_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=31336923 ;
-- --------------------------------------------------------
--
-- Table structure for table `current_way_nodes`
--
DROP TABLE IF EXISTS `current_way_nodes`;
CREATE TABLE IF NOT EXISTS `current_way_nodes` (
`id` bigint(64) NOT NULL,
`node_id` bigint(64) NOT NULL,
`sequence_id` bigint(11) NOT NULL,
PRIMARY KEY (`id`,`sequence_id`),
KEY `current_way_nodes_node_idx` (`node_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `current_way_tags`
--
DROP TABLE IF EXISTS `current_way_tags`;
CREATE TABLE IF NOT EXISTS `current_way_tags` (
`id` bigint(64) NOT NULL,
`k` varchar(255) NOT NULL default '',
`v` varchar(255) NOT NULL default '',
PRIMARY KEY (`id`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `diary_comments`
--
DROP TABLE IF EXISTS `diary_comments`;
CREATE TABLE IF NOT EXISTS `diary_comments` (
`id` bigint(20) NOT NULL auto_increment,
`diary_entry_id` bigint(20) NOT NULL,
`user_id` bigint(20) NOT NULL,
`body` text NOT NULL,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `diary_comments_entry_id_idx` (`diary_entry_id`,`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `diary_entries`
--
DROP TABLE IF EXISTS `diary_entries`;
CREATE TABLE IF NOT EXISTS `diary_entries` (
`id` bigint(20) NOT NULL auto_increment,
`user_id` bigint(20) NOT NULL,
`title` varchar(255) NOT NULL,
`body` text NOT NULL,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`latitude` double default NULL,
`longitude` double default NULL,
`language` varchar(3) default NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `friends`
--
DROP TABLE IF EXISTS `friends`;
CREATE TABLE IF NOT EXISTS `friends` (
`id` bigint(20) NOT NULL auto_increment,
`user_id` bigint(20) NOT NULL,
`friend_user_id` bigint(20) NOT NULL,
PRIMARY KEY (`id`),
KEY `user_id_idx` (`friend_user_id`),
KEY `friends_user_id_idx` (`user_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `gps_points`
--
DROP TABLE IF EXISTS `gps_points`;
CREATE TABLE IF NOT EXISTS `gps_points` (
`altitude` float default NULL,
`trackid` int(11) NOT NULL,
`latitude` int(11) NOT NULL,
`longitude` int(11) NOT NULL,
`gpx_id` bigint(64) NOT NULL,
`timestamp` datetime default NULL,
`tile` int(10) unsigned NOT NULL,
KEY `points_gpxid_idx` (`gpx_id`),
KEY `points_tile_idx` (`tile`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `gpx_files`
--
DROP TABLE IF EXISTS `gpx_files`;
CREATE TABLE IF NOT EXISTS `gpx_files` (
`id` bigint(64) NOT NULL auto_increment,
`user_id` bigint(20) NOT NULL,
`visible` tinyint(1) NOT NULL default '1',
`name` varchar(255) NOT NULL default '',
`size` bigint(20) default NULL,
`latitude` double default NULL,
`longitude` double default NULL,
`timestamp` datetime NOT NULL,
`public` tinyint(1) NOT NULL default '1',
`description` varchar(255) NOT NULL default '',
`inserted` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `gpx_files_timestamp_idx` (`timestamp`),
KEY `gpx_files_visible_public_idx` (`visible`,`public`),
KEY `gpx_files_user_id_idx` (`user_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `gpx_file_tags`
--
DROP TABLE IF EXISTS `gpx_file_tags`;
CREATE TABLE IF NOT EXISTS `gpx_file_tags` (
`gpx_id` bigint(64) NOT NULL default '0',
`tag` varchar(255) NOT NULL,
`id` bigint(20) NOT NULL auto_increment,
PRIMARY KEY (`id`),
KEY `gpx_file_tags_gpxid_idx` (`gpx_id`),
KEY `gpx_file_tags_tag_idx` (`tag`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `messages`
--
DROP TABLE IF EXISTS `messages`;
CREATE TABLE IF NOT EXISTS `messages` (
`id` bigint(20) NOT NULL auto_increment,
`from_user_id` bigint(20) NOT NULL,
`title` varchar(255) NOT NULL,
`body` text NOT NULL,
`sent_on` datetime NOT NULL,
`message_read` tinyint(1) NOT NULL default '0',
`to_user_id` bigint(20) NOT NULL,
PRIMARY KEY (`id`),
KEY `messages_to_user_id_idx` (`to_user_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `nodes`
--
DROP TABLE IF EXISTS `nodes`;
CREATE TABLE IF NOT EXISTS `nodes` (
`id` bigint(64) NOT NULL,
`latitude` int(11) NOT NULL,
`longitude` int(11) NOT NULL,
`changeset_id` bigint(20) NOT NULL,
`visible` tinyint(1) NOT NULL,
`timestamp` datetime NOT NULL,
`tile` int(10) unsigned NOT NULL,
`version` bigint(20) NOT NULL,
PRIMARY KEY (`id`,`version`),
KEY `nodes_timestamp_idx` (`timestamp`),
KEY `nodes_tile_idx` (`tile`),
KEY `changeset_id` (`changeset_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `node_tags`
--
DROP TABLE IF EXISTS `node_tags`;
CREATE TABLE IF NOT EXISTS `node_tags` (
`id` bigint(64) NOT NULL,
`version` bigint(20) NOT NULL,
`k` varchar(255) NOT NULL default '',
`v` varchar(255) NOT NULL default '',
PRIMARY KEY (`id`,`version`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `relations`
--
DROP TABLE IF EXISTS `relations`;
CREATE TABLE IF NOT EXISTS `relations` (
`id` bigint(64) NOT NULL default '0',
`changeset_id` bigint(20) NOT NULL,
`timestamp` datetime NOT NULL,
`version` bigint(20) NOT NULL,
`visible` tinyint(1) NOT NULL default '1',
PRIMARY KEY (`id`,`version`),
KEY `relations_timestamp_idx` (`timestamp`),
KEY `changeset_id` (`changeset_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `relation_members`
--
DROP TABLE IF EXISTS `relation_members`;
CREATE TABLE IF NOT EXISTS `relation_members` (
`id` bigint(64) NOT NULL default '0',
`member_type` enum('node','way','relation') NOT NULL default 'node',
`member_id` bigint(11) NOT NULL,
`member_role` varchar(255) NOT NULL default '',
`version` bigint(20) NOT NULL default '0',
`sequence_id` int(11) NOT NULL default '0',
PRIMARY KEY (`id`,`version`,`member_type`,`member_id`,`member_role`,`sequence_id`),
KEY `relation_members_member_idx` (`member_type`,`member_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `relation_tags`
--
DROP TABLE IF EXISTS `relation_tags`;
CREATE TABLE IF NOT EXISTS `relation_tags` (
`id` bigint(64) NOT NULL default '0',
`k` varchar(255) NOT NULL default '',
`v` varchar(255) NOT NULL default '',
`version` bigint(20) NOT NULL,
PRIMARY KEY (`id`,`version`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `schema_migrations`
--
DROP TABLE IF EXISTS `schema_migrations`;
CREATE TABLE IF NOT EXISTS `schema_migrations` (
`version` varchar(255) NOT NULL,
UNIQUE KEY `unique_schema_migrations` (`version`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `sessions`
--
DROP TABLE IF EXISTS `sessions`;
CREATE TABLE IF NOT EXISTS `sessions` (
`id` int(11) NOT NULL auto_increment,
`session_id` varchar(255) default NULL,
`data` text,
`created_at` datetime default NULL,
`updated_at` datetime default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `sessions_session_id_idx` (`session_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=2 ;
-- --------------------------------------------------------
--
-- Table structure for table `users`
--
DROP TABLE IF EXISTS `users`;
CREATE TABLE IF NOT EXISTS `users` (
`email` varchar(255) NOT NULL,
`id` bigint(20) NOT NULL auto_increment,
`active` int(11) NOT NULL default '0',
`pass_crypt` varchar(255) NOT NULL,
`creation_time` datetime NOT NULL,
`display_name` varchar(255) NOT NULL default '',
`data_public` tinyint(1) NOT NULL default '0',
`description` text NOT NULL,
`home_lat` double default NULL,
`home_lon` double default NULL,
`home_zoom` smallint(6) default '3',
`nearby` int(11) default '50',
`pass_salt` varchar(255) default NULL,
`image` text,
`administrator` tinyint(1) NOT NULL default '0',
`email_valid` tinyint(1) NOT NULL default '0',
`new_email` varchar(255) default NULL,
`visible` tinyint(1) NOT NULL default '1',
`creation_ip` varchar(255) default NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `users_email_idx` (`email`),
UNIQUE KEY `users_display_name_idx` (`display_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=101442 ;
-- --------------------------------------------------------
--
-- Table structure for table `user_preferences`
--
DROP TABLE IF EXISTS `user_preferences`;
CREATE TABLE IF NOT EXISTS `user_preferences` (
`user_id` bigint(20) NOT NULL,
`k` varchar(255) NOT NULL,
`v` varchar(255) NOT NULL,
PRIMARY KEY (`user_id`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `user_tokens`
--
DROP TABLE IF EXISTS `user_tokens`;
CREATE TABLE IF NOT EXISTS `user_tokens` (
`id` bigint(20) NOT NULL auto_increment,
`user_id` bigint(20) NOT NULL,
`token` varchar(255) NOT NULL,
`expiry` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `user_tokens_token_idx` (`token`),
KEY `user_tokens_user_id_idx` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
-- --------------------------------------------------------
--
-- Table structure for table `ways`
--
DROP TABLE IF EXISTS `ways`;
CREATE TABLE IF NOT EXISTS `ways` (
`id` bigint(64) NOT NULL default '0',
`changeset_id` bigint(20) NOT NULL,
`timestamp` datetime NOT NULL,
`version` bigint(20) NOT NULL,
`visible` tinyint(1) NOT NULL default '1',
PRIMARY KEY (`id`,`version`),
KEY `ways_timestamp_idx` (`timestamp`),
KEY `changeset_id` (`changeset_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `way_nodes`
--
DROP TABLE IF EXISTS `way_nodes`;
CREATE TABLE IF NOT EXISTS `way_nodes` (
`id` bigint(64) NOT NULL,
`node_id` bigint(64) NOT NULL,
`version` bigint(20) NOT NULL,
`sequence_id` bigint(11) NOT NULL,
PRIMARY KEY (`id`,`version`,`sequence_id`),
KEY `way_nodes_node_idx` (`node_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- --------------------------------------------------------
--
-- Table structure for table `way_tags`
--
DROP TABLE IF EXISTS `way_tags`;
CREATE TABLE IF NOT EXISTS `way_tags` (
`id` bigint(64) NOT NULL default '0',
`k` varchar(255) NOT NULL,
`v` varchar(255) NOT NULL,
`version` bigint(20) NOT NULL,
PRIMARY KEY (`id`,`version`,`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
--
-- Constraints for exported tables
--
--
-- Constraints for table `current_nodes`
--
ALTER TABLE `current_nodes`
ADD CONSTRAINT `current_nodes_ibfk_1` FOREIGN KEY (`changeset_id`) REFERENCES `changesets` (`id`);
--
-- Constraints for table `current_node_tags`
--
ALTER TABLE `current_node_tags`
ADD CONSTRAINT `current_node_tags_ibfk_1` FOREIGN KEY (`id`) REFERENCES `current_nodes` (`id`);
--
-- Constraints for table `current_relations`
--
ALTER TABLE `current_relations`
ADD CONSTRAINT `current_relations_ibfk_1` FOREIGN KEY (`changeset_id`) REFERENCES `changesets` (`id`);
--
-- Constraints for table `current_relation_members`
--
ALTER TABLE `current_relation_members`
ADD CONSTRAINT `current_relation_members_ibfk_1` FOREIGN KEY (`id`) REFERENCES `current_relations` (`id`);
--
-- Constraints for table `current_relation_tags`
--
ALTER TABLE `current_relation_tags`
ADD CONSTRAINT `current_relation_tags_ibfk_1` FOREIGN KEY (`id`) REFERENCES `current_relations` (`id`);
--
-- Constraints for table `current_ways`
--
ALTER TABLE `current_ways`
ADD CONSTRAINT `current_ways_ibfk_1` FOREIGN KEY (`changeset_id`) REFERENCES `changesets` (`id`);
--
-- Constraints for table `current_way_nodes`
--
ALTER TABLE `current_way_nodes`
ADD CONSTRAINT `current_way_nodes_ibfk_2` FOREIGN KEY (`node_id`) REFERENCES `current_nodes` (`id`),
ADD CONSTRAINT `current_way_nodes_ibfk_1` FOREIGN KEY (`id`) REFERENCES `current_ways` (`id`);
--
-- Constraints for table `current_way_tags`
--
ALTER TABLE `current_way_tags`
ADD CONSTRAINT `current_way_tags_ibfk_1` FOREIGN KEY (`id`) REFERENCES `current_ways` (`id`);
--
-- Constraints for table `nodes`
--
ALTER TABLE `nodes`
ADD CONSTRAINT `nodes_ibfk_1` FOREIGN KEY (`changeset_id`) REFERENCES `changesets` (`id`);
--
-- Constraints for table `node_tags`
--
ALTER TABLE `node_tags`
ADD CONSTRAINT `node_tags_ibfk_1` FOREIGN KEY (`id`, `version`) REFERENCES `nodes` (`id`, `version`);
--
-- Constraints for table `relations`
--
ALTER TABLE `relations`
ADD CONSTRAINT `relations_ibfk_1` FOREIGN KEY (`changeset_id`) REFERENCES `changesets` (`id`);
--
-- Constraints for table `relation_members`
--
ALTER TABLE `relation_members`
ADD CONSTRAINT `relation_members_ibfk_1` FOREIGN KEY (`id`, `version`) REFERENCES `relations` (`id`, `version`);
--
-- Constraints for table `relation_tags`
--
ALTER TABLE `relation_tags`
ADD CONSTRAINT `relation_tags_ibfk_1` FOREIGN KEY (`id`, `version`) REFERENCES `relations` (`id`, `version`);
--
-- Constraints for table `ways`
--
ALTER TABLE `ways`
ADD CONSTRAINT `ways_ibfk_1` FOREIGN KEY (`changeset_id`) REFERENCES `changesets` (`id`);
--
-- Constraints for table `way_nodes`
--
ALTER TABLE `way_nodes`
ADD CONSTRAINT `way_nodes_ibfk_1` FOREIGN KEY (`id`, `version`) REFERENCES `ways` (`id`, `version`);
--
-- Constraints for table `way_tags`
--
ALTER TABLE `way_tags`
ADD CONSTRAINT `way_tags_ibfk_1` FOREIGN KEY (`id`, `version`) REFERENCES `ways` (`id`, `version`);
```
libmysqld
==========
Articles about libmysqld.so, the embedded MariaDB server.
| Title | Description |
| --- | --- |
| [Embedded MariaDB Interface](../embedded-mariadb-interface/index) | Embedded MariaDB interface. |
| [mysqltest and mysqltest-embedded](../mysqltest-and-mysqltest-embedded/index) | Runs a test case against a MariaDB server, optionally comparing the output with a result file. |
Changing a Replica to Become the Primary
========================================
The terms *master* and *slave* have historically been used in replication, but the terms *primary* and *replica* are now preferred. The old terms are still used in parts of the documentation, and in MariaDB commands, although [MariaDB 10.5](../what-is-mariadb-105/index) has begun the process of renaming. The documentation process is ongoing. See [MDEV-18777](https://jira.mariadb.org/browse/MDEV-18777) to follow progress on this effort.
This article describes how to change a replica to become a primary and optionally to set the old primary as a replica for the new primary.
A typical scenario of when this is useful is if you have set up a new version of MariaDB as a replica, for example for testing, and want to upgrade your primary to the new version.
In MariaDB replication, a replica should run the same or a newer version than the primary. Because of this, one should first upgrade all replicas to the latest version before changing a replica to be a primary. In some cases a replica can run an older version than the primary, as long as no SQL commands that the replica doesn't understand are executed on the primary. This is, however, not guaranteed between all major MariaDB versions.
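A quick way to verify this before promoting is to compare the server versions directly:

```
-- Run on both the primary and the replica; the replica should
-- report a version equal to or newer than the primary's.
SELECT VERSION();
```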
Note that in the examples below, `[connection_name]` is used as the [name of the connection](../multi-source-replication/index). If you are not using named connections you can ignore this.
### Stopping the Original Primary
First one needs to take down the original primary in such a way that the replica has all information on the primary.
If you are using [Semisynchronous Replication](../semisynchronous-replication/index) you can just stop the server with the [SHUTDOWN](../shutdown/index) command as the replicas should be automatically up to date.
If you are using [MariaDB MaxScale proxy](../maxscale/index), then you [can use MaxScale](https://mariadb.com/resources/blog/mariadb-maxscale-2-2-introducing-failover-switchover-and-automatic-rejoin) to handle the whole process of taking down the primary and replacing it with one of the replicas.
If neither of the above is true, you have to do this step manually:
#### Manually Take Down the Primary
First we have to set the primary to read only to ensure that there are no new updates on the primary:
```
FLUSH TABLES WITH READ LOCK;
```
Note that you should not disconnect this session as otherwise the read lock will disappear and you have to start from the beginning.
Then you should check the current position of the primary:
```
SHOW MASTER STATUS;
+--------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+--------------------+----------+--------------+------------------+
| mariadb-bin.000003 | 343 | | |
+--------------------+----------+--------------+------------------+
SELECT @@global.gtid_binlog_pos;
+--------------------------+
| @@global.gtid_binlog_pos |
+--------------------------+
| 0-1-2 |
+--------------------------+
```
And wait until you have the same position on the replica (the following is what should be expected on the replica):
```
SHOW SLAVE [connection_name] STATUS\G
...
        Master_Log_File: narttu-bin.000003
    Read_Master_Log_Pos: 343
    Exec_Master_Log_Pos: 343
...
            Gtid_IO_Pos: 0-1-2
```
The most important values to watch are `Master_Log_File` and `Exec_Master_Log_Pos`: when these match the primary, all transactions have been committed on the replica.
Note that `Gtid_IO_Pos` on the replica can contain many different positions separated by ',' if the replica has been connected to many different primaries. What is important is that all the sequences that are on the primary are also on the replica.
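If you are replicating with GTIDs, the replica can also do the waiting for you via the `MASTER_GTID_WAIT()` function; a minimal sketch (the GTID position `0-1-2` is just the value from the example above):

```
-- On the replica: block until the primary's gtid_binlog_pos
-- has been applied, or give up after 60 seconds
-- (returns 0 on success, -1 on timeout).
SELECT MASTER_GTID_WAIT('0-1-2', 60);
```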
When the replica is up to date, you can then take the **primary** down. This should be done in the same connection where you executed [FLUSH TABLES WITH READ LOCK](../flush/index).
```
SHUTDOWN;
```
### Preparing the Replica to be a Primary
Stop all old connections to the old primary (or primaries) and reset **read only mode** if you had it enabled. You also want to save the values of [SHOW MASTER STATUS](../show-master-status/index) and `gtid_binlog_pos`, as you may need these to set up new replicas.
```
STOP ALL SLAVES;
RESET SLAVE ALL;
SHOW MASTER STATUS;
SELECT @@global.gtid_binlog_pos;
SET @@global.read_only=0;
```
### Reconnect Other Replicas to the New Primary
On the other replicas, you have to point them to the new primary (the replica you promoted to a primary).
```
STOP SLAVE [connection_name];
CHANGE MASTER [connection_name] TO MASTER_HOST="new_master_name",
MASTER_PORT=3306, MASTER_USER='root', MASTER_USE_GTID=current_pos,
MASTER_LOG_FILE="XXX", MASTER_LOG_POS=XXX;
START SLAVE;
```
The `XXX` values for `MASTER_LOG_FILE` and `MASTER_LOG_POS` should be the values you got from the `SHOW MASTER STATUS` command you did when you finished setting up the replica.
### Changing the Old Primary to be a Replica
Now you can upgrade the old primary to a newer version of MariaDB and then follow the same procedure as above to connect it as a replica.
When starting the old primary, it's good to start the `mysqld` executable with the `--skip-slave-start` and `--read-only` options to ensure that no old replica configuration can cause any conflicts.
For the same reason it's also good to execute the following commands on the old primary (the same as for the other replicas, but with some extra safety measures). The `read_only` option below ensures that old applications don't accidentally try to update the old primary. It only affects normal client connections to the replica, not changes coming from the new primary.
```
SET @@global.read_only=1;
STOP ALL SLAVES;
RESET MASTER;
RESET SLAVE ALL;
CHANGE MASTER [connection_name] TO MASTER_HOST="new_master_name",
MASTER_PORT=3306, MASTER_USER='root', MASTER_USE_GTID=current_pos,
MASTER_LOG_FILE="XXX", MASTER_LOG_POS=XXX;
START SLAVE;
```
### Moving Applications to Use New Primary
You should now point your applications to use the new primary. If you are using the [MariaDB MaxScale proxy](../maxscale/index), you don't have to do this step, as MaxScale will take care of sending write requests to the new primary.
### See Also
* [CHANGE MASTER TO](../change-master-to/index) command
* [MaxScale Blog about using Switchover to swap a primary and replica](https://mariadb.com/resources/blog/mariadb-maxscale-2-2-introducing-failover-switchover-and-automatic-rejoin)
* [Percona blog about how to upgrade replica to primary](https://www.percona.com/blog/2015/12/01/upgrade-master-server-minimal-downtime)
MyRocks Column Families
=======================
[MyRocks](../myrocks/index) stores data in column families. These are similar to tablespaces. By default, the data is stored in the `default` column family.
One can specify which column family the data goes to by using index comments:
```
INDEX index_name(col1, col2, ...) COMMENT 'column_family_name'
```
If the column family name starts with `rev:`, the column family is reverse-ordered.
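As a sketch (the table and column family names here are illustrative), a table can spread its indexes over several column families, one of them reverse-ordered:

```
CREATE TABLE t1 (
  id INT,
  a INT,
  b INT,
  PRIMARY KEY (id) COMMENT 'cf_pk',
  INDEX idx_a (a) COMMENT 'cf_a',
  INDEX idx_b (b) COMMENT 'rev:cf_b'
) ENGINE=ROCKSDB;
```

Here the primary key is stored in the `cf_pk` column family, the index on `a` in `cf_a`, and the index on `b` reverse-ordered in `cf_b`.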
Reasons for Column Families
---------------------------
Storage parameters like
* Bloom filter settings
* Compression settings
* Whether the data is stored in reverse order
are specified on a per-column family basis.
Creating a Column Family
------------------------
When creating a table or index, you can specify the name of the column family for it. If the column family doesn't exist, it will be automatically created.
Dropping a Column Family
------------------------
There is currently no way to drop a column family. RocksDB supports this internally but MyRocks doesn't provide any way to do it.
Setting Column Family Parameters
--------------------------------
Use these variables:
* [rocksdb\_default\_cf\_options](../myrocks-system-variables/index#rocksdb_default_cf_options) - a my.cnf parameter specifying default options for all column families.
* [rocksdb\_override\_cf\_options](../myrocks-system-variables/index#rocksdb_override_cf_options) - a my.cnf parameter specifying per-column family option overrides.
* [rocksdb\_update\_cf\_options](../myrocks-system-variables/index#rocksdb_update_cf_options) - a dynamically-settable variable which allows one to change parameters online. Not all parameters can be changed.
### rocksdb\_override\_cf\_options
This parameter allows one to override column family options for specific column families. Here is an example of how to set option1=value1 and option2=value2 for column family cf1, and option3=value3 for column family cf2:
```
rocksdb_override_cf_options='cf1={option1=value1;option2=value2};cf2={option3=value3}'
```
One can check the contents of `INFORMATION_SCHEMA.ROCKSDB_CF_OPTIONS` to see what options are available.
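For example, a query along these lines (the column family name `cf1` is illustrative) shows the options in effect for a single column family:

```
SELECT cf_name, option_type, value
FROM INFORMATION_SCHEMA.ROCKSDB_CF_OPTIONS
WHERE cf_name = 'cf1';
```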
Options that are frequently configured are:
* Data compression. See [myrocks-and-data-compression](../myrocks-and-data-compression/index).
* Bloom Filters. See [myrocks-and-bloom-filters](../myrocks-and-bloom-filters/index).
Examining Column Family Parameters
----------------------------------
See the [INFORMATION\_SCHEMA.ROCKSDB\_CF\_OPTIONS](../information-schema-rocksdb_cf_options-table/index) table.
Table Statements
=================
Articles about creating, modifying, and maintaining tables in MariaDB.
| Title | Description |
| --- | --- |
| [ALTER](../alter/index) | The various ALTER statements in MariaDB. |
| [ANALYZE TABLE](../analyze-table/index) | Store key distributions for a table. |
| [CHECK TABLE](../check-table/index) | Check table for errors. |
| [CHECK VIEW](../check-view/index) | Check whether the view algorithm is correct. |
| [CHECKSUM TABLE](../checksum-table/index) | Report a table checksum. |
| [CREATE TABLE](../create-table/index) | Creates a new table. |
| [DELETE](../delete/index) | Delete rows from one or more tables. |
| [DROP TABLE](../drop-table/index) | Removes definition and data from one or more tables. |
| [Installing System Tables (mysql\_install\_db)](../installing-system-tables-mysql_install_db/index) | Using mysql\_install\_db to create the system tables in the 'mysql' database directory. |
| [mysqlcheck](../mysqlcheck/index) | Tool for checking, repairing, analyzing and optimizing tables. |
| [mysql\_upgrade](../mysql_upgrade/index) | Update to the latest version. |
| [OPTIMIZE TABLE](../optimize-table/index) | Reclaim unused space and defragment data. |
| [RENAME TABLE](../rename-table/index) | Change a table's name. |
| [REPAIR TABLE](../repair-table/index) | Repairs a table, if the storage engine supports this statement. |
| [REPAIR VIEW](../repair-view/index) | Fix view if the algorithms are swapped. |
| [REPLACE](../replace/index) | Equivalent to DELETE + INSERT, or just an INSERT if no rows are returned. |
| [SHOW COLUMNS](../show-columns/index) | Column information. |
| [SHOW CREATE TABLE](../show-create-table/index) | Shows the CREATE TABLE statement that created the table. |
| [SHOW INDEX](../show-index/index) | Information about table indexes. |
| [TRUNCATE TABLE](../truncate-table/index) | DROP and re-CREATE a table. |
| [UPDATE](../update/index) | Modify rows in one or more tables. |
| [Obsolete Table Commands](../obsolete-table-commands/index) | Table commands that have been removed from MariaDB |
| [IGNORE](../ignore/index) | Suppress errors while trying to violate a UNIQUE constraint. |
| [System-Versioned Tables](../system-versioned-tables/index) | System-versioned tables record the history of all changes to table data. |
CONNECT PIVOT Table Type
========================
This table type can be used to transform the result of another table or view (called the source table) into a pivoted table along “pivot” and “facts” columns. A pivot table is a great reporting tool that sorts and sums (by default) independently of the original data layout in the source table.
For example, let us suppose you have the following “Expenses” table:
| Who | Week | What | Amount |
| --- | --- | --- | --- |
| Joe | 3 | Beer | 18.00 |
| Beth | 4 | Food | 17.00 |
| Janet | 5 | Beer | 14.00 |
| Joe | 3 | Food | 12.00 |
| Joe | 4 | Beer | 19.00 |
| Janet | 5 | Car | 12.00 |
| Joe | 3 | Food | 19.00 |
| Beth | 4 | Beer | 15.00 |
| Janet | 5 | Beer | 19.00 |
| Joe | 3 | Car | 20.00 |
| Joe | 4 | Beer | 16.00 |
| Beth | 5 | Food | 12.00 |
| Beth | 3 | Beer | 16.00 |
| Joe | 4 | Food | 17.00 |
| Joe | 5 | Beer | 14.00 |
| Janet | 3 | Car | 19.00 |
| Joe | 4 | Food | 17.00 |
| Beth | 5 | Beer | 20.00 |
| Janet | 3 | Food | 18.00 |
| Joe | 4 | Beer | 14.00 |
| Joe | 5 | Food | 12.00 |
| Janet | 3 | Beer | 18.00 |
| Janet | 4 | Car | 17.00 |
| Janet | 5 | Food | 12.00 |
Pivoting the table contents using the 'Who' and 'Week' fields for the left columns, and the 'What' field for the top heading and summing the 'Amount' fields for each cell in the new table, gives the following desired result:
| Who | Week | Beer | Car | Food |
| --- | --- | --- | --- | --- |
| Beth | 3 | 16.00 | 0.00 | 0.00 |
| Beth | 4 | 15.00 | 0.00 | 17.00 |
| Beth | 5 | 20.00 | 0.00 | 12.00 |
| Janet | 3 | 18.00 | 19.00 | 18.00 |
| Janet | 4 | 0.00 | 17.00 | 0.00 |
| Janet | 5 | 33.00 | 12.00 | 12.00 |
| Joe | 3 | 18.00 | 20.00 | 31.00 |
| Joe | 4 | 49.00 | 0.00 | 34.00 |
| Joe | 5 | 14.00 | 0.00 | 12.00 |
Note that SQL enables you to get the same result presented differently by using the “group by” clause, namely:
```
select who, week, what, sum(amount) from expenses
group by who, week, what;
```
However, there is no way to get the pivoted layout shown above using SQL alone. Even using embedded SQL programming, in some DBMSs this is not quite simple or automatic.
The Pivot table type of CONNECT makes doing this much simpler.
Using the PIVOT Tables Type
---------------------------
To get the result shown in the example above, just define it as a new table with the statement:
```
create table pivex
engine=connect table_type=pivot tabname=expenses;
```
You can now use it as any other table, for instance to display the result shown above, just say:
```
select * from pivex;
```
The CONNECT implementation of the PIVOT table type does much of the work required to transform the source table:
1. Finding the “Facts” column, by default the last column of the source table. Automatic detection of the “Facts” and “Pivot” columns works only for table-based pivot tables; for view or srcdef based pivot tables they must be explicitly specified.
2. Finding the “Pivot” column, by default the last remaining column.
3. Choosing the aggregate function to use, “SUM” by default.
4. Constructing and executing the “Group By” on the “Facts” column, getting its result in memory.
5. Getting all the distinct values in the “Pivot” column and defining a “Data” column for each.
6. Spreading the result of the intermediate memory table into the final table.
The source table “Pivot” column must not be nullable (there is no such thing as a “null” column). The creation will be refused even if this nullable column does not actually contain null values.
If a different result is desired, Create Table options are available to change the defaults used by Pivot. For instance if we want to display the average expense for each person and product, spread in columns for each week, use the following statement:
```
create table pivex2
engine=connect table_type=pivot tabname=expenses
option_list='PivotCol=Week,Function=AVG';
```
Now saying:
```
select * from pivex2;
```
Will display the resulting table:
| Who | What | 3 | 4 | 5 |
| --- | --- | --- | --- | --- |
| Beth | Beer | 16.00 | 15.00 | 20.00 |
| Beth | Food | 0.00 | 17.00 | 12.00 |
| Janet | Beer | 18.00 | 0.00 | 16.50 |
| Janet | Car | 19.00 | 17.00 | 12.00 |
| Janet | Food | 18.00 | 0.00 | 12.00 |
| Joe | Beer | 18.00 | 16.33 | 14.00 |
| Joe | Car | 20.00 | 0.00 | 0.00 |
| Joe | Food | 15.50 | 17.00 | 12.00 |
Restricting the Columns in a Pivot Table
----------------------------------------
Let us suppose that we want a Pivot table from expenses summing the expenses for all people and products, whatever week they were bought. We can do this just by removing the week column from the pivex table's column list.
```
alter table pivex drop column week;
```
The result we get from the new table is:
| Who | Beer | Car | Food |
| --- | --- | --- | --- |
| Beth | 51.00 | 0.00 | 29.00 |
| Janet | 51.00 | 48.00 | 30.00 |
| Joe | 81.00 | 20.00 | 77.00 |
Note: Restricting columns is also needed when the source table contains extra columns that should not be part of the pivot table. This is true in particular for key columns that prevent a proper grouping.
PIVOT Create Table Syntax
-------------------------
The Create Table statement for PIVOT tables uses the following syntax:
```
create table pivot_table_name
[(column_definition)]
engine=CONNECT table_type=PIVOT
{tabname='source_table_name' | srcdef='source_table_def'}
[option_list='pivot_table_option_list'];
```
The column definition has two sets of columns:
1. A set of columns belonging to the source table, not including the “facts” and “pivot” columns.
2. “Data” columns receiving the values of the aggregated “facts” columns named from the values of the “pivot” column. They are indicated by the “flag” option.
The **options** and **sub-options** available for Pivot tables are:
| Option | Type | Description |
| --- | --- | --- |
| **Tabname** | *[DB.]Name* | The name of the table to “pivot”. If not set SrcDef must be specified. |
| **SrcDef** | *SQL\_statement* | The statement used to generate the intermediate mysql table. |
| **DBname** | *name* | The name of the database containing the source table. Defaults to the current database. |
| **Function\*** | *name* | The name of the aggregate function used for the data columns, SUM by default. |
| **PivotCol\*** | *name* | Specifies the name of the Pivot column whose values are used to fill the “data” columns having the flag option. |
| **FncCol\*** | *[func(]name[)]* | Specifies the name of the data “Facts” column. If the form func(name) is used, the aggregate function name is set to func. |
| **Groupby\*** | *Boolean* | Set it to True (1 or Yes) if the table already has a GROUP BY format. |
| **Accept\*** | *Boolean* | To accept non matching Pivot column values. |
* : These options must be specified in the OPTION\_LIST.
### Additional Access Options
There are four cases where pivot must call the server containing the source table or on which the SrcDef statement must be executed:
1. The source table is not a CONNECT table.
2. The SrcDef option is specified.
3. The source table is on another server.
4. The columns are not specified.
By default, pivot tries to call the currently used server using host=localhost, user=root with no password, and port=3306. However, this may not be what is needed, in particular if the local root user has a password, in which case you can get an “access denied” error message when creating or using the pivot table.
Specify the host, user, password and/or port options in the option\_list to override the default connection options used to access the source table, get column specifications, and execute the generated group by or SrcDef query.
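For instance, a hedged sketch (the user, password and port values are made up for illustration):

```
create table pivex3
engine=connect table_type=pivot tabname=expenses
option_list='PivotCol=what,FncCol=amount,user=pivot_usr,password=pivot_pwd,port=3307';
```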
Defining a Pivot Table
----------------------
There are principally two ways to define a PIVOT table:
1. From an existing table or view.
2. Directly giving the SQL statement returning the result to pivot.
### Defining a Pivot Table from a Source Table
The **tabname** standard table option is used to give the name of the source table or view.
For tables, the internal Group By will be generated automatically, except when the GROUPBY option is specified as true. Do this only when the table or view has a valid GROUP BY format.
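For example, assuming a view that already produces a valid GROUP BY result (the view and table names here are illustrative):

```
create view expsum as
select who, week, what, sum(amount) as amount
from expenses group by who, week, what;

create table pivsum
engine=connect table_type=pivot tabname=expsum
option_list='PivotCol=what,FncCol=amount,Groupby=1';
```

Since the source is a view, PivotCol and FncCol must be given explicitly, and Groupby=1 tells CONNECT not to generate another internal GROUP BY.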
### Directly Defining the Source of a Pivot Table in SQL
Alternatively, the internal source can be directly defined using the **SrcDef** option that must have the proper group by format.
As we have seen above, a proper Pivot Table is made from an internal intermediate table resulting from the execution of a `GROUP BY` statement. In many cases, it is simpler or desirable to directly specify this when creating the pivot table. This may be because the source is the result of a complex process including filtering and/or joining tables.
To do this, use the **SrcDef** option, often replacing all other options. For instance, suppose that in the first example we are only interested in weeks 4 and 5. We could of course display it by:
```
select * from pivex where week in (4,5);
```
However, what if this table is huge? In this case, the correct way is to define the pivot table like this:
```
create table pivex4
engine=connect table_type=pivot
option_list='PivotCol=what,FncCol=amount'
SrcDef='select who, week, what, sum(amount) from expenses
where week in (4,5) group by who, week, what';
```
If your source table has millions of records and you plan to pivot only a small subset of it, doing so will make a big difference performance-wise. In addition, you have complete freedom to use expressions, scalar functions, aliases, joins, and where and having clauses in your SQL statement. The only constraint is that you are responsible for ensuring that the result of this statement has the correct format for the pivot processing.
Using SrcDef also permits the use of expressions and/or scalar functions. For instance:
```
create table xpivot (
Who char(10) not null,
What char(12) not null,
First double(8,2) flag=1,
Middle double(8,2) flag=1,
Last double(8,2) flag=1)
engine=connect table_type=PIVOT
option_list='PivotCol=wk,FncCol=amnt'
Srcdef='select who, what, case when week=3 then ''First'' when
week=5 then ''Last'' else ''Middle'' end as wk, sum(amount) *
6.56 as amnt from expenses group by who, what, wk';
```
Now the statement:
```
select * from xpivot;
```
Will display the result:
| Who | What | First | Middle | Last |
| --- | --- | --- | --- | --- |
| Beth | Beer | 104.96 | 98.40 | 131.20 |
| Beth | Food | 0.00 | 111.52 | 78.72 |
| Janet | Beer | 118.08 | 0.00 | 216.48 |
| Janet | Car | 124.64 | 111.52 | 78.72 |
| Janet | Food | 118.08 | 0.00 | 78.72 |
| Joe | Beer | 118.08 | 321.44 | 91.84 |
| Joe | Car | 131.20 | 0.00 | 0.00 |
| Joe | Food | 203.36 | 223.04 | 78.72 |
**Note 1:** to avoid multiple lines having the same fixed column values, it is mandatory in **SrcDef** to place the pivot column at the end of the group by list.
**Note 2:** in the create statement **SrcDef**, it is mandatory to give aliases to the columns containing expressions so they are recognized by the other options.
**Note 3:** in the **SrcDef** select statement, quotes must be escaped because the entire statement is passed to MariaDB between quotes. Alternatively, specify it between double quotes.
**Note 4:** we could have let CONNECT do the column definitions. However, because they are defined from the sorted names, the Middle column would have been placed at the end of them.
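Following Note 3, the same table could be sketched with the **SrcDef** statement between double quotes, so the inner single quotes need no escaping (this variant is an illustration of the quoting alternative, not an additional feature):

```
create table xpivot2 (
  Who char(10) not null,
  What char(12) not null,
  First double(8,2) flag=1,
  Middle double(8,2) flag=1,
  Last double(8,2) flag=1)
engine=connect table_type=PIVOT
option_list='PivotCol=wk,FncCol=amnt'
Srcdef="select who, what, case when week=3 then 'First' when
week=5 then 'Last' else 'Middle' end as wk, sum(amount) *
6.56 as amnt from expenses group by who, what, wk";
```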
Specifying the Columns Corresponding to the Pivot Column
--------------------------------------------------------
These columns must be named from the values existing in the “pivot” column. For instance, supposing we have the following *pet* table:
| name | race | number |
| --- | --- | --- |
| John | dog | 2 |
| Bill | cat | 1 |
| Mary | dog | 1 |
| Mary | cat | 1 |
| Lisbeth | rabbit | 2 |
| Kevin | cat | 2 |
| Kevin | bird | 6 |
| Donald | dog | 1 |
| Donald | fish | 3 |
Pivoting it using *race* as the pivot column is done with:
```
create table pivet
engine=connect table_type=pivot tabname=pet
option_list='PivotCol=race,groupby=1';
```
This gives the result:
| name | dog | cat | rabbit | bird | fish |
| --- | --- | --- | --- | --- | --- |
| John | 2 | 0 | 0 | 0 | 0 |
| Bill | 0 | 1 | 0 | 0 | 0 |
| Mary | 1 | 1 | 0 | 0 | 0 |
| Lisbeth | 0 | 0 | 2 | 0 | 0 |
| Kevin | 0 | 2 | 0 | 6 | 0 |
| Donald | 1 | 0 | 0 | 0 | 3 |
By the way, does this ring a bell? It shows that in a way PIVOT tables are doing the opposite of what OCCUR tables do.
We can alternatively define the table columns specifically, but what happens if the pivot column contains values that do not match a “data” column? There are three cases depending on the specified options and flags.
**First case:** If no specific options are specified, an error occurs when trying to display the table. The query will abort with an error message stating that a non-matching value was met. Note that because the column list is established when creating the table, this is prone to occur if rows containing new values for the pivot column are inserted into the source table. If this happens, you should re-create the table or manually add the new columns to the pivot table.
**Second case:** The **Accept** option was specified. For instance:
```
create table xpivet2 (
name varchar(12) not null,
dog int not null default 0 flag=1,
cat int not null default 0 flag=1)
engine=connect table_type=pivot tabname=pet
option_list='PivotCol=race,groupby=1,Accept=1';
```
No error will be raised and the non-matching values will be ignored. This table will be displayed as:
| name | dog | cat |
| --- | --- | --- |
| John | 2 | 0 |
| Bill | 0 | 1 |
| Mary | 1 | 1 |
| Lisbeth | 0 | 0 |
| Kevin | 0 | 2 |
| Donald | 1 | 0 |
**Third case:** A “dump” column was specified with the flag value equal to 2. All non-matching values will be added in this column. For instance:
```
create table xpivet (
name varchar(12) not null,
dog int not null default 0 flag=1,
cat int not null default 0 flag=1,
other int not null default 0 flag=2)
engine=connect table_type=pivot tabname=pet
option_list='PivotCol=race,groupby=1';
```
This table will be displayed as:
| name | dog | cat | other |
| --- | --- | --- | --- |
| John | 2 | 0 | 0 |
| Bill | 0 | 1 | 0 |
| Mary | 1 | 1 | 0 |
| Lisbeth | 0 | 0 | 2 |
| Kevin | 0 | 2 | 6 |
| Donald | 1 | 0 | 3 |
It is a good idea to provide such a “dump” column if the source table is prone to having new rows inserted that can have a value for the pivot column that did not exist when the pivot table was created.
Pivoting Big Source Tables
--------------------------
This may sometimes be risky. If the pivot column contains too many distinct values, the resulting table may have too many columns. In all cases, the processes involved (finding the distinct values when creating the table, or doing the group by when using it) can be very long and can sometimes fail because of exhausted memory.
Restrictions by a where clause should be applied to the source data when creating the pivot table rather than to the pivot table itself. This can be done by creating an intermediate table, or by using a view or a SrcDef option as the source.
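For instance, a restricted view could serve as the source (a sketch based on the expenses table used above; the view and table names are illustrative):

```
create view vexpenses as
select * from expenses where week in (4,5);

create table pivex5
engine=connect table_type=pivot tabname=vexpenses
option_list='PivotCol=what,FncCol=amount';
```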
All PIVOT tables are read only.
mariadb MyRocks Status Variables MyRocks Status Variables
========================
This page documents status variables related to the [MyRocks](../myrocks/index) storage engine. See [Server Status Variables](../server-status-variables/index) for a complete list of status variables that can be viewed with [SHOW STATUS](../show-status/index).
See also the [Full list of MariaDB options, system and status variables](../full-list-of-mariadb-options-system-and-status-variables/index).
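For instance, the MyRocks counters described below can be inspected with a pattern match on their common prefix:

```
SHOW GLOBAL STATUS LIKE 'Rocksdb_block_cache%';
```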
#### `Rocksdb_block_cache_add`
* **Description:** Number of blocks added to the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_add_failures`
* **Description:** Number of failures when adding blocks to Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_bytes_read`
* **Description:** Bytes read from Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_bytes_write`
* **Description:** Bytes written to Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_data_add`
* **Description:** Number of data blocks added to the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_data_bytes_insert`
* **Description:** Bytes added to the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_data_hit`
* **Description:** Number of hits when accessing the data block from the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_data_miss`
* **Description:** Number of misses when accessing the data block from the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_filter_add`
* **Description:** Number of bloom filter blocks added to the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_filter_bytes_evict`
* **Description:** Bytes of bloom filter blocks evicted from the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_filter_bytes_insert`
* **Description:** Bytes of bloom filter blocks added to the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_filter_hit`
* **Description:** Number of hits when accessing the filter block from the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_filter_miss`
* **Description:** Number of misses when accessing the filter block from the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_hit`
* **Description:** Total number of hits for the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_index_add`
* **Description:** Number of index blocks added to Block Cache index.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_index_bytes_evict`
* **Description:** Bytes of index blocks evicted from the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_index_bytes_insert`
* **Description:** Bytes of index blocks added to the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_block_cache_index_hit`
* **Description:** Number of hits for the Block Cache index.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_index_miss`
* **Description:** Number of misses for the Block Cache index.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cache_miss`
* **Description:** Total number of misses for the Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cachecompressed_hit`
* **Description:** Number of hits for the compressed Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_block_cachecompressed_miss`
* **Description:** Number of misses for the compressed Block Cache.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_bloom_filter_full_positive`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.18](https://mariadb.com/kb/en/mariadb-10218-release-notes/), [MariaDB 10.3.10](https://mariadb.com/kb/en/mariadb-10310-release-notes/)
---
#### `Rocksdb_bloom_filter_full_true_positive`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.18](https://mariadb.com/kb/en/mariadb-10218-release-notes/), [MariaDB 10.3.10](https://mariadb.com/kb/en/mariadb-10310-release-notes/)
---
#### `Rocksdb_bloom_filter_prefix_checked`
* **Description:** Number of times the Bloom Filter was checked before creating an iterator on a file.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_bloom_filter_prefix_useful`
* **Description:** Number of times the Bloom Filter check was used to avoid creating an iterator on a file.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_bloom_filter_useful`
* **Description:** Number of times the Bloom Filter was used instead of reading from a file.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_bytes_read`
* **Description:** Total number of uncompressed bytes read from memtables, cache or table files.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_bytes_written`
* **Description:** Total number of uncompressed bytes written.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_compact_read_bytes`
* **Description:** Number of bytes read during compaction.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_compact_write_bytes`
* **Description:** Number of bytes written during compaction.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_compaction_key_drop_new`
* **Description:** Number of keys dropped during compaction due to their being overwritten by new values.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_compaction_key_drop_obsolete`
* **Description:** Number of keys dropped during compaction due to their being obsolete.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_compaction_key_drop_user`
* **Description:** Number of keys dropped during compaction due to user compaction.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_covered_secondary_key_lookups`
* **Description:** Incremented when avoiding reading a record via a keyread. This indicates lookups that were performed via a secondary index containing a field that is only a prefix of the [VARCHAR](../varchar/index) column, and that could return all requested fields directly from the secondary index.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_flush_write_bytes`
* **Description:** Number of bytes written during flush.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_get_hit_l0`
* **Description:** Number of times reads got data from the L0 compaction layer.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_get_hit_l1`
* **Description:** Number of times reads got data from the L1 compaction layer.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_get_hit_l2_and_up`
* **Description:** Number of times reads got data from the L2 and up compaction layer.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_getupdatessince_calls`
* **Description:** Number of calls to the `GetUpdatesSince` function. You may find this useful when monitoring refreshes of the transaction log.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_iter_bytes_read`
* **Description:** Total uncompressed bytes read from an iterator, including the size of both key and value.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_l0_num_files_stall_micros`
* **Description:** Time in microseconds spent throttled due to too many files in L0.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), [MariaDB 10.2.8](https://mariadb.com/kb/en/mariadb-1028-release-notes/)
---
#### `Rocksdb_l0_slowdown_micros`
* **Description:** Total time spent waiting in microseconds while performing L0-L1 compactions.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), [MariaDB 10.2.8](https://mariadb.com/kb/en/mariadb-1028-release-notes/)
---
#### `Rocksdb_manual_compactions_processed`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.18](https://mariadb.com/kb/en/mariadb-10218-release-notes/), [MariaDB 10.3.10](https://mariadb.com/kb/en/mariadb-10310-release-notes/)
---
#### `Rocksdb_manual_compactions_running`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.18](https://mariadb.com/kb/en/mariadb-10218-release-notes/), [MariaDB 10.3.10](https://mariadb.com/kb/en/mariadb-10310-release-notes/)
---
#### `Rocksdb_memtable_compaction_micros`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), [MariaDB 10.2.8](https://mariadb.com/kb/en/mariadb-1028-release-notes/)
---
#### `Rocksdb_memtable_hit`
* **Description:** Number of memtable hits.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_memtable_miss`
* **Description:** Number of memtable misses.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_memtable_total`
* **Description:** Memory used, in bytes, of all memtables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_memtable_unflushed`
* **Description:** Memory used, in bytes, of all unflushed memtables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_no_file_closes`
* **Description:** Number of times files were closed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_no_file_errors`
* **Description:** Number of errors encountered while trying to read data from an SST file.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_no_file_opens`
* **Description:** Number of times files were opened.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_num_iterators`
* **Description:** Number of iterators currently open.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_block_not_compressed`
* **Description:** Number of uncompressed blocks.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_db_next`
* **Description:** Number of `next` calls.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_number_db_next_found`
* **Description:** Number of `next` calls that returned data.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_number_db_prev`
* **Description:** Number of `prev` calls.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_number_db_prev_found`
* **Description:** Number of `prev` calls that returned data.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_number_db_seek`
* **Description:** Number of `seek` calls.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_number_db_seek_found`
* **Description:** Number of `seek` calls that returned data.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_number_deletes_filtered`
* **Description:** Number of deleted records that were not written to storage due to a nonexistent key.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_keys_read`
* **Description:** Number of keys that have been read.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_keys_updated`
* **Description:** Number of keys that have been updated.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_keys_written`
* **Description:** Number of keys that have been written.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_merge_failures`
* **Description:** Number of failures encountered while performing merge operator actions.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_multiget_bytes_read`
* **Description:** Number of bytes read during RocksDB `MultiGet()` calls.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_multiget_get`
* **Description:** Number of RocksDB `MultiGet()` requests made.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_multiget_keys_read`
* **Description:** Number of keys read through RocksDB `MultiGet()` calls.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_reseeks_iteration`
* **Description:** Number of reseeks that have occurred inside an iteration that skipped over a large number of keys with the same user key.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_sst_entry_delete`
* **Description:** Number of delete markers written.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_sst_entry_merge`
* **Description:** Number of merge keys written.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_sst_entry_other`
* **Description:** Number of keys written that are not delete, merge or put keys.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_sst_entry_put`
* **Description:** Number of put keys written.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_sst_entry_singledelete`
* **Description:** Number of single-delete keys written.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_superversion_acquires`
* **Description:** Number of times the superversion structure was acquired. This is useful when tracking files for the database.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_superversion_cleanups`
* **Description:** Number of times the superversion structure performed cleanups.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_number_superversion_releases`
* **Description:** Number of times the superversion structure was released.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_queries_point`
* **Description:** Number of single-row queries.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_queries_range`
* **Description:** Number of multi-row queries.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_row_lock_deadlocks`
* **Description:** Number of deadlocks.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_row_lock_wait_timeouts`
* **Description:** Number of row lock wait timeouts.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_rows_deleted`
* **Description:** Number of rows deleted.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_rows_deleted_blind`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_rows_expired`
* **Description:** Number of expired rows.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_rows_filtered`
* **Description:** Number of TTL filtered rows.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.15](https://mariadb.com/kb/en/mariadb-10215-release-notes/), [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Rocksdb_rows_inserted`
* **Description:** Number of rows inserted.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_rows_read`
* **Description:** Number of rows read.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_rows_updated`
* **Description:** Number of rows updated.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_snapshot_conflict_errors`
* **Description:** Number of snapshot conflict errors that have occurred during transactions that forced a rollback.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_l0_file_count_limit_slowdowns`
* **Description:** Write slowdowns due to L0 being nearly full.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_l0_file_count_limit_stops`
* **Description:** Write stops due to L0 being too full.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_locked_l0_file_count_limit_slowdowns`
* **Description:** Write slowdowns due to L0 being nearly full and L0 compaction in progress.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_locked_l0_file_count_limit_stops`
* **Description:** Write stops due to L0 being full and L0 compaction in progress.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_memtable_limit_slowdowns`
* **Description:** Write slowdowns due to approaching maximum permitted number of memtables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.10](https://mariadb.com/kb/en/mariadb-10210-release-notes/), [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Rocksdb_stall_memtable_limit_stops`
* **Description:** Write stops due to reaching maximum permitted number of memtables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.10](https://mariadb.com/kb/en/mariadb-10210-release-notes/), [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Rocksdb_stall_micros`
* **Description:** Time in microseconds that the writer had to wait for the compaction or flush to complete.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_pending_compaction_limit_slowdowns`
* **Description:** Write slowdowns due to nearing the limit for the maximum number of pending compaction bytes.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_pending_compaction_limit_stops`
* **Description:** Write stops due to reaching the limit for the maximum number of pending compaction bytes.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_total_slowdowns`
* **Description:** Total number of write slowdowns.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_stall_total_stops`
* **Description:** Total number of write stops.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_system_rows_deleted`
* **Description:** Number of rows deleted from system tables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_system_rows_inserted`
* **Description:** Number of rows inserted into system tables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_system_rows_read`
* **Description:** Number of rows read from system tables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_system_rows_updated`
* **Description:** Number of rows updated for system tables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_wal_bytes`
* **Description:** Number of bytes written to WAL.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_wal_group_syncs`
* **Description:** Number of group commit WAL file syncs that have occurred. This is provided by MyRocks and is not a view of a RocksDB counter. It is incremented in `rocksdb_flush_wal()` when doing the `rdb->FlushWAL()` call.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_wal_synced`
* **Description:** Number of syncs made on RocksDB WAL file.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_write_other`
* **Description:** Number of writes processed by a thread other than the requesting thread.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_write_self`
* **Description:** Number of writes processed by the requesting thread.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_write_timedout`
* **Description:** Number of writes that timed out.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rocksdb_write_wal`
* **Description:** Number of write calls that requested WAL.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
mariadb Using CONNECT - Indexing Using CONNECT - Indexing
========================
[Indexing](../optimization-and-indexes/index) is one of the main ways to optimize queries. Key columns, in particular when they are used to join tables, should be indexed. But what should be done for columns that have only few distinct values? If they are randomly placed in the table they should not be indexed because reading many rows in random order can be slower than reading the entire table sequentially. However, if the values are sorted or clustered, indexing can be acceptable because [CONNECT](../connect/index) indexes store the values in the order they appear into the table and this will make retrieving them almost as fast as reading them sequentially.
CONNECT provides five indexing types:
1. Standard Indexing
2. Block Indexing
3. Remote Indexing
4. Dynamic Indexing
5. Virtual Indexing
Standard Indexing
-----------------
CONNECT standard indexes are created and used like those of other storage engines, although they have a specific internal format. The CONNECT handler supports standard indexes for most of the file-based table types.
You can define them in the [CREATE TABLE](../create-table/index) statement, or later using either the CREATE INDEX statement or the [ALTER TABLE](../alter-table/index) statement. In all cases, the index files are made automatically. Indexes can be dropped either with the [DROP INDEX](../drop-index/index) statement or the [ALTER TABLE](../alter-table/index) statement, and this erases the index files.
Indexes are automatically reconstructed when the table is created, modified by INSERT, UPDATE or DELETE commands, or when the SEPINDEX option is changed. If you have many changes to make to a table at one time, you can use table locking to prevent the indexes from being reconstructed after each statement. The indexes will be reconstructed when the table is unlocked. For instance:
```
lock table t1 write;
insert into t1 values(...);
insert into t1 values(...);
...
unlock tables;
```
If a table was modified by an external application that does not handle indexing, the indexes must be reconstructed to prevent false or incomplete results. To do this, use the [OPTIMIZE TABLE](../optimize-table/index) command.
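For instance, after an external program has appended rows to a data file, the indexes of a CONNECT table built on it (here assumed to be named `dept`) can be rebuilt with:

```
OPTIMIZE TABLE dept;
```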
For outward tables, index files are not erased when dropping the table. This is the same as for the data file and preserves the possibility of several users using the same data file via different tables.
Unlike other storage engines, CONNECT constructs the indexes as files that are by default named after the data file name, not the table name, and located in the data file directory. Depending on the SEPINDEX table option, indexes are saved in a single file or in separate files (if SEPINDEX is true). For instance, if indexes are in separate files, the primary index of the table *dept.dat* of type DOS is a file named *dept\_PRIMARY.dnx*. This makes it possible to define several tables on the same data file, possibly with different options such as mapped or not mapped, and to share the index files as well.
If the index file should have a different name, for instance because several tables are created on the same data file with different indexes, specify the base index file name with the XFILE\_NAME option.
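As a sketch (the table and file names below are illustrative, not from the original text), two CONNECT tables can share one data file while keeping separate base index file names via XFILE\_NAME:

```
-- Both tables read the same data file; XFILE_NAME gives each
-- a distinct base index file name, and SEPINDEX=1 puts each
-- index in its own file.
CREATE TABLE dept1 (
  dnum INT(4) NOT NULL PRIMARY KEY,
  dname VARCHAR(20) NOT NULL)
ENGINE=CONNECT table_type=DOS file_name='dept.dat'
  xfile_name='dept1' sepindex=1;

CREATE TABLE dept2 (
  dnum INT(4) NOT NULL PRIMARY KEY,
  dname VARCHAR(20) NOT NULL)
ENGINE=CONNECT table_type=DOS file_name='dept.dat'
  xfile_name='dept2' sepindex=1;
```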
**Note 1:** Indexed columns must be declared NOT NULL; CONNECT doesn't support indexes containing null values.
**Note 2:** MRR is used by standard indexing if it is enabled.
**Note 3:** Prefix indexing is not supported. If specified, the CONNECT engine ignores the prefix and builds a whole index.
### Handling index errors
The way CONNECT handles indexing is very specific. All table modifications are done regardless of indexing; the indexes are made only after a table has been modified, or when an `OPTIMIZE TABLE` command is sent. If an error occurs, the corresponding index is not made. However, CONNECT, being a non-transactional engine, is unable to roll back the changes made to the table. The main causes of indexing errors are:
* Trying to index a nullable column. In this case, you can alter the table to declare the column as not nullable or, if the column is indeed nullable, leave it not indexed.
* Entering duplicate values in a column indexed by a unique index. In this case, if the index was wrongly declared as unique, alter its declaration to reflect this. If the column should really contain unique values, you must manually remove or update the duplicate values.
In both cases, after correcting the error, remake the indexes with the [OPTIMIZE TABLE](../optimize-table/index) command.
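For instance, assuming a table `t1` whose column `id` was wrongly declared nullable (names are hypothetical), the first error above can be corrected and the indexes remade like this:

```
-- Declare the column NOT NULL so it can be indexed,
-- then rebuild the CONNECT index files.
ALTER TABLE t1 MODIFY id INT NOT NULL;
OPTIMIZE TABLE t1;
```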
### Index file mapping
To accelerate the indexing process, CONNECT makes an index structure in memory from the index file. This can be done by reading the index file, or by using it as if it were in memory via “file mapping”. On enabled versions, file mapping is used according to the boolean [connect\_indx\_map](../connect-system-variables/index#connect_indx_map) system variable. Set it to 0 (file read) or 1 (file mapping).
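For example, to enable index file mapping globally:

```
SET GLOBAL connect_indx_map=1;
```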
Block Indexing
--------------
To accelerate input/output, CONNECT uses, when possible, a read/write mode by blocks of n rows, n being the value given in the BLOCK\_SIZE option of CREATE TABLE, or a default value depending on the table type. This is automatic for fixed files ([FIX](../connect-table-types-data-files/index#dos-and-fix-table-types), [BIN](../connect-table-types-data-files/index#bin-table-type), [DBF](../connect-table-types-data-files/index#dbf-type) or [VEC](../connect-table-types-data-files/index#vec-table-type-vector)), but must be specified for variable files ([DOS](../connect-table-types-data-files/index#dos-and-fix-table-types), [CSV](../connect-table-types-data-files/index#csv-and-fmt-table-types) or [FMT](../connect-table-types-data-files/index#fmt-type)).
For blocked tables, further optimization can be achieved if the data values for some columns are “clustered”, meaning that they are not evenly scattered in the table but grouped in consecutive rows. Block indexing permits skipping blocks in which no rows fulfill a conditional predicate, without even having to read the block. This is true in particular for sorted columns.
You indicate this when creating the table by using the DISTRIB=d column option. The enum value d can be *scattered*, *clustered*, or *sorted*. In general only one column can be sorted. Block indexing is used only for clustered and sorted columns.
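A sketch of declaring the option (the table, file name and column names are illustrative; check the exact option spelling against your CONNECT version):

```
-- A fixed-format table whose id column is sorted and whose
-- region column is clustered, enabling block indexing on both.
CREATE TABLE sales (
  id INT(8) NOT NULL distrib=sorted,
  region CHAR(2) NOT NULL distrib=clustered,
  amount DOUBLE(12,2) NOT NULL)
ENGINE=CONNECT table_type=FIX file_name='sales.txt' block_size=100;
```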
### Difference between standard indexing and block indexing
* Block indexing is internally handled by CONNECT while reading table data sequentially. This means in particular that when standard indexing is used on a table, block indexing is not used.
* In a query, only one standard index can be used. However, block indexing can combine the restrictions coming from a WHERE clause involving several clustered/sorted columns.
* The block index files are faster to make and much smaller than standard index files.
### Notes for this Release:
* On all operations that create or modify a table, CONNECT automatically calculates or recalculates and saves the min/max or bitmap values for each block, enabling it to skip blocks containing no acceptable values. If the optimize file no longer corresponds to the table, because it has been accidentally destroyed, or because some column definitions have been altered, you can use the OPTIMIZE TABLE command to reconstruct the optimization file.
* Sorted column special processing is currently restricted to ascending sort. Columns sorted in descending order must be flagged as clustered. Improper sorting is not checked on Update or Insert operations, but is flagged when optimizing the table.
* Block indexing can be done in two ways: keeping the min/max values existing for each block, or keeping a bitmap indicating which distinct column values are met in each block. This second way often gives better optimization, except for sorted columns, for which both are equivalent. The bitmap approach can only be used on columns that do not have too many distinct values. This is estimated by the MAX\_DIST option value associated with the column when creating the table. ~~Bitmap block indexing will be used if this number is not greater than the MAXBMP setting for the database~~.
* CONNECT cannot perform block indexing on case-insensitive character columns. To force block indexing on a character column, specify its charset as not case insensitive, for instance as binary. However, this will then apply in all other clauses as well, the column now being case sensitive.
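For instance (the table and column names are hypothetical), declaring the charset as binary makes a character column eligible for block indexing, at the price of case-sensitive comparisons everywhere:

```
-- The binary charset makes 'state' case sensitive, so CONNECT
-- can use block indexing on it.
CREATE TABLE patients (
  pnb INT(6) NOT NULL,
  state CHAR(2) CHARACTER SET binary NOT NULL distrib=clustered)
ENGINE=CONNECT table_type=FIX file_name='patients.txt' block_size=500;
```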
Remote Indexing
---------------
Remote indexing is specific to the [MYSQL](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index) table type. It is equivalent to what the [FEDERATED](../federatedx-storage-engine/index) storage engine does. A MYSQL table does not support indexes per se. Because access to the table is handled remotely, it is the remote table that supports the indexes. What the MYSQL table does is just add a WHERE clause to the [SELECT](../select/index) command sent to the remote server, allowing the remote server to use indexing when applicable. Note, however, that because CONNECT adds, when possible, all or part of the WHERE clause of the original query, this often happens even if the remote indexed column is not declared locally indexed. The only, but very important, case where a column should be locally declared indexed is when it is used to join tables. Otherwise, the required WHERE clause would not be added to the sent SELECT query.
See [Indexing of MYSQL tables](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index#indexing-of-mysql-tables) for more.
Dynamic Indexing
----------------
An index created as “dynamic” is a standard index which, in some cases, can be reconstructed for a specific query. This happens in particular for some queries where two tables are joined by an indexed key column. If the “from” table is big and the “to” table is reduced in size because of a WHERE clause, it can be worthwhile to reconstruct the index on this reduced table.
Because of the time spent reconstructing the index, this is valuable only if the time gained by reducing the index size is more than the reconstruction time. This is why it should not be done if the “from” table is small, because there will not be enough rows joined to compensate for the additional time. Otherwise, the gains of using a dynamic index are:
* Index lookups are a little faster because the index is smaller.
* The join process will return only the rows fulfilling the where clause.
* Because the table is read sequentially when reconstructing the index, there is no need for MRR.
* Constructing the index can be faster if the table is reduced by block indexing.
* While constructing the index, CONNECT also stores in memory the values of other used columns.
This last point is particularly important. It means that after the index is reconstructed, the join is done on a temporary memory table.
This last point is particularly important because storage engines are called independently by MariaDB for each table, so CONNECT has no global information to decide when it is worthwhile to use dynamic indexing. This is why you should use it only in cases where you see that some important join queries take a very long time, and only on columns used for joining the table. An index is declared dynamic by using the Boolean DYNAM index option. For instance, the query:
```
select d.diag, count(*) cnt from diag d, patients p where d.pnb =
p.pnb and ageyears < 17 and county = 30 and drg <> 11 and d.diag
between 4296 and 9434 group by d.diag order by cnt desc;
```
Such a query joining the diag table to the patients table may take a very long time if the tables are big. To declare the primary key on the pnb column of the patients table to be dynamic:
```
alter table patients drop primary key;
alter table patients add primary key (pnb) comment 'DYNAMIC' dynam=1;
```
Note 1: The comment is not mandatory here, but it is useful to see that the index is dynamic if you use the [SHOW INDEX](../show-index/index) command.
Note 2: There is currently no way to just change the DYNAM option without dropping and re-adding the index. This is unfortunate because it takes time.
Virtual Indexing
----------------
Virtual indexing applies only to virtual tables of type [VIR](../connect-table-types-vir/index), and the index must be made on a column specifying `SPECIAL=ROWID` or `SPECIAL=ROWNUM`.
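A minimal sketch, assuming the usual VIR syntax (the table name and size are illustrative):

```
-- A 100-row virtual table; the indexed column n returns the
-- row number via the ROWID special column.
CREATE TABLE virt (
  n INT NOT NULL SPECIAL=ROWID,
  PRIMARY KEY (n))
ENGINE=CONNECT table_type=VIR block_size=100;
```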
mariadb INET4 INET4
=====
**MariaDB starting with [10.10.0](https://mariadb.com/kb/en/mariadb-10100-release-notes/)**The INET4 data type was added in [MariaDB 10.10.0](https://mariadb.com/kb/en/mariadb-10100-release-notes/).
INET4 is a data type to store IPv4 addresses, as 4-byte binary strings.
mariadb Index Condition Pushdown Index Condition Pushdown
========================
Index Condition Pushdown is an optimization that is applied for access methods that access table data through indexes: `range`, `ref`, `eq_ref`, `ref_or_null`, and [Batched Key Access](../block-based-join-algorithms/index#batch-key-access-join).
The idea is to check part of the WHERE condition that refers to index fields (we call it *Pushed Index Condition*) as soon as we've accessed the index. If the *Pushed Index Condition* is not satisfied, we won't need to read the whole table record.
Index Condition Pushdown is **on** by default. To disable it, set its optimizer\_switch flag like so:
```
SET optimizer_switch='index_condition_pushdown=off'
```
When Index Condition Pushdown is used, EXPLAIN will show "Using index condition":
```
MariaDB [test]> explain select * from tbl where key_col1 between 10 and 11 and key_col2 like '%foo%';
+----+-------------+-------+-------+---------------+----------+---------+------+------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+----------+---------+------+------+-----------------------+
| 1 | SIMPLE | tbl | range | key_col1 | key_col1 | 5 | NULL | 2 | Using index condition |
+----+-------------+-------+-------+---------------+----------+---------+------+------+-----------------------+
```
The Idea Behind Index Condition Pushdown
----------------------------------------
In disk-based storage engines, an index lookup is done in two steps: the index is searched first, and then the full table record is read.
Index Condition Pushdown tries to cut down the number of full record reads by checking, as soon as an index record is found, whether it satisfies the part of the WHERE condition that can be evaluated from index fields alone.
How much speed will be gained depends on:
* How many records will be filtered out
* How expensive it was to read them
The former depends on the query and the dataset. The latter is generally bigger when table records are on disk and/or are big, especially when they have [blobs](../blob/index).
Example Speedup
---------------
I used DBT-3 benchmark data, with scale factor=1. Since the benchmark defines very few indexes, we added a multi-column index (index condition pushdown is usually useful with multi-column indexes: the first component(s) is what index access is done for, while the subsequent ones hold columns that we read and check conditions on).
```
alter table lineitem add index s_r (l_shipdate, l_receiptdate);
```
The query was to find big (l\_quantity > 40) orders that were made in January 1993 that took more than 25 days to ship:
```
select count(*) from lineitem
where
l_shipdate between '1993-01-01' and '1993-02-01' and
datediff(l_receiptdate,l_shipdate) > 25 and
l_quantity > 40;
```
EXPLAIN without Index Condition Pushdown:
```
-+----------+-------+----------------------+-----+---------+------+--------+-------------+
| table | type | possible_keys | key | key_len | ref | rows | Extra |
-+----------+-------+----------------------+-----+---------+------+--------+-------------+
| lineitem | range | s_r | s_r | 4 | NULL | 152064 | Using where |
-+----------+-------+----------------------+-----+---------+------+--------+-------------+
```
with Index Condition Pushdown:
```
-+-----------+-------+---------------+-----+---------+------+--------+------------------------------------+
| table | type | possible_keys | key | key_len | ref | rows | Extra |
-+-----------+-------+---------------+-----+---------+------+--------+------------------------------------+
| lineitem | range | s_r | s_r | 4 | NULL | 152064 | Using index condition; Using where |
-+-----------+-------+---------------+-----+---------+------+--------+------------------------------------+
```
The speedup was:
* Cold buffer pool: from 5 min down to 1 min
* Hot buffer pool: from 0.19 sec down to 0.07 sec
Status Variables
----------------
There are two server status variables:
| Variable name | Meaning |
| --- | --- |
| [Handler\_icp\_attempts](../server-status-variables/index#handler_icp_attempts) | Number of times pushed index condition was checked. |
| [Handler\_icp\_match](../server-status-variables/index#handler_icp_match) | Number of times the condition was matched. |
That way, the value `Handler_icp_attempts - Handler_icp_match` shows the number of records that the server did not have to read because of Index Condition Pushdown.
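Both counters can be inspected with a single statement:

```
SHOW GLOBAL STATUS LIKE 'Handler_icp%';
```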
See Also
--------
* [What is MariaDB 5.3](../what-is-mariadb-53/index)
* [Index Condition Pushdown](http://dev.mysql.com/doc/refman/5.6/en/index-condition-pushdown-optimization.html) in MySQL 5.6 manual (MariaDB's and MySQL 5.6's Index Condition Pushdown implementations have the same ancestry so are very similar to one another).
mariadb NATURAL_SORT_KEY NATURAL\_SORT\_KEY
==================
**MariaDB starting with [10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/)**NATURAL\_SORT\_KEY was added in [MariaDB 10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/).
Syntax
------
```
NATURAL_SORT_KEY(str)
```
Description
-----------
The `NATURAL_SORT_KEY` function is used for sorting that is closer to natural, human sorting. Strings are sorted alphabetically, while embedded numbers are compared by numeric value, so that, for example, `10` is greater than `2`. In a plain lexicographic sort, `2` would be greater than `10`, just as `z` is greater than `ya`.
There are multiple natural sort implementations, differing in the way they handle leading zeroes, fractions, i18n, negatives, decimals and so on.
MariaDB's implementation ignores leading zeroes when performing the sort.
You can also use `NATURAL_SORT_KEY` with [generated columns](../generated-columns/index). The value is not stored permanently in the table. When using a generated column, the virtual column must be longer than the base column to cater for embedded numbers in the string (see [MDEV-24582](https://jira.mariadb.org/browse/MDEV-24582)).
Examples
--------
### Strings and Numbers
```
CREATE TABLE t1 (c TEXT);
INSERT INTO t1 VALUES ('b1'),('a2'),('a11'),('a1');
SELECT c FROM t1;
+------+
| c |
+------+
| b1 |
| a2 |
| a11 |
| a1 |
+------+
SELECT c FROM t1 ORDER BY c;
+------+
| c |
+------+
| a1 |
| a11 |
| a2 |
| b1 |
+------+
```
Unsorted, regular sort and natural sort:
```
TRUNCATE t1;
INSERT INTO t1 VALUES
('5.5.31'),('10.7.0'),('10.2.1'),
('10.1.22'),('10.3.32'),('10.2.12');
SELECT c FROM t1;
+---------+
| c |
+---------+
| 5.5.31 |
| 10.7.0 |
| 10.2.1 |
| 10.1.22 |
| 10.3.32 |
| 10.2.12 |
+---------+
SELECT c FROM t1 ORDER BY c;
+---------+
| c |
+---------+
| 10.1.22 |
| 10.2.1 |
| 10.2.12 |
| 10.3.32 |
| 10.7.0 |
| 5.5.31 |
+---------+
SELECT c FROM t1 ORDER BY NATURAL_SORT_KEY(c);
+---------+
| c |
+---------+
| 5.5.31 |
| 10.1.22 |
| 10.2.1 |
| 10.2.12 |
| 10.3.32 |
| 10.7.0 |
+---------+
```
### IPs
Sorting IPs: unsorted, regular sort and natural sort:
```
TRUNCATE t1;
INSERT INTO t1 VALUES
('192.167.3.1'),('192.167.1.12'),('100.200.300.400'),
('100.50.60.70'),('100.8.9.9'),('127.0.0.1'),('0.0.0.0');
SELECT c FROM t1;
+-----------------+
| c |
+-----------------+
| 192.167.3.1 |
| 192.167.1.12 |
| 100.200.300.400 |
| 100.50.60.70 |
| 100.8.9.9 |
| 127.0.0.1 |
| 0.0.0.0 |
+-----------------+
SELECT c FROM t1 ORDER BY c;
+-----------------+
| c |
+-----------------+
| 0.0.0.0 |
| 100.200.300.400 |
| 100.50.60.70 |
| 100.8.9.9 |
| 127.0.0.1 |
| 192.167.1.12 |
| 192.167.3.1 |
+-----------------+
SELECT c FROM t1 ORDER BY NATURAL_SORT_KEY(c);
+-----------------+
| c |
+-----------------+
| 0.0.0.0 |
| 100.8.9.9 |
| 100.50.60.70 |
| 100.200.300.400 |
| 127.0.0.1 |
| 192.167.1.12 |
| 192.167.3.1 |
+-----------------+
```
### Generated Columns
Using with a [generated column](../generated-columns/index):
```
CREATE TABLE t(c VARCHAR(3), k VARCHAR(4) AS (NATURAL_SORT_KEY(c)) INVISIBLE);
INSERT INTO t(c) VALUES ('b1'),('a2'),('a11'),('a10');
SELECT * FROM t ORDER by k;
+------+
| c |
+------+
| a2 |
| a10 |
| a11 |
| b1 |
+------+
```
Note that if the virtual column is not longer, results may not be as expected:
```
CREATE TABLE t2(c VARCHAR(3), k VARCHAR(3) AS (NATURAL_SORT_KEY(c)) INVISIBLE);
INSERT INTO t2(c) VALUES ('b1'),('a2'),('a11'),('a10');
SELECT * FROM t2 ORDER by k;
+------+
| c |
+------+
| a2 |
| a11 |
| a10 |
| b1 |
+------+
```
### Leading Zeroes
Ignoring leading zeroes can lead to undesirable results in certain contexts. For example:
```
CREATE TABLE t3 (a VARCHAR(4));
INSERT INTO t3 VALUES
('a1'), ('a001'), ('a10'), ('a001'), ('a10'),
('a01'), ('a01'), ('a01b'), ('a01b'), ('a1');
SELECT a FROM t3 ORDER BY a;
+------+
| a |
+------+
| a001 |
| a001 |
| a01 |
| a01 |
| a01b |
| a01b |
| a1 |
| a1 |
| a10 |
| a10 |
+------+
10 rows in set (0.000 sec)
SELECT a FROM t3 ORDER BY NATURAL_SORT_KEY(a);
+------+
| a |
+------+
| a1 |
| a01 |
| a01 |
| a001 |
| a001 |
| a1 |
| a01b |
| a01b |
| a10 |
| a10 |
+------+
```
This may not be what we were hoping for in a 'natural' sort. A workaround is to sort by both NATURAL\_SORT\_KEY and a regular sort:
```
SELECT a FROM t3 ORDER BY NATURAL_SORT_KEY(a), a;
+------+
| a |
+------+
| a001 |
| a001 |
| a01 |
| a01 |
| a1 |
| a1 |
| a01b |
| a01b |
| a10 |
| a10 |
+------+
```
mariadb Galera Cluster Status Variables Galera Cluster Status Variables
===============================
Viewing Galera Cluster Status variables
---------------------------------------
Galera Status variables can be viewed with the [SHOW STATUS](../show-status/index) statement.
```
SHOW STATUS LIKE 'wsrep%';
```
See also the [Full list of MariaDB options, system and status variables](../full-list-of-mariadb-options-system-and-status-variables/index).
List of Galera Cluster Status variables
---------------------------------------
MariaDB Galera Cluster has the following status variables:
#### `wsrep_applier_thread_count`
* **Description:** Stores the current number of applier threads, to make clear how many slave threads of this type there are.
* **Introduced:** [MariaDB 10.2.26](https://mariadb.com/kb/en/mariadb-10226-release-notes/), [MariaDB 10.3.17](https://mariadb.com/kb/en/mariadb-10317-release-notes/), [MariaDB 10.4.7](https://mariadb.com/kb/en/mariadb-1047-release-notes/)
---
#### `wsrep_apply_oooe`
* **Description:** How often writesets have been applied out of order, an indicator of parallelization efficiency.
---
#### `wsrep_apply_oool`
* **Description:** How often writesets with a higher sequence number were applied before ones with a lower sequence number, implying slow writesets.
---
#### `wsrep_apply_window`
* **Description:** Average distance between highest and lowest concurrently applied seqno.
---
#### `wsrep_cert_deps_distance`
* **Description:** Average distance between the highest and the lowest sequence numbers that can possibly be applied in parallel, or the potential degree of parallelization.
---
#### `wsrep_cert_index_size`
* **Description:** The number of entries in the certification index.
---
#### `wsrep_cert_interval`
* **Description:** Average number of transactions received while a transaction replicates.
---
#### `wsrep_cluster_capabilities`
* **Description:**
---
#### `wsrep_cluster_conf_id`
* **Description:** Total number of cluster membership changes that have taken place.
---
#### `wsrep_cluster_size`
* **Description:** Number of nodes currently in the cluster.
---
#### `wsrep_cluster_state_uuid`
* **Description:** UUID state of the cluster. If it matches the value in [wsrep\_local\_state\_uuid](#wsrep_local_state_uuid), the local and cluster nodes are in sync.
---
#### `wsrep_cluster_status`
* **Description:** Cluster component status. Possible values are `PRIMARY` (primary group configuration, quorum present), `NON_PRIMARY` (non-primary group configuration, quorum lost) or `DISCONNECTED` (not connected to group, retrying).
---
#### `wsrep_cluster_weight`
* **Description:** The total weight of the current members in the cluster. The value is counted as a sum of the pc.weight values of the nodes in the current Primary Component.
---
#### `wsrep_commit_oooe`
* **Description:** How often a transaction was committed out of order.
---
#### `wsrep_commit_oool`
* **Description:** Currently has no meaning.
---
#### `wsrep_commit_window`
* **Description:** Average distance between highest and lowest concurrently committed seqno.
---
#### `wsrep_connected`
* **Description:** Whether or not MariaDB is connected to the wsrep provider. Possible values are `ON` or `OFF`.
---
#### `wsrep_desync_count`
* **Description:** Returns the number of operations in progress that require the node to temporarily desync from the cluster.
---
#### `wsrep_evs_delayed`
* **Description:** Provides a comma separated list of all the nodes this node has registered on its delayed list.
---
#### `wsrep_evs_evict_list`
* **Description:** Lists the UUID’s of all nodes evicted from the cluster. Evicted nodes cannot rejoin the cluster until you restart their mysqld processes.
---
#### `wsrep_evs_repl_latency`
* **Description:** This status variable provides figures for the replication latency on group communication. It measures latency (in seconds) from the time point when a message is sent out to the time point when a message is received. As replication is a group operation, this essentially gives you the slowest ACK and longest RTT in the cluster. The format is min/avg/max/stddev.
---
#### `wsrep_evs_state`
* **Description:** Shows the internal state of the EVS Protocol.
---
#### `wsrep_flow_control_paused`
* **Description:** The fraction of time since the last FLUSH STATUS command that replication was paused due to flow control.
---
#### `wsrep_flow_control_paused_ns`
* **Description:** The total time spent in a paused state measured in nanoseconds.
---
#### `wsrep_flow_control_recv`
* **Description:** Number of FC\_PAUSE events received as well as sent since the most recent status query.
---
#### `wsrep_flow_control_sent`
* **Description:** Number of FC\_PAUSE events sent since the most recent status query.
---
#### `wsrep_gcomm_uuid`
* **Description:** The UUID assigned to the node.
---
#### `wsrep_incoming_addresses`
* **Description:** Comma-separated list of incoming server addresses in the cluster component.
---
#### `wsrep_last_committed`
* **Description:** Sequence number of the most recently committed transaction.
---
#### `wsrep_local_bf_aborts`
* **Description:** Total number of local transactions aborted by slave transactions while being executed
---
#### `wsrep_local_cached_downto`
* **Description:** The lowest sequence number, or seqno, in the write-set cache (GCache).
---
#### `wsrep_local_cert_failures`
* **Description:** Total number of local transactions that failed the certification test.
---
#### `wsrep_local_commits`
* **Description:** Total number of local transactions committed on the node.
---
#### `wsrep_local_index`
* **Description:** The node's index in the cluster. The index is zero-based.
---
#### `wsrep_local_recv_queue`
* **Description:** Current length of the receive queue, which is the number of writesets waiting to be applied.
---
#### `wsrep_local_recv_queue_avg`
* **Description:** Average length of the receive queue since the most recent status query. If this value is noticeably larger than zero, the node is likely to be overloaded, and cannot apply the writesets as quickly as they arrive, resulting in replication throttling.
---
#### `wsrep_local_recv_queue_max`
* **Description:** The maximum length of the recv queue since the last FLUSH STATUS command.
---
#### `wsrep_local_recv_queue_min`
* **Description:** The minimum length of the recv queue since the last FLUSH STATUS command.
---
#### `wsrep_local_replays`
* **Description:** Total number of transaction replays due to asymmetric lock granularity.
---
#### `wsrep_local_send_queue`
* **Description:** Current length of the send queue, which is the number of writesets waiting to be sent.
---
#### `wsrep_local_send_queue_avg`
* **Description:** Average length of the send queue since the most recent status query. If this value is noticeably larger than zero, there is most likely network throughput or replication throttling issues.
---
#### `wsrep_local_send_queue_max`
* **Description:** The maximum length of the send queue since the last FLUSH STATUS command.
---
#### `wsrep_local_send_queue_min`
* **Description:** The minimum length of the send queue since the last FLUSH STATUS command.
---
#### `wsrep_local_state`
* **Description:** Internal Galera Cluster FSM state number.
---
#### `wsrep_local_state_comment`
* **Description:** Human-readable explanation of the state.
---
#### `wsrep_local_state_uuid`
* **Description:** The node's UUID state. If it matches the value in [wsrep\_cluster\_state\_uuid](#wsrep_cluster_state_uuid), the local and cluster nodes are in sync.
---
#### `wsrep_open_connections`
* **Description:** The number of open connection objects inside the wsrep provider.
---
#### `wsrep_open_transactions`
* **Description:** The number of locally running transactions which have been registered inside the wsrep provider. This means transactions which have made operations which have caused write set population to happen. Transactions which are read only are not counted.
---
#### `wsrep_protocol_version`
* **Description:** The wsrep protocol version being used.
---
#### `wsrep_provider_name`
* **Description:** The name of the provider. The default is "Galera".
---
#### `wsrep_provider_vendor`
* **Description:** The vendor string.
---
#### `wsrep_provider_version`
* **Description:** The version number of the Galera wsrep provider.
---
#### `wsrep_ready`
* **Description:** Whether or not the Galera wsrep provider is ready. Possible values are `ON` or `OFF`.
---
#### `wsrep_received`
* **Description:** Total number of writesets received from other nodes.
---
#### `wsrep_received_bytes`
* **Description:** Total size in bytes of all writesets received from other nodes.
---
#### `wsrep_repl_data_bytes`
* **Description:** Total size of data replicated.
---
#### `wsrep_repl_keys`
* **Description:** Total number of keys replicated.
---
#### `wsrep_repl_keys_bytes`
* **Description:** Total size of keys replicated.
---
#### `wsrep_repl_other_bytes`
* **Description:** Total size of other bits replicated.
---
#### `wsrep_replicated`
* **Description:** Total number of writesets replicated to other nodes.
---
#### `wsrep_replicated_bytes`
* **Description:** Total size in bytes of all writesets replicated to other nodes.
---
#### `wsrep_rollbacker_thread_count`
* **Description:** Stores the current number of rollbacker threads, to make clear how many slave threads of this type there are.
* **Introduced:** [MariaDB 10.2.26](https://mariadb.com/kb/en/mariadb-10226-release-notes/), [MariaDB 10.3.17](https://mariadb.com/kb/en/mariadb-10317-release-notes/), [MariaDB 10.4.7](https://mariadb.com/kb/en/mariadb-1047-release-notes/)
---
#### `wsrep_thread_count`
* **Description:** Total number of wsrep (applier/rollbacker) threads.
* **Introduced:** `MariaDB Galera Cluster 5.5.38`, `MariaDB Galera Cluster 10.0.11`
---
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
CONNECT - External Table Types
==============================
Because so many ODBC and JDBC drivers exist and only the main ones have been heavily tested, these table types cannot be ranked as stable. Use them with care in production applications.
These types can be used to access tables belonging to the current or another database server. Five types are currently provided:
[ODBC](../connect-table-types-odbc-table-type-accessing-tables-from-other-dbms/index): Used to access tables from a database management system providing an ODBC connector. ODBC is a Microsoft standard and is natively available on Windows. On Linux, it can also be used provided a specific application emulating ODBC is installed; currently only unixODBC is supported.
[JDBC](../connect-jdbc-table-type-accessing-tables-from-other-dbms/index): Used to access tables from a database management system providing a JDBC connector. JDBC is an Oracle standard implemented in Java and principally meant to be used by Java applications. Using it directly from a C or C++ application seems to be almost impossible due to an Oracle bug that is still not fixed. However, this can be achieved using a Java wrapper class as an interface between C++ and JDBC. On the other hand, JDBC is available on all platforms and operating systems.
[Mongo](../connect-mongo-table-type-accessing-collections-from-mongodb/index): To access MongoDB collections as tables via their MongoDB C Driver. Because this requires both MongoDB and the C Driver to be installed and operational, this table type is not currently available in binary distributions but only when compiling MariaDB from source.
[MySQL](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index): This type is the preferred way to access tables belonging to another MySQL or MariaDB server. It uses the MySQL API to access the external table. Even though this can be obtained using the FEDERATED(X) plugin, this specific type is used internally by CONNECT because it also makes it possible to access tables belonging to the current server.
[PROXY](../connect-table-types-proxy-table-type/index): Internally used by some table types to access other tables from one table.
### External Table Specification
The four main external table types – odbc, jdbc, mongo and mysql – are specified giving the following information:
1. The data source. This is specified in the connection option.
2. The remote table or view to access. This can be specified within the connection string or using specific CONNECT options.
3. The column definitions. This can also be left to CONNECT to find using the MariaDB discovery feature.
4. The optional Quoted option. It can be set to 1 to quote the identifiers in the query sent to the remote server. This is required if column or table names can contain blanks.
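As a sketch, these elements might come together as follows for a MYSQL-type table. The connection string, remote table, and column definitions here are placeholders, not taken from a real server:

```
-- Hypothetical example: accessing a remote MariaDB/MySQL table via CONNECT.
-- Host, credentials, database and table names are illustrative only.
CREATE TABLE remote_emp (
  id INT NOT NULL,
  name VARCHAR(64) NOT NULL
)
ENGINE=CONNECT TABLE_TYPE=MYSQL
CONNECTION='mysql://user:pass@host1/db1/emp'
OPTION_LIST='Quoted=1';
```

The column list could also be omitted, letting CONNECT discover it from the remote table.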
The way this works is by establishing a connection to the external data source and by sending it an SQL statement (or its equivalent using API functions for MONGO) enabling it to execute the original query. To enhance performance, it is necessary to have the remote data source do the maximum processing. This is needed in particular to reduce the amount of data returned by the data source.
This is why, for SELECT queries, CONNECT uses the [cond\_push](../using-connect-condition-pushdown/index) MariaDB feature to retrieve the maximum of the where clause of the original query that can be added to the query sent to the data source. This is automatic and does not require anything to be done by the user.
However, more can be done. In addition to accessing a remote table, CONNECT offers the possibility to specify what the remote server must do. This is done by specifying it as a view in the srcdef option. For example:
```
CREATE TABLE custnum ENGINE=CONNECT TABLE_TYPE=XXX
CONNECTION='connection string'
SRCDEF='select pays as country, count(*) as customers from custnum group by pays';
```
Doing so, the group by clause will be done by the remote server considerably reducing the amount of data sent back on the connection.
This can be taken even further by adding to the srcdef the “compatible” part of the original query's where clause, as is done for table-based tables. Note that for MariaDB, this table has two columns, country and customers. Suppose the original query is:
```
SELECT * FROM custnum WHERE (country = 'UK' OR country = 'USA') AND customers > 5;
```
How can we make the where clause be added to the sent srcdef? There are many problems:
1. Where to include the additional information.
2. What about the use of aliases.
3. How to know what will be a where clause or a having clause.
The first problem is solved by preparing the srcdef view to receive clauses. The above example srcdef becomes:
```
SRCDEF='select pays as country, count(*) as customers from custnum where %s group by pays having %s';
```
The *%s* in the srcdef are place holders for any compatible parts of the original query's where clause. If the select query does not specify a where clause, or gives an unacceptable where clause, the place holders are filled with dummy clauses (1=1).
The other problems must be solved by adding to the create table a list of columns that must be translated because they are aliases, and/or aliases on aggregate functions that must become a having clause. For example, in this case:
```
CREATE TABLE custnum ENGINE=CONNECT TABLE_TYPE=XXX
CONNECTION='connection string'
SRCDEF='select pays as country, count(*) as customers from custnum where %s group by pays having %s'
OPTION_LIST='Alias=customers=*count(*);country=pays';
```
This is specified by the alias option, to be used in the option list. It is made of a semi-colon separated list of items containing:
1. The local column name (alias in the remote server)
2. An equal sign.
3. An optional ‘\*’ indicating that this column corresponds to an aggregate function.
4. The remote column name.
With this information, CONNECT will be able to construct the query sent to the remote data source:
```
select pays as country, count(*) as customers from custnum where (pays = 'UK' OR pays = 'USA') group by country having count(*) > 5
```
Note: Some data sources, including MySQL and MariaDB, accept aliases in the having clause. In that case, the alias option could have been specified as:
```
OPTION_LIST='Alias=customers=*;country=pays';
```
Another option, phpos, enables specifying which place holders are present and in what order. It can be set to “W”, “WH”, “H”, or “HW”. It is rarely used because by default CONNECT can deduce it from the srcdef content. The only cases where it is needed are when the srcdef contains only a having place holder, or when the having place holder occurs before the where place holder, which can happen in queries containing joins. CONNECT cannot handle more than one place holder of each type.
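Based on the description above, a srcdef containing only a having place holder could be declared like this. This is only a sketch: the connection details are placeholders, and the explicit phpos setting merely makes the default deduction visible:

```
-- Hypothetical: the srcdef has only a having place holder,
-- so phpos='H' states this explicitly.
CREATE TABLE custgrp ENGINE=CONNECT TABLE_TYPE=XXX
CONNECTION='connection string'
SRCDEF='select pays as country, count(*) as customers from custnum group by pays having %s'
OPTION_LIST='phpos=H';
```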
SRCDEF is not available for MONGO tables, but other ways of achieving this exist and are described in the MONGO table type chapter.
Cursors
========
A cursor is a structure that allows you to go over records sequentially, and perform processing based on the result.
| Title | Description |
| --- | --- |
| [Cursor Overview](../cursor-overview/index) | Structure for traversing and processing results sequentially. |
| [DECLARE CURSOR](../declare-cursor/index) | Declares a cursor which can be used inside stored programs. |
| [OPEN](../open/index) | Open a previously declared cursor. |
| [FETCH](../fetch/index) | Fetch a row from a cursor. |
| [CLOSE](../close/index) | Close a previously opened cursor. |
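The statements above can be sketched together in a stored procedure; the `employees` table and `salary` column here are hypothetical:

```
DELIMITER //
CREATE PROCEDURE sum_salaries(OUT total INT)
BEGIN
  DECLARE done BOOLEAN DEFAULT FALSE;
  DECLARE sal INT;
  -- Variables first, then cursors, then handlers
  DECLARE cur CURSOR FOR SELECT salary FROM employees;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
  SET total = 0;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO sal;
    IF done THEN
      LEAVE read_loop;
    END IF;
    SET total = total + sal;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;
```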
Operator Precedence
===================
The precedence is the order in which the SQL operators are evaluated.
The following list shows the SQL operator precedence. Operators that appear first in the list have a higher precedence. Operators which are listed together have the same precedence.
* `[INTERVAL](../date-and-time-units/index)`
* `[BINARY](../binary-operator/index)`, `[COLLATE](../setting-character-sets-and-collations/index#literals)`
* `[!](../not/index)`
* `[-](../subtraction-operator-/index)` (unary minus), `[~](../bitwise-not/index)` (unary bit inversion)
* `||` (string concatenation)
* `[^](../bitwise-xor/index)`
* `[\*](../multiplication-operator/index)`, `[/](../division-operator/index)`, `[DIV](../div/index)`, `[%](../modulo-operator/index)`, `[MOD](../mod/index)`
* `[-](../subtraction-operator-/index)`, `[+](../addition-operator/index)`
* `[<<](../shift-left/index)`, `[>>](../shift-right/index)`
* `[&](../bitwise-and/index)`
* `[|](../bitwise-or/index)`
* `[=](../equal/index)` (comparison), `[<=>](../null-safe-equal/index)`, `[>=](../greater-than-or-equal/index)`, `[>](../greater-than/index)`, `[<=](../less-than-or-equal/index)`, `[<](../less-than/index)`, `[<>](../not-equal/index)`, `[!=](../not-equal/index)`, `[IS](../is/index)`, `[LIKE](../like/index)`, `[REGEXP](../regexp/index)`, `[IN](../in/index)`
* `[BETWEEN](../between-and/index)`, [`CASE`, `WHEN`, `THEN`, `ELSE`, `END`](../case-operator/index)
* `[NOT](../not/index)`
* `[&&](../and/index)`, `[AND](../and/index)`
* `[XOR](../xor/index)`
* `[||](../or/index)` (logical or), `[OR](../or/index)`
* `[=](../assignment-operators-assignment-operator/index)` (assignment), `[:=](../assignment-operator/index)`
Functions precedence is always higher than operators precedence.
On this page, `CASE` refers to the [CASE operator](../case-operator/index), not to the [CASE statement](../case-statement/index).
If the `HIGH_NOT_PRECEDENCE` [SQL\_MODE](../sql-mode/index) is set, `NOT` has the same precedence as `!`.
The `||` operator's precedence, as well as its meaning, depends on the `PIPES_AS_CONCAT` [SQL\_MODE](../sql-mode/index) flag: if it is on, `||` can be used to concatenate strings (like the [CONCAT()](../concat/index) function) and has a higher precedence.
The `=` operator's precedence depends on the context - it is higher when `=` is used as a comparison operator.
[Parentheses](../parenthesis/index) can be used to modify operator precedence in an expression.
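For example, relying on the precedence list above versus forcing an order with parentheses:

```
SELECT 1 + 2 * 3;    -- * binds tighter than +, so this is 1 + (2 * 3) = 7
SELECT (1 + 2) * 3;  -- parentheses force the addition first: 9
SELECT 2 | 1 = 0;    -- | binds tighter than the comparison =, so this is (2 | 1) = 0, i.e. 0
```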
Short-circuit evaluation
------------------------
The `AND`, `OR`, `&&` and `||` operators support short-circuit evaluation. This means that, in some cases, the expression on the right of those operators is not evaluated, because its result cannot affect the result. In the following cases, short-circuit evaluation is used and `x()` is not evaluated:
* `FALSE AND x()`
* `FALSE && x()`
* `TRUE OR x()`
* `TRUE || x()`
* `NULL BETWEEN x() AND x()`
Note however that the short-circuit evaluation does *not* apply to `NULL AND x()`. Also, `BETWEEN`'s right operands are not evaluated if the left operand is `NULL`, but in all other cases all the operands are evaluated.
This is a speed optimization. Also, since functions can have side effects, this behavior can be used to choose whether or not to execute them, using a concise syntax:
```
SELECT some_function() OR log_error();
```
Performance Schema replication\_applier\_configuration Table
============================================================
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**The `replication_applier_configuration` table, along with many other new [Performance Schema tables](../list-of-performance-schema-tables/index), was added in [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/).
The [Performance Schema](../performance-schema/index) replication\_applier\_configuration table contains configuration settings affecting replica transactions.
It contains the following fields.
| Field | Type | Null | Description |
| --- | --- | --- | --- |
| CHANNEL\_NAME | char(64) | NO | Replication channel name. |
| DESIRED\_DELAY | int(11) | NO | Target number of seconds the replica should be delayed behind the master. |
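The table can be queried like any other Performance Schema table:

```
-- Requires MariaDB 10.5.2 or later
SELECT CHANNEL_NAME, DESIRED_DELAY
FROM performance_schema.replication_applier_configuration;
```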
OQGRAPH Overview
================
The Open Query GRAPH computation engine, or OQGRAPH as the engine itself is called, allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).
OQGRAPH is unlike other storage engines, with an architecture entirely different from that of a regular storage engine such as Aria, MyISAM or InnoDB.
It is intended to be used for retrieving hierarchical information, such as those used for graphs, routes or social relationships, in plain SQL.
Installing
----------
See [Installing OQGRAPH](../installing-oqgraph/index). Note that if the [query cache](../query-cache/index) is enabled, OQGRAPH will not use it.
Creating a Table
----------------
The following documentation is based upon [MariaDB 10.0.7](https://mariadb.com/kb/en/mariadb-1007-release-notes/) and OQGRAPH v3.
Example with origin and destination nodes only
----------------------------------------------
To create an OQGRAPH v3 table, a backing table must first be created. This backing table will store the actual data, and will be used for all INSERTs, UPDATEs and so on. It must be a regular table, not a view. Here's a simple example to start with:
```
CREATE TABLE oq_backing (
origid INT UNSIGNED NOT NULL,
destid INT UNSIGNED NOT NULL,
PRIMARY KEY (origid, destid),
KEY (destid)
);
```
Some data can be inserted into the backing table to test with later:
```
INSERT INTO oq_backing(origid, destid)
VALUES (1,2), (2,3), (3,4), (4,5), (2,6), (5,6);
```
Now the read-only OQGRAPH table is created. The CREATE statement must match the format below - any difference will result in an error.
```
CREATE TABLE oq_graph (
latch VARCHAR(32) NULL,
origid BIGINT UNSIGNED NULL,
destid BIGINT UNSIGNED NULL,
weight DOUBLE NULL,
seq BIGINT UNSIGNED NULL,
linkid BIGINT UNSIGNED NULL,
KEY (latch, origid, destid) USING HASH,
KEY (latch, destid, origid) USING HASH
)
ENGINE=OQGRAPH
data_table='oq_backing' origid='origid' destid='destid';
```
An older format (prior to [MariaDB 10.0.7](https://mariadb.com/kb/en/mariadb-1007-release-notes/)) has the latch field as a SMALLINT rather than a VARCHAR. The format is still valid, but gives an error by default:
```
CREATE TABLE oq_old (
latch SMALLINT UNSIGNED NULL,
origid BIGINT UNSIGNED NULL,
destid BIGINT UNSIGNED NULL,
weight DOUBLE NULL,
seq BIGINT UNSIGNED NULL,
linkid BIGINT UNSIGNED NULL,
KEY (latch, origid, destid) USING HASH,
KEY (latch, destid, origid) USING HASH
)
ENGINE=OQGRAPH
data_table='oq_backing' origid='origid' destid='destid';
ERROR 1005 (HY000): Can't create table `test`.`oq_old` (errno: 140 "Wrong create options")
```
The old, deprecated format can still be used if the value of the [oqgraph\_allow\_create\_integer\_latch](../oqgraph-system-and-status-variables/index#oqgraph_allow_create_integer_latch) system variable is changed from its default, `FALSE`, to `TRUE`.
```
SET GLOBAL oqgraph_allow_create_integer_latch=1;
CREATE TABLE oq_old (
latch SMALLINT UNSIGNED NULL,
origid BIGINT UNSIGNED NULL,
destid BIGINT UNSIGNED NULL,
weight DOUBLE NULL,
seq BIGINT UNSIGNED NULL,
linkid BIGINT UNSIGNED NULL,
KEY (latch, origid, destid) USING HASH,
KEY (latch, destid, origid) USING HASH
)
ENGINE=OQGRAPH
data_table='oq_backing' origid='origid' destid='destid';
Query OK, 0 rows affected, 1 warning (0.19 sec)
SHOW WARNINGS;
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------+
| Warning | 1287 | 'latch SMALLINT UNSIGNED NULL' is deprecated and will be removed in a future release. Please use 'latch VARCHAR(32) NULL' instead |
+---------+------+-----------------------------------------------------------------------------------------------------------------------------------+
```
Data is only inserted into the backing table, not the OQGRAPH table.
Having created the `oq_graph` table linked to a backing table, it is now possible to query the `oq_graph` table directly. The `weight` field, since it was not specified in this example, defaults to `1`.
```
SELECT * FROM oq_graph;
+-------+--------+--------+--------+------+--------+
| latch | origid | destid | weight | seq | linkid |
+-------+--------+--------+--------+------+--------+
| NULL | 1 | 2 | 1 | NULL | NULL |
| NULL | 2 | 3 | 1 | NULL | NULL |
| NULL | 2 | 6 | 1 | NULL | NULL |
| NULL | 3 | 4 | 1 | NULL | NULL |
| NULL | 4 | 5 | 1 | NULL | NULL |
| NULL | 5 | 6 | 1 | NULL | NULL |
+-------+--------+--------+--------+------+--------+
```
The data here represents one-directional starting and ending nodes. So node 2 has paths to node 3 and node 6, while node 6 has no paths to any other node.
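Graph computations are requested by constraining the `latch` column in the WHERE clause. For example, a breadth-first search for a shortest path between two nodes looks like this (see [OQGRAPH Examples](../oqgraph-examples/index) for the full set of latch commands):

```
-- Find a shortest path from node 1 to node 6 using breadth-first search.
-- The result contains one row per node along the path, ordered by seq.
SELECT * FROM oq_graph
WHERE latch = 'breadth_first' AND origid = 1 AND destid = 6;
```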
Manipulating Weight
-------------------
There are three fields which can be manipulated: `origid`, `destid` (the example above uses these two), as well as `weight`. To create a backing table with a `weight` field as well, the following syntax can be used:
```
CREATE TABLE oq2_backing (
origid INT UNSIGNED NOT NULL,
destid INT UNSIGNED NOT NULL,
weight DOUBLE NOT NULL,
PRIMARY KEY (origid, destid),
KEY (destid)
);
```
```
INSERT INTO oq2_backing(origid, destid, weight)
VALUES (1,2,1), (2,3,1), (3,4,3), (4,5,1), (2,6,10), (5,6,2);
```
```
CREATE TABLE oq2_graph (
latch VARCHAR(32) NULL,
origid BIGINT UNSIGNED NULL,
destid BIGINT UNSIGNED NULL,
weight DOUBLE NULL,
seq BIGINT UNSIGNED NULL,
linkid BIGINT UNSIGNED NULL,
KEY (latch, origid, destid) USING HASH,
KEY (latch, destid, origid) USING HASH
)
ENGINE=OQGRAPH
data_table='oq2_backing' origid='origid' destid='destid' weight='weight';
```
```
SELECT * FROM oq2_graph;
+-------+--------+--------+--------+------+--------+
| latch | origid | destid | weight | seq | linkid |
+-------+--------+--------+--------+------+--------+
| NULL | 1 | 2 | 1 | NULL | NULL |
| NULL | 2 | 3 | 1 | NULL | NULL |
| NULL | 2 | 6 | 10 | NULL | NULL |
| NULL | 3 | 4 | 3 | NULL | NULL |
| NULL | 4 | 5 | 1 | NULL | NULL |
| NULL | 5 | 6 | 2 | NULL | NULL |
+-------+--------+--------+--------+------+--------+
```
See [OQGRAPH Examples](../oqgraph-examples/index) for OQGRAPH usage examples.
SECOND
======
Syntax
------
```
SECOND(time)
```
Description
-----------
Returns the second for a given `time` (which can include [microseconds](../microseconds-in-mariadb/index)), in the range 0 to 59, or `NULL` if not given a valid time value.
Examples
--------
```
SELECT SECOND('10:05:03');
+--------------------+
| SECOND('10:05:03') |
+--------------------+
| 3 |
+--------------------+
SELECT SECOND('10:05:01.999999');
+---------------------------+
| SECOND('10:05:01.999999') |
+---------------------------+
| 1 |
+---------------------------+
```
Performance Schema events\_statements\_summary\_global\_by\_event\_name Table
=============================================================================
The [Performance Schema](../performance-schema/index) `events_statements_summary_global_by_event_name` table contains statement events summarized by event name. It contains the following columns:
| Column | Description |
| --- | --- |
| `EVENT_NAME` | Event name. |
| `COUNT_STAR` | Number of summarized events. |
| `SUM_TIMER_WAIT` | Total wait time of the summarized events that are timed. |
| `MIN_TIMER_WAIT` | Minimum wait time of the summarized events that are timed. |
| `AVG_TIMER_WAIT` | Average wait time of the summarized events that are timed. |
| `MAX_TIMER_WAIT` | Maximum wait time of the summarized events that are timed. |
| `SUM_LOCK_TIME` | Sum of the `LOCK_TIME` column in the `events_statements_current` table. |
| `SUM_ERRORS` | Sum of the `ERRORS` column in the `events_statements_current` table. |
| `SUM_WARNINGS` | Sum of the `WARNINGS` column in the `events_statements_current` table. |
| `SUM_ROWS_AFFECTED` | Sum of the `ROWS_AFFECTED` column in the `events_statements_current` table. |
| `SUM_ROWS_SENT` | Sum of the `ROWS_SENT` column in the `events_statements_current` table. |
| `SUM_ROWS_EXAMINED` | Sum of the `ROWS_EXAMINED` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_DISK_TABLES` | Sum of the `CREATED_TMP_DISK_TABLES` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_TABLES` | Sum of the `CREATED_TMP_TABLES` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_JOIN` | Sum of the `SELECT_FULL_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_RANGE_JOIN` | Sum of the `SELECT_FULL_RANGE_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE` | Sum of the `SELECT_RANGE` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE_CHECK` | Sum of the `SELECT_RANGE_CHECK` column in the `events_statements_current` table. |
| `SUM_SELECT_SCAN` | Sum of the `SELECT_SCAN` column in the `events_statements_current` table. |
| `SUM_SORT_MERGE_PASSES` | Sum of the `SORT_MERGE_PASSES` column in the `events_statements_current` table. |
| `SUM_SORT_RANGE` | Sum of the `SORT_RANGE` column in the `events_statements_current` table. |
| `SUM_SORT_ROWS` | Sum of the `SORT_ROWS` column in the `events_statements_current` table. |
| `SUM_SORT_SCAN` | Sum of the `SORT_SCAN` column in the `events_statements_current` table. |
| `SUM_NO_INDEX_USED` | Sum of the `NO_INDEX_USED` column in the `events_statements_current` table. |
| `SUM_NO_GOOD_INDEX_USED` | Sum of the `NO_GOOD_INDEX_USED` column in the `events_statements_current` table. |
The `*_TIMER_WAIT` columns only calculate results for timed events, as non-timed events have a `NULL` wait time.
Example
-------
```
SELECT * FROM events_statements_summary_global_by_event_name\G
...
*************************** 173. row ***************************
EVENT_NAME: statement/com/Error
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
SUM_LOCK_TIME: 0
SUM_ERRORS: 0
SUM_WARNINGS: 0
SUM_ROWS_AFFECTED: 0
SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
SUM_CREATED_TMP_TABLES: 0
SUM_SELECT_FULL_JOIN: 0
SUM_SELECT_FULL_RANGE_JOIN: 0
SUM_SELECT_RANGE: 0
SUM_SELECT_RANGE_CHECK: 0
SUM_SELECT_SCAN: 0
SUM_SORT_MERGE_PASSES: 0
SUM_SORT_RANGE: 0
SUM_SORT_ROWS: 0
SUM_SORT_SCAN: 0
SUM_NO_INDEX_USED: 0
SUM_NO_GOOD_INDEX_USED: 0
*************************** 174. row ***************************
EVENT_NAME: statement/com/
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
SUM_LOCK_TIME: 0
SUM_ERRORS: 0
SUM_WARNINGS: 0
SUM_ROWS_AFFECTED: 0
SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
SUM_CREATED_TMP_TABLES: 0
SUM_SELECT_FULL_JOIN: 0
SUM_SELECT_FULL_RANGE_JOIN: 0
SUM_SELECT_RANGE: 0
SUM_SELECT_RANGE_CHECK: 0
SUM_SELECT_SCAN: 0
SUM_SORT_MERGE_PASSES: 0
SUM_SORT_RANGE: 0
SUM_SORT_ROWS: 0
SUM_SORT_SCAN: 0
SUM_NO_INDEX_USED: 0
SUM_NO_GOOD_INDEX_USED: 0
```
Performance Schema events\_statements\_summary\_by\_host\_by\_event\_name Table
===============================================================================
The [Performance Schema](../performance-schema/index) events\_statements\_summary\_by\_host\_by\_event\_name table contains statement events summarized by host and event name. It contains the following columns:
| Column | Description |
| --- | --- |
| `HOST` | Host. Used together with `EVENT_NAME` for grouping events. |
| `EVENT_NAME` | Event name. Used together with `HOST` for grouping events. |
| `COUNT_STAR` | Number of summarized events. |
| `SUM_TIMER_WAIT` | Total wait time of the summarized events that are timed. |
| `MIN_TIMER_WAIT` | Minimum wait time of the summarized events that are timed. |
| `AVG_TIMER_WAIT` | Average wait time of the summarized events that are timed. |
| `MAX_TIMER_WAIT` | Maximum wait time of the summarized events that are timed. |
| `SUM_LOCK_TIME` | Sum of the `LOCK_TIME` column in the `events_statements_current` table. |
| `SUM_ERRORS` | Sum of the `ERRORS` column in the `events_statements_current` table. |
| `SUM_WARNINGS` | Sum of the `WARNINGS` column in the `events_statements_current` table. |
| `SUM_ROWS_AFFECTED` | Sum of the `ROWS_AFFECTED` column in the `events_statements_current` table. |
| `SUM_ROWS_SENT` | Sum of the `ROWS_SENT` column in the `events_statements_current` table. |
| `SUM_ROWS_EXAMINED` | Sum of the `ROWS_EXAMINED` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_DISK_TABLES` | Sum of the `CREATED_TMP_DISK_TABLES` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_TABLES` | Sum of the `CREATED_TMP_TABLES` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_JOIN` | Sum of the `SELECT_FULL_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_RANGE_JOIN` | Sum of the `SELECT_FULL_RANGE_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE` | Sum of the `SELECT_RANGE` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE_CHECK` | Sum of the `SELECT_RANGE_CHECK` column in the `events_statements_current` table. |
| `SUM_SELECT_SCAN` | Sum of the `SELECT_SCAN` column in the `events_statements_current` table. |
| `SUM_SORT_MERGE_PASSES` | Sum of the `SORT_MERGE_PASSES` column in the `events_statements_current` table. |
| `SUM_SORT_RANGE` | Sum of the `SORT_RANGE` column in the `events_statements_current` table. |
| `SUM_SORT_ROWS` | Sum of the `SORT_ROWS` column in the `events_statements_current` table. |
| `SUM_SORT_SCAN` | Sum of the `SORT_SCAN` column in the `events_statements_current` table. |
| `SUM_NO_INDEX_USED` | Sum of the `NO_INDEX_USED` column in the `events_statements_current` table. |
| `SUM_NO_GOOD_INDEX_USED` | Sum of the `NO_GOOD_INDEX_USED` column in the `events_statements_current` table. |
The `*_TIMER_WAIT` columns only calculate results for timed events, as non-timed events have a `NULL` wait time.
Example
-------
```
SELECT * FROM events_statements_summary_by_host_by_event_name\G
...
*************************** 347. row ***************************
HOST: NULL
EVENT_NAME: statement/com/Error
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
SUM_LOCK_TIME: 0
SUM_ERRORS: 0
SUM_WARNINGS: 0
SUM_ROWS_AFFECTED: 0
SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
SUM_CREATED_TMP_TABLES: 0
SUM_SELECT_FULL_JOIN: 0
SUM_SELECT_FULL_RANGE_JOIN: 0
SUM_SELECT_RANGE: 0
SUM_SELECT_RANGE_CHECK: 0
SUM_SELECT_SCAN: 0
SUM_SORT_MERGE_PASSES: 0
SUM_SORT_RANGE: 0
SUM_SORT_ROWS: 0
SUM_SORT_SCAN: 0
SUM_NO_INDEX_USED: 0
SUM_NO_GOOD_INDEX_USED: 0
*************************** 348. row ***************************
HOST: NULL
EVENT_NAME: statement/com/
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
SUM_LOCK_TIME: 0
SUM_ERRORS: 0
SUM_WARNINGS: 0
SUM_ROWS_AFFECTED: 0
SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
SUM_CREATED_TMP_TABLES: 0
SUM_SELECT_FULL_JOIN: 0
SUM_SELECT_FULL_RANGE_JOIN: 0
SUM_SELECT_RANGE: 0
SUM_SELECT_RANGE_CHECK: 0
SUM_SELECT_SCAN: 0
SUM_SORT_MERGE_PASSES: 0
SUM_SORT_RANGE: 0
SUM_SORT_ROWS: 0
SUM_SORT_SCAN: 0
SUM_NO_INDEX_USED: 0
SUM_NO_GOOD_INDEX_USED: 0
```
GIS Resources
=============
Here are a few resources for those interested in GIS in MariaDB.
* [OGC Simple Feature Access](http://www.opengeospatial.org/standards/sfs) - the Open Geospatial Consortium's OpenGIS Simple Features Specifications For SQL.
* [Geo/Spatial Search with MySQL](http://www.scribd.com/doc/2569355/Geo-Distance-Search-with-MySQL) - a presentation by Alexander Rubin, from the MySQL Conference in 2006.
There are currently no differences between GIS in stable versions of MariaDB and GIS in MySQL. There are, however, some extensions and enhancements being worked on. See "[MariaDB Plans - GIS](../mariadb-plans-gis/index)" for more information.
mysql.ndb\_binlog\_index Table
==============================
The `mysql.ndb_binlog_index` table is not used by MariaDB. It was kept for MySQL compatibility reasons, and is used there for MySQL Cluster. It was removed in [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/).
For MariaDB clustering, see [Galera](../galera/index).
The table contains the following fields:
| Field | Type | Null | Key | Default |
| --- | --- | --- | --- | --- |
| `Position` | `bigint(20) unsigned` | NO | | `NULL` |
| `File` | `varchar(255)` | NO | | `NULL` |
| `epoch` | `bigint(20) unsigned` | NO | PRI | `NULL` |
| `inserts` | `bigint(20) unsigned` | NO | | `NULL` |
| `updates` | `bigint(20) unsigned` | NO | | `NULL` |
| `deletes` | `bigint(20) unsigned` | NO | | `NULL` |
| `schemaops` | `bigint(20) unsigned` | NO | | `NULL` |
NEXT VALUE for sequence\_name
=============================
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**SEQUENCEs were introduced in [MariaDB 10.3](../what-is-mariadb-103/index).
Syntax
------
```
NEXT VALUE FOR sequence
```
or
```
NEXTVAL(sequence_name)
```
or in Oracle mode ([SQL\_MODE=ORACLE](../sql-mode/index))
```
sequence_name.nextval
```
`NEXT VALUE FOR` is ANSI SQL syntax while `NEXTVAL()` is PostgreSQL syntax.
Description
-----------
Generate next value for a `SEQUENCE`.
* You can greatly speed up `NEXT VALUE` by creating the sequence with the `CACHE` option. If not, every `NEXT VALUE` usage will cause changes in the stored `SEQUENCE` table.
* When using `NEXT VALUE` the value will be reserved at once and will not be reused, except if the `SEQUENCE` was created with `CYCLE`. This means that when you are using `SEQUENCE`s you have to expect gaps in the generated sequence numbers.
* If one updates the `SEQUENCE` with [SETVAL()](../setval/index) or [ALTER SEQUENCE ... RESTART](../alter-sequence/index), `NEXT VALUE FOR` will notice this and start from the next requested value.
* [FLUSH TABLES](../flush/index) will close the sequence and the next sequence number generated will be according to what's stored in the `SEQUENCE` object. In effect, this will discard the cached values.
* A server restart (or closing the current connection) also causes a drop of all cached values. The cached sequence numbers are reserved only for the current connection.
* `NEXT VALUE` requires the `INSERT` [privilege](../grant/index).
**MariaDB starting with [10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)*** You can also use `NEXT VALUE FOR sequence` for column `DEFAULT`.
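As a quick sketch of the points above (the sequence name `s1` is invented for this illustration):

```
CREATE SEQUENCE s1 START WITH 100 INCREMENT BY 10 CACHE 1000;

SELECT NEXT VALUE FOR s1;  -- 100
SELECT NEXTVAL(s1);        -- 110

-- From MariaDB 10.3.3, usable as a column DEFAULT:
CREATE TABLE t1 (id BIGINT NOT NULL DEFAULT NEXTVAL(s1), v INT);
```

With `CACHE 1000`, the stored `SEQUENCE` table is only updated once per thousand values; a `FLUSH TABLES` or restart discards any unused cached values, leaving a gap.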
See Also
--------
* [Sequence Overview](../sequence-overview/index)
* [CREATE SEQUENCE](../create-sequence/index)
* [ALTER SEQUENCE](../alter-sequence/index)
* [PREVIOUS VALUE FOR](../previous-value-for-sequence_name/index)
* [SETVAL()](../setval/index). Set next value for the sequence.
* [AUTO\_INCREMENT](../auto_increment/index)
mariadb mysql.spider_table_crd Table mysql.spider\_table\_crd Table
==============================
The `mysql.spider_table_crd` table is installed by the [Spider storage engine](../spider/index).
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
It contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| db\_name | char(64) | NO | PRI | | |
| table\_name | char(199) | NO | PRI | | |
| key\_seq | int(10) unsigned | NO | PRI | 0 | |
| cardinality | bigint(20) | NO | | 0 | |
mariadb DATE DATE
====
Syntax
------
```
DATE
```
Description
-----------
A date. The supported range is '`1000-01-01`' to '`9999-12-31`'. MariaDB displays `DATE` values in '`YYYY-MM-DD`' format, but can be assigned dates in looser formats, including strings or numbers, as long as they make sense. These include a short year, `YY-MM-DD`, no delimiters, `YYMMDD`, or any other acceptable delimiter, for example `YYYY/MM/DD`. For details, see [date and time literals](../date-and-time-literals/index).
'`0000-00-00`' is a permitted special value (zero-date), unless the [NO\_ZERO\_DATE](../sql-mode/index#no_zero_date) [SQL\_MODE](../sql-mode/index) is used. Also, individual components of a date can be set to 0 (for example: '`2015-00-12`'), unless the [NO\_ZERO\_IN\_DATE](../sql-mode/index#no_zero_in_date) [SQL\_MODE](../sql-mode/index) is used. In many cases, the result of an expression involving a zero-date, or a date with zero-parts, is `NULL`. If the [ALLOW\_INVALID\_DATES](../sql-mode/index#allow_invalid_dates) SQL\_MODE is enabled, a day part in the range from 1 to 31 does not produce an error, even for months that have fewer than 31 days.
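A hedged illustration of these modes (exact error text may vary by version):

```
SET SESSION sql_mode = '';
CREATE TABLE d1 (d DATE);
INSERT INTO d1 VALUES ('0000-00-00'), ('2015-00-12');  -- both accepted

SET SESSION sql_mode = 'STRICT_ALL_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE';
INSERT INTO d1 VALUES ('0000-00-00');  -- rejected with an "Incorrect date value" error
```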
### Oracle Mode
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**In [Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#synonyms-for-basic-sql-types), `DATE` with a time portion is a synonym for [DATETIME](../datetime/index). See also [mariadb\_schema](../mariadb_schema/index).
Examples
--------
```
CREATE TABLE t1 (d DATE);
INSERT INTO t1 VALUES ("2010-01-12"), ("2011-2-28"), ('120314'),('13*04*21');
SELECT * FROM t1;
+------------+
| d |
+------------+
| 2010-01-12 |
| 2011-02-28 |
| 2012-03-14 |
| 2013-04-21 |
+------------+
```
See Also
--------
* [mariadb\_schema](../mariadb_schema/index) data type qualifier
mariadb MariaDB ColumnStore Performance Related Configuration Settings MariaDB ColumnStore Performance Related Configuration Settings
==============================================================
Introduction
============
A number of system configuration variables exist to allow fine tuning of the system to suit the physical hardware and query characteristics. In general the default values will work relatively well for many cases.
The configuration parameters are maintained in the /usr/local/mariadb/columnstore/etc/Columnstore.xml file. In a multiple server deployment these should only be edited on the PM1 server, as the file is automatically replicated to the other servers by the system. A system restart is required for a configuration change to take effect.
Convenience utility programs *getConfig* and *setConfig* are available to safely update the Columnstore.xml without needing to be comfortable with editing XML files. The -h argument will display usage information.
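For example, reading and raising *PmMaxMemorySmallSide* might look like the following (the `HashJoin` section name and the install path are assumptions based on a default install; confirm the argument order with the `-h` output on your system):

```
/usr/local/mariadb/columnstore/bin/getConfig HashJoin PmMaxMemorySmallSide
/usr/local/mariadb/columnstore/bin/setConfig HashJoin PmMaxMemorySmallSide 1G
```

A system restart is still needed afterwards for the change to take effect.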
Memory management - *NumBlocksPct* and *TotalUmMemory*
======================================================
The *NumBlocksPct* configuration parameter specifies the percentage of physical memory to utilize for disk block caching. For a Single Server or Combined Multi-Server deployment the default value is 50, to ensure enough physical memory for the UM; for a non-combined multi-server deployment the default value is 70.
The *TotalUmMemory* configuration parameter specifies the percentage of physical memory to utilize for joins, intermediate results and set operations on the UM. This specifies an upper limit for small table results in joins rather than a pre-allocation of memory.
In a single server or combined deployment, the sum of *NumBlocksPct* and *TotalUmMemory* should typically not exceed 75% of physical memory. With very large memory servers this could be raised but the key point is to leave enough memory for other processes including mysqld.
With version 1.2.2 onwards these can be set to static numeric limits instead of percentages by entering a number with 'M' or 'G' at the end to signify MiB or GiB.
Query concurrency - *MaxOutstandingRequests*
============================================
ColumnStore handles concurrent query execution by managing the rate of concurrent batch primitive steps from the UM to the PM. This is configured using the *MaxOutstandingRequests* parameter and has a default value of 20. Each batch primitive step is executed within the context of 1 extent column according to this high level process:
* The UM issues up to *MaxOutstandingRequests* number of batch primitive steps.
* The PM processes the request using many threads and returns its response. These generally take a fraction of a second up to a low number of seconds depending on the amount of Physical I/O and the performance of that storage.
* The UM will issue new requests as prior requests complete maintaining the maximum number of outstanding requests.
This scheme allows for large queries to use all available resources when not otherwise being consumed and for smaller queries to execute with minimal delay. Lower values optimize for higher throughput of smaller queries while a larger value optimizes for response time of that query. The default value should work well under most circumstances however the value should be increased as the number of PM nodes is increased.
The number of queries currently running and the number currently queued can be checked with:
```
select calgetsqlcount();
```
Join tuning - *PmMaxMemorySmallSide*
====================================
ColumnStore maintains statistics for each table and uses them to determine which of the two tables is larger. This is based both on the number of blocks in the table and on an estimate of the predicate cardinality. The first step is to apply any filters to the smaller table and return this data set to the UM. The size of this data set is compared against the configuration parameter *PmMaxMemorySmallSide*, which has a default value of 64 (MB) and can be set as high as 4GB. The default allows for approximately 1M rows on the small table side to be joined against billions (or trillions) on the large table side. If the size of the small data set is less than *PmMaxMemorySmallSide*, the data set is sent to the PM for creation of a distributed hashmap; otherwise it is created on the UM. This setting is therefore central to join tuning, as it determines whether the join operation can be distributed or not. It should be set to support your largest expected small-table join size, up to available memory:
* Although this will increase the size of data sent from the UM to PM to support the join, it means that the join and subsequent aggregates are pushed down, scaled out, and a smaller data set is returned back to the UM.
* In a multiple PM deployment, the sizing should be based on the available physical memory on the PM servers, how much memory to reserve for block caching, and the number of simultaneous join operations expected to run times the average small-table join data size.
Multi table join tuning
=======================
The above logic for a single table join extrapolates out to multi table joins where the small table values are precalculated and performed as one single scan against the large table. This works well for the typical star schema case joining multiple dimension tables with a large fact table. For some join scenarios it may be necessary to sequence joins to create the intermediate datasets for joining, this would happen for instance with a snowflake schema structure. In some extreme cases it may be hard for the optimizer to be able to determine the most optimal join path. In this case a hint is available to force a join ordering. The INFINIDB\_ORDERED hint will force the first table in the from clause to be considered the largest table and override any statistics based decision, for example:
```
select /*! INFINIDB_ORDERED */ r_regionkey
from region r, customer c, nation n
where r.r_regionkey = n.n_regionkey
and n.n_nationkey = c.c_nationkey
```
Note: INFINIDB\_ORDERED is deprecated and no longer works in 1.2 and above. Instead use `set infinidb_ordered_only=ON;`, and from 1.4, `set columnstore_ordered_only=ON;`.
Disk based joins - *AllowDiskBasedJoin*
=======================================
When a join is very large and exceeds the *PmMaxMemorySmallSide* setting it is performed in memory in the UM server. For very large joins this could exceed the available memory in which case this is detected and a query error reported. A number of configuration parameters are available to enable and configure usage of disk overflow should this occur:
* *AllowDiskBasedJoin* – Controls the option to use disk Based joins or not. Valid values are Y (enabled) or N (disabled). By default, this option is disabled.
* *TempFileCompression* – Controls whether the disk join files are compressed or noncompressed. Valid values are Y (use compressed files) or N (use non-compressed files).
* *TempFilePath* – The directory path used for the disk joins. By default, this path is the tmp directory for your installation (i.e., /usr/local/mariadb/columnstore/tmp). Files (named infinidb-join-data\*) in this directory will be created and cleaned on an as needed basis. The entire directory is removed and recreated by ExeMgr at startup. **It is strongly recommended that this directory is stored on a dedicated partition**.
A mariadb global or session variable is available to specify a memory limit at which point the query is switched over to disk based joins:
* infinidb\_um\_mem\_limit - Memory limit in MB per user (i.e. switch to disk based join if this limit is exceeded). By default, this limit is not set (value of 0).
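A minimal sketch of setting the overflow threshold for one session (the value is illustrative; *AllowDiskBasedJoin* itself must still be enabled in Columnstore.xml):

```
-- Switch this session to disk-based joins once 2048 MB of join memory is used
SET SESSION infinidb_um_mem_limit = 2048;
```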
mariadb Silent Column Changes Silent Column Changes
=====================
When a [CREATE TABLE](../create-table/index) or [ALTER TABLE](../alter-table/index) command is issued, MariaDB will silently change a column specification in the following cases:
* [PRIMARY KEY](../getting-started-with-indexes/index#primary-key) columns are always NOT NULL.
* Any trailing spaces from [SET](../set-data-type/index) and [ENUM](../enum/index) values are discarded.
* [TIMESTAMP](../timestamp/index) columns are always NOT NULL, and display sizes are discarded.
* A row-size limit of 65535 bytes applies.
* If [strict SQL mode](../sql-mode/index#strict-mode) is not enabled (it is enabled by default from [MariaDB 10.2](../what-is-mariadb-102/index)), a [VARCHAR](../varchar/index) column longer than 65535 becomes [TEXT](../text/index), and a [VARBINARY](../varbinary/index) column longer than 65535 becomes a [BLOB](../blob/index). If strict mode is enabled, these silent changes are not made, and an error occurs instead.
* If a USING clause specifies an index that's not permitted by the storage engine, the engine will instead use another available index type that can be applied without affecting results.
* If the CHARACTER SET binary attribute is specified, the column is created as the matching binary data type. A TEXT becomes a BLOB, CHAR a BINARY and VARCHAR a VARBINARY. ENUMs and SETs are created as defined.
To ease imports from other RDBMSs, MariaDB will also silently map the following data types:
| Other Vendor Type | MariaDB Type |
| --- | --- |
| BOOL | [TINYINT](../tinyint/index) |
| BOOLEAN | [TINYINT](../tinyint/index) |
| CHARACTER VARYING(M) | [VARCHAR](../varchar/index)(M) |
| FIXED | [DECIMAL](../decimal/index) |
| FLOAT4 | [FLOAT](../float/index) |
| FLOAT8 | [DOUBLE](../double/index) |
| INT1 | [TINYINT](../tinyint/index) |
| INT2 | [SMALLINT](../smallint/index) |
| INT3 | [MEDIUMINT](../mediumint/index) |
| INT4 | [INT](../int/index) |
| INT8 | [BIGINT](../bigint/index) |
| LONG VARBINARY | [MEDIUMBLOB](../mediumblob/index) |
| LONG VARCHAR | [MEDIUMTEXT](../mediumtext/index) |
| LONG | [MEDIUMTEXT](../mediumtext/index) |
| MIDDLEINT | [MEDIUMINT](../mediumint/index) |
| NUMERIC | [DECIMAL](../decimal/index) |
Currently, all MySQL types are supported in MariaDB.
For type mapping between Cassandra and MariaDB, see [Cassandra storage engine](../cassandra-storage-engine/index#datatypes).
Example
-------
Silent changes in action:
```
CREATE TABLE SilenceIsGolden
(
f1 TEXT CHARACTER SET binary,
f2 VARCHAR(15) CHARACTER SET binary,
f3 CHAR CHARACTER SET binary,
f4 ENUM('x','y','z') CHARACTER SET binary,
f5 VARCHAR (65536),
f6 VARBINARY (65536),
f7 INT1
);
Query OK, 0 rows affected, 2 warnings (0.31 sec)
SHOW WARNINGS;
+-------+------+-----------------------------------------------+
| Level | Code | Message |
+-------+------+-----------------------------------------------+
| Note | 1246 | Converting column 'f5' from VARCHAR to TEXT |
| Note | 1246 | Converting column 'f6' from VARBINARY to BLOB |
+-------+------+-----------------------------------------------+
DESCRIBE SilenceIsGolden;
+-------+-------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------------+------+-----+---------+-------+
| f1 | blob | YES | | NULL | |
| f2 | varbinary(15) | YES | | NULL | |
| f3 | binary(1) | YES | | NULL | |
| f4 | enum('x','y','z') | YES | | NULL | |
| f5 | mediumtext | YES | | NULL | |
| f6 | mediumblob | YES | | NULL | |
| f7 | tinyint(4) | YES | | NULL | |
+-------+-------------------+------+-----+---------+-------+
```
mariadb SIN SIN
===
Syntax
------
```
SIN(X)
```
Description
-----------
Returns the sine of X, where X is given in radians.
Examples
--------
```
SELECT SIN(1.5707963267948966);
+-------------------------+
| SIN(1.5707963267948966) |
+-------------------------+
| 1 |
+-------------------------+
SELECT SIN(PI());
+----------------------+
| SIN(PI()) |
+----------------------+
| 1.22460635382238e-16 |
+----------------------+
SELECT ROUND(SIN(PI()));
+------------------+
| ROUND(SIN(PI())) |
+------------------+
| 0 |
+------------------+
```
mariadb MariaDB Plans - Statistics and Monitoring MariaDB Plans - Statistics and Monitoring
=========================================
**Note:** This page is obsolete. The information is old, outdated, or otherwise currently incorrect. We are keeping the page for historical reasons only. **Do not** rely on the information in this article.
Notes from the Statistics and Monitoring group:
* Strategic direction: Enterprise monitoring
+ graphing and data aggregation tools, server monitoring, etc.
+ customer has reported that Merlin is inadequate, should we enter into this market?
* QA request: better EXPLAIN:
+ required in order to debug performance issues in queries without knowing the query or the data;
+ the customer will only provide EXPLAIN and SHOW output, we need to debug based on that;
* QA request: PERSISTENT TABLE STATISTICS
+ required to ensure repeatable query execution for InnoDB;
+ may allow various statistics to be reported by the server regardless of engine;
* U/C at Oracle: OPTIMIZER tracing. spetrunia: report actual estimates, and all decisions of the optimizer, including why an index was \*not\* picked, etc.
* Developed by Serg for [MariaDB 5.3](../what-is-mariadb-53/index): Phone Home todo: make a web page on mariadb.org showing the results from the data being collected; pstoev: do we need to allow people to run their own reporting servers;
* Present in MySQL 5.5: Performance Schema
+ what do we want to do with it, embrace it, extend it?
+ or it is better to have more SHOW commands and INFORMATION\_SCHEMA tables?
+ are going to use Facebook's user stats/index stats patch or create a PERFORMANCE\_SCHEMA-based solution?
* FB request: log all SQL errors
+ serg: possible via AUDIT plugin, must back-port audit infrastructure from MySQL 5.5
* FB request: more options for controlling the slow query log
+ sample one out of every N queries or transactions ; with N ~ 99
+ filter queries based on rows examined, I/O performed, total lock wait time;
* idea: collect statistics per query text, or normalized query text and report;
* FB request: EXPLAIN the \*actual\* plan on a \*running\* statement; no progress indicators and numbers are needed;
* request by community: progress bar for queries such as LOAD DATA and SELECT;
+ something like SHOW PROGRESS PROCESSLIST ; SHOW QUERY PROGRESS;
+ what numbers are to be reported? time to elapsed, time to completion, number of rows processed?
+ how to estimate the total running time of the query;
* FB request: limit total temptable size on the server; Already available per-query, but per-server needed;
* FB patch: Admission Control
+ limit number of concurrently running queries per user;
+ if all user queries are blocked, allow a few more queries to join;
* Kurt: Integration with log watching tools
+ alter log formats to make them compatible with tools;
+ include logwatch mysql-specific config file in packages/distributions;
* FB request: Better monitoring for replication:
+ seconds\_behind\_master computation is incorrect, sometimes is zero
+ a counter for the total number of bytes read by I/O thread that does not rotate on log rotation;
+ "seconds behind real master" to report the actual time the slave I/O thread is behind;
* community request: prevent full scans from running at all above a certain table size;
+ is the existing max-join-size variable sufficient, or is more granular control needed?
* FB patch: report the time spent in individual phases of query processing
mariadb Partition Maintenance Partition Maintenance
=====================
Preface
-------
This article covers
* PARTITIONing uses and non-uses
* How to Maintain a time-series PARTITIONed table
* AUTO\_INCREMENT secrets
First, my Opinions on PARTITIONing
Taken from [Rick's RoTs - Rules of Thumb](http://mysql.rjweb.org/doc.php/ricksrots)
* #1: Don't use [PARTITIONing](../managing-mariadb-partitioning/index) until you know how and why it will help.
* Don't use PARTITION unless you will have >1M rows
* No more than 50 PARTITIONs on a table (open, show table status, etc, are impacted) (fixed in MySQL 5.6.6?; a better fix coming eventually in 5.7)
* PARTITION BY RANGE is the only useful method.
* SUBPARTITIONs are not useful.
* The partition field should not be the field first in any key.
* It is OK to have an [AUTO\_INCREMENT](../auto_increment/index) as the first part of a compound key, or in a non-UNIQUE index.
It is so tempting to believe that PARTITIONing will solve performance problems. But it is so often wrong.
PARTITIONing splits up one table into several smaller tables. But table size is rarely a performance issue. Instead, I/O time and indexes are the issues.
A common fallacy: "Partitioning will make my queries run faster". It won't. Ponder what it takes for a 'point query'. Without partitioning, but with an appropriate index, there is a BTree (the index) to drill down to find the desired row. For a billion rows, this might be 5 levels deep. With partitioning, first the partition is chosen and "opened", then a smaller BTree (of say 4 levels) is drilled down. Well, the savings of the shallower BTree is consumed by having to open the partition. Similarly, if you look at the disk blocks that need to be touched, and which of those are likely to be cached, you come to the conclusion that about the same number of disk hits is likely. Since disk hits are the main cost in a query, Partitioning does not gain any performance (at least for this typical case). The 2D case (below) gives the main contradiction to this discussion.
Use Cases for PARTITIONing
--------------------------
**Use case #1 -- time series**. Perhaps the most common use case where PARTITIONing shines is in a dataset where "old" data is periodically deleted from the table. RANGE PARTITIONing by day (or other unit of time) lets you do a nearly instantaneous DROP PARTITION plus REORGANIZE PARTITION instead of a much slower DELETE. Much of this blog is focused on this use case. This use case is also discussed in [Big DELETEs](../big-deletes/index)
The big win for Case #1: DROP PARTITION is a lot faster than DELETEing a lot of rows.
**Use case #2 -- 2-D index**. INDEXes are inherently one-dimensional. If you need two "ranges" in the WHERE clause, try to migrate one of them to PARTITIONing.
Finding the nearest 10 pizza parlors on a map needs a 2D index. Partition pruning sort of gives a second dimension. See Latitude/Longitude Indexing, which uses PARTITION BY RANGE(latitude) together with PRIMARY KEY(longitude, ...)
The big win for Case #2: Scanning fewer rows.
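As a sketch of the latitude/longitude idea (table, column names, and the scaled-degree encoding are invented for this illustration):

```
CREATE TABLE pizza_places (
  lat MEDIUMINT NOT NULL,          -- latitude, scaled: degrees * 100
  lon MEDIUMINT NOT NULL,          -- longitude, scaled: degrees * 100
  place_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (lon, lat, place_id) -- index gives the longitude dimension
)
PARTITION BY RANGE (lat) (         -- partitions give the latitude dimension
  PARTITION p0 VALUES LESS THAN (-6000),
  PARTITION p1 VALUES LESS THAN (-3000),
  PARTITION p2 VALUES LESS THAN (0),
  PARTITION p3 VALUES LESS THAN (3000),
  PARTITION p4 VALUES LESS THAN (6000),
  PARTITION p5 VALUES LESS THAN MAXVALUE
);

-- A bounding-box query prunes to the partition(s) covering the latitude
-- range, then range-scans the PRIMARY KEY on longitude within them:
SELECT place_id
  FROM pizza_places
 WHERE lat BETWEEN 4000 AND 4100
   AND lon BETWEEN -7500 AND -7400;
```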
**Use case #3 -- hot spot**. This is a bit complicated to explain. Given this combination:
* A table's index is too big to be cached, but the index for one partition is cacheable, and
* The index is randomly accessed, and
* Data ingestion would normally be I/O bound due to updating the index. Partitioning can keep all of the index "hot" in RAM, thereby avoiding a lot of I/O.
The big win for Case #3: Improving caching to decrease I/O to speed up operations.
AUTO\_INCREMENT in PARTITION
----------------------------
* For [AUTO\_INCREMENT](../auto_increment/index) to work (in any table), it must be the first field in some index. Period. There are no other requirements on indexing it.
* Being the first field in some index lets the engine find the 'next' value when opening the table.
* AUTO\_INCREMENT need not be UNIQUE. What you lose: prevention of explicitly inserting a duplicate id. (This is rarely needed, anyway.)
Examples (where id is AUTO\_INCREMENT):
* PRIMARY KEY (...), INDEX(id)
* PRIMARY KEY (...), UNIQUE(id, partition\_key) -- not useful
* INDEX(id), INDEX(...) (but no UNIQUE keys)
* PRIMARY KEY(id), ... -- works only if id is the partition key (not very useful)
PARTITION maintenance for the time-series case
----------------------------------------------
Let's focus on the maintenance task involved in Case #1, as described above.
You have a large table that is growing on one end and being pruned on the other. Examples include news, logs, and other transient information. PARTITION BY RANGE is an excellent vehicle for such a table.
* DROP PARTITION is much faster than DELETE. (This is the big reason for doing this flavor of partitioning.)
* Queries often limit themselves to 'recent' data, thereby taking advantage of "partition pruning".
Depending on the type of data, and how long before it expires, you might have daily or weekly or hourly (etc) partitions.
There is no simple SQL statement to "drop partitions older than 30 days" or "add a new partition for tomorrow". It would be tedious to do this by hand every day.
High level view of the code
---------------------------
```
ALTER TABLE tbl
DROP PARTITION from20120314;
ALTER TABLE tbl
REORGANIZE PARTITION future INTO (
PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
PARTITION future VALUES LESS THAN MAXVALUE);
```
After which you have...
```
CREATE TABLE tbl (
dt DATETIME NOT NULL, -- or DATE
...
PRIMARY KEY (..., dt),
UNIQUE KEY (..., dt),
...
)
PARTITION BY RANGE (TO_DAYS(dt)) (
PARTITION start VALUES LESS THAN (0),
PARTITION from20120315 VALUES LESS THAN (TO_DAYS('2012-03-16')),
PARTITION from20120316 VALUES LESS THAN (TO_DAYS('2012-03-17')),
...
PARTITION from20120414 VALUES LESS THAN (TO_DAYS('2012-04-15')),
PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
PARTITION future VALUES LESS THAN MAXVALUE
);
```
Why?
----
Perhaps you noticed some odd things in the example. Let me explain them.
* Partition naming: Make them useful.
* from20120415 ... 04-16: Note that the LESS THAN is the next day's date
* The "start" partition: See paragraph below.
* The "future" partition: This is normally empty, but it can catch overflows; more later.
* The range key (dt) must be included in any PRIMARY or UNIQUE key.
* The range key (dt) should be last in any keys it is in -- You have already "pruned" with it; it is almost useless in the index, especially at the beginning.
* DATETIME, etc -- I picked this datatype because it is typical for a time series. Newer MySQL versions allow TIMESTAMP. INT could be used; etc.
* There is an extra day (03-16 thru 04-16): The latest day is only partially full.
Why the bogus "start" partition? If an invalid datetime (Feb 31) were to be used, the datetime would turn into NULL. NULLs are put into the first partition. Since any SELECT could have an invalid date (yes, this is stretching things), the partition pruner always includes the first partition in the resulting set of partitions to search. So, if the SELECT must scan the first partition, it is slightly more efficient if that partition is empty. Hence the bogus "start" partition. (There is a longer discussion of this by The Data Charmer.) MySQL 5.5 eliminates the bogus check, but only if you switch to a new syntax:
```
PARTITION BY RANGE COLUMNS(dt) (
PARTITION day_20100226 VALUES LESS THAN ('2010-02-27'), ...
```
More on the "future" partition. Sooner or later the cron/EVENT to add tomorrow's partition will fail to run. The worst that could happen is for tomorrow's data to be lost. The easiest way to prevent that is to have a partition ready to catch it, even if this partition is normally always empty.
Having the "future" partition makes the ADD PARTITION script a little more complex. Instead, it needs to take tomorrow's data from "future" and put it into a new partition. This is done with the REORGANIZE command shown. Normally nothing need be moved, and the ALTER takes virtually zero time.
When to do the ALTERs?
----------------------
* DROP if the oldest partition is "too old".
* Add 'tomorrow' near the end of today, but don't try to add it twice.
* Do not count partitions -- there are two extra ones. Use the partition names or information\_schema.PARTITIONS.PARTITION\_DESCRIPTION.
* DROP/Add only once in the script. Rerun the script if you need more.
* Run the script more often than necessary. For daily partitions, run the script twice a day, or even hourly. Why? Automatic repair.
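One way to run such a script "more often than necessary" from within the server is an EVENT; `maintain_tbl_partitions` below is a hypothetical stored procedure holding your DROP/REORGANIZE logic (cron works equally well):

```
CREATE EVENT tbl_partition_maint
  ON SCHEDULE EVERY 12 HOUR
DO
  CALL maintain_tbl_partitions();  -- hypothetical: DROP old, REORGANIZE "future"
```

The event scheduler must be enabled (`SET GLOBAL event_scheduler = ON;`) for the event to fire.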
Variants
--------
As I have said many times, in many places, BY RANGE is perhaps the only useful variant. And a time series is the most common use for PARTITIONing.
* (as discussed here) DATETIME/DATE with TO\_DAYS()
* DATETIME/DATE with TO\_DAYS(), but with 7-day intervals
* TIMESTAMP with TO\_DAYS(). (version 5.1.43 or later)
* PARTITION BY RANGE COLUMNS(DATETIME) (5.5.0)
* PARTITION BY RANGE(TIMESTAMP) (version 5.5.15 / 5.6.3)
* PARTITION BY RANGE(TO\_SECONDS()) (5.6.0)
* INT UNSIGNED with constants computed as unix timestamps.
* INT UNSIGNED with constants for some non-time-based series.
* MEDIUMINT UNSIGNED containing an "hour id": FLOOR(UNIX\_TIMESTAMP(dt) / 3600)
* Months, Quarters, etc: Concoct a notation that works.
How many partitions?
* Under, say, 5 partitions -- you get very little of the benefits.
* Over, say, 50 partitions, and you hit inefficiencies elsewhere.
* Certain operations (SHOW TABLE STATUS, opening the table, etc) open every partition.
* [MyISAM](../myisam/index), before version 5.6.6, would lock all partitions before pruning!
* Partition pruning does not happen on INSERTs (until Version 5.6.7), so INSERT needs to open all the partitions.
* A possible 2-partition use case: <http://forums.mysql.com/read.php?24,633179,633179>
* 8192 partitions is a hard limit (1024 before 5.6.7).
* Before "native partitions" (5.7.6), each partition consumed a chunk of memory.
Detailed code
-------------
[Reference implementation, in Perl, with demo of daily partitions](http://mysql.rjweb.org/demo_part_maint.pl.txt)
The complexity of the code is in the discovery of the PARTITION names, especially of the oldest and the 'next'.
To run the demo,
* Install Perl and DBIx::DWIW (from CPAN).
* Copy the txt file (link above) to demo\_part\_maint.pl.
* Execute `perl demo_part_maint.pl` to get the rest of the instructions.
The program will generate and execute (when needed) either of these:
```
ALTER TABLE tbl REORGANIZE PARTITION
future
INTO (
PARTITION from20150606 VALUES LESS THAN (736121),
PARTITION future VALUES LESS THAN MAXVALUE
)
ALTER TABLE tbl
DROP PARTITION from20150603
```
Postlog
-------
Original writing -- Oct, 2012; Use cases added: Oct, 2014; Refreshed: June, 2015; 8.0: Sep, 2016
[Slides from Percona Amsterdam 2015](http://mysql.rjweb.org/slides/Partition.pdf)
PARTITIONing requires at least MySQL 5.1
The tips in this document apply to MySQL, MariaDB, and Percona Server.
* [More on PARTITIONing](http://www.mysqlperformanceblog.com/2010/12/11/mysql-partitioning-can-save-you-or-kill-you/)
* [LinkedIn discussion](http://www.linkedin.com/groups/MySql-Horizontal-partitioning-ProsCons-78638.S.5861525157444595715?qid=0d54d3f9-21d7-43e8-9b75-dbc0270c7236&trk=groups_guest_most_popular-0-b-ttl&goback=%2Egmp_78638)
* [Why NOT Partition](http://dba.stackexchange.com/questions/107408/why-not-partition)
* [Geoff Montee's Stored Proc](http://www.geoffmontee.com/automatically-dropping-old-partitions-in-mysql-and-mariadb-part-2/)
Future (as envisioned in 2016):
* MySQL 5.7.6 has "native partitioning for InnoDB".
* FOREIGN KEY support, perhaps in a later 8.0.xx.
* "GLOBAL INDEX" -- this would avoid the need for putting the partition key in every unique index, but make DROP PARTITION costly. This will be farther into the future.
MySQL 8.0 (released Sep, 2016, not yet GA):
* Only InnoDB tables can be partitioned -- MariaDB is likely to continue maintaining Partitioning on non-InnoDB tables, but Oracle is clearly not.
* Some of the problems with having lots of partitions are lessened by the Data-Dictionary-in-a-table.
Native partitioning will give:
* Slightly improved performance, by combining two "handlers" into one.
* Decreased memory usage, especially when using a large number of partitions.
See also
--------
Rick James graciously allowed us to use this article in the Knowledge Base.
[Rick James' site](http://mysql.rjweb.org/) has other useful tips, how-tos, optimizations, and debugging tips.
Original source: <http://mysql.rjweb.org/doc.php/partitionmaint>
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb FirstMatch Strategy FirstMatch Strategy
===================
`FirstMatch` is an execution strategy for [Semi-join subqueries](../semi-join-subquery-optimizations/index).
The idea
--------
It is very similar to how `IN/EXISTS` subqueries were executed in MySQL 5.x.
Let's take the usual example of a search for countries with big cities:
```
select * from Country
where Country.code IN (select City.Country
from City
where City.Population > 1*1000*1000)
and Country.continent='Europe'
```
Suppose our execution plan is to find countries in Europe, and then, for each country found, check whether it has any big cities. Regular inner join execution will look as follows:
Since Germany has two big cities (in this diagram), it will be put into the query output twice. This is not correct: `SELECT ... FROM Country` should not produce the same country record twice. The `FirstMatch` strategy avoids producing duplicates by short-cutting execution as soon as the first genuine match is found:
Note that the short-cutting has to take place after "Using where" has been applied. It would have been wrong to short-cut after we found Trier.
FirstMatch in action
--------------------
The `EXPLAIN` for the above query will look as follows:
```
MariaDB [world]> explain select * from Country where Country.code IN
(select City.Country from City where City.Population > 1*1000*1000)
and Country.continent='Europe';
+----+-------------+---------+------+--------------------+-----------+---------+--------------------+------+----------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+------+--------------------+-----------+---------+--------------------+------+----------------------------------+
| 1 | PRIMARY | Country | ref | PRIMARY,continent | continent | 17 | const | 60 | Using index condition |
| 1 | PRIMARY | City | ref | Population,Country | Country | 3 | world.Country.Code | 18 | Using where; FirstMatch(Country) |
+----+-------------+---------+------+--------------------+-----------+---------+--------------------+------+----------------------------------+
2 rows in set (0.00 sec)
```
`FirstMatch(Country)` in the Extra column means that *as soon as we have produced one matching record combination, short-cut the execution and jump back to the Country* table.
`FirstMatch`'s query plan is very similar to the one you would get in MySQL:
```
MySQL [world]> explain select * from Country where Country.code IN
(select City.Country from City where City.Population > 1*1000*1000)
and Country.continent='Europe';
+----+--------------------+---------+----------------+--------------------+-----------+---------+-------+------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+---------+----------------+--------------------+-----------+---------+-------+------+------------------------------------+
| 1 | PRIMARY | Country | ref | continent | continent | 17 | const | 60 | Using index condition; Using where |
| 2 | DEPENDENT SUBQUERY | City | index_subquery | Population,Country | Country | 3 | func | 18 | Using where |
+----+--------------------+---------+----------------+--------------------+-----------+---------+-------+------+------------------------------------+
2 rows in set (0.01 sec)
```
and these two particular query plans will execute in the same time.
Difference between FirstMatch and IN->EXISTS
--------------------------------------------
The general idea behind the `FirstMatch` strategy is the same as the one behind the `IN->EXISTS` transformation, however, `FirstMatch` has several advantages:
* Equality propagation works across semi-join bounds, but not subquery bounds. Therefore, converting a subquery to semi-join and using `FirstMatch` can still give a better execution plan. (TODO example)
* There is only one way to apply the `IN->EXISTS` strategy and MySQL will do it unconditionally. With `FirstMatch`, the optimizer can make a choice between whether it should run the `FirstMatch` strategy as soon as all tables used in the subquery are in the join prefix, or at some later point in time. (TODO: example)
FirstMatch factsheet
--------------------
* The `FirstMatch` strategy works by executing the subquery and short-cutting its execution as soon as the first match is found.
* This means that subquery tables must come after all of the parent select's tables that are referred to from the subquery predicate.
* `EXPLAIN` shows `FirstMatch` as "`FirstMatch(tableN)`".
* The strategy can handle correlated subqueries.
* But it cannot be applied if the subquery has meaningful `GROUP BY` and/or aggregate functions.
* Use of the `FirstMatch` strategy is controlled with the `firstmatch=on|off` flag in the [optimizer\_switch](../server-system-variables/index#optimizer_switch) variable.
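For example, to compare plans with and without the strategy, it can be disabled for the current session:

```
SET optimizer_switch='firstmatch=off';
-- ... re-run EXPLAIN on the query of interest ...
SET optimizer_switch='firstmatch=on';
```

With the flag off, the optimizer must pick one of the other semi-join strategies (or materialization) for the same subquery.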
See Also
--------
* [Semi-join subquery optimizations](../semi-join-subquery-optimizations/index)
In-depth material:
* [WL#3750: initial specification for FirstMatch](http://forge.mysql.com/worklog/task.php?id=3750)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb Information Schema ROCKSDB_TRX Table Information Schema ROCKSDB\_TRX Table
=====================================
The [Information Schema](../information_schema/index) `ROCKSDB_TRX` table is included as part of the [MyRocks](../myrocks/index) storage engine.
The `PROCESS` [privilege](../grant/index) is required to view the table.
It contains the following columns:
| Column | Description |
| --- | --- |
| `TRANSACTION_ID` | |
| `STATE` | |
| `NAME` | |
| `WRITE_COUNT` | |
| `LOCK_COUNT` | |
| `TIMEOUT_SEC` | |
| `WAITING_KEY` | |
| `WAITING_COLUMN_FAMILY_ID` | |
| `IS_REPLICATION` | |
| `SKIP_TRX_API` | |
| `READ_ONLY` | |
| `HAS_DEADLOCK_DETECTION` | |
| `NUM_ONGOING_BULKLOAD` | |
| `THREAD_ID` | |
| `QUERY` | |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb SPIDER_FLUSH_TABLE_MON_CACHE SPIDER\_FLUSH\_TABLE\_MON\_CACHE
================================
Syntax
------
```
SPIDER_FLUSH_TABLE_MON_CACHE()
```
Description
-----------
A [UDF](../user-defined-functions/index) installed with the [Spider Storage Engine](../spider/index), this function is used for refreshing monitoring server information. It returns a value of `1`.
Examples
--------
```
SELECT SPIDER_FLUSH_TABLE_MON_CACHE();
+--------------------------------+
| SPIDER_FLUSH_TABLE_MON_CACHE() |
+--------------------------------+
| 1 |
+--------------------------------+
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Y Y
=
A synonym for [ST\_Y](../st_y/index).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb JSON_MERGE_PATCH JSON\_MERGE\_PATCH
==================
**MariaDB starting with [10.2.25](https://mariadb.com/kb/en/mariadb-10225-release-notes/)**`JSON_MERGE_PATCH` was introduced in [MariaDB 10.2.25](https://mariadb.com/kb/en/mariadb-10225-release-notes/), [MariaDB 10.3.16](https://mariadb.com/kb/en/mariadb-10316-release-notes/) and [MariaDB 10.4.5](https://mariadb.com/kb/en/mariadb-1045-release-notes/).
Syntax
------
```
JSON_MERGE_PATCH(json_doc, json_doc[, json_doc] ...)
```
Description
-----------
Merges the given JSON documents, returning the merged result, or NULL if any argument is NULL.
`JSON_MERGE_PATCH` is an RFC 7396-compliant replacement for [JSON\_MERGE](../json_merge/index), which has been deprecated.
Example
-------
```
SET @json1 = '[1, 2]';
SET @json2 = '[2, 3]';
SELECT JSON_MERGE_PATCH(@json1,@json2),JSON_MERGE_PRESERVE(@json1,@json2);
+---------------------------------+------------------------------------+
| JSON_MERGE_PATCH(@json1,@json2) | JSON_MERGE_PRESERVE(@json1,@json2) |
+---------------------------------+------------------------------------+
| [2, 3] | [1, 2, 2, 3] |
+---------------------------------+------------------------------------+
```
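Per RFC 7396, when both arguments are JSON objects, members of the second document overwrite matching members of the first, and a `null` value removes the member. A sketch of that behavior:

```
SELECT JSON_MERGE_PATCH('{"a":1, "b":2}', '{"b":null, "c":3}');
```

The result drops `b` and keeps `{"a": 1, "c": 3}` (exact whitespace in the output may vary).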
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb IS IS
==
Syntax
------
```
IS boolean_value
```
Description
-----------
Tests a value against a boolean value, where `boolean_value` can be TRUE, FALSE, or UNKNOWN.
There is an important difference between using IS TRUE and comparing a value with TRUE using `=`. When using `=`, only `1` equals TRUE. But when using IS TRUE, all values which are logically true (such as the number 2) return TRUE.
Examples
--------
```
SELECT 1 IS TRUE, 0 IS FALSE, NULL IS UNKNOWN;
+-----------+------------+-----------------+
| 1 IS TRUE | 0 IS FALSE | NULL IS UNKNOWN |
+-----------+------------+-----------------+
| 1 | 1 | 1 |
+-----------+------------+-----------------+
```
Difference between `=` and `IS TRUE`:
```
SELECT 2 = TRUE, 2 IS TRUE;
+----------+-----------+
| 2 = TRUE | 2 IS TRUE |
+----------+-----------+
| 0 | 1 |
+----------+-----------+
```
See Also
--------
* [Boolean Literals](../sql-language-structure-boolean-literals/index)
* [BOOLEAN Data Type](../boolean/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb 10.2.14 Release Upgrade Tests 10.2.14 Release Upgrade Tests
=============================
### Tested revision
b3cdafcb93da2e57d49f26f0846dc957458ee72c
### Test date
2018-04-17 05:25:43
### Summary
Known bugs [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103), [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094). A few upgrades from MySQL and old MariaDB fail because the old versions hang on shutdown.
### Details
| type | pagesize | OLD version | file format | encrypted | compressed | | NEW version | file format | encrypted | compressed | readonly | result | notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| recovery | 16 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 16 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 4 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 4 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 32 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 32 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 64 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 64 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 8 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 8 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 16 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 16 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 4 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 4 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 32 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 32 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 64 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 64 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 8 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 8 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 16 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 4 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 32 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 64 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 8 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 16 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 4 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 32 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 64 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 8 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 16 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | TEST\_FAILURE |
| undo-recovery | 4 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | TEST\_FAILURE |
| undo-recovery | 32 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | TEST\_FAILURE |
| undo-recovery | 64 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | TEST\_FAILURE |
| undo-recovery | 8 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | TEST\_FAILURE |
| undo-recovery | 16 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 4 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 32 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 64 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 8 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| crash | 16 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 16 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 4 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 4 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 8 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 8 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 16 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 16 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 4 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 4 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 8 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 8 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.2.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.2.13 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.2.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.2.13 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 16 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 4 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 4 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 8 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 8 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 16 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 16 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 4 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 4 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 8 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 8 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.2.14 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.2.14 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.2.14 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.2.14 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 10.0.14 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 10.0.14 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.14 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.14 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 4 | 10.0.14 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 10.0.14 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.18 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | **FAIL** | TEST\_FAILURE |
| undo | 4 | 10.0.18 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | **FAIL** | TEST\_FAILURE |
| undo | 8 | 10.0.18 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.18 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | **FAIL** | TEST\_FAILURE |
| undo | 4 | 10.0.18 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 8 | 10.0.18 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| crash | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| crash | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| crash | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| crash | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| crash | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| crash | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| crash | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| crash | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| crash | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| crash | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | **FAIL** | TEST\_FAILURE |
| undo | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | **FAIL** | TEST\_FAILURE |
| undo | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | OK | |
| undo | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.2.14 (inbuilt) | | - | - | - | **FAIL** | TEST\_FAILURE |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Status Variables Added in MariaDB 10.0
======================================
This is a list of [status variables](../server-status-variables/index) that were added in the [MariaDB 10.0](../what-is-mariadb-100/index) series.
The list excludes status related to the following storage engines included in [MariaDB 10.0](../what-is-mariadb-100/index):
* [Galera Status Variables](../galera-cluster-status-variables/index)
* [Mroonga Status Variables](../mroonga-status-variables/index)
* [Spider Status Variables](../spider-server-status-variables/index)
| Variable | Added |
| --- | --- |
| [Binlog\_group\_commit\_trigger\_count](../replication-and-binary-log-status-variables/index#binlog_group_commit_trigger_count) | [MariaDB 10.0.18](https://mariadb.com/kb/en/mariadb-10018-release-notes/) |
| [Binlog\_group\_commit\_trigger\_timeout](../replication-and-binary-log-status-variables/index#binlog_group_commit_trigger_timeout) | [MariaDB 10.0.18](https://mariadb.com/kb/en/mariadb-10018-release-notes/) |
| [Binlog\_group\_commit\_trigger\_lock\_wait](../replication-and-binary-log-status-variables/index#binlog_group_commit_trigger_lock_wait) | [MariaDB 10.0.18](https://mariadb.com/kb/en/mariadb-10018-release-notes/) |
| [Com\_create\_role](../server-status-variables/index#com_create_role) | [MariaDB 10.0.5](https://mariadb.com/kb/en/mariadb-1005-release-notes/) |
| [Com\_drop\_role](../server-status-variables/index#com_drop_role) | [MariaDB 10.0.5](https://mariadb.com/kb/en/mariadb-1005-release-notes/) |
| [Com\_get\_diagnostics](../server-status-variables/index#com_get_diagnostics) | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) |
| [Com\_grant\_role](../server-status-variables/index#com_grant_role) | [MariaDB 10.0.5](https://mariadb.com/kb/en/mariadb-1005-release-notes/) |
| [Com\_revoke\_grant](../server-status-variables/index#com_revoke_grant) | [MariaDB 10.0.5](https://mariadb.com/kb/en/mariadb-1005-release-notes/) |
| [Com\_show\_explain](../server-status-variables/index#com_show_explain) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Com\_start\_all\_slaves](../replication-and-binary-log-status-variables/index#com_start_all_slaves) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Com\_start\_slave](../replication-and-binary-log-status-variables/index#com_start_slave) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Com\_stop\_all\_slaves](../replication-and-binary-log-status-variables/index#com_stop_all_slaves) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Com\_stop\_slave](../replication-and-binary-log-status-variables/index#com_stop_slave) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Connection\_errors\_accept](../server-status-variables/index#connection_errors_accept) | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) |
| [Connection\_errors\_internal](../server-status-variables/index#connection_errors_internal) | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) |
| [Connection\_errors\_max\_connections](../server-status-variables/index#connection_errors_max_connections) | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) |
| [Connection\_errors\_peer\_address](../server-status-variables/index#connection_errors_peer_address) | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) |
| [Connection\_errors\_select](../server-status-variables/index#connection_errors_select) | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) |
| [Connection\_errors\_tcpwrap](../server-status-variables/index#connection_errors_tcpwrap) | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) |
| [Delete\_scan](../server-status-variables/index#delete_scan) | [MariaDB 10.0.27](https://mariadb.com/kb/en/mariadb-10027-release-notes/) |
| [Feature\_delay\_key\_write](../server-status-variables/index#feature_delay_key_write) | [MariaDB 10.0.13](https://mariadb.com/kb/en/mariadb-10013-release-notes/) |
| [Handler\_external\_lock](../server-status-variables/index#handler_external_lock) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Handler\_read\_retry](../server-status-variables/index#handler_read_retry) | [MariaDB 10.0.27](https://mariadb.com/kb/en/mariadb-10027-release-notes/) |
| [Innodb\_buffer\_pool\_dump\_status](../xtradbinnodb-server-status-variables/index#innodb_buffer_pool_dump_status) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Innodb\_buffer\_pool\_load\_status](../xtradbinnodb-server-status-variables/index#innodb_buffer_pool_load_status) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Innodb\_master\_thread\_active\_loops](../xtradbinnodb-server-status-variables/index#innodb_master_thread_active_loops) | [MariaDB 10.0.9](https://mariadb.com/kb/en/mariadb-1009-release-notes/) |
| [Innodb\_master\_thread\_idle\_loops](../xtradbinnodb-server-status-variables/index#innodb_master_thread_idle_loops) | [MariaDB 10.0.9](https://mariadb.com/kb/en/mariadb-1009-release-notes/) |
| [Innodb\_num\_open\_files](../xtradbinnodb-server-status-variables/index#innodb_num_open_files) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Innodb\_system\_rows\_deleted](../xtradbinnodb-server-status-variables/index#innodb_system_rows_deleted) | [MariaDB 10.0.15](https://mariadb.com/kb/en/mariadb-10015-release-notes/) |
| [Innodb\_system\_rows\_inserted](../xtradbinnodb-server-status-variables/index#innodb_system_rows_inserted) | [MariaDB 10.0.15](https://mariadb.com/kb/en/mariadb-10015-release-notes/) |
| [Innodb\_system\_rows\_read](../xtradbinnodb-server-status-variables/index#innodb_system_rows_read) | [MariaDB 10.0.15](https://mariadb.com/kb/en/mariadb-10015-release-notes/) |
| [Innodb\_system\_rows\_updated](../xtradbinnodb-server-status-variables/index#innodb_system_rows_updated) | [MariaDB 10.0.15](https://mariadb.com/kb/en/mariadb-10015-release-notes/) |
| [Memory\_used](../server-status-variables/index#memory_used) | [MariaDB 10.0.1](https://mariadb.com/kb/en/mariadb-1001-release-notes/) |
| [Oqgraph\_boost\_version](../server-status-variables/index#oqgraph_boost_version) | [MariaDB 10.0.7](https://mariadb.com/kb/en/mariadb-1007-release-notes/) |
| [Performance\_schema\_accounts\_lost](../server-status-variables/index#performance_schema_accounts_lost) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Performance\_schema\_hosts\_lost](../server-status-variables/index#performance_schema_hosts_lost) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Performance\_schema\_socket\_classes\_lost](../server-status-variables/index#performance_schema_socket_classes_lost) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Performance\_schema\_socket\_instances\_lost](../server-status-variables/index#performance_schema_socket_instances_lost) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Performance\_schema\_stage\_classes\_lost](../server-status-variables/index#performance_schema_stage_classes_lost) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Performance\_schema\_statement\_classes\_lost](../server-status-variables/index#performance_schema_statement_classes_lost) | [MariaDB 10.0.0](https://mariadb.com/kb/en/mariadb-1000-release-notes/) |
| [Slave\_skipped\_errors](../replication-and-binary-log-status-variables/index#slave_skipped_errors) | [MariaDB 10.0.18](https://mariadb.com/kb/en/mariadb-10018-release-notes/) |
| [Sort\_priority\_queue\_sorts](../server-status-variables/index#sort_priority_queue_sorts) | [MariaDB 10.0.13](https://mariadb.com/kb/en/mariadb-10013-release-notes/) |
| [Update\_scan](../server-status-variables/index#update_scan) | [MariaDB 10.0.27](https://mariadb.com/kb/en/mariadb-10027-release-notes/) |
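Any of the variables listed above can be inspected at runtime with `SHOW STATUS`. For example (`Memory_used` shown here; substitute any variable from the table):

```sql
SHOW GLOBAL STATUS LIKE 'Memory_used';
```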
See Also
--------
* [System variables added in MariaDB 10.0](../system-variables-added-in-mariadb-100/index)
* [Status variables added in MariaDB 10.1](../status-variables-added-in-mariadb-101/index)
mysql.time\_zone\_leap\_second Table
====================================
The `mysql.time_zone_leap_second` table is one of the mysql system tables that can contain [time zone](../time-zones/index) information. It is usually preferable for the system to handle the time zone, in which case the table will be empty (the default), but you can populate the mysql time zone tables using the [mysql\_tzinfo\_to\_sql](../mysql_tzinfo_to_sql/index) utility. See [Time Zones](../time-zones/index) for details.
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
The `mysql.time_zone_leap_second` table contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `Transition_time` | `bigint(20)` | NO | PRI | `NULL` | |
| `Correction` | `int(11)` | NO | | `NULL` | |
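A quick way to check whether leap-second data has been loaded (an empty result, the default, means the server relies on the operating system for time zone handling; on Unix-like systems the tables are typically populated with `mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql`, though the zoneinfo path varies by system):

```sql
-- Lists each leap-second transition and its cumulative correction, if loaded
SELECT Transition_time, Correction
FROM mysql.time_zone_leap_second
ORDER BY Transition_time;
```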
See Also
--------
* [mysql.time\_zone table](../mysqltime_zone-table/index)
* [mysql.time\_zone\_name table](../mysqltime_zone_name-table/index)
* [mysql.time\_zone\_transition table](../mysqltime_zone_transition-table/index)
* [mysql.time\_zone\_transition\_type table](../mysqltime_zone_transition_type-table/index)
String Literals
===============
Strings are sequences of characters and are enclosed with quotes.
The syntax is:
```
[_charset_name]'string' [COLLATE collation_name]
```
For example:
```
'The MariaDB Foundation'
_utf8 'Foundation' COLLATE utf8_unicode_ci;
```
Strings can either be enclosed in single quotes or in double quotes (the same character must be used to both open and close the string).
The ANSI SQL standard does not permit double quotes for enclosing strings. MariaDB does permit them by default, but if the server has enabled the [ANSI\_QUOTES](../sql-mode/index#ansi_quotes) [SQL\_MODE](../sql-mode/index), double quotes are treated as quoting [identifiers](../identifier-names/index) rather than strings.
Strings that are next to each other are automatically concatenated. For example:
```
'The ' 'MariaDB ' 'Foundation'
```
and
```
'The MariaDB Foundation'
```
are equivalent.
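The equivalence can be verified directly; comparing the concatenated literals with the single literal returns 1 (true):

```sql
SELECT 'The ' 'MariaDB ' 'Foundation' = 'The MariaDB Foundation';
```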
The `\` (backslash character) is used to escape characters (unless the [SQL\_MODE](../sql-mode/index) has been set to [NO\_BACKSLASH\_ESCAPES](../sql-mode/index#no_backslash_escapes)). For example:
```
'MariaDB's new features'
```
is not a valid string: the single quote in the middle, intended as an apostrophe, is instead treated as closing the string. The backslash character helps in situations like this:
```
'MariaDB\'s new features'
```
is now a valid string, and if displayed, will appear without the backslash.
```
SELECT 'MariaDB\'s new features';
+------------------------+
| MariaDB's new features |
+------------------------+
| MariaDB's new features |
+------------------------+
```
Another way to escape the quoting character is repeating it twice:
```
SELECT 'I''m here', """Double""";
+----------+----------+
| I'm here | "Double" |
+----------+----------+
| I'm here | "Double" |
+----------+----------+
```
Escape Sequences
----------------
There are other escape sequences also. Here is a full list:
| Escape sequence | Character |
| --- | --- |
| `\0` | ASCII NUL (0x00). |
| `\'` | Single quote (“'”). |
| `\"` | Double quote (“"”). |
| `\b` | Backspace. |
| `\n` | Newline (linefeed). |
| `\r` | Carriage return. |
| `\t` | Tab. |
| `\Z` | ASCII 26 (Control+Z). See note following the table. |
| `\\` | Backslash (“\”). |
| `\%` | “%” character. See note following the table. |
| `\_` | “\_” character. See note following the table. |
Escaping the `%` and `_` characters can be necessary when using the [LIKE](../like/index) operator, which treats them as special characters.
The ASCII 26 character (`\Z`) needs to be escaped when included in a batch file which needs to be executed in Windows. The reason is that ASCII 26, in Windows, is the end of file (EOF).
A literal backslash (`\`), when not used as an escape character, must itself be escaped as `\\`. When a backslash is followed by a character not in the above table, the backslash is simply ignored.
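As an illustration only (real applications should rely on parameterized queries or the client library's escaping functions), here is a small Python sketch that applies the escape sequences from the table above; the helper name is hypothetical:

```python
# Backslash escape sequences from the table above, applied to a Python string.
_ESCAPES = {
    '\0': '\\0', "'": "\\'", '"': '\\"', '\b': '\\b',
    '\n': '\\n', '\r': '\\r', '\t': '\\t', '\x1a': '\\Z', '\\': '\\\\',
}

def escape_sql_string(s):
    """Return s with MariaDB special characters backslash-escaped."""
    return ''.join(_ESCAPES.get(ch, ch) for ch in s)

print(escape_sql_string("MariaDB's new features"))  # MariaDB\'s new features
```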
PolyFromWKB
===========
A synonym for [ST\_PolyFromWKB](../st_polyfromwkb/index).
MariaDB Performance & Advanced Configurations
==============================================
Articles on how to set up MariaDB optimally on different systems.
| Title | Description |
| --- | --- |
| [Fusion-io](../fusion-io/index) | This category contains information about Fusion-io support in MariaDB |
| [Atomic Write Support](../atomic-write-support/index) | Enabling atomic writes to speed up InnoDB on selected SSD cards. |
| [Configuring Linux for MariaDB](../configuring-linux-for-mariadb/index) | Linux kernel settings IO scheduler For optimal IO performance running a da... |
| [Configuring MariaDB for Optimal Performance](../configuring-mariadb-for-optimal-performance/index) | How to get optimal performance. |
| [Configuring Swappiness](../configuring-swappiness/index) | Setting Linux swappiness. |
BIT_AND
========
Syntax
------
```
BIT_AND(expr) [over_clause]
```
Description
-----------
Returns the bitwise AND of all bits in *expr*. The calculation is performed with 64-bit ([BIGINT](../bigint/index)) precision. It is an [aggregate function](../aggregate-functions/index), and so can be used with the [GROUP BY](../group-by/index) clause.
If no rows match, `BIT_AND` will return a value with all bits set to 1. NULL values have no effect on the result unless all results are NULL, which is treated as no match.
From [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/), `BIT_AND` can be used as a [window function](../window-functions/index) with the addition of the *over\_clause*.
Examples
--------
```
CREATE TABLE vals (x INT);
INSERT INTO vals VALUES(111),(110),(100);
SELECT BIT_AND(x), BIT_OR(x), BIT_XOR(x) FROM vals;
+------------+-----------+------------+
| BIT_AND(x) | BIT_OR(x) | BIT_XOR(x) |
+------------+-----------+------------+
| 100 | 111 | 101 |
+------------+-----------+------------+
```
As an [aggregate function](../aggregate-functions/index):
```
CREATE TABLE vals2 (category VARCHAR(1), x INT);
INSERT INTO vals2 VALUES
('a',111),('a',110),('a',100),
('b',000),('b',001),('b',011);
SELECT category, BIT_AND(x), BIT_OR(x), BIT_XOR(x)
FROM vals2 GROUP BY category;
+----------+------------+-----------+------------+
| category | BIT_AND(x) | BIT_OR(x) | BIT_XOR(x) |
+----------+------------+-----------+------------+
| a | 100 | 111 | 101 |
| b | 0 | 11 | 10 |
+----------+------------+-----------+------------+
```
No match:
```
SELECT BIT_AND(NULL);
+----------------------+
| BIT_AND(NULL) |
+----------------------+
| 18446744073709551615 |
+----------------------+
```
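The 64-bit semantics and the all-bits-set result for an empty group can be modeled in a few lines of Python (the helper name is hypothetical, for illustration only):

```python
from functools import reduce

MASK = (1 << 64) - 1  # the calculation uses 64-bit (BIGINT) precision

def bit_and(values):
    # With no matching rows, the identity element leaves all 64 bits set,
    # giving 18446744073709551615, as in the example above.
    return reduce(lambda acc, v: acc & v, values, MASK)

print(bit_and([111, 110, 100]))  # 100
print(bit_and([]))               # 18446744073709551615
```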
See Also
--------
* [BIT\_OR](../bit_or/index)
* [BIT\_XOR](../bit_xor/index)
CURTIME
=======
Syntax
------
```
CURTIME([precision])
```
Description
-----------
Returns the current time as a value in 'HH:MM:SS' or HHMMSS.uuuuuu format, depending on whether the function is used in a string or numeric context. The value is expressed in the current [time zone](../time-zones/index).
The optional *precision* determines the microsecond precision. See [Microseconds in MariaDB](../microseconds-in-mariadb/index).
Examples
--------
```
SELECT CURTIME();
+-----------+
| CURTIME() |
+-----------+
| 12:45:39 |
+-----------+
SELECT CURTIME() + 0;
+---------------+
| CURTIME() + 0 |
+---------------+
| 124545.000000 |
+---------------+
```
With precision:
```
SELECT CURTIME(2);
+-------------+
| CURTIME(2) |
+-------------+
| 09:49:08.09 |
+-------------+
```
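As a rough model of how the *precision* argument limits the fractional digits, here is a Python sketch with a fixed timestamp; the helper name and the chosen time are illustrative, not part of MariaDB:

```python
from datetime import datetime

def curtime(now, precision=0):
    # Model of CURTIME([precision]): keep `precision` fractional digits
    base = now.strftime('%H:%M:%S')
    if precision == 0:
        return base
    return base + '.' + f'{now.microsecond:06d}'[:precision]

t = datetime(2024, 1, 2, 9, 49, 8, 91234)  # fixed time for illustration
print(curtime(t, 2))  # 09:49:08.09
```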
See Also
--------
* [Microseconds in MariaDB](../microseconds-in-mariadb/index)
ENUM
====
Syntax
------
```
ENUM('value1','value2',...) [CHARACTER SET charset_name] [COLLATE collation_name]
```
Description
-----------
An enumeration. A string object that can have only one value, chosen from the list of values 'value1', 'value2', ..., NULL or the special '' error value. In theory, an `ENUM` column can have a maximum of 65,535 distinct values; in practice, the real maximum depends on many factors. `ENUM` values are represented internally as integers.
Trailing spaces are automatically stripped from ENUM values on table creation.
ENUMs require relatively little storage space compared to strings, either one or two bytes depending on the number of enumeration values.
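That one-or-two-byte rule can be stated precisely. A small sketch (the helper name is hypothetical), assuming 1 byte for up to 255 members and 2 bytes beyond that, up to the 65,535 theoretical maximum:

```python
def enum_storage_bytes(num_values):
    # 1 byte indexes up to 255 enumeration values; 2 bytes cover the
    # theoretical maximum of 65,535.
    if not 1 <= num_values <= 65535:
        raise ValueError('ENUM supports 1 to 65,535 values')
    return 1 if num_values <= 255 else 2

print(enum_storage_bytes(3))    # 1
print(enum_storage_bytes(300))  # 2
```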
### NULL and empty values
An ENUM can also contain NULL and empty values. If the ENUM column is declared to permit NULL values, NULL becomes a valid value, as well as the default value (see below). If [strict SQL Mode](../sql_mode/index) is not enabled, and an invalid value is inserted into an ENUM, a special empty string, with an index value of zero (see Numeric index, below), is inserted, with a warning. This may be confusing, because an empty string can also be a valid enumeration value, in which case its index is not 0. Inserting an invalid value will fail with an error if strict mode is active.
If a `DEFAULT` clause is missing, the default value will be:
* `NULL` if the column is nullable;
* otherwise, the first value in the enumeration.
### Numeric index
ENUM values are indexed numerically in the order they are defined, and sorting will be performed in this numeric order. We suggest not using ENUM to store numerals, as there is little to no storage space benefit, and it is easy to confuse the enum integer with the enum numeral value by leaving out the quotes.
An ENUM defined as ENUM('apple','orange','pear') would have the following index values:
| Index | Value |
| --- | --- |
| NULL | NULL |
| 0 | '' |
| 1 | 'apple' |
| 2 | 'orange' |
| 3 | 'pear' |
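A hypothetical Python model of this index mapping, with index 0 reserved for the special `''` error value:

```python
# Hypothetical model of how MariaDB maps ENUM members to stored indexes.
def enum_index(members):
    # Index 0 is the special '' error value; listed members start at 1.
    return {'': 0, **{v: i for i, v in enumerate(members, start=1)}}

fruits = enum_index(['apple', 'orange', 'pear'])
print(fruits['orange'])  # 2

# Sorting follows the numeric index, as the ENUM('2','1') example below shows:
digits = enum_index(['2', '1'])
print(sorted(['1', '2'], key=digits.get))  # ['2', '1']
```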
Examples
--------
```
CREATE TABLE fruits (
id INT NOT NULL auto_increment PRIMARY KEY,
fruit ENUM('apple','orange','pear'),
bushels INT);
DESCRIBE fruits;
+---------+-------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+-------------------------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| fruit | enum('apple','orange','pear') | YES | | NULL | |
| bushels | int(11) | YES | | NULL | |
+---------+-------------------------------+------+-----+---------+----------------+
INSERT INTO fruits
(fruit,bushels) VALUES
('pear',20),
('apple',100),
('orange',25);
INSERT INTO fruits
(fruit,bushels) VALUES
('avocado',10);
ERROR 1265 (01000): Data truncated for column 'fruit' at row 1
SELECT * FROM fruits;
+----+--------+---------+
| id | fruit | bushels |
+----+--------+---------+
| 1 | pear | 20 |
| 2 | apple | 100 |
| 3 | orange | 25 |
+----+--------+---------+
```
Selecting by numeric index:
```
SELECT * FROM fruits WHERE fruit=2;
+----+--------+---------+
| id | fruit | bushels |
+----+--------+---------+
| 3 | orange | 25 |
+----+--------+---------+
```
Sorting is according to the index value:
```
CREATE TABLE enums (a ENUM('2','1'));
INSERT INTO enums VALUES ('1'),('2');
SELECT * FROM enums ORDER BY a ASC;
+------+
| a |
+------+
| 2 |
| 1 |
+------+
```
It's easy to confuse the enum integer index with the stored string value, so we don't suggest using ENUM to store numerals. The first example below returns the value with index 1 ('2', as it is defined first), while the second returns the string value '1'.
```
SELECT * FROM enums WHERE a=1;
+------+
| a |
+------+
| 2 |
+------+
SELECT * FROM enums WHERE a='1';
+------+
| a |
+------+
| 1 |
+------+
```
See Also
--------
* [Data Type Storage Requirements](../data-type-storage-requirements/index)
ColumnStore System Variables
============================
Variables
---------
| Name | Cmd-Line | Scope | Data type | Default Value | Range |
| --- | --- | --- | --- | --- | --- |
| [infinidb\_compression\_type](#compression-mode) | Yes | Both | enumeration | 2 | 0,2 |
| [infinidb\_decimal\_scale](#columnstore-decimal-scale) | Yes | Both | numeric | 8 | |
| [infinidb\_diskjoin\_bucketsize](#disk-based-joins) | Yes | Both | numeric | 100 | |
| [infinidb\_diskjoin\_largesidelimit](#disk-based-joins) | Yes | Both | numeric | 0 | |
| [infinidb\_diskjoin\_smallsidelimit](#disk-based-joins) | Yes | Both | numeric | 0 | |
| [infinidb\_double\_for\_decimal\_math](#columnstore-decimal-to-double-math) | Yes | Both | enumeration | OFF | OFF, ON |
| [infinidb\_import\_for\_batchinsert\_delimiter](#batch-insert-mode-for-inserts) | Yes | Both | numeric | 7 | |
| [infinidb\_import\_for\_batchinsert\_enclosed\_by](#batch-insert-mode-for-inserts) | Yes | Both | numeric | 17 | |
| [infinidb\_local\_query](#local-pm-query-mode) | Yes | Both | enumeration | 0 | 0,1 |
| infinidb\_ordered\_only | Yes | Both | enumeration | OFF | OFF, ON |
| infinidb\_string\_scan\_threshold | Yes | Both | numeric | 10 | |
| infinidb\_stringtable\_threshold | Yes | Both | numeric | 20 | |
| [infinidb\_um\_mem\_limit](#disk-based-joins) | Yes | Both | numeric | 0 | |
| [infinidb\_use\_decimal\_scale](#columnstore-decimal-scale) | Yes | Both | enumeration | OFF | OFF, ON |
| [infinidb\_use\_import\_for\_batchinsert](#batch-insert-mode-for-inserts) | Yes | Both | enumeration | ON | OFF, ON |
| infinidb\_varbin\_always\_hex | Yes | Both | enumeration | ON | OFF, ON |
| [infinidb\_vtable\_mode](#operating-mode) | Yes | Both | enumeration | 1 | 0,1,2 |
Compression mode
----------------
MariaDB ColumnStore has the ability to compress data and this is controlled through a compression mode. This compression mode may be set as a default for the instance or set at the session level.
To set the compression mode at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_compression_type = n
```
where n is:
* 0) compression is turned off. Any subsequent table create statements run will have compression turned off for that table unless any statement overrides have been performed. Any alter statements run to add a column will have compression turned off for that column unless any statement override has been performed.
* 2) compression is turned on. Any subsequent table create statements run will have compression turned on for that table unless any statement overrides have been performed. Any alter statements run to add a column will have compression turned on for that column unless any statement override has been performed. ColumnStore uses snappy compression in this mode.
ColumnStore decimal to double math
----------------------------------
MariaDB ColumnStore has the ability to change intermediate decimal mathematical results from decimal type to double. The decimal type has approximately 17-18 digits of precision but a smaller maximum range, whereas the double type has approximately 15-16 digits of precision but a much larger maximum range. In typical mathematical and scientific applications, the ability to avoid overflow in intermediate results with double math is likely more beneficial than the additional two digits of precision. In banking applications, however, it may be more appropriate to keep the default decimal setting to ensure accuracy to the least significant digit.
### Enable/Disable decimal to double math
The infinidb\_double\_for\_decimal\_math variable is used to control the data type for intermediate decimal results. This decimal for double math may be set as a default for the instance, set at the session level, or at the statement level by toggling this variable on and off.
To enable/disable the use of the decimal to double math at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_double_for_decimal_math = n
```
where n is:
* off (disabled, default)
* on (enabled)
### ColumnStore decimal scale
ColumnStore has the ability to support varied internal precision on decimal calculations. *infinidb\_decimal\_scale* is used internally by the ColumnStore engine to control how many significant digits to the right of the decimal point are carried through in suboperations on calculated columns. If, while running a query, you receive the message ‘aggregate overflow’, try reducing *infinidb\_decimal\_scale* and running the query again. Note that, as you decrease *infinidb\_decimal\_scale*, you may see reduced accuracy in the least significant digit(s) of a returned calculated column. *infinidb\_use\_decimal\_scale* is used internally by the ColumnStore engine to turn the use of this internal precision on and off. These two system variables may be set as a default for the instance or set at the session level.
#### Enable/disable decimal scale
To enable/disable the use of the decimal scale at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_use_decimal_scale = n
```
where *n* is off (disabled) or on (enabled).
#### Set decimal scale level
To set the decimal scale at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_decimal_scale = n
```
where *n* is the amount of precision desired for calculations.
Disk-based joins
----------------
### Introduction
Joins are performed in-memory on the [UM](../columnstore-user-module/index) node. When a join operation exceeds the memory allocated on the UM for query joins, the query is aborted with error code IDB-2001. Disk-based joins enable such queries to use disk for intermediate join data when the memory needed for the join exceeds the memory limit on the UM. Although slower than a fully in-memory join, and bound by the temporary space available on disk, they do allow such queries to complete.
**Note: Disk-based joins do not cover aggregation and DML joins.**
The following variables in the ***HashJoin*** element in the Columnstore.xml configuration file relate to disk-based joins. Columnstore.xml resides in the etc directory for your installation (/usr/local/mariadb/columnstore/etc).
* ***AllowDiskBasedJoin*** – Option to use disk-based joins. Valid values are Y (enabled) or N (disabled). Default is disabled.
* ***TempFileCompression*** – Option to use compression for disk join files. Valid values are Y (use compressed files) or N (use non-compressed files).
* ***TempFilePath*** – The directory path used for the disk joins. By default, this path is the tmp directory for your installation (i.e., /usr/local/mariadb/columnstore/tmp). Files (named infinidb-join-data\*) in this directory are created and cleaned on an as-needed basis. The entire directory is removed and recreated by ExeMgr at startup.
**Note: When using disk-based joins, it is strongly recommended that the TempFilePath reside on its own partition as the partition may fill up as queries are executed.**
### Per user join memory limit
In addition to the system-wide flags, the following system variable exists at the SQL global and session level for managing the per-user memory limit for joins.
* ***infinidb\_um\_mem\_limit*** - A value for memory limit in MB per user. When this limit is exceeded by a join, it will switch to a disk-based join. By default the limit is not set (value of 0).
For modification at the global level: In my.cnf file (typically /usr/local/mariadb/columnstore/mysql):
```
[mysqld]
...
infinidb_um_mem_limit = value
where value is the value in Mb for in memory limitation per user.
```
For modification at the session level, before issuing your join query from the SQL client, set the session variable as follows.
```
set infinidb_um_mem_limit = value
```
Batch insert mode for INSERTS
-----------------------------
### Introduction
MariaDB ColumnStore has the ability to utilize the cpimport fast data import tool for non-transactional [LOAD DATA INFILE](../load-data-infile/index) and [INSERT INTO SELECT FROM](../insert/index) SQL statements. Using this method results in a significant increase in performance in loading data through these two SQL statements. This optimization is independent of the storage engine used for the tables in the select statement.
### Enable/disable using cpimport for batch insert
The infinidb\_use\_import\_for\_batchinsert variable is used to control if cpimport is used for these statements. This variable may be set as a default for the instance, set at the session level, or at the statement level by toggling this variable on and off.
To enable/disable the use of cpimport for batch insert at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_use_import_for_batchinsert = n
```
where n is:
* 0 (disabled)
* 1 (enabled)
### Changing default delimiter for INSERT SELECT
* The infinidb\_import\_for\_batchinsert\_delimiter variable is used internally by MariaDB ColumnStore on a non-transactional INSERT INTO SELECT FROM statement as the default delimiter passed to the cpimport tool. With a default value ascii 7, there should be no need to change this value unless your data contains ascii 7 values.
To change this variable value at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_import_for_batchinsert_delimiter = ascii_value
```
where *ascii\_value* is an ASCII value representation of the desired delimiter.
Note that this setting may cause issues with multi byte character set data. It is recommended to utilize UTF8 files directly with cpimport.
### Version buffer file management
If the following error is received, most likely with a transactional LOAD DATA INFILE or INSERT INTO SELECT, it is recommended to break the load into multiple smaller chunks, increase the VersionBufferFileSize setting, or consider a non-transactional LOAD DATA INFILE or cpimport.
```
ERROR 1815 (HY000) at line 1 in file: 'ldi.sql': Internal error: CAL0006: IDB-2008: The version buffer overflowed. Increase VersionBufferFileSize or limit the rows to be processed.
```
The VersionBufferFileSize setting is updated in the ColumnStore.xml typically located under /usr/local/mariadb/columnstore/etc. This dictates the size of the version buffer file on disk which provides DML transactional consistency. The default value is '1GB' which reserves up to a 1 Gigabyte file size. Modify this on the PM1 node and restart the system if you require a larger value.
Local PM query mode
-------------------
MariaDB ColumnStore has the ability to query data from just a single [PM](../columnstore-performance-module/index) instead of the whole database through the [UM](../columnstore-user-module/index). In order to accomplish this, the infinidb\_local\_query variable in the my.cnf configuration file is used, and may be set as a system-wide default or at the session level.
### Enable local PM query during installation
Local PM query can be enabled system wide during the install process when running the install script postConfigure. Answer 'y' to this prompt during the install process.
```
NOTE: Local Query Feature allows the ability to query data from a single Performance
Module. Check MariaDB ColumnStore Admin Guide for additional information.
Enable Local Query feature? [y,n] (n) >
```
See [Installing and Configuring a Multi-Server ColumnStore System](../library/installing-and-configuring-a-multi-server-columnstore-system-11x/index) for additional information.
### Enable local PM query systemwide
To enable the use of the local PM Query at the instance level, specify `infinidb_local_query =1` (enabled) in the my.cnf configuration file at /usr/local/mariadb/columnstore/mysql. The default is 0 (disabled).
### Enable/disable local PM query at the session level
To enable/disable the use of the local PM Query at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_local_query = n
```
where n is:
* 0 (disabled)
* 1 (enabled)
At the session level, this variable applies only to executing a query on an individual [PM](../columnstore-performance-module/index) and will error if executed on the [UM](../columnstore-user-module/index). The PM must be set up with the local query option during installation.
### Local PM Query Examples
#### Example 1 - SELECT from a single table on local PM to import back on local PM:
With the infinidb\_local\_query variable set to 1 (default with local PM Query):
```
mcsmysql -e 'select * from source_schema.source_table;' -N | /usr/local/Calpont/bin/cpimport target_schema target_table -s '\t' -n1
```
#### Example 2 - SELECT involving a join between a fact table on the PM node and dimension table across all the nodes to import back on local PM:
With the infinidb\_local\_query variable set to 0 (default with local PM Query):
Create a script (i.e., extract\_query\_script.sql in our example) similar to the following:
```
set infinidb_local_query=0;
select fact.column1, dim.column2
from fact join dim using (key)
where idbPm(fact.key) = idbLocalPm();
```
The infinidb\_local\_query is set to 0 to allow query across all PMs.
The query is structured so that the UM process on the PM node gets the fact table data locally from the PM node (as indicated by the use of the [idbLocalPm()](../mariadb/columnstore-information-functions/index) function), while the dimension table data is extracted from all the PM nodes.
Then you can execute the script to pipe it directly into cpimport:
```
mcsmysql source_schema -N < extract_query_script.sql | /usr/local/mariadb/columnstore/bin/cpimport target_schema target_table -s '\t' -n1
```
Operating mode
--------------
ColumnStore has the ability to support full MariaDB query syntax through an operating mode. This operating mode may be set as a default for the instance or set at the session level. To set the operating mode at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_vtable_mode = n
```
where n is:
* 0) a generic, highly compatible row-by-row processing mode. Some WHERE clause components can be processed by ColumnStore, but joins are processed entirely by mysqld using a nested-loop join mechanism.
* 1) (the default) query syntax is evaluated by ColumnStore for compatibility with distributed execution and incompatible queries are rejected. Queries executed in this mode take advantage of distributed execution and typically result in higher performance.
* 2) auto-switch mode: ColumnStore will attempt to process the query internally, if it cannot, it will automatically switch the query to run in row-by-row mode.
SHOW INDEX
==========
Syntax
------
```
SHOW {INDEX | INDEXES | KEYS}
FROM tbl_name [FROM db_name]
[WHERE expr]
```
Description
-----------
`SHOW INDEX` returns table index information. The format resembles that of the SQLStatistics call in ODBC.
You can use `db_name.tbl_name` as an alternative to the `tbl_name FROM db_name` syntax. These two statements are equivalent:
```
SHOW INDEX FROM mytable FROM mydb;
SHOW INDEX FROM mydb.mytable;
```
`SHOW KEYS` and `SHOW INDEXES` are synonyms for `SHOW INDEX`.
You can also list a table's indexes with the [mariadb-show/mysqlshow](../mysqlshow/index) command:
```
mysqlshow -k db_name tbl_name
```
The [information\_schema.STATISTICS](../information-schema-statistics-table/index) table stores similar information.
The following fields are returned by `SHOW INDEX`.
| Field | Description |
| --- | --- |
| **`Table`** | Table name |
| **`Non_unique`** | `1` if the index permits duplicate values, `0` if values must be unique. |
| **`Key_name`** | Index name. The primary key is always named `PRIMARY`. |
| **`Seq_in_index`** | The column's sequence in the index, beginning with `1`. |
| **`Column_name`** | Column name. |
| **`Collation`** | Either `A`, if the column is sorted in ascending order in the index, or `NULL` if it's not sorted. |
| **`Cardinality`** | Estimated number of unique values in the index. The cardinality statistics are calculated at various times, and can help the optimizer make improved decisions. |
| **`Sub_part`** | `NULL` if the entire column is included in the index, or the number of included characters if not. |
| **`Packed`** | `NULL` if the index is not packed, otherwise how the index is packed. |
| **`Null`** | `NULL` if `NULL` values are permitted in the column, an empty string if `NULL`s are not permitted. |
| **`Index_type`** | The index type, which can be `BTREE`, `FULLTEXT`, `HASH` or `RTREE`. See [Storage Engine Index Types](../storage-engine-index-types/index). |
| **`Comment`** | Other information, such as whether the index is disabled. |
| **`Index_comment`** | Contents of the `COMMENT` attribute when the index was created. |
| **`Ignored`** | Whether or not an index will be ignored by the optimizer. See [Ignored Indexes](../ignored-indexes/index). From [MariaDB 10.6.0](https://mariadb.com/kb/en/mariadb-1060-release-notes/). |
The `WHERE` and `LIKE` clauses can be given to select rows using more general conditions, as discussed in [Extended SHOW](../extended-show/index).
Examples
--------
```
CREATE TABLE IF NOT EXISTS `employees_example` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`first_name` varchar(30) NOT NULL,
`last_name` varchar(40) NOT NULL,
`position` varchar(25) NOT NULL,
`home_address` varchar(50) NOT NULL,
`home_phone` varchar(12) NOT NULL,
`employee_code` varchar(25) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `employee_code` (`employee_code`),
KEY `first_name` (`first_name`,`last_name`)
) ENGINE=Aria;
INSERT INTO `employees_example` (`first_name`, `last_name`, `position`, `home_address`, `home_phone`, `employee_code`)
VALUES
('Mustapha', 'Mond', 'Chief Executive Officer', '692 Promiscuous Plaza', '326-555-3492', 'MM1'),
('Henry', 'Foster', 'Store Manager', '314 Savage Circle', '326-555-3847', 'HF1'),
('Bernard', 'Marx', 'Cashier', '1240 Ambient Avenue', '326-555-8456', 'BM1'),
('Lenina', 'Crowne', 'Cashier', '281 Bumblepuppy Boulevard', '328-555-2349', 'LC1'),
('Fanny', 'Crowne', 'Restocker', '1023 Bokanovsky Lane', '326-555-6329', 'FC1'),
('Helmholtz', 'Watson', 'Janitor', '944 Soma Court', '329-555-2478', 'HW1');
```
```
SHOW INDEXES FROM employees_example\G
*************************** 1. row ***************************
Table: employees_example
Non_unique: 0
Key_name: PRIMARY
Seq_in_index: 1
Column_name: id
Collation: A
Cardinality: 6
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Ignored: NO
*************************** 2. row ***************************
Table: employees_example
Non_unique: 0
Key_name: employee_code
Seq_in_index: 1
Column_name: employee_code
Collation: A
Cardinality: 6
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Ignored: NO
*************************** 3. row ***************************
Table: employees_example
Non_unique: 1
Key_name: first_name
Seq_in_index: 1
Column_name: first_name
Collation: A
Cardinality: NULL
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Ignored: NO
*************************** 4. row ***************************
Table: employees_example
Non_unique: 1
Key_name: first_name
Seq_in_index: 2
Column_name: last_name
Collation: A
Cardinality: NULL
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Ignored: NO
```
See Also
--------
* [Ignored Indexes](../ignored-indexes/index)
UCASE
=====
Syntax
------
```
UCASE(str)
```
Description
-----------
`UCASE()` is a synonym for `[UPPER](../upper/index)()`.
SHOW
====
Articles on the various SHOW commands.
| Title | Description |
| --- | --- |
| [About SHOW](../about-show/index) | General information about the SHOW statement. |
| [Extended Show](../extended-show/index) | Extended SHOW with WHERE and LIKE. |
| [SHOW AUTHORS](../show-authors/index) | Information about the people who work on MariaDB. |
| [SHOW BINARY LOGS](../show-binary-logs/index) | SHOW BINARY LOGS lists all binary logs on the server. |
| [SHOW BINLOG EVENTS](../show-binlog-events/index) | Show events in the binary log. |
| [SHOW CHARACTER SET](../show-character-set/index) | Available character sets. |
| [SHOW CLIENT\_STATISTICS](../show-client-statistics/index) | Statistics about client connections. |
| [SHOW COLLATION](../show-collation/index) | Supported collations. |
| [SHOW COLUMNS](../show-columns/index) | Column information. |
| [SHOW CONTRIBUTORS](../show-contributors/index) | Companies and people who financially contribute to MariaDB. |
| [SHOW CREATE DATABASE](../show-create-database/index) | Shows the CREATE DATABASE statement that created the database. |
| [SHOW CREATE EVENT](../show-create-event/index) | Displays the CREATE EVENT statement needed to re-create a given event |
| [SHOW CREATE FUNCTION](../show-create-function/index) | Statement that created the function. |
| [SHOW CREATE PACKAGE](../show-create-package/index) | Show the CREATE statement that creates the given package specification. |
| [SHOW CREATE PACKAGE BODY](../show-create-package-body/index) | Show the CREATE statement that creates the given package body (i.e. implementation). |
| [SHOW CREATE PROCEDURE](../show-create-procedure/index) | Returns the string used for creating a stored procedure. |
| [SHOW CREATE SEQUENCE](../show-create-sequence/index) | Shows the CREATE SEQUENCE statement that created the sequence. |
| [SHOW CREATE TABLE](../show-create-table/index) | Shows the CREATE TABLE statement that created the table. |
| [SHOW CREATE TRIGGER](../show-create-trigger/index) | Shows the CREATE TRIGGER statement used to create the trigger |
| [SHOW CREATE USER](../show-create-user/index) | Show the CREATE USER statement for a specified user. |
| [SHOW CREATE VIEW](../show-create-view/index) | Show the CREATE VIEW statement that created a view. |
| [SHOW DATABASES](../show-databases/index) | Lists the databases on the server. |
| [SHOW ENGINE](../show-engine/index) | Show storage engine information. |
| [SHOW ENGINE INNODB STATUS](../show-engine-innodb-status/index) | Display extensive InnoDB information. |
| [SHOW ENGINES](../show-engines/index) | Server storage engine info |
| [SHOW ERRORS](../show-errors/index) | Displays errors. |
| [SHOW EVENTS](../show-events/index) | Shows information about events |
| [SHOW EXPLAIN](../show-explain/index) | Shows an execution plan for a running query. |
| [SHOW FUNCTION CODE](../show-function-code/index) | Representation of the internal implementation of the stored function |
| [SHOW FUNCTION STATUS](../show-function-status/index) | Stored function characteristics |
| [SHOW GRANTS](../show-grants/index) | View GRANT statements. |
| [SHOW INDEX](../show-index/index) | Information about table indexes. |
| [SHOW INDEX\_STATISTICS](../show-index-statistics/index) | Index usage statistics. |
| [SHOW INNODB STATUS (removed)](../show-innodb-status-removed/index) | Removed synonym for SHOW ENGINE INNODB STATUS |
| [SHOW LOCALES](../show-locales/index) | View locales information. |
| [SHOW MASTER STATUS](../show-binlog-status/index) | Status information about the binary log. |
| [SHOW OPEN TABLES](../show-open-tables/index) | List non-temporary open tables. |
| [SHOW PACKAGE BODY STATUS](../show-package-body-status/index) | Returns characteristics of stored package bodies (implementations). |
| [SHOW PACKAGE STATUS](../show-package-status/index) | Returns characteristics of stored package specifications. |
| [SHOW PLUGINS](../show-plugins/index) | Display information about installed plugins. |
| [SHOW PLUGINS SONAME](../show-plugins-soname/index) | Information about all available plugins, installed or not. |
| [SHOW PRIVILEGES](../show-privileges/index) | Shows the list of supported system privileges. |
| [SHOW PROCEDURE CODE](../show-procedure-code/index) | Display internal implementation of a stored procedure. |
| [SHOW PROCEDURE STATUS](../show-procedure-status/index) | Stored procedure characteristics. |
| [SHOW PROCESSLIST](../show-processlist/index) | Running threads and information about them. |
| [SHOW PROFILE](../show-profile/index) | Display statement resource usage |
| [SHOW PROFILES](../show-profiles/index) | Show statement resource usage |
| [SHOW QUERY\_RESPONSE\_TIME](../show-query_response_time/index) | Retrieving information from the QUERY\_RESPONSE\_TIME plugin. |
| [SHOW RELAYLOG EVENTS](../show-relaylog-events/index) | Show events in the relay log. |
| [SHOW SLAVE HOSTS](../show-replica-hosts/index) | Display replicas currently registered with the primary. |
| [SHOW SLAVE STATUS](../show-replica-status/index) | Show status for one or all primaries. |
| [SHOW STATUS](../show-status/index) | Server status information. |
| [SHOW TABLE STATUS](../show-table-status/index) | SHOW TABLES with information about non-temporary tables. |
| [SHOW TABLES](../show-tables/index) | List of non-temporary tables, views or sequences. |
| [SHOW TABLE\_STATISTICS](../show-table-statistics/index) | Table usage statistics. |
| [SHOW TRIGGERS](../show-triggers/index) | Shows currently-defined triggers |
| [SHOW USER\_STATISTICS](../show-user-statistics/index) | User activity statistics. |
| [SHOW VARIABLES](../show-variables/index) | Displays the values of system variables. |
| [SHOW WARNINGS](../show-warnings/index) | Displays errors, warnings and notes. |
| [SHOW WSREP\_MEMBERSHIP](../show-wsrep_membership/index) | Galera node cluster membership information. |
| [SHOW WSREP\_STATUS](../show-wsrep_status/index) | Galera node cluster status information. |
Performance Schema table\_lock\_waits\_summary\_by\_table Table
===============================================================
The `table_lock_waits_summary_by_table` table records table lock waits by table.
| Column | Description |
| --- | --- |
| `OBJECT_TYPE` | Since this table records waits by table, always set to `TABLE`. |
| `OBJECT_SCHEMA` | Schema name. |
| `OBJECT_NAME` | Table name. |
| `COUNT_STAR` | Number of summarized events and the sum of the `x_READ` and `x_WRITE` columns. |
| `SUM_TIMER_WAIT` | Total wait time of the summarized events that are timed. |
| `MIN_TIMER_WAIT` | Minimum wait time of the summarized events that are timed. |
| `AVG_TIMER_WAIT` | Average wait time of the summarized events that are timed. |
| `MAX_TIMER_WAIT` | Maximum wait time of the summarized events that are timed. |
| `COUNT_READ` | Number of all read operations, and the sum of the equivalent `x_READ_NORMAL`, `x_READ_WITH_SHARED_LOCKS`, `x_READ_HIGH_PRIORITY` and `x_READ_NO_INSERT` columns. |
| `SUM_TIMER_READ` | Total wait time of all read operations that are timed. |
| `MIN_TIMER_READ` | Minimum wait time of all read operations that are timed. |
| `AVG_TIMER_READ` | Average wait time of all read operations that are timed. |
| `MAX_TIMER_READ` | Maximum wait time of all read operations that are timed. |
| `COUNT_WRITE` | Number of all write operations, and the sum of the equivalent `x_WRITE_ALLOW_WRITE`, `x_WRITE_CONCURRENT_INSERT`, `x_WRITE_DELAYED`, `x_WRITE_LOW_PRIORITY` and `x_WRITE_NORMAL` columns. |
| `SUM_TIMER_WRITE` | Total wait time of all write operations that are timed. |
| `MIN_TIMER_WRITE` | Minimum wait time of all write operations that are timed. |
| `AVG_TIMER_WRITE` | Average wait time of all write operations that are timed. |
| `MAX_TIMER_WRITE` | Maximum wait time of all write operations that are timed. |
| `COUNT_READ_NORMAL` | Number of all internal read normal locks. |
| `SUM_TIMER_READ_NORMAL` | Total wait time of all internal read normal locks that are timed. |
| `MIN_TIMER_READ_NORMAL` | Minimum wait time of all internal read normal locks that are timed. |
| `AVG_TIMER_READ_NORMAL` | Average wait time of all internal read normal locks that are timed. |
| `MAX_TIMER_READ_NORMAL` | Maximum wait time of all internal read normal locks that are timed. |
| `COUNT_READ_WITH_SHARED_LOCKS` | Number of all internal read with shared locks. |
| `SUM_TIMER_READ_WITH_SHARED_LOCKS` | Total wait time of all internal read with shared locks that are timed. |
| `MIN_TIMER_READ_WITH_SHARED_LOCKS` | Minimum wait time of all internal read with shared locks that are timed. |
| `AVG_TIMER_READ_WITH_SHARED_LOCKS` | Average wait time of all internal read with shared locks that are timed. |
| `MAX_TIMER_READ_WITH_SHARED_LOCKS` | Maximum wait time of all internal read with shared locks that are timed. |
| `COUNT_READ_HIGH_PRIORITY` | Number of all internal read high priority locks. |
| `SUM_TIMER_READ_HIGH_PRIORITY` | Total wait time of all internal read high priority locks that are timed. |
| `MIN_TIMER_READ_HIGH_PRIORITY` | Minimum wait time of all internal read high priority locks that are timed. |
| `AVG_TIMER_READ_HIGH_PRIORITY` | Average wait time of all internal read high priority locks that are timed. |
| `MAX_TIMER_READ_HIGH_PRIORITY` | Maximum wait time of all internal read high priority locks that are timed. |
| `COUNT_READ_NO_INSERT` | Number of all internal read no insert locks. |
| `SUM_TIMER_READ_NO_INSERT` | Total wait time of all internal read no insert locks that are timed. |
| `MIN_TIMER_READ_NO_INSERT` | Minimum wait time of all internal read no insert locks that are timed. |
| `AVG_TIMER_READ_NO_INSERT` | Average wait time of all internal read no insert locks that are timed. |
| `MAX_TIMER_READ_NO_INSERT` | Maximum wait time of all internal read no insert locks that are timed. |
| `COUNT_READ_EXTERNAL` | Number of all external read locks. |
| `SUM_TIMER_READ_EXTERNAL` | Total wait time of all external read locks that are timed. |
| `MIN_TIMER_READ_EXTERNAL` | Minimum wait time of all external read locks that are timed. |
| `AVG_TIMER_READ_EXTERNAL` | Average wait time of all external read locks that are timed. |
| `MAX_TIMER_READ_EXTERNAL` | Maximum wait time of all external read locks that are timed. |
| `COUNT_WRITE_ALLOW_WRITE` | Number of all internal write allow write locks. |
| `SUM_TIMER_WRITE_ALLOW_WRITE` | Total wait time of all internal write allow write locks that are timed. |
| `MIN_TIMER_WRITE_ALLOW_WRITE` | Minimum wait time of all internal write allow write locks that are timed. |
| `AVG_TIMER_WRITE_ALLOW_WRITE` | Average wait time of all internal write allow write locks that are timed. |
| `MAX_TIMER_WRITE_ALLOW_WRITE` | Maximum wait time of all internal write allow write locks that are timed. |
| `COUNT_WRITE_CONCURRENT_INSERT` | Number of all internal concurrent insert write locks. |
| `SUM_TIMER_WRITE_CONCURRENT_INSERT` | Total wait time of all internal concurrent insert write locks that are timed. |
| `MIN_TIMER_WRITE_CONCURRENT_INSERT` | Minimum wait time of all internal concurrent insert write locks that are timed. |
| `AVG_TIMER_WRITE_CONCURRENT_INSERT` | Average wait time of all internal concurrent insert write locks that are timed. |
| `MAX_TIMER_WRITE_CONCURRENT_INSERT` | Maximum wait time of all internal concurrent insert write locks that are timed. |
| `COUNT_WRITE_DELAYED` | Number of all internal write delayed locks. |
| `SUM_TIMER_WRITE_DELAYED` | Total wait time of all internal write delayed locks that are timed. |
| `MIN_TIMER_WRITE_DELAYED` | Minimum wait time of all internal write delayed locks that are timed. |
| `AVG_TIMER_WRITE_DELAYED` | Average wait time of all internal write delayed locks that are timed. |
| `MAX_TIMER_WRITE_DELAYED` | Maximum wait time of all internal write delayed locks that are timed. |
| `COUNT_WRITE_LOW_PRIORITY` | Number of all internal write low priority locks. |
| `SUM_TIMER_WRITE_LOW_PRIORITY` | Total wait time of all internal write low priority locks that are timed. |
| `MIN_TIMER_WRITE_LOW_PRIORITY` | Minimum wait time of all internal write low priority locks that are timed. |
| `AVG_TIMER_WRITE_LOW_PRIORITY` | Average wait time of all internal write low priority locks that are timed. |
| `MAX_TIMER_WRITE_LOW_PRIORITY` | Maximum wait time of all internal write low priority locks that are timed. |
| `COUNT_WRITE_NORMAL` | Number of all internal write normal locks. |
| `SUM_TIMER_WRITE_NORMAL` | Total wait time of all internal write normal locks that are timed. |
| `MIN_TIMER_WRITE_NORMAL` | Minimum wait time of all internal write normal locks that are timed. |
| `AVG_TIMER_WRITE_NORMAL` | Average wait time of all internal write normal locks that are timed. |
| `MAX_TIMER_WRITE_NORMAL` | Maximum wait time of all internal write normal locks that are timed. |
| `COUNT_WRITE_EXTERNAL` | Number of all external write locks. |
| `SUM_TIMER_WRITE_EXTERNAL` | Total wait time of all external write locks that are timed. |
| `MIN_TIMER_WRITE_EXTERNAL` | Minimum wait time of all external write locks that are timed. |
| `AVG_TIMER_WRITE_EXTERNAL` | Average wait time of all external write locks that are timed. |
| `MAX_TIMER_WRITE_EXTERNAL` | Maximum wait time of all external write locks that are timed. |
You can [TRUNCATE](../truncate-table/index) the table, which will reset all counters to zero.
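For example, the tables with the largest accumulated lock wait times can be listed with a query along these lines:

```
SELECT OBJECT_SCHEMA, OBJECT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.table_lock_waits_summary_by_table
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;
```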
Buildbot Setup Notes
====================
General setup instructions are available in the [BuildBot manual](http://docs.buildbot.net/current/manual/index.html), specifically in the section on [Setting up a build slave](http://docs.buildbot.net/current/manual/installation/worker.html).
In addition to installing BuildBot on the slave host, it is also necessary to install all the tools needed to branch MariaDB from Launchpad and compile it. It is a good idea to first branch the code from Launchpad and build it manually, as otherwise a lot of time may be spent fixing problems one at a time as new builds start and fail in one way or another.
Unfortunately, bzr is memory-hungry, so at least 1 gigabyte of memory is recommended (you may be able to squeeze by with less, but bzr is a real memory hog). A few gigabytes of disk space are also needed to hold the build directory.
Here are some detailed instructions for various systems:
* [Buildbot Setup for Ubuntu-Debian](../buildbot_setup_for_ubuntu-debian/index)
* [Buildbot Setup for MacOSX](../buildbot_setup_for_macosx/index)
* [Buildbot Setup for Solaris](../buildbot_setup_for_solaris/index)
* [Buildbot Setup for Windows](../buildbot_setup_for_windows/index)
See the [Buildbot TODO](../buildbot-todo/index) for plans and ideas on improving Buildbot.
Performance Schema events\_stages\_current Table
================================================
The `events_stages_current` table contains current stage events, with each row being a record of a thread and its most recent stage event.
The table contains the following columns:
| Column | Description |
| --- | --- |
| `THREAD_ID` | Thread associated with the event. Together with `EVENT_ID` uniquely identifies the row. |
| `EVENT_ID` | Thread's current event number at the start of the event. Together with `THREAD_ID` uniquely identifies the row. |
| `END_EVENT_ID` | `NULL` when the event starts, set to the thread's current event number at the end of the event. |
| `EVENT_NAME` | Event instrument name, corresponding to a `NAME` value in the `setup_instruments` table. |
| `SOURCE` | Name and line number of the source file containing the instrumented code that produced the event. |
| `TIMER_START` | Value in picoseconds when the event timing started or `NULL` if timing is not collected. |
| `TIMER_END` | Value in picoseconds when the event timing ended, or `NULL` if the event has not ended or timing is not collected. |
| `TIMER_WAIT` | Value in picoseconds of the event's duration or `NULL` if the event has not ended or timing is not collected. |
| `NESTING_EVENT_ID` | `EVENT_ID` of event within which this event nests. |
| `NESTING_EVENT_TYPE` | Nesting event type. One of `transaction`, `statement`, `stage` or `wait`. |
It is possible to empty this table with a `TRUNCATE TABLE` statement.
The related tables, [events\_stages\_history](../performance-schema-events_stages_history-table/index) and [events\_stages\_history\_long](../performance-schema-events_stages_history_long-table/index), derive their values from the current events.
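For example, the current stage of each instrumented thread can be viewed with a query such as the following (rows appear only if stage instrumentation and the corresponding consumer are enabled):

```
SELECT THREAD_ID, EVENT_NAME, SOURCE, TIMER_WAIT
FROM performance_schema.events_stages_current;
```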
ColumnStore Batch Insert Mode
=============================
### Introduction
MariaDB ColumnStore has the ability to utilize the cpimport fast data import tool for non-transactional [LOAD DATA INFILE](../load-data-infile/index) and [INSERT INTO SELECT FROM](../insert/index) SQL statements. Using this method results in a significant increase in performance in loading data through these two SQL statements. This optimization is independent of the storage engine used for the tables in the select statement.
### Enable/disable using cpimport for batch insert
The infinidb\_use\_import\_for\_batchinsert variable controls whether cpimport is used for these statements. This variable may be set as a default for the instance, at the session level, or at the statement level by toggling it on and off.
To enable or disable the use of cpimport for batch insert at the session level, use the following command. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_use_import_for_batchinsert = n
where n is:
* 0 (disabled)
* 1 (enabled)
```
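For example, cpimport-based loading can be toggled on for a single statement and then switched back off (the file and table names here are illustrative):

```
set infinidb_use_import_for_batchinsert = 1;
LOAD DATA INFILE '/tmp/orders.tbl' INTO TABLE orders;
set infinidb_use_import_for_batchinsert = 0;
```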
### Changing default delimiter for INSERT SELECT
* The infinidb\_import\_for\_batchinsert\_delimiter variable is used internally by MariaDB ColumnStore on a non-transactional INSERT INTO SELECT FROM statement as the default delimiter passed to the cpimport tool. With a default value of ASCII 7 (the BEL character), there should be no need to change this value unless your data contains ASCII 7 values.
To change this variable value at the session level, use the following command. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_import_for_batchinsert_delimiter = ascii_value
where ascii_value is an ascii value representation of the delimiter desired.
```
Note that this setting may cause issues with multi-byte character set data. It is recommended to use UTF-8 files directly with cpimport.
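For example, to use ASCII 31 (the unit separator) as the delimiter for the current session (the value 31 and the table names below are purely illustrative):

```
set infinidb_import_for_batchinsert_delimiter = 31;
INSERT INTO orders_cs SELECT * FROM orders_src;
```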
### Version buffer file management
If the following error is received, most likely with a transactional LOAD DATA INFILE or INSERT INTO SELECT, it is recommended to break the load into multiple smaller chunks, increase the VersionBufferFileSize setting, or consider a non-transactional LOAD DATA INFILE or using cpimport.
```
ERROR 1815 (HY000) at line 1 in file: 'ldi.sql': Internal error: CAL0006: IDB-2008: The version buffer overflowed. Increase VersionBufferFileSize or limit the rows to be processed.
```
The VersionBufferFileSize setting is updated in the ColumnStore.xml typically located under /usr/local/mariadb/columnstore/etc. This dictates the size of the version buffer file on disk which provides DML transactional consistency. The default value is '1GB' which reserves up to a 1 Gigabyte file size. Modify this on the PM1 node and restart the system if you require a larger value.
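For example, doubling the reserved size would mean changing the corresponding element in ColumnStore.xml to something like the following (2GB is an illustrative value; the element's position within the file is not shown here):

```
<VersionBufferFileSize>2GB</VersionBufferFileSize>
```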
| programming_docs |
MariaDB ColumnStore software upgrade 1.1.7 GA to 1.2.5 GA
=========================================================
This upgrade also applies to 1.2.0 Alpha to 1.2.5 GA upgrades
### Changes in 1.2.1 and later
#### libjemalloc dependency
ColumnStore 1.2.3 onwards requires libjemalloc to be installed. For Ubuntu & Debian based distributions this is installed using the package "libjemalloc1" in the standard repositories.
For CentOS the package is in RedHat's EPEL repository:
```
sudo yum -y install epel-release
sudo yum install jemalloc
```
#### Non-distributed is the default distribution mode in postConfigure
The default distribution mode has changed from 'distributed' to 'non-distributed'. During an upgrade, however, the default is to use the distribution mode used in the original installation. The options '-d' and '-n' can always be used to override the default.
#### Non-root user sudo setup
Root-level permissions are no longer required to install or upgrade ColumnStore for some types of installations. Installations requiring some level of sudo access, and the instructions, are listed here: [https://mariadb.com/kb/en/library/preparing-for-columnstore-installation-121/#update-sudo-configuration-if-needed-by-root-user](../library/preparing-for-columnstore-installation-121/index#update-sudo-configuration-if-needed-by-root-user)
#### Running the mysql\_upgrade script
As part of the upgrade process to 1.2.5, the user is required to run the mysql\_upgrade script on all of the following nodes.
* All User Modules on a system configured with separate User and Performance Modules
* All Performance Modules on a system configured with separate User and Performance Modules and Local Query Feature is enabled
* All Performance Modules on a system configured with combined User and Performance Modules
mysql\_upgrade should be run once the upgrade has been completed.
This is an example of how it is run on a root user install:
```
/usr/local/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=/usr/local/mariadb/columnstore/mysql/my.cnf --force
```
This is an example of how it is run on a non-root user install, assuming ColumnStore is installed under the user's home directory:
```
$HOME/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=$HOME/mariadb/columnstore/mysql/my.cnf --force
```
In addition you should run the upgrade stored procedure below for a major version upgrade.
#### Executing the upgrade stored procedure
If you are upgrading from 1.1.7 or have upgraded in the past you should run the MariaDB ColumnStore stored procedure. This updates the MariaDB FRM files by altering every ColumnStore table with a blank table comment. This will not affect options set using table comments but will erase any table comment the user has manually set.
You only need to execute this as part of a major version upgrade. It is executed using the following query, which should be run by a user with the privileges to alter every ColumnStore table:
```
call columnstore_info.columnstore_upgrade();
```
### Setup
In this section, we will refer to the directory ColumnStore is installed in as <CSROOT>. If you installed the RPM or DEB package, then your <CSROOT> will be /usr/local. If you installed it from the tarball, <CSROOT> will be where you unpacked it.
#### Columnstore.xml / my.cnf
Configuration changes made manually are not automatically carried forward during the upgrade. These modifications will need to be made again manually after the upgrade is complete.
After the upgrade process the configuration files will be saved at:
* <CSROOT>/mariadb/columnstore/etc/Columnstore.xml.rpmsave
* <CSROOT>/mariadb/columnstore/mysql/my.cnf.rpmsave
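To see which manual changes need to be reapplied, the saved copies can be compared against the newly installed files, for example:

```
# diff <CSROOT>/mariadb/columnstore/etc/Columnstore.xml.rpmsave <CSROOT>/mariadb/columnstore/etc/Columnstore.xml
# diff <CSROOT>/mariadb/columnstore/mysql/my.cnf.rpmsave <CSROOT>/mariadb/columnstore/mysql/my.cnf
```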
#### MariaDB root user database password
If you have specified a root user database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user's home directory, with 600 file permissions, containing the following (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
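One way to create the file with the correct permissions before editing it (a sketch; any method that results in mode 600 works):

```
$ touch ~/.my.cnf
$ chmod 600 ~/.my.cnf
```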
### Choosing the type of upgrade
Note: softlinks may cause a problem during the upgrade if you use the RPM or DEB packages. If you have linked a directory above /usr/local/mariadb/columnstore, the softlinks will be deleted and the upgrade will fail. In that case you will need to upgrade using the binary tarball instead. If you have only linked the data directories (i.e. /usr/local/mariadb/columnstore/data\*), the RPM/DEB package upgrade will work.
#### Root User Installs
##### Upgrading MariaDB ColumnStore using the tarball of RPMs (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Download the package mariadb-columnstore-1.2.5-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.**
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
```
# tar -zxf mariadb-columnstore-1.2.5-1-centos#.x86_64.rpm.tar.gz
```
* Uninstall the old packages, then install the new packages. The MariaDB ColumnStore software will be installed in /usr/local/.
```
# rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
# rpm -ivh mariadb-columnstore-*1.2.5*rpm
```
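Before running postConfigure, you can optionally verify that the new packages are in place (a quick sanity check, not part of the official procedure):

```
# rpm -qa | grep '^mariadb-columnstore'
```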
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using RPM Package Repositories (non-distributed mode)
The system can be upgraded when it was previously installed from the Package Repositories. This will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'yum' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# yum remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# yum --enablerepo=mariadb-columnstore clean metadata
# yum install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /usr/local directory mariadb-columnstore-1.2.5-1.x86\_64.bin.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# /usr/local/mariadb/columnstore/bin/pre-uninstall
```
* Unpack the tarball in the /usr/local/ directory.
```
# tar -zxvf mariadb-columnstore-1.2.5-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# /usr/local/mariadb/columnstore/bin/post-install
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the DEB tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /root directory mariadb-columnstore-1.2.5-1.amd64.deb.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which contains DEBs.
```
# tar -zxf mariadb-columnstore-1.2.5-1.amd64.deb.tar.gz
```
* Remove and install all MariaDB ColumnStore debs
```
# cd /root/
# dpkg -r $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg --install mariadb-columnstore-*1.2.5-1*deb
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using DEB Package Repositories (non-distributed mode)
The system can be upgraded when it was previously installed from the Package Repositories. This will need to be run on each module in the system
Additional information can be found in this document on how to setup and install using the 'apt-get' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# apt-get remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# apt-get update
# sudo apt-get install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/index#running-the-mysql_upgrade-script)
#### Non-Root User Installs
##### Upgrade MariaDB ColumnStore from the binary tarball without sudo access (non-distributed mode)
This upgrade method applies when root/sudo access is not an option.
The uninstall script for 1.1.7 requires root access to perform some operations. These operations are the following:
* removing /etc/profile.d/columnstore{Alias,Env}.sh to remove aliases and environment variables from all users.
* running '<CSROOT>/mysql/columnstore/bin/syslogSetup.sh uninstall' to remove ColumnStore from the logging system
* removing the columnstore startup script
* removing /etc/ld.so.conf.d/columnstore.conf to remove the ColumnStore directories from the ld library search path
Because you are upgrading ColumnStore rather than uninstalling it, these operations are not necessary. If at some point you wish to uninstall it, you (or your sysadmin) will have to perform those operations by hand.
The upgrade instructions:
* Download the binary tarball to the current installation location on all nodes. See <https://downloads.mariadb.com/ColumnStore/>
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Copy Columnstore.xml to Columnstore.xml.rpmsave, and my.cnf to my.cnf.rpmsave
```
$ cp <CSROOT>/mariadb/columnstore/etc/Columnstore{.xml,.xml.rpmsave}
$ cp <CSROOT>/mariadb/columnstore/mysql/my{.cnf,.cnf.rpmsave}
```
* On all nodes, untar the new files in the same location as the old ones
```
$ tar zxf columnstore-1.2.5-1.x86_64.bin.tar.gz
```
* On all nodes, run post-install, specifying where ColumnStore is installed
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* On all nodes except for PM1, start the columnstore service
```
$ <CSROOT>/mariadb/columnstore/bin/columnstore start
```
* On PM1 only, run postConfigure, specifying the upgrade, non-distributed installation mode, and the location of the installation
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -n -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/index#running-the-mysql_upgrade-script)
##### Upgrade MariaDB ColumnStore from the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user USER on the server designated as PM1:
* Download the package into the user's home directory mariadb-columnstore-1.2.5-1.x86\_64.bin.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Run the pre-uninstall script; this will require sudo access as you are running a script from 1.1.7.
```
$ <CSROOT>/mariadb/columnstore/bin/pre-uninstall --installdir=<CSROOT>/mariadb/columnstore
```
* Make the sudo changes as noted at the beginning of this document
* Unpack the tarball in the same place as the original installation
```
$ tar -zxvf mariadb-columnstore-1.2.5-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* Run postConfigure using the upgrade option
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-125-ga/index#running-the-mysql_upgrade-script)
mariadb System-Versioned Tables System-Versioned Tables
=======================
MariaDB supports temporal data tables in the form of system-versioned tables (allowing you to query and operate on historic data, discussed below), [application-time periods](../application-time-periods/index) (allowing you to query and operate on a temporal range of data), and [bitemporal tables](../bitemporal-tables/index) (which combine both system-versioning and [application-time periods](../application-time-periods/index)).
System-Versioned Tables
-----------------------
**MariaDB starting with [10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/)**Support for system-versioned tables was added in [MariaDB 10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/).
System-versioned tables store the history of all changes, not only the data which is currently valid. This allows data analysis for any point in time, auditing of changes, and comparison of data from different points in time. Typical use cases are:
* Forensic analysis & legal requirements to store data for N years.
* Data analytics (retrospective, trends etc.), e.g. to get your staff information as of one year ago.
* Point-in-time recovery - recover a table state as of particular point in time.
System-versioned tables were first introduced in the SQL:2011 standard.
### Creating a System-Versioned Table
The [CREATE TABLE](../create-table/index) syntax has been extended to permit creating a system-versioned table. To be system-versioned, according to SQL:2011, a table must have two generated columns, a period, and a special table option clause:
```
CREATE TABLE t(
x INT,
start_timestamp TIMESTAMP(6) GENERATED ALWAYS AS ROW START,
end_timestamp TIMESTAMP(6) GENERATED ALWAYS AS ROW END,
PERIOD FOR SYSTEM_TIME(start_timestamp, end_timestamp)
) WITH SYSTEM VERSIONING;
```
In MariaDB one can also use a simplified syntax:
```
CREATE TABLE t (
x INT
) WITH SYSTEM VERSIONING;
```
In the latter case no extra columns will be created and they won't clutter the output of, say, `SELECT * FROM t`. The versioning information will still be stored, and it can be accessed via the pseudo-columns `ROW_START` and `ROW_END`:
```
SELECT x, ROW_START, ROW_END FROM t;
```
### Adding or Removing System Versioning To/From a Table
An existing table can be [altered](../alter-table/index) to enable system versioning for it.
```
CREATE TABLE t(
x INT
);
```
```
ALTER TABLE t ADD SYSTEM VERSIONING;
```
```
SHOW CREATE TABLE t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE `t` (
`x` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 WITH SYSTEM VERSIONING
```
Similarly, system versioning can be removed from a table:
```
ALTER TABLE t DROP SYSTEM VERSIONING;
```
```
SHOW CREATE TABLE t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE `t` (
`x` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
One can also add system versioning with all columns created explicitly:
```
ALTER TABLE t ADD COLUMN ts TIMESTAMP(6) GENERATED ALWAYS AS ROW START,
ADD COLUMN te TIMESTAMP(6) GENERATED ALWAYS AS ROW END,
ADD PERIOD FOR SYSTEM_TIME(ts, te),
ADD SYSTEM VERSIONING;
```
```
SHOW CREATE TABLE t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE `t` (
`x` int(11) DEFAULT NULL,
`ts` timestamp(6) GENERATED ALWAYS AS ROW START,
`te` timestamp(6) GENERATED ALWAYS AS ROW END,
PERIOD FOR SYSTEM_TIME (`ts`, `te`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 WITH SYSTEM VERSIONING
```
### Querying Historical Data
#### `SELECT`
To query the historical data one uses the clause `FOR SYSTEM_TIME` directly after the table name (before the table alias, if any). SQL:2011 provides three syntactic extensions:
* `AS OF` is used to see the table as it was at a specific point in time in the past:
```
SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP'2016-10-09 08:07:06';
```
* `BETWEEN start AND end` will show all rows that were visible at any point between two specified points in time. It works inclusively, a row visible exactly at *start* or exactly at *end* will be shown too.
```
SELECT * FROM t FOR SYSTEM_TIME BETWEEN (NOW() - INTERVAL 1 YEAR) AND NOW();
```
* `FROM start TO end` will also show all rows that were visible at any point between two specified points in time, including *start*, but **excluding** *end*.
```
SELECT * FROM t FOR SYSTEM_TIME FROM '2016-01-01 00:00:00' TO '2017-01-01 00:00:00';
```
Additionally MariaDB implements a non-standard extension:
* `ALL` will show all rows, historical and current.
```
SELECT * FROM t FOR SYSTEM_TIME ALL;
```
If the `FOR SYSTEM_TIME` clause is not used, the table will show the *current* data, as if one had specified `FOR SYSTEM_TIME AS OF CURRENT_TIMESTAMP`.
#### Views and Subqueries
When a system-versioned table is used in a view or in a subquery in the FROM clause, `FOR SYSTEM_TIME` can be used directly in the view or subquery body, or (non-standard) applied to the whole view when it is used in a `SELECT`:
```
CREATE VIEW v1 AS SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP'2016-10-09 08:07:06';
```
Or
```
CREATE VIEW v1 AS SELECT * FROM t;
SELECT * FROM v1 FOR SYSTEM_TIME AS OF TIMESTAMP'2016-10-09 08:07:06';
```
#### Use in Replication and Binary Logs
Tables that use system versioning implicitly add the `row_end` column to the primary key. While this is generally not an issue for most use cases, it can lead to problems when re-applying write statements from the binary log or in replication environments, where a primary retries an SQL statement on the replica.
Specifically, these writes include a value in the `row_end` column containing the timestamp from when the write was initially made. The re-occurrence of the primary key with the old system-versioning columns raises a duplicate-key error.
To mitigate this with MariaDB replication, set the [secure\_timestamp](../server-system-variables/index#secure_timestamp) system variable to `YES` on the replica. When set, the replica uses its own system clock when applying the row log, meaning that the primary can retry as many times as needed without causing a conflict. The retries generate new historical rows with new values for the `row_start` and `row_end` columns.
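As `secure_timestamp` is normally set at server startup rather than at runtime, a minimal sketch of the replica's option file could look like this (the `[mariadb]` group name is one common choice; adjust to your configuration layout):

```
[mariadb]
secure_timestamp = YES
```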
### Transaction-Precise History in InnoDB
A point in time when a row was inserted or deleted does not necessarily mean that a change became visible at the same moment. With transactional tables, a row might have been inserted in a long transaction, and became visible hours after it was inserted.
For some applications — for example, when doing data analytics on one-year-old data — this distinction does not matter much. For others — forensic analysis — it might be crucial.
MariaDB supports transaction-precise history (only for the [InnoDB storage engine](../innodb/index)) that allows seeing the data exactly as it would've been seen by a new connection doing a `SELECT` at the specified point in time — rows inserted *before* that point, but committed *after* will not be shown.
To use transaction-precise history, InnoDB needs to store not timestamps, but a transaction identifier per row. This is done by creating the generated columns as `BIGINT UNSIGNED`, not `TIMESTAMP(6)`:
```
CREATE TABLE t(
x INT,
start_trxid BIGINT UNSIGNED GENERATED ALWAYS AS ROW START,
end_trxid BIGINT UNSIGNED GENERATED ALWAYS AS ROW END,
PERIOD FOR SYSTEM_TIME(start_trxid, end_trxid)
) WITH SYSTEM VERSIONING;
```
These columns must be specified explicitly, but they can be made [INVISIBLE](../invisible-columns/index) to avoid cluttering `SELECT *` output.
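As a sketch, the transaction-precise columns from the previous example could be hidden from `SELECT *` output like this (assuming the `INVISIBLE` attribute is placed after the generated-column clause):

```
CREATE TABLE t(
  x INT,
  start_trxid BIGINT UNSIGNED GENERATED ALWAYS AS ROW START INVISIBLE,
  end_trxid BIGINT UNSIGNED GENERATED ALWAYS AS ROW END INVISIBLE,
  PERIOD FOR SYSTEM_TIME(start_trxid, end_trxid)
) WITH SYSTEM VERSIONING;
```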
When one uses transaction-precise history, one can optionally use transaction identifiers in the `FOR SYSTEM_TIME` clause:
```
SELECT * FROM t FOR SYSTEM_TIME AS OF TRANSACTION 12345;
```
This will show the data, exactly as it was seen by the transaction with the identifier 12345.
### Storing the History Separately
When the history is stored together with the current data, it increases the size of the table, so current data queries — table scans and index searches — will take more time, because they will need to skip over historical data. If most queries on that table use only current data, it might make sense to store the history separately, to reduce the overhead from versioning.
This is done by partitioning the table by `SYSTEM_TIME`. Because of the [partition pruning](../partition-pruning-and-selection/index) optimization, all current data queries will only access one partition, the one that stores current data.
This example shows how to create such a partitioned table:
```
CREATE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME (
PARTITION p_hist HISTORY,
PARTITION p_cur CURRENT
);
```
In this example all history will be stored in the partition `p_hist` while all current data will be in the partition `p_cur`. The table must have exactly one current partition and at least one historical partition.
Partitioning by `SYSTEM_TIME` also supports automatic partition rotation. One can rotate historical partitions by time or by size. This example shows how to rotate partitions by size:
```
CREATE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 100000 (
PARTITION p0 HISTORY,
PARTITION p1 HISTORY,
PARTITION pcur CURRENT
);
```
MariaDB will start writing history rows into partition `p0`, and when it reaches a size of 100000 rows, MariaDB will switch to partition `p1`. There are only two historical partitions, so when `p1` overflows, MariaDB will issue a warning, but will continue writing into it.
Similarly, one can rotate partitions by time:
```
CREATE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 WEEK (
PARTITION p0 HISTORY,
PARTITION p1 HISTORY,
PARTITION p2 HISTORY,
PARTITION pcur CURRENT
);
```
This means that the history for the first week after the table was created will be stored in `p0`, the history for the second week in `p1`, and all later history in `p2`. One can see the exact rotation time for each partition in the [INFORMATION\_SCHEMA.PARTITIONS](../information-schema-partitions-table/index) table.
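For example, the per-partition boundaries and row counts can be inspected with a query along these lines (the schema name 'test' is a placeholder):

```
SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 't';
```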
It is possible to combine partitioning by `SYSTEM_TIME` and subpartitions:
```
CREATE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME
SUBPARTITION BY KEY (x)
SUBPARTITIONS 4 (
PARTITION ph HISTORY,
PARTITION pc CURRENT
);
```
#### Default Partitions
**MariaDB starting with [10.5.0](https://mariadb.com/kb/en/mariadb-1050-release-notes/)**Since partitioning by current and historical data is such a typical use case, from [MariaDB 10.5](../what-is-mariadb-105/index), it is possible to use a simplified statement to do so. For example, instead of
```
CREATE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME (
PARTITION p0 HISTORY,
PARTITION pn CURRENT
);
```
you can use
```
CREATE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME;
```
You can also specify the number of partitions, which is useful if you want to rotate history by time, for example:
```
CREATE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME
INTERVAL 1 MONTH
PARTITIONS 12;
```
Specifying the number of partitions without specifying a rotation condition will result in a warning:
```
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME PARTITIONS 12;
Query OK, 0 rows affected, 1 warning (0.518 sec)
Warning (Code 4115): Maybe missing parameters: no rotation condition for multiple HISTORY partitions.
```
while specifying only 1 partition will result in an error:
```
CREATE OR REPLACE TABLE t (x INT) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME PARTITIONS 1;
ERROR 4128 (HY000): Wrong partitions for `t`: must have at least one HISTORY and exactly one last CURRENT
```
#### Automatically Creating Partitions
**MariaDB starting with [10.9.1](https://mariadb.com/kb/en/mariadb-1091-release-notes/)**From [MariaDB 10.9.1](https://mariadb.com/kb/en/mariadb-1091-release-notes/), the `AUTO` keyword can be used to automatically create history partitions.
For example
```
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH
STARTS '2021-01-01 00:00:00' AUTO PARTITIONS 12;
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME LIMIT 1000 AUTO;
```
Or with explicit partitions:
```
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO
(PARTITION p0 HISTORY, PARTITION pn CURRENT);
```
To disable or enable auto-creation one can use ALTER TABLE by adding or removing AUTO from the partitioning specification:
```
CREATE TABLE t1 (x int) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
# Disables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR;
# Enables auto-creation:
ALTER TABLE t1 PARTITION BY SYSTEM_TIME INTERVAL 1 HOUR AUTO;
```
If the rest of the partitioning specification is identical to CREATE TABLE, no repartitioning will be done (for details see [MDEV-27328](https://jira.mariadb.org/browse/MDEV-27328)).
### Removing Old History
Because it stores all the history, a system-versioned table might grow very large over time. There are many options to trim down the space and remove the old history.
One can completely drop the versioning from the table and add it back again; this will delete all the history:
```
ALTER TABLE t DROP SYSTEM VERSIONING;
ALTER TABLE t ADD SYSTEM VERSIONING;
```
It might be a rather time-consuming operation, though, as the table will need to be rebuilt, possibly twice (depending on the storage engine).
Another option is to use partitioning and drop some of the historical partitions:
```
ALTER TABLE t DROP PARTITION p0;
```
Note that one cannot drop the current partition or the only historical partition.
The third option: one can use a variant of the [DELETE](../delete/index) statement to prune the history:
```
DELETE HISTORY FROM t;
```
or only old history up to a specific point in time:
```
DELETE HISTORY FROM t BEFORE SYSTEM_TIME '2016-10-09 08:07:06';
```
or to a specific transaction (with `BEFORE SYSTEM_TIME TRANSACTION xxx`).
To protect the integrity of the history, this statement requires a special [DELETE HISTORY](../grant/index#table-privileges) privilege.
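As a sketch, the privilege can be granted like any other table privilege (the database and user names here are placeholders):

```
GRANT DELETE HISTORY ON mydb.* TO 'app_user'@'localhost';
```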
Currently, using the DELETE HISTORY statement with a BEFORE SYSTEM\_TIME greater than the ROW\_END of the active records (as a [TIMESTAMP](../timestamp/index), this has a maximum value of '2038-01-19 03:14:07' [UTC](../coordinated-universal-time/index)) will result in the historical records being dropped, and the active records being deleted and moved to history. See [MDEV-25468](https://jira.mariadb.org/browse/MDEV-25468).
Prior to [MariaDB 10.4.5](https://mariadb.com/kb/en/mariadb-1045-release-notes/), the [TRUNCATE TABLE](../truncate-table/index) statement dropped all historical records from a system-versioned table.
From [MariaDB 10.4.5](https://mariadb.com/kb/en/mariadb-1045-release-notes/), historic data is protected from TRUNCATE statements, as per the SQL standard, and an Error 4137 is instead raised:
```
TRUNCATE t;
ERROR 4137 (HY000): System-versioned tables do not support TRUNCATE TABLE
```
### Excluding Columns From Versioning
Another MariaDB extension allows versioning only a subset of columns in a table. This is useful, for example, if you have a table with user information that should be versioned, but one column is, say, a login counter that is incremented often and is not interesting to version. Such a column can be excluded from versioning by declaring it `WITHOUT SYSTEM VERSIONING`:
```
CREATE TABLE t (
x INT,
y INT WITHOUT SYSTEM VERSIONING
) WITH SYSTEM VERSIONING;
```
A column can also be declared `WITH SYSTEM VERSIONING`, which will automatically make the table versioned. The statement below is equivalent to the one above:
```
CREATE TABLE t (
x INT WITH SYSTEM VERSIONING,
y INT
);
```
System Variables
----------------
There are a number of system variables related to system-versioned tables:
#### system\_versioning\_alter\_history
* **Description:** SQL:2011 does not allow [ALTER TABLE](../alter-table/index) on system-versioned tables. When this variable is set to `ERROR`, an attempt to alter a system-versioned table will result in an error. When this variable is set to `KEEP`, ALTER TABLE will be allowed, but the history will become incorrect — querying historical data will show the new table structure. This mode is still useful, for example, when adding new columns to a table. Note that if historical data contains or would contain nulls, attempting to ALTER these columns to be `NOT NULL` will return an error (or warning if [strict\_mode](../sql-mode/index#strict-mode) is not set).
* **Commandline:** `--system-versioning-alter-history=value`
* **Scope:** Global, Session
* **Dynamic:** Yes
* **Type:** Enum
* **Default Value:** `ERROR`
* **Valid Values:** `ERROR`, `KEEP`
* **Introduced:** [MariaDB 10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/)
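For example, to permit an `ALTER TABLE` on a versioned table in the current session (the added column is illustrative):

```
SET SESSION system_versioning_alter_history = KEEP;
ALTER TABLE t ADD COLUMN y INT;
```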
---
#### `system_versioning_asof`
* **Description:** If set to a specific timestamp value, an implicit `FOR SYSTEM_TIME AS OF` clause will be applied to all queries. This is useful if one wants to do many queries for history at the specific point in time. Set it to `DEFAULT` to restore the default behavior. Has no effect on DML, so queries such as [INSERT .. SELECT](../insert-select/index) and [REPLACE .. SELECT](../replace/index) need to state AS OF explicitly.
* **Commandline:** None
* **Scope:** Global, Session
* **Dynamic:** Yes
* **Type:** Varchar
* **Default Value:** `DEFAULT`
* **Introduced:** [MariaDB 10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/)
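For example (the timestamp is illustrative):

```
SET SESSION system_versioning_asof = '2016-10-09 08:07:06';
SELECT * FROM t; -- implicitly AS OF the timestamp above
SET SESSION system_versioning_asof = DEFAULT;
```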
---
#### system\_versioning\_innodb\_algorithm\_simple
* **Description:** Never fully implemented and removed in the following release.
* **Commandline:** `--system-versioning-innodb-algorithm-simple[={0|1}]`
* **Scope:** Global, Session
* **Dynamic:** Yes
* **Type:** Boolean
* **Default Value:** `ON`
* **Introduced:** [MariaDB 10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/)
* **Removed:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
Limitations
-----------
* Versioning clauses can not be applied to [generated (virtual and persistent) columns](../generated-columns/index).
* [mysqldump](../mysqldump/index) does not read historical rows from versioned tables, so historical data will not be backed up. Also, restoring the timestamps would not be possible, as they cannot be set explicitly by an insert or a user.
See Also
--------
* [Application-Time Periods](../application-time-periods/index)
* [Bitemporal Tables](../bitemporal-tables/index)
* [mysql.transaction\_registry Table](../mysqltransaction_registry-table/index)
* [MariaDB Temporal Tables](https://youtu.be/uBoUlTsU1Tk) (video)
mariadb Release Notes - MariaDB Audit Plugin Release Notes - MariaDB Audit Plugin
=====================================
Notice that as of MariaDB versions 5.5.42 and 10.0.10 the Audit Plugin is included with MariaDB and not distributed separately, so separate release notes are no longer kept.
| Title | Status | Release Date |
| --- | --- | --- |
| [MariaDB Audit Plugin 1.1.7 Release Notes](https://mariadb.com/kb/en/mariadb-audit-plugin-117-release-notes/) | Stable | 1 May 2014 |
| [MariaDB Audit Plugin 1.1.6 Release Notes](https://mariadb.com/kb/en/mariadb-audit-plugin-116-release-notes/) | Stable | 27 Mar 2014 |
| [MariaDB Audit Plugin 1.1.5 Release Notes](https://mariadb.com/kb/en/mariadb-audit-plugin-115-release-notes/) | Stable | 25 Feb 2014 |
| [MariaDB Audit Plugin 1.1.4 Release Notes](https://mariadb.com/kb/en/mariadb-audit-plugin-114-release-notes/) | Stable | 21 Feb 2014 |
| [MariaDB Audit Plugin 1.1.3 Release Notes](https://mariadb.com/kb/en/mariadb-audit-plugin-113-release-notes/) | Stable | 7 Nov 2013 |
mariadb System Troubleshooting MariaDB ColumnStore System Troubleshooting MariaDB ColumnStore
==========================================
### MariaDB ColumnStore alias commands
During the installation, these alias commands are defined and placed in the .bashrc file of the install user. In 1.1 and later releases, the aliases reside in /etc/profile.d/columnstoreAlias.sh.
This example is from a non-root install:
```
alias mcsmysql='/home/mariadb-user/mariadb/columnstore/mysql/bin/mysql --defaults-file=/home/mariadb-user/mariadb/columnstore/mysql/my.cnf -u root'
alias ma=/home/mariadb-user/mariadb/columnstore/bin/mcsadmin
alias mcsadmin=/home/mariadb-user/mariadb/columnstore/bin/mcsadmin
alias cpimport=/home/mariadb-user/mariadb/columnstore/bin/cpimport
alias home='cd /home/mariadb-user/mariadb/columnstore'
alias log='cd /var/log/mariadb/columnstore/'
alias dbrm='cd /home/mariadb-user/mariadb/columnstore/data1/systemFiles/dbrm'
alias module='cat /home/mariadb-user/mariadb/columnstore/local/module'
```
* mcsmysql - access the MariaDB ColumnStore MySQL console
* ma and mcsadmin - access the MariaDB ColumnStore admin console
* cpimport - shortcut to run the bulk load process, cpimport
* home - cd to the MariaDB ColumnStore home directory
* log - cd to the MariaDB ColumnStore log directory
* dbrm - cd to the MariaDB ColumnStore DBRM file directory
* module - outputs the MariaDB ColumnStore local module name, like 'pm1'
### MariaDB ColumnStore Support Report tool
This tool, called "columnstoreSupport", can be executed by users to generate a report containing the log files and other system data used by MariaDB personnel to help diagnose system-related issues and errors within the MariaDB ColumnStore product.
Here is how to run it:
On a single server:
```
/usr/local/mariadb/columnstore/bin/columnstoreSupport -a -bl
```
On a multi-node combo server, run on pm1
```
/usr/local/mariadb/columnstore/bin/columnstoreSupport -a -bl -p 'user-password'
```
On a multi-node separate server, run on um1
```
/usr/local/mariadb/columnstore/bin/columnstoreSupport -a -bl -p 'user-password'
```
NOTE: If ssh-keys are set up, enter the word 'ssh' for user-password. If ssh-keys are not set up, enter the Unix user password that is used to ssh login to the system.
NOTE: If there is a MariaDB/MySQL root user password set up for the mysql console, you will need to add that password to ColumnStore's my.cnf file.
When the password is not set in my.cnf, this error will be reported:
```
NOTE: MariaDB Columnstore root user password is set
NOTE: No password provide on command line or found uncommented in my.cnf
```
Add this line to the my.cnf file
```
password='root-password'
```
Here is an example of running it and the report that is generated:
```
/usr/local/mariadb/columnstore/bin/columnstoreSupport -a
Get software report data for pm1
Get config report data for pm1
Get log report data for pm1
Get log config data for pm1
Get hardware report data for pm1
Get resource report data for pm1
Get dbms report data for pm1
Columnstore Support Script Successfully completed, files located in columnstoreSupportReport.tar.gz
```
columnstoreSupportReport.tar.gz is what you would provide to MariaDB personnel or attach to a JIRA.
This is what the report consists of:
1. Compressed log files from each module: pm1\_logReport.tar.gz. This is the directory /var/log/mariadb/columnstore from pm1, which will contain:
* system logs for ColumnStore, debug, info, err, warning, and critical
* the alarm logs, alarm.log and activealarmLog
* UI command log, uiCommands.log, which are commands entered into mcsadmin
2. Config report from each module: pm1\_configReport.txt. NOTE: on a single-server system, the pm1 report will contain more configuration data. On a multi-node separate system, the um1 report will contain more data.
* /etc/fstab
* Server processes - ps command info and top
* System network information
* System configuration information including storage
* System status information at the time the report was run
* System configuration file, Columnstore.xml
3. Hardware report for each module: pm1\_hardwareReport.txt
* OS version
* CPU information
* Memory information
* Storage mount information
* IP address information, ifconfig
4. Resource report for each module: pm1\_resourceReport.txt
* Shared memory
* Disk usage
* DBRM files
* Active table locks
* BRM extent map
5. Software report for each module: pm1\_softwareReport.txt
* MariaDB ColumnStore software version
6. DBMS report, from the front-end modules: um1\_dbmsReport.txt
* MariaDB version
* System catalog and tables
* MariaDB Columnstore usernames
* MariaDB ColumnStore variables
* MariaDB ColumnStore configuration file, my.cnf
* List of active queries at the time the report was run
7. MariaDB ColumnStore MySQL log file: um1\_mysqllogReport.tar.gz
### MariaDB ColumnStore logging
MariaDB ColumnStore utilizes the installed system logging tool, whether it's syslog, rsyslog, or syslog-ng. The logs are located in /var/log/mariadb/columnstore. There are five logs:
* crit.log
* err.log
* warning.log
* info.log
* debug.log
log format:
timestamp hostname process name[pid] time | session id | txn id | thread id | logging level subsystem\_id message
| log part | Description |
| --- | --- |
| `timestamp` | Format mmm dd hh24:mm:ss |
| `hostname` | hostname of the logging server |
| `processname[pid]:` | The Name of the Process (example: ProcessMonitor) followed by the pid enclosed into [] |
| `time` | execution time for this step in seconds. |
| `session id` | Session ID. Thread number in the processlist. If N/A, the value is 0. |
| `txn id` | MariaDB Transaction ID |
| `thread id` | Thread ID |
| `logging level` | D = Debug I = Info W = Warning E = Error C = Critical |
| `subsystem id` | 1=ddljoblist 2=ddlpackage 3=dmlpackage 4=execplan 5=joblist 6=resultset 7=mcsadmin 8=oamcpp 9=ServerMonitor 10=traphandler 11=alarmmanager 12=configcpp 13=loggingcpp 14=messageqcpp 15=DDLProc 16=ExeMgr 17=ProcessManager 18=ProcessMonitor 19=writeengine 20=DMLProc 21=dmlpackageproc 22=threadpool 23=ddlpackageproc 24=dbcon 25=DiskManager 26=RouteMsg 27=SQLBuffMgr 28=PrimProc 29=controllernode 30=workernode 31=messagequeue 32=writeengineserver 33=writeenginesplit 34=cpimport.bin 35=IDBFile |
| `message` | logging message |
We also utilize the logrotate tool, and by default it is configured to keep 7 days of log files. They are stored in /var/log/mariadb/columnstore/archive.
The MariaDB ColumnStore logrotate file is located in
/etc/logrotate.d/columnstore
Also in the /var/log/mariadb/columnstore directory, there are a few other logs that are kept:
* activeAlarms – List of active alarms currently set on the system
* alarm.log – list of all the alarms and associated clear alarms
* mcsadmin.log – list of the mcsadmin commands entered
The MariaDB ColumnStore process corefiles would be stored in /var/log/mariadb/columnstore/corefiles, if core file dumping is enabled on the system.
#### MariaDB ColumnStore log files and what goes in them
* Crit, err, and warning are used to log problems reported by MariaDB ColumnStore processes.
* Info will have logs showing high-level actions that are going on in the system. During a system stop/start, it will show the high-level commands of the processes/modules being stopped and started. The bulk-load (cpimport) tool also logs its high-level actions there.
* Debug will have the lower level actions from the MariaDB ColumnStore Processes, which will include queries.
The MariaDB Server is logged separately. MariaDB ColumnStore-MySQL logs are stored here:
```
/usr/local/mariadb/columnstore/mysql/db/'server-name'.err
```
NOTE: Other informational log files will be written to the log directory, as well as to the Temporary Directory, by MariaDB ColumnStore processes during certain operations, so you will see a few other logs show up in the log directory besides these.
The Temporary Directory for root installs is /tmp/columnstore\_tmp\_files/. The Temporary Directory for non-root installs is $HOME/.tmp.
#### MariaDB ColumnStore log files and how to setup
The MariaDB ColumnStore log file setup is done as part of the post-install/postConfigure installation process. If for some reason the MariaDB ColumnStore log files aren't being generated, or the log rotation is not working, then an install/setup error might have occurred.
Run the following command to get the logging setup:
Root install:
```
/usr/local/mariadb/columnstore/bin/syslogSetup.sh install
```
Non-root install: run as the root user. The example below assumes 'mysql' as the non-root user.
```
export COLUMNSTORE_INSTALL_DIR=/home/mysql/mariadb/columnstore
export LD_LIBRARY_PATH=:/home/mysql/mariadb/columnstore/lib:/home/mysql/mariadb/columnstore/mysql/lib:/home/mysql/mariadb/columnstore/lib:/home/mysql/mariadb/columnstore/mysql/lib
/home/mysql/mariadb/columnstore/bin/syslogSetup.sh --installdir=/home/mysql/mariadb/columnstore --user=mysql install
```
To test the logs, run the following and check the log directory:
```
# mcsadmin getlogconfig
# ls -ltr /var/log/mariadb/columnstore
// want to see if any of these logs are now showing up
# ls -ltr
total 172
-rwxr-xr-x 1 syslog adm 398 Oct 12 18:27 warning.log
-rwxr-xr-x 1 syslog adm 398 Oct 12 18:27 err.log
-rwxrwxrwx 1 syslog adm 8100 Oct 12 18:27 info.log
-rwxrwxrwx 1 syslog adm 139975 Oct 12 18:27 debug.log
```
If these logs are now showing up, logging is working.
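A quick check like the one above can be scripted. The helper below is a sketch of our own (the function name and the 15-minute window are not part of ColumnStore); it reports which of the standard log files have been written to recently:

```shell
# Report which ColumnStore logs have been written to in the last 15 minutes.
# Hypothetical helper; pass the log directory as the first argument.
check_recent_logs() {
    dir=$1
    for f in crit.log err.log warning.log info.log debug.log; do
        if [ -f "$dir/$f" ] && [ -n "$(find "$dir/$f" -mmin -15 2>/dev/null)" ]; then
            echo "$f: recently updated"
        else
            echo "$f: missing or stale"
        fi
    done
}

check_recent_logs /var/log/mariadb/columnstore
```

Any file reported as "missing or stale" on a busy system is a hint that the syslog setup above needs to be re-run.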
#### Process STDOUT/STDERR logging
MariaDB ColumnStore processes have built-in STDOUT/STDERR logging that can be enabled, which can be used for additional debugging of issues. It is enabled on a process-by-process level. Here is an example of how to enable and disable it: locate the process to enable, such as DDLProc, then enter the process name and module type on the 'setprocessconfig' command.
NOTE: run from PM1
```
mcsadmin> getprocessconfig
getprocessconfig Tue Sep 26 19:21:14 2017
Process Configuration
Process #1 Configuration information
ProcessName = ProcessMonitor
ModuleType = ChildExtOAMModule
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/ProcMon
BootLaunch = 0
LaunchID = 1
RunType = LOADSHARE
LogFile = off
Process #2 Configuration information
ProcessName = ProcessManager
ModuleType = ParentOAMModule
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/ProcMgr
BootLaunch = 1
LaunchID = 2
RunType = ACTIVE_STANDBY
LogFile = off
Process #3 Configuration information
ProcessName = DBRMControllerNode
ModuleType = ParentOAMModule
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/controllernode
ProcessArg1 = /home/mariadb-user/mariadb/columnstore/bin/controllernode
ProcessArg2 = fg
BootLaunch = 2
LaunchID = 4
DepModuleName1 = @
DepProcessName1 = ProcessManager
RunType = SIMPLEX
LogFile = off
Process #4 Configuration information
ProcessName = ServerMonitor
ModuleType = ChildOAMModule
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/ServerMonitor
ProcessArg1 = /home/mariadb-user/mariadb/columnstore/bin/ServerMonitor
BootLaunch = 2
LaunchID = 6
RunType = LOADSHARE
LogFile = off
Process #5 Configuration information
ProcessName = DBRMWorkerNode
ModuleType = ChildExtOAMModule
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/workernode
ProcessArg1 = /home/mariadb-user/mariadb/columnstore/bin/workernode
ProcessArg2 = DBRM_Worker
ProcessArg3 = fg
BootLaunch = 2
LaunchID = 7
RunType = LOADSHARE
LogFile = off
Process #6 Configuration information
ProcessName = DecomSvr
ModuleType = pm
ProcessLocation = //home/mariadb-user/mariadb/columnstore/bin/DecomSvr
BootLaunch = 2
LaunchID = 15
RunType = LOADSHARE
LogFile = off
Process #7 Configuration information
ProcessName = PrimProc
ModuleType = pm
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/PrimProc
BootLaunch = 2
LaunchID = 20
RunType = LOADSHARE
LogFile = off
Process #8 Configuration information
ProcessName = ExeMgr
ModuleType = pm
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/ExeMgr
BootLaunch = 2
LaunchID = 30
DepModuleName1 = pm*
DepProcessName1 = PrimProc
RunType = LOADSHARE
LogFile = off
Process #9 Configuration information
ProcessName = WriteEngineServer
ModuleType = pm
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/WriteEngineServer
BootLaunch = 2
LaunchID = 40
RunType = LOADSHARE
LogFile = off
Process #10 Configuration information
ProcessName = DDLProc
ModuleType = pm
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/DDLProc
BootLaunch = 2
LaunchID = 50
DepModuleName1 = pm*
DepProcessName1 = WriteEngineServer
DepModuleName2 = *
DepProcessName2 = DBRMWorkerNode
DepModuleName3 = *
DepProcessName3 = ExeMgr
RunType = SIMPLEX
LogFile = off
Process #11 Configuration information
ProcessName = DMLProc
ModuleType = pm
ProcessLocation = /home/mariadb-user/mariadb/columnstore/bin/DMLProc
BootLaunch = 2
LaunchID = 51
DepModuleName1 = pm*
DepProcessName1 = WriteEngineServer
DepModuleName2 = *
DepProcessName2 = DBRMWorkerNode
DepModuleName3 = @
DepProcessName3 = DDLProc
RunType = SIMPLEX
LogFile = off
Process #12 Configuration information
ProcessName = mysqld
ModuleType = pm
ProcessLocation = /home/mariadb-user/mariadb/columnstore/mysql/libexec/mysqld
BootLaunch = 0
LaunchID = 100
RunType = LOADSHARE
LogFile = off
mcsadmin> setprocessconfig DDLProc pm LogFile on
setprocessconfig Tue Sep 26 19:22:00 2017
Successfully set LogFile = on
mcsadmin> shutdownsystem y
shutdownsystem Tue Sep 26 19:23:59 2017
This command stops the processing of applications on all Modules within the MariaDB ColumnStore System
Checking for active transactions
Stopping System...
Successful stop of System
Shutting Down System...
Successful shutdown of System
mcsadmin> startsystem
startsystem Tue Sep 26 19:24:36 2017
startSystem command, 'columnstore' service is down, sending command to
start the 'columnstore' service on all modules
System being started, please wait..................
Successful start of System
mcsadmin>
```
In the log directory, there will be the following 2 files; all of the STDOUT/STDERR from the process is logged here:
```
pwd
/var/log/mariadb/columnstore
ll DDLProc.*
-rw-r--r-- 1 mariadb-user mariadb-user 0 Sep 26 19:25 DDLProc.err
-rw-r--r-- 1 mariadb-user mariadb-user 34 Sep 26 19:25 DDLProc.out
```
To disable, set the LogFile setting back to off and repeat the shutdownsystem/startsystem sequence:
```
setprocessconfig DDLProc pm LogFile off
```
### Crash trace files
MariaDB ColumnStore 1.0.13 / 1.1.3 onwards includes a special crash handler which will log details of a crash from the main UM and PM daemons. These can be found in:
```
/var/log/mariadb/columnstore/trace
```
The filenames will be in the form `<processName>.<processID>.log`. These are similar to the crash traces that can be found in the MariaDB server log files if MariaDB server crashes.
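The `<processName>.<processID>.log` naming means the trace directory can be summarized with plain shell parameter expansion. This is a sketch of our own (the function name is not part of ColumnStore):

```shell
# Summarize crash trace files named <processName>.<processID>.log.
# Hypothetical helper; pass the trace directory as the first argument.
list_crash_traces() {
    dir=$1
    for f in "$dir"/*.*.log; do
        [ -e "$f" ] || continue
        base=${f##*/}                      # e.g. PrimProc.5667.log
        proc=${base%%.*}                   # first field: process name
        pid=${base%.log}; pid=${pid##*.}   # middle field: process ID
        echo "process=$proc pid=$pid file=$base"
    done
}

list_crash_traces /var/log/mariadb/columnstore/trace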
### Enable/Disable Core Files
Since core files are very large (around 1GB) and can take up a lot of disk space, core file generation for MariaDB ColumnStore platform processes is disabled by default.
MariaDB ColumnStore process corefiles are placed here:
/var/log/mariadb/columnstore/corefiles/
You can redirect the location to another disk with more space by using a soft-link.
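The soft-link redirection can be sketched as below. This is our own illustrative helper, not a ColumnStore tool, and the destination path is a placeholder; stop the system before doing this on a live installation:

```shell
# Relocate the corefiles directory to a larger disk via a symlink.
# Hypothetical helper: relocate_corefiles <current-dir> <new-dir>
relocate_corefiles() {
    src=$1    # e.g. /var/log/mariadb/columnstore/corefiles
    dest=$2   # e.g. /data/bigdisk/columnstore-corefiles (placeholder path)
    mkdir -p "$dest"
    if [ -d "$src" ] && [ ! -L "$src" ]; then
        # preserve any existing cores, then remove the now-empty directory
        mv "$src"/* "$dest"/ 2>/dev/null
        rmdir "$src"
    fi
    ln -s "$dest" "$src"
}

# relocate_corefiles /var/log/mariadb/columnstore/corefiles /data/bigdisk/columnstore-corefiles
```

After this, anything written to the original path lands on the larger disk through the link.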
* Enable from pm1
```
# ma shutdownsystem
# /usr/local/mariadb/columnstore/bin/setConfig Installation CoreFileFlag y
# ma startsystem
```
* Disable from pm1
```
# ma shutdownsystem
# /usr/local/mariadb/columnstore/bin/setConfig Installation CoreFileFlag n
# ma startsystem
```
### MariaDB ColumnStore database files
MariaDB ColumnStore has 3 sets of database files. These files are always backed up and restored together as part of the backup and restore process.
* MariaDB ColumnStore-MySQL schemas - /usr/local/mariadb/columnstore/mysql/db/\*
* MariaDB ColumnStore Database - /usr/local/mariadb/columnstore/dataX/000.dir
* MariaDB ColumnStore DBRM files - /usr/local/mariadb/columnstore/data1/systemFiles/dbrm/\*
MariaDB ColumnStore Database: The X represents the DBRoot ID number. A DBRoot is the file directory containing the metadata, and generally 1 DBRoot is assigned per Performance Module. A DBRoot can point to a local disk storage area or be mounted on external disk. So for a single-server setup, there is 'data1' as the DBRoot. A system with 2 Performance Modules has 'data1' on PM1 and 'data2' on PM2.
MariaDB ColumnStore DBRM files: This is where the Extent Map and Versioning files are located; these make up the DBRM files. There are 3 copies of the files kept: one is the current active set and the other 2 are backups. The current active copy's file name is located in this file:
```
/usr/local/mariadb/columnstore/data1/systemFiles/dbrm/BRM_saves_current
```
The Extent Map is loaded into shared memory on each of the nodes during the start-system process time. The version in shared memory on the PM1 node is the main copy. Changes are applied to that version in memory. Then changes are made to the disk version that only exist on PM1 disk storage and a copy of those changes are sent out to the other nodes and their memory copies are updated.
NOTE: The utility `/usr/local/mariadb/columnstore/bin/editem` can be used to dump the internal memory copy of the Extent Map. There are a few options with this command: -i dumps a raw copy and -d dumps a formatted copy.
### MariaDB ColumnStore utilities
Here are a few of the common utilities that are used to view and troubleshoot issues. All of these commands are located in /usr/local/mariadb/columnstore/bin/
These utilities are used to view or set system variables. They should be run on the Active OAM Parent Module, which is generally Performance Module #1.
* editem – used to view the Extent Map in internal memory, discussed in previous section
* configxml.sh – used to get and set parameters from the system config file, ColumnStore.xml
+ ./configxml.sh getconfig ExeMgr1 Port
+ ./configxml.sh setconfig ExeMgr1 Port 8601
* dbrmctl – used to view or change the DBRM status
+ Example: display the current DBRM status
- ./dbrmctl status
+ Example: if the DBRM is in a read-only state, this unlocks it
- ./dbrmctl resume
These utilities are used to view or clear database table locks. They should be run on the Active OAM Parent Module, which is generally Performance Module #1.
* viewtablelock – displays which tables are locked. There might be times when a DML command fails and leaves a table in a locked state; run this command to find which tables are locked, and then use the 'cleartablelock' command
* cleartablelock – used to clear a table lock. As explained above, it can be used to clear a lock left set on a table by a failed command
These are utilities that would be run on all nodes in the system
* clearShm – used to clear the shared internal memory; used at times after a system-shutdown command just to make sure the memory is cleared
### Table locks and clearing
A table lock might be left set due to some failure while processing a DML/DDL command. Normally this lock can be cleared with the utility mentioned above, cleartablelock. In cases where that doesn't clear the lock, it can also be cleared by restarting the Active DMLProc on the system; this causes DMLProc to perform the rollback processing that clears any table locks. As a third option, when the first two don't work, there is a tablelock file that can be removed.
##### viewtablelock and cleartablelock
Run viewtablelock to get the list of table locks:
```
# /usr/local/mariadb/columnstore/bin/viewtablelock
```
Run cleartablelock with the table lock ID shown in the viewtablelock output:
```
# /usr/local/mariadb/columnstore/bin/cleartablelock XX
```
##### Restart DMLProc
1. Run the following command to find the Active DMLProc:
```
# mcsadmin getsystemi
```
2. Run the following command to restart DMLProc:
```
# mcsadmin restartProcess DMLProc xxx (xxx is probably um1 or pm1, based on the system)
```
When the status of DMLProc goes from BUSY\_INIT (meaning it is performing rollbacks) to ACTIVE, check whether the lock still exists.
##### delete the tablelock file
If the previous 2 commands didn't work, you can delete the tablelock file, if it exists.
This is done from PM1:
```
# cd /usr/local/mariadb/columnstore/data1/systemFiles/dbrm
# rm -f tablelock // if it exist
# mcsadmin restartSystem y
```
### Multi-node install problems and how to diagnose
Once you install the packages on the initial server, pm1, run post-install and postConfigure.
If it fails in the remote server install section, review the install logs in /tmp to see why the failure occurred. It could be related to these issues:
1. A user password or ssh keys are not set up, so the login fails
2. A dependent package isn't installed on a remote server
3. Incompatible OSes between nodes; all have to be the same
If it gets to the point where it says "Starting system processes" but seems to hang or not return, here are some things to check:
1. Check the locale setting on all servers and make sure they are all the same
2. On pm1, create the alias if you haven't already:
   1. . /usr/local/mariadb/columnstore/bin/columnstoreAlias, then run the following command and check the process status:
   2. mcsadmin getsysteminfo, and check whether ProcMon is ACTIVE on all configured servers. If not, check the log files on the associated server to see what error ProcMon is reporting. Also make sure ProcMgr is ACTIVE on pm1.
logs are located in:
/var/log/mariadb/columnstore
Generally, when ProcMon/ProcMgr isn't active, it's because of one of these issues:
1. With external storage, a PM's /etc/fstab isn't set up
2. A messaging issue between the servers is causing ProcMon and ProcMgr to fail to communicate. Make sure all server firewalls are disabled, along with SELinux.
### Add Module install problems and how to diagnose
There are a number of reasons why an addModule command might fail, missing dependent packages, password or ssh key is not setup, etc. Here are some things to investigate when this 'mcsadmin' command does fail.
1. Check the log files on the local node where the command is running. Generally, an entry in the error, warning, or critical log will be reported when a failure occurs.
2. Depending on how far the addModule command got, another log will be generated in /tmp. Look for a log file with binary\_installer or user/performance\_installer in its name. The installer script echoes the commands it runs into this log, so it will flag an issue in there.
3. Also make sure you can log into the new server/instance from the local one; not being able to log in will cause a failure.
4. Also, depending on how far the command got, it might have added an entry for the new module in the system configuration file. You will need to check that: if you were adding pm2 and it failed, but pm2 shows up in the system configuration via 'getsystemn', you would then need to remove that module before retrying the addModule command.
5. If the 'addModule' command returns a 'File Open error', it means it could not locate the MariaDB ColumnStore rpm/deb/binary in the $HOME directory, i.e. /root for a root-user install. The logic takes the packages from there and pushes them to the new server.
### postConfigure install problems and how to diagnose
The installation script, postConfigure, is run at install and upgrade time. The first part of the script takes information from the user and sets up the system configuration, updating the Columnstore.xml and ProcessConfig.xml configuration files.
The second part of the script performs a remote install on all of the other servers in the system, for a multi-node install configuration. The installation of the remote nodes is done simultaneously, and the remote install logs are placed in /tmp on 'pm1', e.g. "pm1\_installer.log". The actual log file name will differ based on whether you are doing an rpm, debian, or binary install. So if postConfigure reports that a failure occurred during the remote server install phase, look at these logs in /tmp. The main reasons why this might fail:
1. ssh access to the remote node from pm1 failed, password or ssh setup issues
2. A missing dependency package on the remote node
The third part of postConfigure is the startup of the system, which consists of starting the 'columnstore' service script on each node. If a failure happens during this phase, do the following to help determine the cause:
1. First, you might need to run this script to get the 'mcsadmin' alias command defined:
```
# . /usr/local/mariadb/columnstore/bin/aliasColumnstore
```
2. get the system statuses
```
# mcsadmin getsystemi
```
3. Here are some things that can point to why the system didn't come up:
   a. Make sure that all ProcMon processes are active on all nodes. If they aren't, here are some of the reasons why they might not be:
      i. A firewall is enabled on the 'pm1' node or the installing node; check that.
      ii. ProcMon might have run into another issue at startup, like failing to mount an external disk, so check the log files on the remote server where ProcMon failed to go ACTIVE.
   b. If a module status of FAILED is reported, check the log files from that module.
4. Also check the log files from the local 'pm1' module.
### startSystem problems and how to diagnose
This assumes the system has made it successfully through a postConfigure install or upgrade. At some point, you might need to do a stopsystem or shutdownsystem for maintenance or some other reason, and then do a startsystem. If any failures occur with the startSystem command, you can check the following:
1. get the system statuses
```
# mcsadmin getsystemi
```
2. Here are some things that can point to why the system didn't come up:
   a. Make sure that all ProcMon processes are active on all nodes. If they aren't, here are some of the reasons why they might not be:
      i. A firewall is enabled on the 'pm1' node or the installing node; check that.
      ii. ProcMon might have run into another issue at startup, like failing to mount an external disk, so check the log files on the remote server where ProcMon failed to go ACTIVE.
   b. If any of the DBRM processes, Controller or Worker nodes, are in a FAILED state, the most likely reason is an issue with the DBRM files. These files are loaded from disk into shared memory; if this load fails, the DBRM process is marked FAILED. If this is the case, please contact MariaDB Customer Support. The DBRM files contain the Extent Map and other metadata related to the MariaDB ColumnStore database files.
   c. If a module status of FAILED is reported, check the log files from that module.
3. Also check the log files from the local 'pm1' module.
### System in DBRM Read-Only Mode
The system can go into DBRM Read-Only Mode due to these conditions: a failure while executing a DDL/DML command; a network problem between servers, where the DBRM data could not be distributed from Performance Module 1 to the other servers; and some failover scenarios. It is indicated by the following alarm. This alarm, along with all critical alarms, is displayed when the user logs into the ColumnStore admin console, 'mcsadmin'.
AlarmID = 31
Brief Description = DBRM\_READ\_ONLY
Alarm Severity = CRITICAL
Time Issued = Wed Sep 13 14:32:37 2017
Reporting Module = pm1
Reporting Process = DBRMControllerNode
Reported Device = System
If the system ever gets into DBRM Read-Only Mode, it is best resolved by doing a system restart from the 'pm1' module:
```
# mcsadmin restartsystem y
```
DBRM Read-Only Mode means that changes cannot be made to the MariaDB ColumnStore Database while it is in this state. Queries can still be processed.
### Non-Root System, PrimProc Process fails to startup
For non-root systems, the user file limits are required to be set as shown in the Preparing guide. So if you have a non-root install where the system fails to start and 'mcsadmin getsystemi' shows that the PrimProc process is in a FAILED state, double-check the user file limit settings on each node.
[https://mariadb.com/kb/en/library/preparing-for-columnstore-installation/#set-the-user-file-limits-by-root-user](../library/preparing-for-columnstore-installation/index#set-the-user-file-limits-by-root-user)
### Create table error - Error occurred when calling system catalog
If you have a problem creating a table after a new install and you get the error "Error occurred when calling system catalog", chances are the system catalog didn't get created by postConfigure. The call to create it happens at the very end of postConfigure, so it is possible that postConfigure didn't complete successfully or an error occurred when trying to create it.
Run the following command from PM1 to create the System Catalog:
/usr/local/mariadb/columnstore/bin/dbbuilder 7
NOTE: This example assumes a root-user install
### Missing MariaDB ColumnStore Function or Engine
After a new install or upgrade, a MariaDB ColumnStore function or engine type might be missing from the MariaDB database. If this occurs, you can run the following procedure on each of the UMs, or PMs with UM front-end modules, on the system. This procedure recreates all of the functions and engines.
From Performance Module #1
```
mcsadmin shutdownsystem y
```
On all User Modules, or Performance Modules with mysqld installed, run the following scripts. This example assumes a root install in /usr/local/:
```
/usr/local/mariadb/columnstore/bin/post-mysqld-install
/usr/local/mariadb/columnstore/bin/post-mysql-install
```
From Performance Module #1
```
mcsadmin startsystem y
```
### Truncate Table Failure with error of Columnstore engine ID different
This issue has been reported after a system was upgraded from 1.1.x to 1.2.x versions of MariaDB ColumnStore. If the truncate error occurs with the ColumnStore engine ID being different, the existing table needs to be dropped and recreated as a fix.
### MariaDB ColumnStore Process Restarting due to Allocating too much memory
In the 1.2.2 and earlier releases, MariaDB ColumnStore processes like PrimProc, ExeMgr, or WriteEngineServer can automatically restart due to over-allocation of memory. PrimProc and WriteEngineServer run on the Performance Module; ExeMgr runs on the User Module.
This can be caused by a few situations like these:
* Doing very large aggregates
* Spike memory usage when dealing with large result sets to buffer the rowgroups
This is an example of the logs reported for this issue. The first line shows PrimProc using 95% of memory; the next line reports that it is restarting:
Feb 29 16:22:34 ip-192-168-1-112 ServerMonitor[1951]: 39.784588 |0|0|0| I 09 CAL0000: Memory Usage for Process: PrimProc : Memory Used 12296930 : % Used 95
Feb 29 16:23:00 ip-192-168-1-112 ProcessMonitor[1578]: 05.478658 |0|0|0| C 18 CAL0000: \*\*\*MariaDB ColumnStore Process Restarting: PrimProc, old PID = 5667
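When watching for this condition, the process name and %-used figure can be pulled out of such ServerMonitor lines with awk. This is a sketch of our own (the function name is not part of ColumnStore, and the parsing assumes the exact field layout of the sample line):

```shell
# Extract "<process> <percent-used>" from ServerMonitor "Memory Usage" lines.
# Hypothetical helper; reads log lines on stdin.
parse_mem_usage() {
    awk '/Memory Usage for Process/ {
        n = split($0, parts, " : ")   # last " : " field is "% Used NN"
        split(parts[n], pct, " ")
        proc = parts[1]
        sub(/.*Process: /, "", proc)  # keep only the process name
        print proc, pct[3]
    }'
}

line='Feb 29 16:22:34 ip-192-168-1-112 ServerMonitor[1951]: 39.784588 |0|0|0| I 09 CAL0000: Memory Usage for Process: PrimProc : Memory Used 12296930 : % Used 95'
echo "$line" | parse_mem_usage    # prints: PrimProc 95
```

Piped through the info log, this makes it easy to alert when a process crosses a chosen threshold.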
When this issue occurs on a system running 1.2.2 or earlier, the following work-around can be applied.
IMPORTANT: it is recommended that you check with MariaDB ColumnStore support first on whether this change should be made, so it can be confirmed that it is the best work-around.
Work-around process, on pm1:
```
# mcsadmin shutdownsystem y
```
The following then needs to be done on all nodes, both User Modules and Performance Modules:
1. Install the jemalloc package (libjemalloc1 for Ubuntu 18; CentOS 7 provides the jemalloc package via the EPEL repository). After you install the package, find the jemalloc shared object file path (/usr/lib/x86\_64-linux-gnu/libjemalloc.so.1 on Ubuntu 18).
2. Edit $install\_dir/bin/run.sh using the patch below. Please note that the libjemalloc.so path must be changed according to your distribution.
```
--- /run.sh 2019-01-31 05:07:19.473718632 +0000
+++ /usr/local/mariadb/columnstore/bin/run.sh 2019-01-31 05:07:34.057051852 +0000
@@ -49,7 +49,7 @@
 fi
 while [ $keep_going -ne 0 ]; do
-    $exename $args
+    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 $exename $args
 if [ -e ${lopt}/StopColumnstore ]; then
 exit 0
 fi
```
Finally, on pm1:
```
# mcsadmin startsystem y
```
### Error in forking cpimport.bin (errno-12); Cannot allocate memory
This error indicates that ExeMgr on the User Module doesn't have enough local memory to run the bulk load process, cpimport.bin. Check the memory allocation of other processes on the User Module when this error is reported.
Here is one example of when this problem has been seen: the setting of innodb\_buffer\_pool\_size in my.cnf is too high. In general, it shouldn't be set any higher than 50GB or 25% of the total memory.
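The sizing rule above (the smaller of 25% of total memory and 50GB) can be sketched as a small shell helper. The function name is our own, and the input/output unit (kB, as MemTotal is reported in /proc/meminfo) is an assumption for illustration:

```shell
# Recommend a buffer pool size: min(25% of total RAM, 50GB), in kB.
# Hypothetical helper illustrating the rule in the text above.
recommend_buffer_pool_kb() {
    total_kb=$1
    quarter=$((total_kb / 4))
    cap=$((50 * 1024 * 1024))    # 50GB expressed in kB
    if [ "$quarter" -lt "$cap" ]; then
        echo "$quarter"
    else
        echo "$cap"
    fi
}

# e.g. a 64GB machine: 25% of RAM is below the 50GB cap
recommend_buffer_pool_kb $((64 * 1024 * 1024))    # prints 16777216 (= 16GB)
```

On very large machines the 50GB cap wins; on smaller ones the 25% rule does.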
### How to Switch Primary Master User Module
The commands below show how to make a different User Module the master MySQL replication module. There are cases where User Module #2 may have become the master during a failover scenario or some other event, and you want to make User Module #1 the master once again. These are the commands to do this.
First, check the System Status with the following command to see whether User Module #1 is disabled or not:
On pm1
```
# mcsadmin getsysteminfo
```
If User Module #1 is disabled, run to enable it:
```
# mcsadmin altersystem-enablemodule um1 y
```
Now run the commands to make User Module #1 the master again. This assumes User Module #2 is currently the master:
```
# mcsadmin altersystem-disablemodule um2 y
# mcsadmin altersystem-enablemodule um2 y
```
### How to Recover when a DBROOT is incorrectly assigned or there is another Configuration problem
There are times when the system gets into a state where it will not start successfully due to a configuration problem. And since it will not start, the user can't get it to the point where the problem can be fixed via 'mcsadmin'. When that happens, the user will need to run 'postConfigure' to correct the issue.
Here are a couple of examples of configuration issues that could cause this situation:
1. DBROOT 1 gets reassigned to a different PM than PM1 when the system only has local storage, meaning DBROOT 1 is only on PM1.
2. A UM or PM is disabled and the user wants to get it enabled.
Here is the process on how to recover. Run from PM1
```
# mcsadmin shutdownsystem y
```
Run 'postConfigure' based on the type of install it is, root or non root. Command line arguments will be different between the two. This example shows for a root install.
```
# /usr/local/mariadb/columnstore/bin/postConfigure
// if asked to use the old configuration, answer n
// If you get to the module that is disabled and you want it enabled, answer 'y' to the enable-module prompt
// on each of the PM DBROOT prompts, enter the DBROOT number that goes with the PM
// When it completes the configuration part and asks for the ssh password, enter 'exit' to exit postConfigure
# mcsadmin startsystem
```
### Query failure MessageQueueClient :: setup (): unknown name or service
Due to a known issue in MariaDB ColumnStore 1.2.5 and earlier, if a User Module, or a Performance Module on a combined server, is removed, the ColumnStore.xml entry for its ExeMgr setting gets set to "unassigned". This setting causes queries to fail, especially when running multiple queries in parallel.
The work-around is to delete the unassigned ExeMgr entry from the Columnstore.xml file:
```
<ExeMgr8>
<IPAddr>unassigned</IPAddr>
<Port>8601</Port>
<Module>unassigned</Module>
</ExeMgr8>
```
### Replication Data out-of-sync causing mysqld to not start and the System to not startup.
If the system fails to start, or the MariaDB ColumnStore server (mysqld) fails to start and is reporting a replication error on a Drop, Rename, or Move of a Table or View, this could mean that the binary logs on the module, usually a slave User Module, are out of sync with the database. For example, when mysqld starts up on User Module #2, a slave module, it goes through the replication bin-logs and runs commands to catch up with the master DB. If it tries to drop a Table or View that doesn't exist in the slave database, it reports an error and shuts down. This is an indication that the replication, and maybe the data itself, is out of sync between the master and slave modules, usually UM1 and UM2. To resolve the issue so that the UM2 slave mysqld will run, execute the following procedure to get the UMs back in sync.
UM1: log into mcsmysql and purge the bin-logs, providing today's date:
```
mcsmysql
sql > PURGE BINARY LOGS BEFORE '2013-04-21';
```
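Rather than hard-coding a date as in the example above, the statement can be built with today's date from the shell. A small sketch (running it against mcsmysql, commented out below, needs a live UM):

```shell
# Build the PURGE statement with the current date (date +%F gives YYYY-MM-DD).
stmt="PURGE BINARY LOGS BEFORE '$(date +%F)';"
echo "$stmt"
# echo "$stmt" | mcsmysql    # run it on UM1
```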
UM2: move the bin-logs and relay-logs to a backup directory. You can delete them, but it is best to move them instead:
```
cd ../mariadb/columnstore/mysql/db
# mv mysql-bin.* /tmp/.
# mv relay-bin.* /tmp/.
```
Compare the InnoDB file (ibdata1) on UM1 to UM2 to make sure that file is in sync. If it differs between the 2 UMs, scp the one from UM1 to UM2. XXX.XXX.XXX.XXX is the UM1 IP address.
ON UM2:
```
mv ibdata1 /tmp/.
# scp XXX.XXX.XXX.XXX:/usr/local/mariadb/columnstore/mysql/db/ibdata1 .
```
At this point, let's try to restart the system.
FROM PM1:
```
# mcsadmin
> shutdownSystem y
> startsystem
```
### enableMySQLReplication failure
The front-end replication can be disabled and enabled via the 'mcsadmin' console. If replication has stopped working between the User Modules, the user can run enableMySQLReplication to get it set up and working again.
```
# mcsadmin
> enableMySQLReplication
Enter the 'User' Password or 'ssh' if configured with ssh-keys
Please enter: ssh
```
But in the case where this command fails, here is how to debug why it failed:
```
# mcsadmin
> enableMySQLReplication
Enter the 'User' Password or 'ssh' if configured with ssh-keys
Please enter: ssh
**** enableRep Failed : API Failure return in enableMySQLRep API
```
On User Module #1:
1. Check the ColumnStore log files to see which step failed.
2. There are also additional log files that the scripts update, providing more information. They are located on UM1 in /tmp/columnstore\_tmp\_files for a root install, and in .tmp in the non-root user's home directory.
### Problem Dropping or Creating an existing table
Sometimes, if the table information between the front end and the back end gets out of sync, the following errors are reported, preventing the customer from dropping or creating the table:
```
DROP TABLE TABLE1;
Error Code: 1815. Internal error: CAL0009: Drop table failed due to IDB-2006: 'TABLE1' does not exist in Columnstore.
```
```
CREATE TABLE `TABLE1` ( `cover_key` int(10) unsigned DEFAULT NULL, `count_num` bigint(20) unsigned DEFAULT NULL ) ENGINE=Columnstore DEFAULT CHARSET=latin1;
Error Code: 1050. Table 'TABLE1' already exists
```
This problem can be cleaned up by dropping the table with the RESTRICT option:
```
DROP TABLE TABLE1 RESTRICT;
```
Then the customer should be able to create the table.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb MultiPointFromText MultiPointFromText
==================
A synonym for [MPointFromText](../mpointfromtext/index).
mariadb Update Debian 4 mirrors for Buildbot VMs Update Debian 4 mirrors for Buildbot VMs
========================================
Debian 4 has become so old that the apt repository has been moved out of the main Debian 4 mirror servers, and into the archive of old versions. This needs to be fixed by pointing the Debian 4 images to a different mirror:
64-bit:
```
kvm -m 512 -hda vm-debian4-amd64-install.qcow2 -redir 'tcp:2200::22' -boot c -smp 1 -cpu qemu64 -net nic,model=e1000 -net user -nographic
sudo vi /etc/apt/sources.list
# replace http://ftp.dk.debian.org/debian/ with http://ftp.de.debian.org/archive/debian/
```
32-bit:
```
kvm -m 512 -hda vm-debian4-i386-install.qcow2 -redir 'tcp:2200::22' -boot c -smp 1 -cpu qemu64 -net nic,model=e1000 -net user -nographic
sudo vi /etc/apt/sources.list
# replace http://ftp.dk.debian.org/debian/ with http://ftp.de.debian.org/archive/debian/
```
After that, it is necessary to rebuild the -update and -update2 debian4 images from scratch (as these are built on top of the -install images).
FederatedX
===========
Information about the FederatedX Storage Engine
| Title | Description |
| --- | --- |
| [About FederatedX](../about-federatedx/index) | Federated Storage Engine fork that uses libmysql to talk to the data source |
| [Differences Between FederatedX and Federated](../differences-between-federatedx-and-federated/index) | Main differences between FederatedX and Federated |
Aborting Statements that Exceed a Certain Time to Execute
=========================================================
Overview
--------
[MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/) introduced the [max\_statement\_time](../server-system-variables/index#max_statement_time) system variable. When set to a non-zero value, any queries taking longer than this time in seconds will be aborted. The default is zero, in which case no limit is applied. The aborted query has no effect on any larger transaction or connection contexts. The variable is of type double, so subsecond timeouts can be specified; for example, a value of 0.01 sets a 10 millisecond timeout.
The value can be set globally or per session, as well as per user or per query (see below). Replicas are not affected by this variable.
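For example, a sketch of session-level and global settings (the values are illustrative):

```
-- Abort any query in this session taking longer than half a second
SET SESSION max_statement_time=0.5;

-- Apply a 10 millisecond limit server-wide
SET GLOBAL max_statement_time=0.01;
```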
An associated status variable, [max\_statement\_time\_exceeded](../server-status-variables/index#max_statement_time_exceeded), stores the number of queries that have exceeded the execution time specified by [max\_statement\_time](../server-system-variables/index#max_statement_time), and a `MAX_STATEMENT_TIME_EXCEEDED` column was added to the [CLIENT\_STATISTICS](../information-schema-client_statistics-table/index) and [USER STATISTICS](../information-schema-user_statistics-table/index) Information Schema tables.
The feature was based upon a patch by Davi Arnaut.
User [max\_statement\_time](../server-system-variables/index#max_statement_time)
--------------------------------------------------------------------------------
[max\_statement\_time](../server-system-variables/index#max_statement_time) can be stored per user with the [GRANT ... MAX\_STATEMENT\_TIME](../grant/index) syntax.
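For example, a hedged sketch (the user account is hypothetical):

```
-- Limit all statements run by this user to 10 seconds
GRANT USAGE ON *.* TO 'alice'@'localhost' WITH MAX_STATEMENT_TIME 10;
```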
Per-query [max\_statement\_time](../server-system-variables/index#max_statement_time)
-------------------------------------------------------------------------------------
By using [max\_statement\_time](../server-system-variables/index#max_statement_time) in conjunction with [SET STATEMENT](../set-statement/index), it is possible to limit the execution time of individual queries. For example:
```
SET STATEMENT max_statement_time=100 FOR
SELECT field1 FROM table_name ORDER BY field1;
```
Note that a `MAX_STATEMENT_TIME` clause cannot be added directly to the query itself; the `SELECT MAX_STATEMENT_TIME = N ...` syntax is not valid in MariaDB (see the differences from MySQL below). Use `SET STATEMENT` as shown above instead.
Limitations
-----------
* [max\_statement\_time](../server-system-variables/index#max_statement_time) does not work in embedded servers.
* [max\_statement\_time](../server-system-variables/index#max_statement_time) does not work for [COMMIT](../commit/index) statements in a Galera cluster (see [MDEV-18673](https://jira.mariadb.org/browse/MDEV-18673) for discussion).
Differences Between the MariaDB and MySQL Implementations
---------------------------------------------------------
MySQL 5.7.4 introduced similar functionality, but the MariaDB implementation differs in a number of ways.
* The MySQL version of [max\_statement\_time](../server-system-variables/index#max_statement_time) (`max_execution_time`) is defined in milliseconds, not seconds.
* MySQL's implementation can only kill SELECTs, while MariaDB's can kill any queries (excluding stored procedures).
* MariaDB only introduced the [max\_statement\_time\_exceeded](../server-status-variables/index#max_statement_time_exceeded) status variable, while MySQL also introduced a number of other variables which were not seen as necessary in MariaDB.
* The `SELECT MAX_STATEMENT_TIME = N ...` syntax is not valid in MariaDB.
See Also
--------
* [Query limits and timeouts](../query-limits-and-timeouts/index)
* [lock\_wait\_timeout](../server-system-variables/index#lock_wait_timeout) variable
User-Defined Functions Security
===============================
The MariaDB server imposes a number of limitations on [user-defined functions](../user-defined-functions/index) for security purposes.
* The INSERT privilege for the mysql database is required to run [CREATE FUNCTION](../create-function-udf/index), as a record will be added to the [mysql.func-table](../mysqlfunc-table/index).
* The DELETE privilege for the mysql database is required to run [DROP FUNCTION](../drop-function-udf/index) as the corresponding record will be removed from the [mysql.func-table](../mysqlfunc-table/index).
* UDF object files can only be placed in the plugin directory, as specified by the value of the [plugin\_dir](../server-system-variables/index#plugin_dir) system variable.
* At least one symbol beyond the required *x()* (corresponding to an SQL function *X()*) is required. These can be the *x\_init()*, *x\_deinit()*, *x\_reset()*, *x\_clear()* and *x\_add()* functions (see [Creating User-defined Functions](../creating-user-defined-functions/index)). The [allow-suspicious-udfs](../mysqld-options/index#-allow-suspicious-udfs) mysqld option (unset by default) provides a workaround, permitting a single symbol to be used. This is not recommended, as it opens the possibility of loading shared objects that are not legitimate user-defined functions.
About Non-blocking Operation in the Client Library
==================================================
MariaDB, starting with version 5.5.21, supports *non-blocking* operations in the client library. This allows an application to start a query or other operation against the database, and then continue to do other work (in the same thread) while the request is sent over the network, the query is processed in the server, and the result travels back. As parts of the result become ready, the application can — at its leisure — call back into the library to continue processing, repeating this until the operation is completed.
Non-blocking operation is implemented entirely within the client library. This means no special server support is necessary and non-blocking operation works with any version of the MariaDB or MySQL server, the same as the normal blocking API. It also means that it is not possible to have two queries running at the same time on the same connection (this is a protocol limitation). But a single thread can have any number of non-blocking queries running at the same time, each using its own MYSQL connection object.
Non-blocking operation is useful when an application needs to run a number of independent queries in parallel at the same time, to speed up operation compared to running them sequentially one after the other. This could be multiple queries against a single server (to better utilize multiple CPU cores and/or a high-capacity I/O system on the server), or it could be queries against multiple servers (e.g. `[SHOW STATUS](../show-status/index)` against all running servers for monitoring, or a map/reduce-like operation against a big sharded database).
Non-blocking operation is also very useful in applications that are already written in a non-blocking style, for example using a framework like [libevent](http://libevent.org/), or, for example, a GUI-application using an event loop. Using the non-blocking client library allows the integration of database queries into such applications, without the risk of long-running queries "hanging" the user interface or stalling the event loop, and without having to manually spawn separate threads to run the queries and re-synchronize with the threads to get the results back.
In this context, "blocking" means the situation where communication on the network socket to the server has to wait while processing the query. Waiting can be necessary because the server has not yet had time to process the query, or because the data needs to travel over the network from the server, or even because the first part of a large request needs to be sent out on the network before local socket buffers can accept the last part. Whenever such a wait is necessary, control returns to the application. The application will then run `select()` or `poll()` (or something similar) to detect when any wait condition is satisfied, and then call back into the library to continue processing.
An example program is available in the MariaDB source tree:
```
tests/async_queries.c
```
It uses `libevent` to run a set of queries in parallel from within a single thread / event loop. This is a good example of how to integrate non-blocking query processing into an event-based framework.
The non-blocking API in the client library is entirely optional. The new library is completely ABI- and source-compatible with existing applications. Also, applications not using non-blocking operations are not affected, nor is there any significant performance penalty for having support for non-blocking operations in the library for applications which do not use them.
Data-in-Transit Encryption
===========================
Data can be encrypted in transit using the Transport Layer Security (TLS) protocol.
| Title | Description |
| --- | --- |
| [Secure Connections Overview](../secure-connections-overview/index) | Data can be encrypted in transit using the TLS protocol. |
| [Certificate Creation with OpenSSL](../certificate-creation-with-openssl/index) | How to generate a self-signed certificate in OpenSSL. |
| [Securing Connections for Client and Server](../securing-connections-for-client-and-server/index) | Enabling TLS encryption in transit on both the client and server. |
| [Replication with Secure Connections](../replication-with-secure-connections/index) | Enabling TLS encryption in transit for MariaDB replication. |
| [Securing Communications in Galera Cluster](../securing-communications-in-galera-cluster/index) | Enabling TLS encryption in transit for Galera Cluster. |
| [SSL/TLS System Variables](../ssltls-system-variables/index) | List and description of Transport Layer Security (TLS)-related system variables. |
| [SSL/TLS Status Variables](../ssltls-status-variables/index) | List and description of Transport Layer Security (TLS)-related status variables. |
| [Using TLSv1.3](../using-tlsv13/index) | TLSv1.3 is a major rewrite of the protocol. |
OVERLAPS
========
Syntax
------
```
OVERLAPS(g1,g2)
```
Description
-----------
Returns `1` or `0` to indicate whether `g1` spatially overlaps `g2`. The term spatially overlaps is used if two geometries intersect and their intersection results in a geometry of the same dimension but not equal to either of the given geometries.
OVERLAPS() is based on the original MySQL implementation and uses object bounding rectangles, while [ST\_OVERLAPS()](../st_overlaps/index) uses object shapes.
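As an illustrative sketch, two rectangles that intersect without either containing the other should overlap (the geometry values are made up for the example):

```
SET @g1 = ST_GeomFromText('POLYGON((0 0,0 4,4 4,4 0,0 0))');
SET @g2 = ST_GeomFromText('POLYGON((2 2,2 6,6 6,6 2,2 2))');

-- Intersecting bounding rectangles, neither contained in the other
SELECT OVERLAPS(@g1, @g2);
```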
Using MariaDB with Your Programs (API)
=======================================
| Title | Description |
| --- | --- |
| [Error Codes](../error-codes/index) | MariaDB error codes and SQLSTATE codes |
| [libMariaDB](../libmariadb/index) | |
| [libmysqld](../libmysqld/index) | The Embedded, Stand-Alone MariaDB Server. |
| [Non-Blocking Client Library](../non-blocking-client-library/index) | Non-blocking client library documentation. |
| [Progress Reporting](../progress-reporting/index) | Progress reporting for long running commands. |
LONGBLOB
========
Syntax
------
```
LONGBLOB
```
Description
-----------
A [BLOB](../blob/index) column with a maximum length of 4,294,967,295 bytes, or 4GB (2^32 - 1). The effective maximum length of LONGBLOB columns depends on the configured maximum packet size in the client/server protocol and available memory. Each LONGBLOB value is stored using a four-byte length prefix that indicates the number of bytes in the value.
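For example, a minimal sketch of declaring a LONGBLOB column (the table and column names are hypothetical):

```
CREATE TABLE documents (
  id INT UNSIGNED PRIMARY KEY,
  body LONGBLOB -- up to 2^32 - 1 bytes; stored as a 4-byte length prefix + data
);
```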
### Oracle Mode
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**In [Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#synonyms-for-basic-sql-types), `BLOB` is a synonym for `LONGBLOB`.
See Also
--------
* [BLOB](../blob/index)
* [BLOB and TEXT Data Types](../blob-and-text-data-types/index)
* [Data Type Storage Requirements](../data-type-storage-requirements/index)
* [Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#synonyms-for-basic-sql-types)
LOCK IN SHARE MODE
==================
InnoDB supports row-level locking. Selected rows can be locked using `LOCK IN SHARE MODE` or [FOR UPDATE](../for-update/index). In both cases, a lock is acquired on the rows read by the query, and it will be released when the current transaction is committed.
When `LOCK IN SHARE MODE` is specified in a [SELECT](../select/index) statement, MariaDB will wait until all transactions that have modified the rows are committed. Then, a read (shared) lock is acquired. All transactions can read the rows, but if they want to modify them, they have to wait until your transaction is committed.
If `autocommit` is set to 1, the LOCK IN SHARE MODE and [FOR UPDATE](../for-update/index) clauses have no effect.
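A minimal sketch of the typical pattern (the table and column names are hypothetical):

```
START TRANSACTION;

-- Other transactions can still read these rows, but any transaction
-- wanting to modify them must wait until this one commits.
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- ... work based on the value read ...
COMMIT;
```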
See Also
--------
* [SELECT](../select/index)
* [FOR UPDATE](../for-update/index)
* [InnoDB Lock Modes](../innodb-lock-modes/index)
ColumnStore Select
==================
The SELECT statement is used to query the database and display table data. You can add many clauses to filter the data.
Syntax
------
```
SELECT
[ALL | DISTINCT ]
select_expr [, select_expr ...]
[ FROM table_references
[WHERE where_condition]
[GROUP BY {col_name | expr | position} [ASC | DESC], ... [WITH ROLLUP]]
[HAVING where_condition]
[ORDER BY {col_name | expr | position} [ASC | DESC], ...]
[LIMIT {[offset,] row_count | row_count OFFSET offset}]
[PROCEDURE procedure_name(argument_list)]
[INTO OUTFILE 'file_name' [CHARACTER SET charset_name] [export_options]
| INTO DUMPFILE 'file_name' | INTO var_name [, var_name] ]
export_options:
[{FIELDS | COLUMNS}
[TERMINATED BY 'string']
[[OPTIONALLY] ENCLOSED BY 'char']
[ESCAPED BY 'char']
]
[LINES
[STARTING BY 'string']
[TERMINATED BY 'string']
]
```
Projection List (SELECT)
------------------------
If the same column needs to be referenced more than once in the projection list, a unique name is required for each occurrence, using a column alias. The total length of a column name in the projection list, including the length of any functions, must be 64 characters or less.
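For example, referencing the same column twice requires a distinct alias for each occurrence (a sketch against the sample *region* table used below):

```
SELECT name AS region_name, name AS region_label FROM region;
```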
WHERE
-----
The WHERE clause filters data retrieval based on criteria. Note that *column\_alias* cannot be used in the WHERE clause. The following statement returns rows from the region table where name = 'ASIA':
```
SELECT * FROM region WHERE name = 'ASIA';
```
GROUP BY
--------
GROUP BY groups data based on values in one or more specific columns. A maximum of 10 columns is supported in the GROUP BY clause. The following statement returns rows from the *lineitem* table where *orderkey* is less than 1,000,000 and groups them by quantity:
```
SELECT quantity, count(*) FROM lineitem WHERE orderkey < 1000000 GROUP BY quantity;
```
HAVING
------
HAVING is used in combination with the GROUP BY clause. It can be used in a SELECT statement to filter the records that a GROUP BY returns. The following statement returns shipping dates and their row counts, for counts of 2500 or more:
```
SELECT shipdate, count(*) FROM lineitem GROUP BY shipdate HAVING count(*) >= 2500;
```
ORDER BY
--------
The ORDER BY clause presents results in a specific order. Note that the ORDER BY clause represents a statement that is post-processed by MariaDB. The following statement returns an ordered *quantity* column from the *lineitem* table.
```
SELECT quantity FROM lineitem WHERE orderkey < 1000000 ORDER BY quantity;
```
The following statement returns an ordered *shipmode* column from the *lineitem* table.
```
SELECT shipmode FROM lineitem WHERE orderkey < 1000000 ORDER BY 1;
```
**NOTE: When ORDER BY is used in an inner query and LIMIT on an outer query, LIMIT is applied first and then ORDER BY is applied when returning results.**
UNION
-----
Used to combine the results from multiple SELECT statements into a single result set. The UNION or UNION DISTINCT clause returns query results from multiple queries into one display and discards duplicate results. The UNION ALL clause displays query results from multiple queries and does not discard the duplicates. The following statement returns the *p\_name* rows in the *part* table and the *partno* table and discards the duplicate results:
```
SELECT p_name FROM part UNION select p_name FROM partno;
```
The following statement returns all the *p\_name rows* in the *part* table and the *partno* table:
```
SELECT p_name FROM part UNION ALL select p_name FROM partno;
```
LIMIT
-----
LIMIT is used to constrain the number of rows returned by the SELECT statement. LIMIT can take up to two arguments: a row count, and optionally the offset of the first row to return (the initial row is offset 0). The following statement returns 5 customer keys from the customer table:
```
SELECT custkey from customer limit 5;
```
The following statement returns 5 customer keys from the customer table beginning at offset 1000:
```
SELECT custkey from customer limit 1000,5;
```
**NOTE: When LIMIT is applied on a nested query's results, and the inner query contains ORDER BY, LIMIT is applied first and then ORDER BY is applied. (Valid for Columnstore 1.0.x - 1.2.x)**
Database Normalization
=======================
This section introduces you to a powerful tool for designing databases: normalization.
| Title | Description |
| --- | --- |
| [Database Normalization Overview](../database-normalization-overview/index) | A sample system going through the process of normalization |
| [Database Normalization: 1st Normal Form](../database-normalization-1st-normal-form/index) | Moving from unnormalized to 1st normal form |
| [Database Normalization: 2nd Normal Form](../database-normalization-2nd-normal-form/index) | From 1st to 2nd normal form |
| [Database Normalization: 3rd Normal Form](../database-normalization-3rd-normal-form/index) | From 2nd to 3rd normal form |
| [Database Normalization: Boyce-Codd Normal Form](../database-normalization-boyce-codd-normal-form/index) | Beyond 3rd normal form with Boyce-Codd normal form |
| [Database Normalization: 4th Normal Form](../database-normalization-4th-normal-form/index) | Beyond Boyce-Codd normal form with 4th normal form |
| [Database Normalization: 5th Normal Form and Beyond](../database-normalization-5th-normal-form-and-beyond/index) | Normal forms beyond 4th are mainly of academic interest |
| [Understanding Denormalization](../understanding-denormalization/index) | Denormalization is the process of reversing the transformations made during... |
Configuring MariaDB Replication between Two MariaDB Galera Clusters
===================================================================
[MariaDB replication](../high-availability-performance-tuning-mariadb-replication/index) can be used to replicate between two [MariaDB Galera Clusters](../galera-cluster/index). This article discusses how to do that.
Configuring the Clusters
------------------------
Before we set up replication, we need to ensure that the clusters are configured properly. This involves the following steps:
* Set `[log\_slave\_updates=ON](../replication-and-binary-log-system-variables/index#log_slave_updates)` on all nodes in both clusters. See [Configuring MariaDB Galera Cluster: Writing Replicated Write Sets to the Binary Log](../configuring-mariadb-galera-cluster/index#writing-replicated-write-sets-to-the-binary-log) and [Using MariaDB Replication with MariaDB Galera Cluster: Configuring a Cluster Node as a Replication Master](../using-mariadb-replication-with-mariadb-galera-cluster-using-mariadb-replica/index#configuring-a-cluster-node-as-a-replication-master) for more information on why this is important. This is also needed to [enable wsrep GTID mode](../using-mariadb-gtids-with-mariadb-galera-cluster/index#enabling-wsrep-gtid-mode).
* Set `[server\_id](../replication-and-binary-log-system-variables/index#server_id)` to the same value on all nodes in a given cluster, but be sure to use a different value in each cluster. See [Using MariaDB Replication with MariaDB Galera Cluster: Setting server\_id on Cluster Nodes](../using-mariadb-replication-with-mariadb-galera-cluster-using-mariadb-replica/index#setting-server_id-on-cluster-nodes) for more information on what this means.
### Configuring Wsrep GTID Mode
If you want to use [GTID](../gtid/index) replication, then you also need to configure some things to [enable wsrep GTID mode](../using-mariadb-gtids-with-mariadb-galera-cluster/index#enabling-wsrep-gtid-mode). For example:
* `[wsrep\_gtid\_mode=ON](../galera-cluster-system-variables/index#wsrep_gtid_mode)` needs to be set on all nodes in each cluster.
* `[wsrep\_gtid\_domain\_id](../galera-cluster-system-variables/index#wsrep_gtid_domain_id)` needs to be set to the same value on all nodes in a given cluster, so that each cluster node uses the same domain when assigning [GTIDs](../gtid/index) for Galera Cluster's write sets. Each cluster should have this set to a different value, so that each cluster uses different domains when assigning [GTIDs](../gtid/index) for their write sets.
* `[log\_slave\_updates](../replication-and-binary-log-system-variables/index#log_slave_updates)` needs to be enabled on all nodes in the cluster. See [MDEV-9855](https://jira.mariadb.org/browse/MDEV-9855) about that.
* `[log\_bin](../replication-and-binary-log-server-system-variables/index#log_bin)` needs to be set to the same path on all nodes in the cluster. See [MDEV-9856](https://jira.mariadb.org/browse/MDEV-9856) about that.
And as an extra safety measure:
* `[gtid\_domain\_id](../gtid/index#gtid_domain_id)` should be set to a different value on all nodes in a given cluster, and each of these values should be different than the configured `[wsrep\_gtid\_domain\_id](../galera-cluster-system-variables/index#wsrep_gtid_domain_id)` value. This is to prevent a node from using the same domain used for Galera Cluster's write sets when assigning [GTIDs](../gtid/index) for non-Galera transactions, such as DDL executed with `[wsrep\_sst\_method=RSU](../galera-cluster-system-variables/index#wsrep_sst_method)` set or DML executed with `[wsrep\_on=OFF](../galera-cluster-system-variables/index#wsrep_on)` set.
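Putting these settings together, the configuration of a node in the first cluster might look like the following sketch (all values are illustrative; the second cluster would use different `server_id` and `wsrep_gtid_domain_id` values):

```
[mariadb]
# Same on all nodes of this cluster, different in the other cluster
server_id=1
wsrep_gtid_mode=ON
wsrep_gtid_domain_id=1

# Required on all nodes, with log_bin set to the same path cluster-wide
log_slave_updates=ON
log_bin=/var/log/mysql/mariadb-bin

# Extra safety: different on every node, and different from any
# wsrep_gtid_domain_id value
gtid_domain_id=11
```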
Setting up Replication
----------------------
Our process to set up replication is going to be similar to the process described at [Setting up a Replication Slave with Mariabackup](../setting-up-a-replication-slave-with-mariabackup/index), but it will be modified a bit to work in this context.
### Start the First Cluster
The very first step is to start the nodes in the first cluster. The first node will have to be [bootstrapped](../getting-started-with-mariadb-galera-cluster/index#bootstrapping-a-new-cluster). The other nodes can be [started normally](../starting-and-stopping-mariadb-starting-and-stopping-mariadb/index).
Once the nodes are started, you need to pick a specific node that will act as the replication primary for the second cluster.
### Backup the Database on the First Cluster's Primary Node and Prepare It
The first step is to simply take and prepare a fresh [full backup](../full-backup-and-restore-with-mariabackup/index) of the node that you have chosen to be the replication primary. For example:
```
$ mariabackup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariabackup --password=mypassword
```
And then you would prepare the backup as you normally would. For example:
```
$ mariabackup --prepare \
--target-dir=/var/mariadb/backup/
```
### Copy the Backup to the Second Cluster's Replica
Once the backup is done and prepared, you can copy it to the node in the second cluster that will be acting as replica. For example:
```
$ rsync -avrP /var/mariadb/backup c2dbserver:/var/mariadb/backup
```
### Restore the Backup on the Second Cluster's Replica
At this point, you can restore the backup to the [datadir](../server-system-variables/index#datadir), as you normally would. For example:
```
$ mariabackup --copy-back \
--target-dir=/var/mariadb/backup/
```
And adjusting file permissions, if necessary:
```
$ chown -R mysql:mysql /var/lib/mysql/
```
### Bootstrap the Second Cluster's Replica
Now that the backup has been restored to the second cluster's replica, you can start the server by [bootstrapping](../getting-started-with-mariadb-galera-cluster/index#bootstrapping-a-new-cluster) the node.
### Create a Replication User on the First Cluster's Primary
Before the second cluster's replica can begin replicating from the first cluster's primary, you need to [create a user account](../create-user/index) on the primary that the replica can use to connect, and you need to [grant](../grant/index) the user account the [REPLICATION SLAVE](../grant/index#global-privileges) privilege. For example:
```
CREATE USER 'repl'@'c2dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c2dbserver1';
```
### Start Replication on the Second Cluster's Replica
At this point, you need to get the replication coordinates of the primary from the original backup.
The coordinates will be in the [xtrabackup\_binlog\_info](../files-created-by-mariabackup/index#xtrabackup_binlog_info) file.
Mariabackup dumps replication coordinates in two forms: [GTID strings](../gtid/index) and [binary log](../binary-log/index) file and position coordinates, like the ones you would normally see from [SHOW MASTER STATUS](../show-master-status/index) output. In this case, it is probably better to use the [GTID](../gtid/index) coordinates.
For example:
```
mariadb-bin.000096 568 0-1-2
```
Regardless of the coordinates you use, you will have to set up the primary connection using [CHANGE MASTER TO](../change-master-to/index) and then start the replication threads with [START SLAVE](../start-slave/index).
#### GTIDs
If you want to use GTIDs, then you will have to first set [gtid\_slave\_pos](../gtid/index#gtid_slave_pos) to the [GTID](../gtid/index) coordinates that we pulled from the [xtrabackup\_binlog\_info](../files-created-by-mariabackup/index#xtrabackup_binlog_info) file, and we would set `MASTER_USE_GTID=slave_pos` in the [CHANGE MASTER TO](../change-master-to/index) command. For example:
```
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
```
#### File and Position
If you want to use the [binary log](../binary-log/index) file and position coordinates, then you would set `MASTER_LOG_FILE` and `MASTER_LOG_POS` in the [CHANGE MASTER TO](../change-master-to/index) command to the file and position coordinates that we pulled the [xtrabackup\_binlog\_info](../files-created-by-mariabackup/index#xtrabackup_binlog_info) file. For example:
```
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
  MASTER_LOG_POS=568;
START SLAVE;
```
### Check the Status of the Second Cluster's Replica
You should be done setting up the replica now, so you should check its status with [SHOW SLAVE STATUS](../show-slave-status/index). For example:
```
SHOW SLAVE STATUS\G
```
### Start the Second Cluster
If the replica is replicating normally, then the next step would be to [start the MariaDB Server process](../starting-and-stopping-mariadb-starting-and-stopping-mariadb/index) on the other nodes in the second cluster.
Now that the second cluster is up, ensure that it does not start accepting writes yet if you want to set up [circular replication](../replication-overview/index#ring-replication) between the two clusters.
Setting up Circular Replication
-------------------------------
You can also set up [circular replication](../replication-overview/index#ring-replication) between the two clusters, which means that the second cluster replicates from the first cluster, and the first cluster also replicates from the second cluster.
### Create a Replication User on the Second Cluster's Primary
Before circular replication can begin, you also need to [create a user account](../create-user/index) on the second cluster's primary that the first cluster's replica can use to connect, and you need to [grant](../grant/index) the user account the [REPLICATION SLAVE](../grant/index#global-privileges) privilege. For example:
```
CREATE USER 'repl'@'c1dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c1dbserver1';
```
### Start Circular Replication on the First Cluster
How this is done would depend on whether you want to use the [GTID](../gtid/index) coordinates or the [binary log](../binary-log/index) file and position coordinates.
Regardless, you need to ensure that the second cluster is not accepting any writes other than those that it replicates from the first cluster at this stage.
#### GTIDs
To get the GTID coordinates on the second cluster, you can check `[gtid\_current\_pos](../gtid/index#gtid_current_pos)` by executing:
```
SHOW GLOBAL VARIABLES LIKE 'gtid_current_pos';
```
Then on the first cluster, you can set up replication by setting [gtid\_slave\_pos](../gtid/index#gtid_slave_pos) to the GTID that was returned and then executing [CHANGE MASTER TO](../change-master-to/index):
```
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
```
#### File and Position
To get the [binary log](../binary-log/index) file and position coordinates on the second cluster, you can execute [SHOW MASTER STATUS](../show-master-status/index):
```
SHOW MASTER STATUS;
```
Then on the first cluster, you would set `master_log_file` and `master_log_pos` in the [CHANGE MASTER TO](../change-master-to/index) command. For example:
```
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
```
### Check the Status of the Circular Replication
At this point, circular replication should be fully set up on the node in the first cluster, so check its status with [SHOW SLAVE STATUS](../show-slave-status/index). For example:
```
SHOW SLAVE STATUS\G
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Information Schema QUERY\_RESPONSE\_TIME Table
==============================================
Description
-----------
The [Information Schema](../information_schema/index) `QUERY_RESPONSE_TIME` table contains information about queries that take a long time to execute. It is only available if the [QUERY\_RESPONSE\_TIME](../query_response_time-plugin/index) plugin has been installed.
It contains the following columns:
| Column | Description |
| --- | --- |
| `TIME` | Time interval |
| `COUNT` | Count of queries falling into the time interval |
| `TOTAL` | Total execution time of all queries for this interval |
See [QUERY\_RESPONSE\_TIME](../query_response_time-plugin/index) plugin for a full description.
The table is not a standard Information Schema table, and is a MariaDB extension.
`SHOW QUERY_RESPONSE_TIME` is available from [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/) as an alternative for retrieving the data.
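Assuming the plugin is installed, that statement takes no arguments and returns the same three columns:

```
SHOW QUERY_RESPONSE_TIME;
```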
Example
-------
```
SELECT * FROM information_schema.QUERY_RESPONSE_TIME;
+----------------+-------+----------------+
| TIME | COUNT | TOTAL |
+----------------+-------+----------------+
| 0.000001 | 0 | 0.000000 |
| 0.000010 | 17 | 0.000094 |
| 0.000100 | 4301 | 0.236555 |
| 0.001000 | 1499 | 0.824450 |
| 0.010000 | 14851 | 81.680502 |
| 0.100000 | 8066 | 443.635693 |
| 1.000000 | 0 | 0.000000 |
| 10.000000 | 0 | 0.000000 |
| 100.000000 | 1 | 55.937094 |
| 1000.000000 | 0 | 0.000000 |
| 10000.000000 | 0 | 0.000000 |
| 100000.000000 | 0 | 0.000000 |
| 1000000.000000 | 0 | 0.000000 |
| TOO LONG | 0 | TOO LONG |
+----------------+-------+----------------+
```
mysql.procs\_priv Table
=======================
The `mysql.procs_priv` table contains information about [stored procedure](../stored-procedures/index) and [stored function](../stored-functions/index) privileges. See [CREATE PROCEDURE](../create-procedure/index) and [CREATE FUNCTION](../create-function/index) on creating these.
The [INFORMATION\_SCHEMA.ROUTINES](../information-schema-routines-table/index) table derives its contents from `mysql.procs_priv`.
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
The `mysql.procs_priv` table contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `Host` | `char(60)` | NO | PRI | | Host (together with `Db`, `User`, `Routine_name` and `Routine_type` makes up the unique identifier for this record). |
| `Db` | `char(64)` | NO | PRI | | Database (together with `Host`, `User`, `Routine_name` and `Routine_type` makes up the unique identifier for this record). |
| `User` | `char(80)` | NO | PRI | | User (together with `Host`, `Db`, `Routine_name` and `Routine_type` makes up the unique identifier for this record). |
| `Routine_name` | `char(64)` | NO | PRI | | Routine\_name (together with `Host`, `Db`, `User` and `Routine_type` makes up the unique identifier for this record). |
| `Routine_type` | `enum('FUNCTION','PROCEDURE', 'PACKAGE', 'PACKAGE BODY')` | NO | PRI | `NULL` | Whether the routine is a [stored procedure](../stored-procedures/index), [stored function](../stored-functions/index), or, from [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/), a [package](../create-package/index) or [package body](../create-package-body/index). |
| `Grantor` | `char(141)` | NO | MUL | | |
| `Proc_priv` | `set('Execute','Alter Routine','Grant')` | NO | | | The routine privilege. See [Function Privileges](../grant/index#function-privileges) and [Procedure Privileges](../grant/index#procedure-privileges) for details. |
| `Timestamp` | `timestamp` | NO | | `CURRENT_TIMESTAMP` | |
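As a sketch, you can inspect the grants recorded in this table directly; the output depends on which routine privileges have been granted on your server:

```
SELECT Host, Db, User, Routine_name, Routine_type, Proc_priv
FROM mysql.procs_priv;
```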
The [Acl\_function\_grants](../server-status-variables/index#acl_function_grants) status variable, added in [MariaDB 10.1.4](https://mariadb.com/kb/en/mariadb-1014-release-notes/), indicates how many rows the `mysql.procs_priv` table contains with the `FUNCTION` routine type.
The [Acl\_procedure\_grants](../server-status-variables/index#acl_procedure_grants) status variable, added in [MariaDB 10.1.4](https://mariadb.com/kb/en/mariadb-1014-release-notes/), indicates how many rows the `mysql.procs_priv` table contains with the `PROCEDURE` routine type.
IF
==
Syntax
------
```
IF search_condition THEN statement_list
[ELSEIF search_condition THEN statement_list] ...
[ELSE statement_list]
END IF;
```
Description
-----------
`IF` implements a basic conditional construct. If the `search_condition` evaluates to true, the corresponding SQL statement list is executed. If no `search_condition` matches, the statement list in the `ELSE` clause is executed. Each statement\_list consists of one or more statements.
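Example
-------

As an illustration (the procedure and variable names are invented for this sketch), an `IF` block inside a stored procedure:

```
DELIMITER //

CREATE PROCEDURE sign_of(IN x INT, OUT result VARCHAR(10))
BEGIN
  IF x > 0 THEN
    SET result = 'positive';
  ELSEIF x < 0 THEN
    SET result = 'negative';
  ELSE
    SET result = 'zero';
  END IF;
END //

DELIMITER ;

CALL sign_of(-5, @r);
SELECT @r; -- returns 'negative'
```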
See Also
--------
* The [IF() function](../if-function/index), which differs from the `IF` statement described above.
* [Changes in Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#simple-syntax-compatibility)
SHOW PROCESSLIST
================
Syntax
------
```
SHOW [FULL] PROCESSLIST
```
Description
-----------
`SHOW PROCESSLIST` shows you which threads are running. You can also get this information from the [information\_schema.PROCESSLIST](../information-schema-processlist-table/index) table or the [mysqladmin processlist](../mysqladmin/index) command. If you have the `[PROCESS privilege](../show-privileges/index)`, you can see all threads. Otherwise, you can see only your own threads (that is, threads associated with the MariaDB account that you are using). If you do not use the `FULL` keyword, only the first 100 characters of each statement are shown in the Info field.
The columns shown in `SHOW PROCESSLIST` are:
| Name | Description |
| --- | --- |
| **`ID`** | The client's process ID. |
| **`USER`** | The username associated with the process. |
| **`HOST`** | The host the client is connected to. |
| **`DB`** | The default database of the process (NULL if no default). |
| **`COMMAND`** | The command type. See [Thread Command Values](../thread-command-values/index). |
| **`TIME`** | The amount of time, in seconds, the process has been in its current state. For a replica SQL thread before [MariaDB 10.1](../what-is-mariadb-101/index), this is the time in seconds between the last replicated event's timestamp and the replica machine's real time. |
| **`STATE`** | See [Thread States](../thread-states/index). |
| **`INFO`** | The statement being executed. |
| **`PROGRESS`** | The total progress of the process (0-100%) (see [Progress Reporting](../progress-reporting/index)). |
See `TIME_MS` column in [information\_schema.PROCESSLIST](../time_ms-column-in-information_schemaprocesslist/index) for differences in the `TIME` column between MariaDB and MySQL.
The [information\_schema.PROCESSLIST](../information-schema-processlist-table/index) table contains the following additional columns:
| Name | Description |
| --- | --- |
| **`TIME_MS`** | The amount of time, in milliseconds, the process has been in its current state. |
| **`STAGE`** | The stage the process is currently in. |
| **`MAX_STAGE`** | The maximum number of stages. |
| **`PROGRESS`** | The progress of the process within the current stage (0-100%). |
| **`MEMORY_USED`** | The amount of memory used by the process. |
| **`EXAMINED_ROWS`** | The number of rows the process has examined. |
| **`QUERY_ID`** | Query ID. |
Note that the `PROGRESS` field from the information schema, and the `PROGRESS` field from `SHOW PROCESSLIST` display different results. `SHOW PROCESSLIST` shows the total progress, while the information schema shows the progress for the current stage only.
Threads can be killed using their thread\_id or their query\_id, with the [KILL](../data-manipulation-kill-connection-query/index) statement.
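For example, using ids taken from the process list (the ids below are illustrative):

```
KILL 4;             -- terminates the connection with thread id 4
KILL QUERY 4;       -- terminates only the statement running in thread 4
KILL QUERY ID 1234; -- terminates the statement with query id 1234
```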
Since queries on this table are locking, if the [performance\_schema](../performance-schema/index) is enabled, you may want to query the [THREADS](../performance-schema-threads-table/index) table instead.
Examples
--------
```
SHOW PROCESSLIST;
+----+-----------------+-----------+------+---------+------+------------------------+------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+-----------------+-----------+------+---------+------+------------------------+------------------+----------+
| 2 | event_scheduler | localhost | NULL | Daemon | 2693 | Waiting on empty queue | NULL | 0.000 |
| 4 | root | localhost | NULL | Query | 0 | Table lock | SHOW PROCESSLIST | 0.000 |
+----+-----------------+-----------+------+---------+------+------------------------+------------------+----------+
```
See Also
--------
[CONNECTION\_ID()](../connection_id/index)
mysqld Configuration Files and Groups
=====================================
For all about configuring mysqld, see [Configuring MariaDB with Option Files](../configuring-mariadb-with-option-files/index).
Build Environment Setup for Mac
===============================
XCode
-----
* Install Xcode from Apple (free registration required): <https://developer.apple.com/xcode/> or from your Mac OS X installation disk (macports needs XCode >= 3.1, so if you do not have that version or greater you will need to download the latest version, which is 900+ MB)
You can install the necessary dependencies using either MacPorts or Homebrew.
Using MacPorts
--------------
* [Download](http://svn.macports.org/repository/macports/downloads/) and install the MacPorts dmg image from <http://www.macports.org>
* After installing, update it from the terminal: `sudo port -v selfupdate`
`sudo port install cmake jemalloc judy openssl boost gnutls`
Using Homebrew
--------------
* Download and install Homebrew from <https://brew.sh/>
`brew install cmake jemalloc traildb/judy/judy openssl boost gnutls`
Your Mac should now have everything it needs to get, compile, and otherwise work with the MariaDB source code. The next step is to actually get a copy of the code. For help with this see the [Getting the MariaDB Source Code](../getting_the_mariadb_source_code/index) page.
When building on a Mac, you'll need `-DOPENSSL_ROOT_DIR=/usr/local/openssl` passed as a `cmake` argument to build against OpenSSL correctly.
Ansible Overview for MariaDB Users
==================================
Ansible is a tool for automating server configuration management. It is produced by Red Hat, and it is open source software released under the terms of the GNU GPL.
It is entirely possible to use Ansible to automate MariaDB deployments and configuration. This page contains generic information for MariaDB users who want to learn, or evaluate, Ansible.
For information about how to install Ansible, see [Installing Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) in Ansible documentation.
Automation Hubs
---------------
Normally, Ansible can run from any computer that has access to the target hosts to be automated. It is not uncommon for all members of a team to have Ansible installed on their own laptops, and to use it to deploy.
Red Hat offers a commercial version of Ansible called [Ansible Tower](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html). It consists of a REST API and a web-based interface that work as a hub that handles all normal Ansible operations.
An alternative is [AWX](https://github.com/ansible/awx). AWX is the open source upstream project from which many Ansible Tower features are originally developed. AWX is released under the terms of the Apache License 2.0. However, Red Hat does not recommend running AWX in production.
AWX development is fast. It has several features that may or may not end up in Ansible Tower. Ansible Tower is more focused on making AWX features more robust, providing a stable tool to automate production environments.
Design Principles
-----------------
Ansible allows us to write **playbooks** that describe how our servers should be configured. Playbooks are lists of **tasks**.
Tasks are usually **declarative**. You don't explain *how* to do something, you declare *what* should be done.
Playbooks are **idempotent**. When you apply a playbook, tasks are only run if necessary.
Here is a task example:
```
- name: Install Perl
package:
name: perl
state: present
```
"Install Perl" is just a description that will appear on screen when the task is applied. Then we use the `package` module to declare that a package called "perl" should be installed. When we apply the playbook, if Perl is already installed nothing happens. Otherwise, Ansible installs it.
When we apply a playbook, the last information that appears on the screen is a recap like the following:
```
PLAY RECAP ***************************************************************************************************
mariadb-01 : ok=6 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
This means that six tasks ran successfully, and two of them reported changes; the remaining tasks were already in the desired state, so no action was taken.
As the above example shows, Ansible playbooks are written in YAML.
Modules (like `package`) can be written in any language, as long as they are able to process a JSON input and produce a JSON output. However the Ansible community prefers to write them in Python, which is the language Ansible is written in.
Concepts
--------
A piece of Ansible code that can be applied to a server is called a **playbook**.
A **task** is the smallest brick of code in a playbook. The name is a bit misleading, though, because an Ansible task should not be seen as "something to do". Instead, it is a minimal description of a component of a server. In the example above, we can see a task.
A task uses a single **module**, which is an interface that Ansible uses to interact with a specific system component. In the example, the module is "package".
A task also has attributes, which describe what should be done with that module, and how. In the example above, "name" and "state" are both attributes. The `state` attribute exists for every module, by convention (though there may be exceptions). Typically, it has at least the "present" and "absent" states, to indicate whether an object should exist or not.
Other important code concepts are:
* An **inventory** determines which **hosts** Ansible should be able to deploy. Each host may belong to one or more **groups**. Groups may have **children**, forming a hierarchy. This is useful because it allows us to deploy on a group, or to assign variables to a group.
* A **role** describes the state that a host, or group of hosts, should reach after a deploy.
* A **play** associates hosts or groups with their roles. Each host or group can have more than one role.
* A role is a playbook that describes how certain servers should be configured, based on the logical role they have in the infrastructure. Servers can have multiple roles, for example the same server could have both the "mariadb" and the "mydumper" role, meaning that they run MariaDB and they have mydumper installed (as shown later).
* Tasks can use **variables**. They can affect how a task is executed (for example a variable could be a file name), or even whether a task is executed or not. Variables exist at role, group or host level. Variables can also be passed by the user when a play is applied.
* **Facts** are data that Ansible retrieves from remote hosts before deploying. This is a very important step, because facts may determine which tasks are executed or how they are executed. Facts include, for example, the operating system family or its version. A playbook sees facts as pre-set variables.
* **Modules** implement **actions** that tasks can use. Action examples are **file** (to declare that files and directories must exist) or **mysql\_variables** (to declare MySQL/MariaDB variables that need to be set).
#### Example
Let's describe a hypothetical infrastructure to find out how these concepts can apply to MariaDB.
The **inventory** could define the following groups:
* "db-main" for the cluster used by our website. All nodes belong to this group.
* "db-analytics" for our replicas used by data analysts.
* "dump" for one or more servers that take dumps from the replicas.
* "proxysql" for one or more hosts that run ProxySQL.
Then we'll need the following roles:
* "mariadb-node" for the nodes in "db-main". This role describes how to setup nodes of a cluster using Galera.
* "mariadb-replica" for the members of "db-analytics". It describes a running replica, and it includes the tasks that are necessary to provision the node if the data directory is empty when the playbook is applied. The hostname of the primary server is defined in a variable.
* "mariadb". The aforementioned "mariadb-node" and "mariadb-replica" can be children of this group. They have many things in common (filesystem for the data directory, some basic MariaDB configuration, some installed tools...), so it could make sense to avoid duplication and describe the common traits in a super-role.
* A "mariabackup" role to take backups with [Mariabackup](../mariabackup/index), running jobs during the night. We can associate this role to the "db-main" group, or we could create a child group for servers that will take the backups.
* "mariadb-dump" for the server that takes dumps with [mariadb-dump](../mysqldump/index). Note that we may decide to take dumps on a replica, so the same host may belong to "db-analytics" and "mariadb-dump".
* "proxysql" for the namesake group.
Architecture
------------
Ansible architecture is extremely simple. Ansible can run on any host. To apply playbooks, it connects to the target hosts and runs system commands. By default the connection happens via ssh, though it is possible to develop connection plugins to use different methods. Applying playbooks locally without establishing a connection is also possible.
Modules can be written in any language, though Python is the most common choice in the Ansible community. Modules receive JSON "requests" and facts from Ansible core, they are supposed to run useful commands on a target host, and then they should return information in JSON. Their output informs Ansible whether something has changed on the remote server and if the operations succeeded.
Ansible is not centralized. It can run on any host, and it is common for a team to run it from several laptops. However, to simplify things and improve security, it may be desirable to run it from a dedicated host. Users will connect to that host, and apply Ansible playbooks.
Ansible Resources and References
--------------------------------
* [Ansible.com](https://www.ansible.com/)
* [AWX](https://github.com/ansible/awx)
* [Ansible Tower](https://docs.ansible.com/ansible/latest/reference_appendices/tower.html)
* [Ansible Galaxy](https://galaxy.ansible.com/)
* [Ansible on Wikipedia](https://en.wikipedia.org/wiki/Ansible_(software))
* [Ansible Automation Platform](https://www.youtube.com/c/AnsibleAutomation/videos) YouTube channel
* [Ansible: Getting Started](https://www.ansible.com/resources/get-started)
* [MariaDB Deployment and Management with Ansible](https://youtu.be/CV8-56Fgjc0) (video)
Further information about the concepts discussed in this page can be found in Ansible documentation:
* [Basic Concepts](https://docs.ansible.com/ansible/latest/network/getting_started/basic_concepts.html).
* [Glossary](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html).
---
Content initially contributed by [Vettabase Ltd](https://vettabase.com/).
GeometryN
=========
A synonym for [ST\_GeometryN](../st_geometryn/index).
LOCK TABLES
===========
Syntax
------
```
LOCK TABLE[S]
tbl_name [[AS] alias] lock_type
[, tbl_name [[AS] alias] lock_type] ...
[WAIT n|NOWAIT]
lock_type:
READ [LOCAL]
| [LOW_PRIORITY] WRITE
| WRITE CONCURRENT
UNLOCK TABLES
```
Description
-----------
The *lock\_type* can be one of:
| Option | Description |
| --- | --- |
| READ | Read lock, no writes allowed |
| READ LOCAL | Read lock, but allow [concurrent inserts](../concurrent-inserts/index) |
| WRITE | Exclusive write lock. No other connections can read or write to this table |
| LOW\_PRIORITY WRITE | Exclusive write lock, but allow new read locks on the table until we get the write lock. |
| WRITE CONCURRENT | Exclusive write lock, but allow READ LOCAL locks to the table. |
MariaDB enables client sessions to acquire table locks explicitly for the purpose of cooperating with other sessions for access to tables, or to prevent other sessions from modifying tables during periods when a session requires exclusive access to them. A session can acquire or release locks only for itself. One session cannot acquire locks for another session or release locks held by another session.
Locks may be used to emulate transactions or to get more speed when updating tables.
`LOCK TABLES` explicitly acquires table locks for the current client session. Table locks can be acquired for base tables or views. To use `LOCK TABLES`, you must have the `LOCK TABLES` privilege, and the `SELECT` privilege for each object to be locked. See `[GRANT](../grant/index)`
For view locking, `LOCK TABLES` adds all base tables used in the view to the set of tables to be locked and locks them automatically. If you lock a table explicitly with `LOCK TABLES`, any tables used in triggers are also locked implicitly, as described in [Triggers and Implicit Locks](../triggers-and-implicit-locks/index).
[UNLOCK TABLES](../transactions-unlock-tables/index) explicitly releases any table locks held by the current session.
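For example (the table names are illustrative):

```
LOCK TABLES t1 READ, t2 WRITE;

-- This session can now read t1 (but not modify it)
-- and can both read and write t2. Other sessions
-- cannot write to either table until the locks
-- are released.

UNLOCK TABLES;
```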
### WAIT/NOWAIT

**MariaDB starting with [10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/)**Set the lock wait timeout. See [WAIT and NOWAIT](../wait-and-nowait/index).
Limitations
-----------
* `LOCK TABLES` [doesn't work when using Galera cluster](../mariadb-galera-cluster-known-limitations/index). You may experience crashes or locks when used with Galera.
* `LOCK TABLES` works on XtraDB/InnoDB tables only if the [innodb\_table\_locks](../xtradbinnodb-server-system-variables/index#innodb_table_locks) system variable is set to 1 (the default) and [autocommit](../server-system-variables/index#autocommit) is set to 0 (1 is default). Please note that no error message will be returned on LOCK TABLES with innodb\_table\_locks = 0.
* `LOCK TABLES` [implicitly commits](../sql-statements-that-cause-an-implicit-commit/index) the active transaction, if any. Also, starting a transaction always releases all table locks acquired with LOCK TABLES. This means that there is no way to have table locks and an active transaction at the same time. The only exceptions are the transactions in [autocommit](../start-transaction/index#autocommit) mode. To preserve the data integrity between transactional and non-transactional tables, the [GET\_LOCK()](../get_lock/index) function can be used.
* When using `LOCK TABLES` on a `TEMPORARY` table, it will always be locked with a `WRITE` lock.
* While a connection holds an explicit read lock on a table, it cannot modify it. If you try, the following error will be produced:
```
ERROR 1099 (HY000): Table 'tab_name' was locked with a READ lock and can't be updated
```
* While a connection holds an explicit lock on a table, it cannot access a non-locked table. If you try, the following error will be produced:
```
ERROR 1100 (HY000): Table 'tab_name' was not locked with LOCK TABLES
```
* While a connection holds an explicit lock on a table, it cannot issue the following: INSERT DELAYED, CREATE TABLE, CREATE TABLE ... LIKE, and DDL statements involving stored programs and views (except for triggers). If you try, the following error will be produced:
```
ERROR 1192 (HY000): Can't execute the given command because you have active locked tables or an active transaction
```
* `LOCK TABLES` can not be used in stored routines - if you try, the following error will be produced on creation. This restriction was removed in [MariaDB 10.6.2](https://mariadb.com/kb/en/mariadb-1062-release-notes/):
```
ERROR 1314 (0A000): LOCK is not allowed in stored procedures
```
See Also
--------
* [UNLOCK TABLES](../transactions-unlock-tables/index)
FEDERATED Storage Engine
========================
The FEDERATED Storage Engine is a legacy storage engine no longer being supported. A fork, [FederatedX](../federatedx/index) is being actively maintained. Since [MariaDB 10.0](../what-is-mariadb-100/index), the [CONNECT](../connect/index) storage engine also permits accessing a remote database via MySQL or ODBC connection (table types: [MYSQL](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index), [ODBC](../connect-table-types-odbc-table-type-accessing-tables-from-other-dbms/index)).
The FEDERATED Storage Engine was originally designed to let one access data remotely without using clustering or replication, and perform local queries that automatically access the remote data.
Database Lifecycle
==================
This article follows on from [Database Design: Overview](../database-design-overview/index).
Like everything else, databases have a finite lifespan. They are born in a flush of optimism and make their way through life achieving fame, fortune, and peaceful anonymity, or notoriety as the case may be, before fading out once more. Even the most successful database at some time is replaced by another, more flexible and up-to-date structure, and so begins life anew. Although exact definitions differ, there are generally six stages of the database lifecycle.
#### Analysis
The analysis phase is where the stakeholders are interviewed and any existing system is examined to identify problems, possibilities and constraints. The objectives and scope of the new system are determined.
#### Design
The design phase is where a conceptual design is created from the previously determined requirements, and a logical and physical design are created that will ready the database for implementation.
#### Implementation
The implementation phase is where the database management system (DBMS) is installed, the databases are created, and the data are loaded or imported.
#### Testing
The testing phase is where the database is tested and fine-tuned, usually in conjunction with the associated applications.
#### Operation
The operation phase is where the database is working normally, producing information for its users.
#### Maintenance
The maintenance phase is where changes are made to the database in response to new requirements or changed operating conditions (such as heavier load).
Database development is not independent of systems development, often being one component of the greater systems development process. The stages of systems development basically mirror the stages of a database lifecycle, but are a superset. Whereas database design deals with designing the system to store the data, systems design is also concerned with the processes that will impact the data.
MariaDB Plans - Replication
===========================
**Note:** This page is obsolete. The information is old, outdated, or otherwise currently incorrect. We are keeping the page for historical reasons only. **Do not** rely on the information in this article.
We are discussing points for the replication part of the MariaDB roadmap.
So far discussed:
* ~~Replication filters, like --replicate-do-db and friends, need to be possible to change dynamically, without having to restart the server. Having to stop the slave should ideally also not be needed, but is less of a problem.~~ (complete)
* ~~Transactional storage of slave state, rather than file-based master.info and relay-log.info . So the slave can recover consistently after a crash.~~ (complete)
* ~~Global transaction ID, so the slave state becomes recoverable, and facilitate automatic moving a slave to a new master across multi-level hierarchies.~~ (complete)
* ~~Support in global transaction ID for master\_pos\_wait()~~ (complete)
* Hooks around rotation of the binlog, so users can configure shell commands to run when a new log is started and when it is ended. The command must be run asynchronously, and get the old and new log file names as arguments.
* Sending of heartbeats from master to slaves, so slaves starting up can know in finite time where the master is.
* Replication APIs, as per [MWL#107](http://askmonty.org/worklog/?tid=107)
+ Most importantly, [MWL#120](http://askmonty.org/worklog/?tid=120) and [MWL#133](http://askmonty.org/worklog/?tid=133), for obtaining and applying events.
+ Then a mechanism for prioritising transactions.
Database Normalization: 1st Normal Form
=======================================
This article follows on from the [Database Normalization Overview](../database-normalization-overview/index).
At first, the data structure was as follows:
* Location code
* Location name
* 1-n plant numbers (1-n is a shorthand for saying there are many occurrences of this field. In other words, it is a repeating group).
* 1-n plant names
* 1-n soil categories
* 1-n soil descriptions
This is a completely unnormalized structure - in other words, it is in *zero normal form*. So, to begin the normalization process, you start by moving from zero normal form to 1st normal form.
Tables are in 1st normal form if they follow these rules:
* There are no repeating groups.
* All the key attributes are defined.
* All attributes are dependent on the primary key.
What this means is that data must be able to fit into a tabular format, where each field contains one value. This is also the stage where the primary key is defined. Some sources claim that defining the primary key is not necessary for a table to be in first normal form, but usually it's done at this stage and is necessary before we can progress to the next stage. Theoretical debates aside, you'll have to define your primary keys at this point.
Although not always seen as part of the definition of 1st normal form, the principle of atomicity is usually applied at this stage as well. This means that all columns must contain their smallest parts, or be indivisible. A common example of this is where someone creates a *name* field, rather than *first name* and *surname* fields. They usually regret it later.
So far, the plant example has no keys, and there are repeating groups. To get it into 1st normal form, you'll need to define a primary key and change the structure so that there are no repeating groups; in other words, each row / column intersection contains one, and only one, value. Without this, you cannot put the data into the ordinary two-dimensional table that most databases require. You define location code and plant code as the primary key together (neither on its own can uniquely identify a record), and replace the repeating groups with a single-value attribute. After doing this, you are left with the structure shown in the table below (the primary key is in italics):
| Plant location table |
| --- |
| *Location code* |
| Location name |
| *Plant code* |
| Plant name |
| Soil category |
| Soil description |
This table is now in 1st normal form. The process for turning a table into 2nd normal form is continued in the next article.
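Expressed in SQL, the structure above might look like the following sketch. The column types are hypothetical; only the composite primary key is dictated by the discussion:

```sql
-- Composite primary key: neither column alone uniquely identifies a row.
CREATE TABLE plant_location (
  location_code    CHAR(4),
  location_name    VARCHAR(50),
  plant_code       CHAR(4),
  plant_name       VARCHAR(50),
  soil_category    CHAR(2),
  soil_description VARCHAR(100),
  PRIMARY KEY (location_code, plant_code)
);
```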
ACOS
====
Syntax
------
```
ACOS(X)
```
Description
-----------
Returns the arc cosine of `X`, that is, the value whose cosine is `X`. Returns `NULL` if `X` is not in the range `-1` to `1`.
Examples
--------
```
SELECT ACOS(1);
+---------+
| ACOS(1) |
+---------+
| 0 |
+---------+
SELECT ACOS(1.0001);
+--------------+
| ACOS(1.0001) |
+--------------+
| NULL |
+--------------+
SELECT ACOS(0);
+-----------------+
| ACOS(0) |
+-----------------+
| 1.5707963267949 |
+-----------------+
SELECT ACOS(0.234);
+------------------+
| ACOS(0.234) |
+------------------+
| 1.33460644244679 |
+------------------+
```
ST\_STARTPOINT
==============
Syntax
------
```
ST_StartPoint(ls)
StartPoint(ls)
```
Description
-----------
Returns the [Point](../point/index) that is the start point of the [LineString](../linestring/index) value `ls`.
`ST_StartPoint()` and `StartPoint()` are synonyms.
Examples
--------
```
SET @ls = 'LineString(1 1,2 2,3 3)';
SELECT AsText(StartPoint(GeomFromText(@ls)));
+---------------------------------------+
| AsText(StartPoint(GeomFromText(@ls))) |
+---------------------------------------+
| POINT(1 1) |
+---------------------------------------+
```
Worklog
=======
**Worklog has been replaced, please refer to [JIRA - project planning and tracking](../jira-project-planning-and-tracking/index) for further information** .
[Worklog](http://askmonty.org/worklog/index.pl) is the tool used to track all development of [MariaDB](../mariadb/index).
The MariaDB Worklog is open to everyone at <http://askmonty.org/worklog/index.pl> (a free account on [the developer wiki](http://askmonty.org/wiki) is required to suggest new tasks and add comments, votes, donations etc.). The account signup page for the wiki is [here](http://askmonty.org/w/index.php?title=Special:Userlogin&type=signup).
If you find something in the worklog that you really would like to have done, you can commit to donate some money to the developer when this is done. (Search for "Make offer" on the worklog item you would like to sponsor.)
If there is something in the worklog which you would like to develop, you can contact us at 'maria-developers (at) lists.launchpad.com' or 'community (at) askmonty.org'. If you deliver a working solution that is [accepted into the MariaDB source tree](http://kb.askmonty.org/v/community-contributing-to-the-mariadb-project#expectations-for-developers), you will get 60% of the money committed so far. The rest of the money is kept by MariaDB Corporation Ab to help manage the project, code reviews, bug fixes, testing, maintenance, updates and merges to future MariaDB versions.
You can also add a link in the MariaDB worklog to tasks in the [MySQL worklog](http://forge.mysql.com/worklog/). Just refer to the MySQL task as #WL<task number>.
Source Code for Worklog
-----------------------
The source code for the Worklog application is hosted on [Launchpad](https://launchpad.net/worklog). The license is GPL.
JSON\_LOOSE
===========
**MariaDB starting with [10.2.4](https://mariadb.com/kb/en/mariadb-1024-release-notes/)**This function was added in [MariaDB 10.2.4](https://mariadb.com/kb/en/mariadb-1024-release-notes/).
Syntax
------
```
JSON_LOOSE(json_doc)
```
Description
-----------
Adds spaces to a JSON document to make it look more readable.
Example
-------
```
SET @j = '{ "A":1,"B":[2,3]}';
SELECT JSON_LOOSE(@j), @j;
+-----------------------+--------------------+
| JSON_LOOSE(@j) | @j |
+-----------------------+--------------------+
| {"A": 1, "B": [2, 3]} | { "A":1,"B":[2,3]} |
+-----------------------+--------------------+
```
MBRContains
===========
Syntax
------
```
MBRContains(g1,g2)
```
Description
-----------
Returns 1 or 0 to indicate whether the Minimum Bounding Rectangle of g1 contains the Minimum Bounding Rectangle of g2. This tests the opposite relationship as [MBRWithin()](../mbrwithin/index).
Examples
--------
```
SET @g1 = GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))');
SET @g2 = GeomFromText('Point(1 1)');
SELECT MBRContains(@g1,@g2), MBRContains(@g2,@g1);
+----------------------+----------------------+
| MBRContains(@g1,@g2) | MBRContains(@g2,@g1) |
+----------------------+----------------------+
| 1 | 0 |
+----------------------+----------------------+
```
mysql.time\_zone\_transition\_type Table
========================================
The `mysql.time_zone_transition_type` table is one of the `mysql` system tables that can contain [time zone](../time-zones/index) information. It is usually preferable for the system to handle the time zone, in which case the table will be empty (the default), but you can populate the `mysql` time zone tables using the [mysql\_tzinfo\_to\_sql](../mysql_tzinfo_to_sql/index) utility. See [Time Zones](../time-zones/index) for details.
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
The `mysql.time_zone_transition_type` table contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `Time_zone_id` | `int(10) unsigned` | NO | PRI | `NULL` | |
| `Transition_type_id` | `int(10) unsigned` | NO | PRI | `NULL` | |
| `Offset` | `int(11)` | NO | | 0 | |
| `Is_DST` | `tinyint(3) unsigned` | NO | | 0 | |
| `Abbreviation` | `char(8)` | NO | | | |
Example
-------
```
SELECT * FROM mysql.time_zone_transition_type;
+--------------+--------------------+--------+--------+--------------+
| Time_zone_id | Transition_type_id | Offset | Is_DST | Abbreviation |
+--------------+--------------------+--------+--------+--------------+
| 1 | 0 | -968 | 0 | LMT |
| 1 | 1 | 0 | 0 | GMT |
| 2 | 0 | -52 | 0 | LMT |
| 2 | 1 | 1200 | 1 | GHST |
| 2 | 2 | 0 | 0 | GMT |
| 3 | 0 | 8836 | 0 | LMT |
| 3 | 1 | 10800 | 0 | EAT |
| 3 | 2 | 9000 | 0 | BEAT |
| 3 | 3 | 9900 | 0 | BEAUT |
| 3 | 4 | 10800 | 0 | EAT |
...
+--------------+--------------------+--------+--------+--------------+
```
See Also
--------
* [mysql.time\_zone table](../mysqltime_zone-table/index)
* [mysql.time\_zone\_leap\_second table](../mysqltime_zone_leap_second-table/index)
* [mysql.time\_zone\_name table](../mysqltime_zone_name-table/index)
* [mysql.time\_zone\_transition table](../mysqltime_zone_transition-table/index)
COUNT
=====
Syntax
------
```
COUNT(expr)
```
Description
-----------
Returns a count of the number of non-NULL values of expr in the rows retrieved by a [SELECT](../select/index) statement. The result is a [BIGINT](../bigint/index) value. It is an [aggregate function](../aggregate-functions/index), and so can be used with the [GROUP BY](../group-by/index) clause.
COUNT(\*) counts the total number of rows in a table.
COUNT() returns 0 if there were no matching rows.
From [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/), COUNT() can be used as a [window function](../window-functions/index).
Examples
--------
```
CREATE TABLE student (name CHAR(10), test CHAR(10), score TINYINT);
INSERT INTO student VALUES
('Chun', 'SQL', 75), ('Chun', 'Tuning', 73),
('Esben', 'SQL', 43), ('Esben', 'Tuning', 31),
('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88),
('Tatiana', 'SQL', 87), ('Tatiana', 'Tuning', 83);
SELECT COUNT(*) FROM student;
+----------+
| COUNT(*) |
+----------+
| 8 |
+----------+
```
[COUNT(DISTINCT)](../count-distinct/index) example:
```
SELECT COUNT(DISTINCT (name)) FROM student;
+------------------------+
| COUNT(DISTINCT (name)) |
+------------------------+
| 4 |
+------------------------+
```
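Since `COUNT(expr)` counts only non-NULL values while `COUNT(*)` counts all rows, the two can differ. A small illustration (the table `t` is hypothetical):

```sql
CREATE OR REPLACE TABLE t (x INT);
INSERT INTO t VALUES (1), (NULL), (3);

-- COUNT(*) counts all three rows; COUNT(x) skips the NULL.
SELECT COUNT(*), COUNT(x) FROM t;
-- Returns 3 and 2 respectively.
```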
As a [window function](../window-functions/index)
```
CREATE OR REPLACE TABLE student_test (name CHAR(10), test CHAR(10), score TINYINT);
INSERT INTO student_test VALUES
('Chun', 'SQL', 75), ('Chun', 'Tuning', 73),
('Esben', 'SQL', 43), ('Esben', 'Tuning', 31),
('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88),
('Tatiana', 'SQL', 87);
SELECT name, test, score, COUNT(score) OVER (PARTITION BY name)
AS tests_written FROM student_test;
+---------+--------+-------+---------------+
| name | test | score | tests_written |
+---------+--------+-------+---------------+
| Chun | SQL | 75 | 2 |
| Chun | Tuning | 73 | 2 |
| Esben | SQL | 43 | 2 |
| Esben | Tuning | 31 | 2 |
| Kaolin | SQL | 56 | 2 |
| Kaolin | Tuning | 88 | 2 |
| Tatiana | SQL | 87 | 1 |
+---------+--------+-------+---------------+
```
See Also
--------
* [SELECT](../select/index)
* [COUNT DISTINCT](../count-distinct/index)
* [Window Functions](../window-functions/index)
Information Schema STATISTICS Table
===================================
The [Information Schema](../information_schema/index) `STATISTICS` table provides information about table indexes.
It contains the following columns:
| Column | Description |
| --- | --- |
| `TABLE_CATALOG` | Always `def`. |
| `TABLE_SCHEMA` | Database name. |
| `TABLE_NAME` | Table name. |
| `NON_UNIQUE` | `1` if the index can have duplicates, `0` if not. |
| `INDEX_SCHEMA` | Database name. |
| `INDEX_NAME` | Index name. The primary key is always named `PRIMARY`. |
| `SEQ_IN_INDEX` | The column sequence number, starting at 1. |
| `COLUMN_NAME` | Column name. |
| `COLLATION` | `A` for sorted in ascending order, or `NULL` for unsorted. |
| `CARDINALITY` | Estimate of the number of unique values stored in the index based on statistics stored as integers. Higher cardinalities usually mean a greater chance of the index being used in a join. Updated by the [ANALYZE TABLE](../analyze-table/index) statement or [myisamchk -a](../myisamchk/index). |
| `SUB_PART` | `NULL` if the whole column is indexed, or the number of indexed characters if partly indexed. |
| `PACKED` | `NULL` if not packed, otherwise how the index is packed. |
| `NULLABLE` | `YES` if the column may contain NULLs, empty string if not. |
| `INDEX_TYPE` | Index type, one of `BTREE`, `RTREE`, `HASH` or `FULLTEXT`. See [Storage Engine Index Types](../storage-engine-index-types/index). |
| `COMMENT` | Index comments from the [CREATE INDEX](../create-index/index) statement. |
| `IGNORED` | Whether or not an index will be ignored by the optimizer. See [Ignored Indexes](../ignored-indexes/index). From [MariaDB 10.6.0](https://mariadb.com/kb/en/mariadb-1060-release-notes/). |
The `[SHOW INDEX](../show-index/index)` statement produces similar output.
Example
-------
```
SELECT * FROM INFORMATION_SCHEMA.STATISTICS\G
...
*************************** 85. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: test
TABLE_NAME: table1
NON_UNIQUE: 1
INDEX_SCHEMA: test
INDEX_NAME: col2
SEQ_IN_INDEX: 1
COLUMN_NAME: col2
COLLATION: A
CARDINALITY: 6
SUB_PART: NULL
PACKED: NULL
NULLABLE:
INDEX_TYPE: BTREE
COMMENT:
INDEX_COMMENT:
...
```
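To inspect the indexes of a single table, the output can be filtered and narrowed to the most useful columns; a sketch (the schema and table names are placeholders):

```sql
SELECT INDEX_NAME, SEQ_IN_INDEX, COLUMN_NAME, CARDINALITY, INDEX_TYPE
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'table1'
ORDER BY INDEX_NAME, SEQ_IN_INDEX;
```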
MariaDB 5.2 Replication Feature Preview
=======================================
**Note:** This page is obsolete. The information is old, outdated, or otherwise currently incorrect. We are keeping the page for historical reasons only. **Do not** rely on the information in this article.
This page describes a *"feature preview release"* which previewed some replication-related features which are included in [MariaDB 5.3](../what-is-mariadb-53/index). If you would like to try out the features mentioned here, it is recommended that you use [MariaDB 5.3](../what-is-mariadb-53/index) ([download MariaDB 5.3 here](http://downloads.askmonty.org/mariadb/5.3/)) instead of the actual release described below. Likewise, the code is available in the [MariaDB 5.3 tree on Launchpad](https://launchpad.net/maria/5.3).
About this release
------------------
There has been quite a lot of interest in these features, and providing this feature preview release allows the developers to get more and earlier feedback, as well as allowing more users an early opportunity to evaluate the new features.
This feature preview release is based on [MariaDB 5.2](../what-is-mariadb-52/index), adding a number of fairly isolated features that are considered complete and fairly well-tested. It is however not a stable or GA release, nor is it planned to be so.
The stable release including these features will be **[MariaDB 5.3](../what-is-mariadb-53/index)**. That being said, we greatly welcome any feedback / bug reports, and will strive to fix any issues found and we will update the feature preview until [MariaDB 5.3](../what-is-mariadb-53/index) stable is ready.
Download/Installation
---------------------
These packages are generated the same way as "official" MariaDB releases. Please see the [main download pages](../downloads/index) for more detailed instructions on installation etc.
The instructions below use the mirror [ftp.osuosl.org](http://ftp.osuosl.org/), but any of the MariaDB mirrors can be used by replacing the appropriate part of the URLs. See the [main download page](http://downloads.askmonty.org) for what mirrors are available.
### Debian/Ubuntu
For Debian and Ubuntu, it is highly recommended to install from the repositories, using `apt-get`, `aptitude`, or other favorite package managers.
First import the [public key](http://ftp.osuosl.org/pub/mariadb/PublicKey) with which the repositories are signed, so that `apt` can verify the integrity of the packages it downloads. For example like this:
```
wget -O- http://ftp.osuosl.org/pub/mariadb/PublicKey | sudo apt-key add -
```
Now add the appropriate repository. An easy way is to create a file called `mariadb-5.2-rpl.list` in `/etc/apt/sources.list.d/` with contents like this for Debian:
```
deb http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/debian squeeze main
deb-src http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/debian squeeze main
```
Or this for Ubuntu:
```
deb http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/ubuntu maverick main
deb-src http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/ubuntu maverick main
```
Replace "squeeze" or "maverick" in the examples above with the appropriate distribution name. Supported are "lenny" and "squeeze" for Debian, and "hardy", "jaunty", "karmic", "lucid", and "maverick" for Ubuntu.
Now run
```
sudo apt-get update
```
The packages can now be installed with your package manager of choice, for example:
```
sudo apt-get install mariadb-server-5.2
```
(To manually download and install packages, browse the directories below <http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/> - the .debs are in `debian/pool/` and `ubuntu/pool/`, respectively.)
### Generic Linux binary tarball
Generic linux binary tarballs can be downloaded here:
* i386 (32-bit): <http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/misc/kvm-bintar-hardy-x86/>
* amd64 (64-bit): <http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/misc/kvm-bintar-hardy-amd64/>
### Centos 5 RPMs
* i386 (32-bit): <http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/misc/kvm-rpm-centos5-x86/>
* amd64 (64-bit): <http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/misc/kvm-rpm-centos5-amd64/>
### Windows (32-bit)
* <http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/misc/kvm-zip-winxp-x86/>
### Source tarball
* <http://ftp.osuosl.org/pub/mariadb/mariadb-5.2-rpl/misc/kvm-tarbake-jaunty-x86/>
### Launchpad bzr branch:
* [`lp:~maria-captains/maria/mariadb-5.2-rpl`](https://code.launchpad.net/~maria-captains/maria/mariadb-5.2-rpl)
New Features in the [MariaDB 5.2](../what-is-mariadb-52/index) replication feature preview
------------------------------------------------------------------------------------------
Here is a summary of the new features included in this preview release. The headings link to more detailed information.
### [Group commit for the binary log](../group-commit/index)
This preview release implements group commit which works when using XtraDB with the binary log enabled. (In previous MariaDB releases, and all MySQL releases at the time of writing, group commit works in InnoDB/XtraDB when the binary log is disabled, but stops working when the binary log is enabled).
### [Enhancements for START TRANSACTION WITH CONSISTENT SNAPSHOT](../enhancements-for-start-transaction-with-consistent/index)
`START TRANSACTION WITH CONSISTENT SNAPSHOT` now also works with the binary log. This means it is possible to obtain the binlog position corresponding to a transactional snapshot of the database without blocking any other queries. This is used by `mysqldump --single-transaction --master-data` to do a fully non-blocking backup which can be used to provision a new slave.
`START TRANSACTION WITH CONSISTENT SNAPSHOT` now also works consistently between transactions involving more than one storage engine (currently XTraDB and PBXT support this).
### [Annotation of row-based replication events with the original SQL statement](../annotate_rows_log_event/index)
When using row-based replication, the binary log does not contain SQL statements, only discrete single-row insert/update/delete *events*. This can make it harder to read mysqlbinlog output and understand where in an application a given event may have originated, complicating analysis and debugging.
This feature adds an option to include the original SQL statement as a comment in the binary log (and shown in mysqlbinlog output) for row-based replication events.
### [Row-based replication for tables with no primary key](../row-based-replication-with-no-primary-key/index)
This feature can improve the performance of row-based replication on tables that do not have a primary key (or other unique key), but which do have another index that can help locate rows to update or delete. With this feature, index cardinality information from `ANALYZE TABLE` is considered when selecting the index to use (before this feature was implemented, the first index was selected unconditionally).
### [PBXT consistent commit ordering](../enhancements-for-start-transaction-with-consistent/index)
This feature implements the new commit ordering storage engine API in PBXT. With this feature, it is possible to use `START TRANSACTION WITH CONSISTENT SNAPSHOT` and get consistency among transactions which involve both XtraDB and PBXT. (Without this feature, there is no such consistency guarantee. For example, even after running `START TRANSACTION WITH CONSISTENT SNAPSHOT` it was still possible for the InnoDB/XtraDB part of some transaction *T* to be visible and the PBXT part of the same transaction *T* to not be visible.)
### Miscellaneous
* This preview also includes a small change to make mysqlbinlog omit redundant `use` statements around `BEGIN`, `SAVEPOINT`, `COMMIT`, and `ROLLBACK` events when reading MySQL 5.0 binlogs.
* The preview included a feature [--innodb-release-locks-early](../innodb-release-locks-early/index). However we decided to omit this feature from future MariaDB releases because of a fundamental design bug, [lp:798213](https://bugs.launchpad.net/maria/+bug/798213).
Upgrading from MySQL to MariaDB
===============================
For [all practical purposes](../mariadb-vs-mysql-compatibility/index), you can view MariaDB as an upgrade of MySQL:
* Before upgrading, please [check if there are any known incompatibilities](../mariadb-vs-mysql-compatibility/index) between your MySQL release and the MariaDB release you want to move to.
* In particular, note that the [JSON type](../json-data-type/index) in MariaDB is a LONGTEXT, while in MySQL it's a binary type. See [Making MariaDB understand MySQL JSON](https://mariadb.org/making-mariadb-understand-mysql-json/).
* If you are using MySQL 8.0 or above, you have to use [mysqldump](../mysqldump/index) to move your database to MariaDB.
* For upgrading from very old MySQL versions, see [Upgrading to MariaDB from MySQL 5.0 (or older version)](../upgrading-to-mariadb-from-mysql-50-or-older-version/index).
* Within the same base version (for example MySQL 5.5 -> [MariaDB 5.5](../what-is-mariadb-55/index), MySQL 5.6 -> [MariaDB 10.0](../what-is-mariadb-100/index) and MySQL 5.7 -> [MariaDB 10.2](../what-is-mariadb-102/index)) you can in most cases just uninstall MySQL and install MariaDB and you are good to go. There is no need to dump and restore databases. As with any upgrade, we recommend making a backup of your data beforehand.
* You should run `[mysql\_upgrade](../mysql_upgrade/index)` (just as you would with MySQL) to finish the upgrade. This is needed to ensure that your mysql privilege and event tables are updated with the new fields MariaDB uses. Note that if you use a MariaDB package, `mysql_upgrade` is usually run automatically.
* All your old clients and connectors (PHP, Perl, Python, Java, etc.) will work unchanged (no need to recompile). This works because MariaDB and MySQL use the same client protocol and the client libraries are binary compatible. You can also use your old MySQL connector packages with MariaDB if you want.
[Upgrading on Windows](../upgrading-mariadb-on-windows/index)
-------------------------------------------------------------
On Windows, do not uninstall MySQL and then install MariaDB; this will not work, as the existing database will not be found.
Instead, just install MariaDB and use the upgrade wizard, which is part of the installer package and is launched by the MSI installer. Or, if you prefer the command line, run `mysql_upgrade_service <service_name>`.
Upgrading my.cnf
----------------
All the options in your original MySQL [`my.cnf` file](../mysqld-configuration-files-and-groups/index) should work fine for MariaDB.
However, as MariaDB has more features than MySQL, there are a few things that you should consider changing in your `my.cnf` file.
* MariaDB uses by default the [Aria storage engine](../aria-storage-engine/index) for internal temporary files instead of MyISAM. If you have a lot of temporary files, you should add and set `[aria-pagecache-buffer-size](../aria-system-variables/index#aria_pagecache_buffer_size)` to the same value as you have for `[key-buffer-size](../myisam-system-variables/index#key_buffer_size)`.
* If you don't use MyISAM tables, you can set `[key-buffer-size](../myisam-system-variables/index#key_buffer_size)` to a very low value, like 64K.
* If using [MariaDB 10.1](../what-is-mariadb-101/index) or earlier, and your applications often connect and disconnect to MariaDB, you should set `[thread-cache-size](../server-system-variables/index#thread_cache_size)` to the number of concurrent query threads you are typically running. This is important in MariaDB as we are using the [jemalloc](http://www.canonware.com/jemalloc/) memory allocator. [jemalloc](http://www.canonware.com/jemalloc/) usually has better performance when running many threads compared to other memory allocators, except if you create and destroy a lot of threads, in which case it will spend a lot of resources trying to manage thread-specific storage. Having a thread cache will fix this problem.
* If you have a LOT of connections (> 100) that mostly run short queries, you should consider using the [thread pool](../threadpool-in-55/index). For example, using `[thread\_handling=pool-of-threads](../server-system-variables/index#thread_handling)` and `[thread\_pool\_size=128](../server-system-variables/index#thread_pool_size)` could give a notable performance boost in this case. `thread_pool_size` should be about `2 * number of cores on your machine`.
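The current values of the variables mentioned above can be checked from a running server. A sketch (the values shown will depend on your configuration; note that `thread_handling` itself can only be set at startup):

```sql
-- Inspect the relevant buffer and thread settings.
SHOW VARIABLES WHERE Variable_name IN
  ('aria_pagecache_buffer_size', 'key_buffer_size',
   'thread_cache_size', 'thread_handling', 'thread_pool_size');

-- Dynamic variables can also be adjusted at runtime, e.g.:
SET GLOBAL thread_cache_size = 128;
```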
Other Things to Think About
---------------------------
* Views with definition `ALGORITHM=MERGE` or `ALGORITHM=TEMPTABLE` got accidentally swapped between MariaDB and MySQL. You have to re-create views created with either of these definitions (see [MDEV-6916](https://jira.mariadb.org/browse/MDEV-6916)).
* MariaDB has LGPL versions of the [C connector](../client-library-for-c/index) and [Java Client](../mariadb-java-client/index). If you are shipping an application that supports MariaDB or MySQL, you should consider using these!
* You should consider trying out the [MyRocks storage engine](../myrocks/index) or some of the other [new storage engines](../mariadb-storage-engines/index) that MariaDB provides.
See Also
--------
* MariaDB has a lot of [new features](../mariadb-vs-mysql-features/index) that you should know about.
* [MariaDB versus MySQL - Compatibility](../mariadb-vs-mysql-compatibility/index)
* [Migrating to MariaDB](../migrating-to-mariadb/index)
* You can find general upgrading information on the [MariaDB installation page](../getting-installing-and-upgrading-mariadb/index).
* There is a [Screencast for upgrading MySQL to MariaDB](../screencast-for-upgrading-mysql-to-mariadb/index).
* [Upgrading to MariaDB in Debian 9](../moving-from-mysql-to-mariadb-in-debian-9/index)
Polygon Properties
==================
| Title | Description |
| --- | --- |
| [AREA](../polygon-properties-area/index) | Synonym for ST\_AREA. |
| [CENTROID](../centroid/index) | Synonym for ST\_CENTROID. |
| [ExteriorRing](../polygon-properties-exteriorring/index) | Synonym for ST\_ExteriorRing. |
| [InteriorRingN](../polygon-properties-interiorringn/index) | Synonym for ST\_InteriorRingN. |
| [NumInteriorRings](../polygon-properties-numinteriorrings/index) | Synonym for ST\_NumInteriorRings. |
| [ST\_AREA](../st_area/index) | Area of a Polygon. |
| [ST\_CENTROID](../st_centroid/index) | The mathematical centroid (geometric center) for a MultiPolygon. |
| [ST\_ExteriorRing](../st_exteriorring/index) | Returns the exterior ring of a Polygon as a LineString. |
| [ST\_InteriorRingN](../st_interiorringn/index) | Returns the N-th interior ring for a Polygon. |
| [ST\_NumInteriorRings](../st_numinteriorrings/index) | Number of interior rings in a Polygon. |
Information Schema USER\_VARIABLES Table
========================================
**MariaDB [10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/)**The `USER_VARIABLES` table was introduced in [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/) as part of the `[user\_variables](../user-variables-plugin/index)` plugin.
Description
-----------
The `USER_VARIABLES` table is created when the [user\_variables](../user-variables-plugin/index) plugin is enabled, and contains information about [user-defined variables](../user-defined-variables/index).
The table contains the following columns:
| Column | Description |
| --- | --- |
| `VARIABLE_NAME` | Variable name. |
| `VARIABLE_VALUE` | Variable value. |
| `VARIABLE_TYPE` | Variable [type](../data-types/index). |
| `CHARACTER_SET_NAME` | [Character set](../character-sets/index). |
User variables are reset and the table emptied with the [FLUSH USER\_VARIABLES](../flush/index) statement.
Example
-------
```
SELECT * FROM information_schema.USER_VARIABLES ORDER BY VARIABLE_NAME;
+---------------+----------------+---------------+--------------------+
| VARIABLE_NAME | VARIABLE_VALUE | VARIABLE_TYPE | CHARACTER_SET_NAME |
+---------------+----------------+---------------+--------------------+
| var | 0 | INT | utf8 |
| var2 | abc | VARCHAR | utf8 |
+---------------+----------------+---------------+--------------------+
```
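As noted above, [FLUSH USER\_VARIABLES](../flush/index) resets user variables and empties the table. A short session sketching this, assuming the plugin is enabled and no other user variables have been set:

```
SET @var = 0, @var2 = 'abc';
SELECT COUNT(*) FROM information_schema.USER_VARIABLES;
-- 2
FLUSH USER_VARIABLES;
SELECT COUNT(*) FROM information_schema.USER_VARIABLES;
-- 0
```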
See Also
--------
* [User-defined variables](../user-defined-variables/index)
* [Performance Schema user\_variables\_by\_thread Table](../performance-schema-user_variables_by_thread-table/index)
CONNECT ODBC Table Type: Accessing Tables From Another DBMS
===========================================================
ODBC (Open Database Connectivity) is a standard API for accessing database management systems (DBMS). CONNECT uses this API to access data contained in other DBMS without having to implement a specific application for each one. An exception is the access to MySQL that should be done using the [MYSQL table type](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index).
Note: On Linux, unixODBC must be installed.
These tables are given the type ODBC. For example, if a "Customers" table is contained in an Access™ database you can define it with a command such as:
```
create table Customer (
CustomerID varchar(5),
CompanyName varchar(40),
ContactName varchar(30),
ContactTitle varchar(30),
Address varchar(60),
City varchar(15),
Region varchar(15),
PostalCode varchar(10),
Country varchar(15),
Phone varchar(24),
Fax varchar(24))
engine=connect table_type=ODBC block_size=10
tabname='Customers'
Connection='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
```
The TABNAME option defaults to the table name. It is required if the source table name differs from the name of the CONNECT table. Note also that for some data sources this name is case-sensitive.
Because CONNECT can retrieve the table description using ODBC catalog functions, the column definitions can often be omitted. For instance, this table can simply be created as:
```
create table Customer engine=connect table_type=ODBC
block_size=10 tabname='Customers'
Connection='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;';
```
The `BLOCK_SIZE` specification will be used later to set the RowsetSize when retrieving rows from the ODBC table. A reasonably large RowsetSize can greatly accelerate the fetching process.
If you specify the column description, the column names of your table must exist in the data source table. However, you are not obliged to define all the data source columns and you can change the order of the columns. Some type conversion can also be done if appropriate. For instance, to access the FireBird sample table EMPLOYEE, you could define your table as:
```
create table empodbc (
EMP_NO smallint(5) not null,
FULL_NAME varchar(37) not null,
PHONE_EXT varchar(4) not null,
HIRE_DATE date,
DEPT_NO smallint(3) not null,
JOB_COUNTRY varchar(15),
SALARY double(12,2) not null)
engine=CONNECT table_type=ODBC tabname='EMPLOYEE'
connection='DSN=firebird';
```
This definition ignores the FIRST\_NAME, LAST\_NAME, JOB\_CODE, and JOB\_GRADE columns. It moves FULL\_NAME, the last column of the original table, to the second position. The type of the HIRE\_DATE column was changed from *timestamp* to *date* and the type of the DEPT\_NO column was changed from *char* to *integer*.
Currently, some restrictions apply to ODBC tables:
1. Cursor type is forward only (sequential reading).
2. No indexing of ODBC tables (do not specify any columns as key). However, because CONNECT can often add a where clause to the query sent to the data source, indexing will be used by the data source if it supports it. (Remote indexing is available with version 1.04, released with [MariaDB 10.1.6](https://mariadb.com/kb/en/mariadb-1016-release-notes/))
3. CONNECT ODBC supports [SELECT](../select/index) and [INSERT](../insert/index). [UPDATE](../update/index) and [DELETE](../delete/index) are also supported in a somewhat restricted way (see below). For other operations, use an ODBC table with the EXECSRC option (see below) to directly send proper commands to the data source.
Random Access of ODBC Tables
----------------------------
In CONNECT version 1.03 (until [MariaDB 10.1.5](https://mariadb.com/kb/en/mariadb-1015-release-notes/)) ODBC tables are not indexable. Version 1.04 (from [MariaDB 10.1.6](https://mariadb.com/kb/en/mariadb-1016-release-notes/)) adds remote indexing facility to the ODBC table type.
However, some queries require random access to an ODBC table, for instance when it is joined to another table or used in an ORDER BY query applied to long columns or large tables.
There are several ways to enable random (position) access to a CONNECT ODBC table. They depend on the following table options:
| Option | Type | Used For |
| --- | --- | --- |
| Block\_Size | Integer | Specifying the rowset size. |
| Memory\* | Integer | Storing the result set in memory. |
| Scrollable\* | Boolean | Using a scrollable cursor. |
`*` - To be specified in the option\_list.
When dealing with small tables, the simplest way to enable random access is to specify a rowset size equal to or larger than the table size (or the result set size if a push-down WHERE clause is used). This means that the whole result is in memory on the first fetch and CONNECT will use it for further positional accesses.
Another way to have the result set in memory is to use the memory option. This option can be set to the following values:
**0.** No memory used (the default). Best when the table is read sequentially, as in SELECT statements with at most a WHERE clause.
**1.** Memory size required is calculated during the first sequential table read. The allocated memory is filled during the second sequential read. Then the table rows are retrieved from the memory. This should be used when the table will be accessed several times randomly, such as in sub-selects or being the target table of a join.
**2.** A first query is executed to get the result set size and the needed memory is allocated. It is filled on the first sequential reading. Then random access of the table is possible. This can be used in the case of ORDER BY clauses, when MariaDB uses position reading.
Note that the best way to handle ORDER BY is to set the max\_length\_for\_sort\_data variable to a larger value (its default value of 1024 is rather small). Indeed, this requires less memory, particularly when a WHERE clause limits the retrieved data set. This is because, for an ORDER BY query, MariaDB first retrieves the result set sequentially together with the position of each record. Often the sort can be done from the result set itself if it is not too big. But if it is too big, or if it involves some “long” columns, only the positions are sorted and MariaDB retrieves the final result by reading the table in random order. If setting the max\_length\_for\_sort\_data variable is not feasible or does not work, the memory option must be set to 2 so that table data can be retrieved from memory after the first sequential read.
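For instance, to raise the variable for the current session before running an ORDER BY query (the value is illustrative):

```
SET SESSION max_length_for_sort_data = 8192;
SELECT * FROM empodbc ORDER BY JOB_COUNTRY;
```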
For tables too large to be stored in memory another possibility is to make your table to use a scrollable cursor. In this case each randomly accessed row can be retrieved from the data source specifying its cursor position, which is reasonably fast. However, scrollable cursors are not supported by all data sources.
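As a sketch, these options go in the option list. This hypothetical variant of the earlier FireBird table keeps the result set in memory for random access:

```
create table empmem engine=CONNECT table_type=ODBC
tabname='EMPLOYEE' block_size=100
connection='DSN=firebird'
option_list='Memory=2';
```

Replacing `Memory=2` with `Scrollable=Yes` would instead request a scrollable cursor, if the driver supports one.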
With CONNECT version 1.04 (from [MariaDB 10.1.6](https://mariadb.com/kb/en/mariadb-1016-release-notes/)), another way to provide random access is to specify some columns to be indexed. This should be done only when the corresponding column of the source table is also indexed. This should be used for tables too large to be stored in memory and is similar to the remote indexing used by the [MYSQL table type](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index) and by the [FEDERATED engine](../federatedx-storage-engine/index).
There remains the possibility to extract data from the external table and to construct another table of any file format from the data source. For instance to construct a fixed formatted DOS table containing the CUSTOMER table data, create the table as
```
create table Custfix engine=connect File_name='customer.txt'
table_type=fix block_size=20 as select * from customer;
```
Now you can use *custfix* for fast database operations on the copied *customer* table data.
Retrieving data from a spreadsheet
----------------------------------
ODBC can also be used to create tables based on tabular data belonging to an Excel spreadsheet:
```
create table XLCONT
engine=CONNECT table_type=ODBC tabname='CONTACT'
Connection='DSN=Excel Files;DBQ=D:/Ber/Doc/Contact_BP.xls;';
```
This supposes that a tabular zone of the sheet including column headers is defined as a table named CONTACT or using a “named reference”. Refer to the Excel documentation for how to specify tables inside sheets. Once done, you can ask:
```
select * from xlcont;
```
This will extract the data from Excel and display:
| Nom | Fonction | Societe |
| --- | --- | --- |
| Boisseau Frederic | | 9 Telecom |
| Martelliere Nicolas | | Vidal SA (Groupe UBM) |
| Remy Agathe | | Price Minister |
| Du Halgouet Tanguy | | Danone |
| Vandamme Anna | | GDF |
| Thomas Willy | | Europ Assistance France |
| Thomas Dominique | | Acoss (DG des URSSAF) |
| Thomas Berengere | Responsable SI Decisionnel | DEXIA Credit Local |
| Husy Frederic | Responsable Decisionnel | Neuf Cegetel |
| Lemonnier Nathalie | Directeur Marketing Client | Louis Vuitton |
| Louis Loic | Reporting International Decisionnel | Accor |
| Menseau Eric | | Orange France |
Here again, the columns description was left to CONNECT when creating the table.
Multiple ODBC tables
--------------------
The concept of multiple tables can be extended to ODBC tables when they are physically represented by files, for instance to Excel or Access tables. The condition is that the connect string for the table must contain a field DBQ=*filename*, in which wildcard characters can be included as for multiple=1 tables in their filename. For instance, a table contained in several Excel files CA200401.xls, CA200402.xls, ...CA200412.xls can be created by a command such as:
```
create table ca04mul (Date char(19), Operation varchar(64),
Debit double(15,2), Credit double(15,2))
engine=CONNECT table_type=ODBC multiple=1
qchar= '"' tabname='bank account'
connection='DSN=Excel Files;DBQ=D:/Ber/CA/CA2004*.xls;';
```
This works provided that, in each file, the relevant data is internally defined in Excel as a table named "bank account". This extension to ODBC does not support *multiple*=2. The *qchar* option was specified so that identifiers are quoted in the SELECT statement sent to ODBC, in particular when table or column names contain blanks, to avoid SQL syntax errors.
**Caution:** Avoid accessing tables belonging to the currently running MariaDB server via the MySQL ODBC connector. This may not work and may cause the server to be restarted.
Performance consideration
-------------------------
To avoid extracting entire tables from an ODBC source, which can be a lengthy process, CONNECT extracts the "compatible" part of query WHERE clauses and adds it to the ODBC query. Compatible means that it must be understood by the data source. In particular, clauses involving scalar functions are not kept because the data source may have different functions than MariaDB or use a different syntax. Of course, clauses involving sub-selects are also skipped. This transfers any possible indexing to the data source.
Take care with clauses involving string items because you may not know whether they are treated by the data source as case sensitive or case insensitive. If in doubt, make your queries as if the data source was processing strings as case sensitive to avoid incomplete results.
Using ODBC Tables inside correlated sub-queries
-----------------------------------------------
Unlike non-correlated subqueries, which are executed only once, correlated subqueries are executed many times. This is what ODBC calls a "requery". Several methods can be used by CONNECT to deal with this, depending on the setting of the MEMORY or SCROLLABLE Boolean options:
| Option | Description |
| --- | --- |
| Default | Implementing "requery" by discarding the current result set and re-submitting the query (as MFC does). |
| Memory=1 or 2 | Storing the result set in memory as MYSQL tables do. |
| Scrollable=Yes | Using a scrollable cursor. |
Note: the MEMORY and SCROLLABLE options must be specified in the OPTION\_LIST.
Because the table is accessed several times, such queries can take very long, except for small tables, and are almost unacceptable for big tables. However, if this cannot be avoided, the memory method is the best choice and can be more than four times faster than the default method. If supported by the driver, using a scrollable cursor is slightly slower than using memory but can be an alternative to avoid memory problems when the sub-query returns a huge result set.
If the result set is of reasonable size, it is also possible to specify a block\_size option equal to or slightly larger than the result set size. The whole result set, being read on the first fetch, can then be accessed many times without anything else having to be done.
Another good workaround is to replace within the correlated sub-query the ODBC table by a local copy of it because MariaDB is often able to optimize the query and to provide a very fast execution.
Accessing specified views
-------------------------
Instead of specifying a source table name via the TABNAME option, it is possible to retrieve data from a “view” whose definition is given in a new option SRCDEF. For instance:
```
CREATE TABLE custnum (
country varchar(15) NOT NULL,
customers int(6) NOT NULL)
ENGINE=CONNECT TABLE_TYPE=ODBC BLOCK_SIZE=10
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;'
SRCDEF='select country, count(*) as customers from customers group by country';
```
Or simply, because CONNECT can retrieve the returned column definition:
```
CREATE TABLE custnum ENGINE=CONNECT TABLE_TYPE=ODBC BLOCK_SIZE=10
CONNECTION='DSN=MS Access Database;DBQ=C:/Program Files/Microsoft Office/Office/1033/FPNWIND.MDB;'
SRCDEF='select country, count(*) as customers from customers group by country';
```
Then, when executing for instance:
```
select * from custnum where customers > 3;
```
The processing of the group by is done by the data source, which returns only the generated result set on which only the where clause is performed locally. The result:
| country | customers |
| --- | --- |
| Brazil | 9 |
| France | 11 |
| Germany | 11 |
| Mexico | 5 |
| Spain | 5 |
| UK | 7 |
| USA | 13 |
| Venezuela | 4 |
This makes it possible to let the data source do complicated operations, such as joining several tables or executing procedures that return a result set. This minimizes the data transfer through ODBC.
Data Modifying Operations
-------------------------
The only data-modifying operations are the [INSERT](../insert/index), [UPDATE](../update/index) and [DELETE](../delete/index) commands. They can be executed successfully only if the data source database or tables are not read-only.
### INSERT Command
When inserting values into an ODBC table, local values are used and sent to the ODBC table. This makes no difference when the values are constant, but consider a query such as:
```
insert into t1 select * from t2;
```
where t1 is an ODBC table and t2 is a locally defined table that must exist on the local server. This is also a good way to create a remote ODBC table from local data.
CONNECT does not directly support INSERT commands such as:
```
insert into t1 values(2,'Deux') on duplicate key update msg = 'Two';
```
Indeed, the “on duplicate key update” part of it is ignored, and the command will result in an error if the key value is duplicated.
### UPDATE and DELETE Commands
Unlike the [INSERT](../insert/index) command, [UPDATE](../update/index) and [DELETE](../delete/index) are supported in a simplified way. Only simple table commands are supported; CONNECT does not support multi-table commands, commands sent from a procedure, or issued via a trigger. These commands are just rephrased to correspond to the data source syntax and sent to the data source for execution. Let us suppose we created the table:
```
create table tolite (
id int(9) not null,
nom varchar(12) not null,
nais date default null,
rem varchar(32) default null)
ENGINE=CONNECT TABLE_TYPE=ODBC tabname='lite'
CONNECTION='DSN=SQLite3 Datasource;Database=test.sqlite3'
CHARSET=utf8 DATA_CHARSET=utf8;
```
We can populate it by:
```
insert into tolite values(1,'Toto',now(),'First'),
(2,'Foo','2012-07-14','Second'),(4,'Machin','1968-05-30','Third');
```
The function `now()` will be executed by MariaDB and its returned value sent to the ODBC table.
Let us see what happens when updating the table. If we use the query:
```
update tolite set nom = 'Gillespie' where id = 10;
```
CONNECT will rephrase the command as:
```
update lite set nom = 'Gillespie' where id = 10;
```
What it did was just replace the local table name with the remote table name and change all backticks to blanks (or to the data source identifier quoting characters if QUOTED is specified). Then this command is sent to the data source to be executed there.
This is simpler and can be faster than doing a positional update using a cursor and commands such as “select ... for update of ...” that are not supported by all data sources. However, there are some restrictions that must be understood due to the way it is handled by MariaDB.
1. MariaDB does not know about all the above. The command will be parsed as if it were to be executed locally. Therefore, it must respect the MariaDB syntax.
2. Being executed by the data source, the (rephrased) command must also respect the data source syntax.
3. All data referenced in the SET and WHERE clause belongs to the data source.
This is possible because both MariaDB and the data source use the SQL language. But you must use only the basic features that are part of the core SQL language. For instance, keywords like IGNORE or LOW\_PRIORITY will cause syntax errors with many data sources.
Scalar function names can also differ, which severely restricts their use. For instance:
```
update tolite set nais = now() where id = 2;
```
This will not work with SQLite3, the data source returning an “unknown scalar function” error message. Note that in this particular case, you can rephrase it to:
```
update tolite set nais = date('now') where id = 2;
```
This is understood by both parsers, and even though this function would return NULL when executed by MariaDB, it does return the current date when executed by SQLite3. But this is getting too tricky, so to overcome all these restrictions and allow any type of command to be executed by the data source, CONNECT provides a specific ODBC table subtype, described now.
Sending commands to a Data Source
---------------------------------
This can be done using a special subtype of ODBC table. Let us see this in an example:
```
create table crlite (
command varchar(128) not null,
number int(5) not null flag=1,
message varchar(255) flag=2)
engine=connect table_type=odbc
connection='Driver=SQLite3 ODBC Driver;Database=test.sqlite3;NoWCHAR=yes'
option_list='Execsrc=1';
```
The key points in this create statement are the EXECSRC option and the column definition.
The EXECSRC option indicates that this table will be used to send commands to the data source. Most sent commands do not return a result set. Therefore, the table columns are used to specify the command to be executed and to get the result of its execution. The names of these columns can be chosen arbitrarily; their function comes from the FLAG value:
| | |
| --- | --- |
| Flag=0: | The command to execute. |
| Flag=1: | The number of affected rows, or -1 in case of error, or the number of columns in the result if the command returns a result set. |
| Flag=2: | The returned (eventually error) message. |
How do you use this table and specify the command to send? By executing a command such as:
```
select * from crlite where command = 'a command';
```
This will send the command specified in the WHERE clause to the data source and return the result of its execution. The syntax of the WHERE clause must be exactly as shown above. For instance:
```
select * from crlite where command =
'CREATE TABLE lite (
ID integer primary key autoincrement,
name char(12) not null,
birth date,
rem varchar(32))';
```
This command returns:
| command | number | message |
| --- | --- | --- |
| `CREATE TABLE lite (ID integer primary key autoincrement, name...` | 0 | Affected rows |
Now we can create a standard ODBC table on the newly created table:
```
CREATE TABLE tlite
ENGINE=CONNECT TABLE_TYPE=ODBC tabname='lite'
CONNECTION='Driver=SQLite3 ODBC Driver;Database=test.sqlite3;NoWCHAR=yes'
CHARSET=utf8 DATA_CHARSET=utf8;
```
We can populate it directly using the supported [INSERT](../insert/index) statement:
```
insert into tlite(name,birth) values('Toto','2005-06-12');
insert into tlite(name,birth,rem) values('Foo',NULL,'No ID');
insert into tlite(name,birth) values('Truc','1998-10-27');
insert into tlite(name,birth,rem) values('John','1968-05-30','Last');
```
And see the result:
```
select * from tlite;
```
| ID | name | birth | rem |
| --- | --- | --- | --- |
| 1 | Toto | 2005-06-12 | NULL |
| 2 | Foo | NULL | No ID |
| 3 | Truc | 1998-10-27 | NULL |
| 4 | John | 1968-05-30 | Last |
Any command, for instance [UPDATE](../update/index), can be executed from the *crlite* table:
```
select * from crlite where command =
'update lite set birth = ''2012-07-14'' where ID = 2';
```
This command returns:
| command | number | message |
| --- | --- | --- |
| `update lite set birth = '2012-07-15' where ID = 2` | 1 | Affected rows |
Let us verify it:
```
select * from tlite where ID = 2;
```
| ID | name | birth | rem |
| --- | --- | --- | --- |
| 2 | Foo | 2012-07-15 | No ID |
The syntax to send a command is rather strange and may seem unnatural. It is possible to use an easier syntax by defining a stored procedure such as:
```
create procedure send_cmd(cmd varchar(255))
MODIFIES SQL DATA
select * from crlite where command = cmd;
```
Now you can send commands like this:
```
call send_cmd('drop table lite');
```
This is possible only when sending a single command.
### Sending several commands together
Grouping commands uses an easier syntax and is faster because only one connection is made for all of them. To send several commands in one call, use the following syntax:
```
select * from crlite where command in (
'update lite set birth = ''2012-07-14'' where ID = 2',
'update lite set birth = ''2009-08-10'' where ID = 3');
```
When several commands are sent, the execution stops at the end of them or after a command that is in error. To continue after *n* errors, set the option maxerr=*n* (0 by default) in the option list.
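For example, a hypothetical variant of the *crlite* table shown above that tolerates up to two failing commands could be created as:

```
create table crlite2 (
command varchar(128) not null,
number int(5) not null flag=1,
message varchar(255) flag=2)
engine=connect table_type=odbc
connection='Driver=SQLite3 ODBC Driver;Database=test.sqlite3;NoWCHAR=yes'
option_list='Execsrc=1,Maxerr=2';
```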
**Note 1:** It is possible to specify the SRCDEF option when creating an EXECSRC table. It will be the command sent by default when a WHERE clause is not specified.
**Note 2:** Most data sources do not allow sending several commands separated by semi-colons.
**Note 3:** Quotes inside commands must be escaped. This can be avoided by using a different quoting character than the one used in the command.
**Note 4:** The sent command must obey the data source syntax.
**Note 5:** Sent commands apply in the specified database. However, they can address any table within this database, or belonging to another database using the name syntax *schema.tabname*.
Connecting to a Data Source
---------------------------
There are two ways to establish a connection to a data source:
1. Using SQLDriverConnect and a Connection String
2. Using SQLConnect and a Data Source Name (DSN)
The first way uses a Connection String whose components describe what is needed to establish the connection. It is the most complete way to do it and by default CONNECT uses it.
The second way is a simplified one in which ODBC is just given the name of a DSN that must have been defined to ODBC or unixODBC and that contains the necessary information to establish the connection. Only the user name and password can be specified outside of the DSN specification.
### Defining the Connection String
Using the first way, the connection string must be specified. This is sometimes the most difficult task when creating ODBC tables because, depending on the operating system and the data source, this string can widely differ.
The format of the ODBC Connection String is:
```
connection-string::= empty-string[;] | attribute[;] | attribute; connection-string
empty-string ::=
attribute ::= attribute-keyword=attribute-value | DRIVER=[{]attribute-value[}]
attribute-keyword ::= DSN | UID | PWD | driver-defined-attribute-keyword
attribute-value ::= character-string
driver-defined-attribute-keyword = identifier
```
Here character-string has zero or more characters; identifier has one or more characters; attribute-keyword is not case-sensitive; attribute-value may be case-sensitive. Due to the connection string grammar, keywords and attribute values that contain the characters `[]{}(),;?*=!@` should be avoided. The value of the DSN keyword cannot consist only of blanks and should not contain leading blanks. Because of the grammar of the system information, keywords and data source names cannot contain the backslash (\) character. Applications do not have to add braces around the attribute value after the DRIVER keyword unless the attribute contains a semicolon (;), in which case the braces are required. If the attribute value that the driver receives includes braces, the driver should not remove them; they should be part of the returned connection string.
### ODBC Defined Connection Attributes
The ODBC defined attributes are:
* DSN - the name of the data source to connect to. You must create this before attempting to refer to it. You create new DSNs through the ODBC Administrator (Windows), ODBCAdmin (unixODBC's GUI manager) or in the odbc.ini file.
* DRIVER - the name of the driver to connect to. You can use this in DSN-less connections.
* FILEDSN - the name of a file containing the connection attributes.
* UID/PWD - any username and password the database requires for authentication.
* SAVEFILE - request the DSN attributes are saved in this file.
Other attributes are DSN-dependent. The connection string can give the name of the driver in the DRIVER field or of the data source in the DSN field (take care to match the spelling and case) and has other fields that depend on the data source. When specifying a file, the DBQ field must give the **full** path and name of the file containing the table. Refer to the specific ODBC connector documentation for the exact syntax of the connection string.
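For instance, a DSN-less connection naming the driver directly might look like this (the driver name, server, database, and credentials are purely illustrative):

```
create table remtab engine=CONNECT table_type=ODBC
tabname='mytable'
connection='Driver={SQL Server};Server=myhost;Database=mydb;UID=myuser;PWD=mypass;';
```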
### Using a Predefined DSN
This is done by specifying the Boolean option “UseDSN” as yes or 1 in the option list. In addition, the string options “user” and “password” can optionally be specified in the option list.
When doing so, the connection string just contains the name of the predefined Data Source. For instance:
```
CREATE TABLE tlite ENGINE=CONNECT TABLE_TYPE=ODBC tabname='lite'
CONNECTION='SQLite3 Datasource'
OPTION_LIST='UseDSN=Yes,User=me,Password=mypass';
```
Note: the connection data source name (limited to 32 characters) should not be preceded by “DSN=”.
ODBC Tables on Linux/Unix
-------------------------
In order to use ODBC tables, you will need to have unixODBC installed. Additionally, you will need the ODBC driver for your foreign server's protocol. For example, for MS SQL Server or Sybase, you will need to have FreeTDS installed.
Make sure the user running mysqld (usually the mysql user) has permission to read the ODBC data source configuration and to load the ODBC drivers. If you get an error like this on Linux/Unix when using TABLE\_TYPE=ODBC:
```
Error Code: 1105 [unixODBC][Driver Manager]Can't open lib
'/usr/cachesys/bin/libcacheodbc.so' : file not found
```
You must make sure that the user running mysqld (usually "mysql") has enough permission to load the ODBC driver library. It can happen that the driver file does not have enough read privileges (use chmod to fix this), or loading is prevented by SELinux configuration (see below).
Try this command in a shell to check whether the driver has enough permission:
```
sudo -u mysql ldd /usr/cachesys/bin/libcacheodbc.so
```
#### SELinux
SELinux can cause various problems. If you think SELinux is causing problems, check the system log (e.g. /var/log/messages) or the audit log (e.g. /var/log/audit/audit.log).
**mysqld can't load some executable code, so it can't use the ODBC driver.**
Example error:
```
Error Code: 1105 [unixODBC][Driver Manager]Can't open lib
'/usr/cachesys/bin/libcacheodbc.so' : file not found
```
Audit log:
```
type=AVC msg=audit(1384890085.406:76): avc: denied { execute }
for pid=1433 comm="mysqld"
path="/usr/cachesys/bin/libcacheodbc.so" dev=dm-0 ino=3279212
scontext=unconfined_u:system_r:mysqld_t:s0
tcontext=unconfined_u:object_r:usr_t:s0 tclass=file
```
**mysqld can't open TCP sockets on some ports, so it can't connect to the foreign server.**
Example error:
```
ERROR 1296 (HY000): Got error 174 '[unixODBC][FreeTDS][SQL Server]Unable to connect to data source' from CONNECT
```
Audit log:
```
type=AVC msg=audit(1423094175.109:433): avc: denied { name_connect } for pid=3193 comm="mysqld" dest=1433 scontext=system_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:mssql_port_t:s0 tclass=tcp_socket
```
ODBC Catalog Information
------------------------
Depending on the version of the ODBC driver used, some additional information about the tables is available, such as the table QUALIFIER or OWNER (renamed CATALOG and SCHEMA, respectively, since ODBC version 3).
CATALOG is apparently rarely used by most data sources, but SCHEMA (formerly OWNER) is, and corresponds to the DATABASE information of MySQL.
The issue is that if no schema name is specified, some data sources return information for all schemas while some others only return the information of the “default” schema. In addition, the used “schema” or “database” is sometimes implied by the connection string and sometimes is not. Sometimes, it also can be included in a data source definition.
CONNECT offers two ways to specify this information:
1. When specified, the DBNAME create table option is regarded by ODBC tables as the SCHEMA name.
2. Table names can be specified as “*cat.sch.tab*”, allowing the catalog and schema information to be set.
When both are used, the qualified table name takes precedence over DBNAME. For instance:
| Tabname | DBname | Description |
| --- | --- | --- |
| test.t1 | | The t1 table of the test schema. |
| test.t1 | mydb | The t1 table of the test schema (test has precedence) |
| t1 | mydb | The t1 table of the mydb schema |
| %.%.% | | All tables in all catalogs and all schemas |
| t1 | | The t1 table in the default schema, or in all schemas, depending on the DSN |
| %.t1 | | The t1 table in all the schemas for the DSN |
| test.% | | All tables in the test schema |
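For instance, assuming a predefined data source named “ConnectEngineDSN” (all names here are illustrative), the schema can be given either way:

```sql
-- Using DBNAME as the schema name
CREATE TABLE t1odbc ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='t1'
  DBNAME='test' CONNECTION='DSN=ConnectEngineDSN';

-- Equivalent, using a qualified table name
CREATE TABLE t1odbc2 ENGINE=CONNECT TABLE_TYPE=ODBC
  TABNAME='test.t1' CONNECTION='DSN=ConnectEngineDSN';
```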
When creating a standard ODBC table, you should make sure only one source table is specified. Specifying more than one source table must be done only for CONNECT catalog tables (with CATFUNC=tables or columns).
In particular, when column definition is left to the Discovery feature, if tables with the same name are present in several schemas and the schema name is not specified, several columns with the same name will be generated. This will make the creation fail with a not very explicit error message.
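A CONNECT catalog table listing all matched source tables can be sketched as follows (the DSN name is a placeholder):

```sql
-- Lists the tables of all catalogs and schemas of the data source
CREATE TABLE alltabs ENGINE=CONNECT TABLE_TYPE=ODBC
  TABNAME='%.%.%' CATFUNC=tables
  CONNECTION='DSN=ConnectEngineDSN';
```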
Note: With some ODBC drivers, the DBNAME option or qualified table name is useless because the schema implied by the connection string or the definition of the data source has priority over the specified DBNAME.
### Table name case
Another issue when dealing with ODBC tables is the way table and column names are handled with regard to case.
For instance, Oracle follows the SQL standard here: it converts non-quoted identifiers to upper case. PostgreSQL is not standard: it converts identifiers to lower case. MySQL/MariaDB are not standard either: they preserve identifier case on Linux and convert it to lower case on Windows.
Keep this in mind if you fail to see a table or a column on an ODBC data source.
Non-ASCII Character Sets with Oracle
------------------------------------
When connecting through ODBC, the MariaDB Server operates as a client to the foreign database management system. As such, it requires that you configure MariaDB as you would configure native clients for the given database server.
In the case of connecting to Oracle, when using non-ASCII character sets, you need to properly set the NLS\_LANG environment variable before starting the MariaDB Server.
For instance, to test this on Oracle, create a table that contains a series of special characters:
```
CREATE TABLE t1 (letter VARCHAR(4000));
INSERT INTO t1 VALUES
(UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('C4'))),
(UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('C5'))),
(UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('C6')));
SELECT letter, RAWTOHEX(letter) FROM t1;
letter | RAWTOHEX(letter)
-------|-----------------
Ä | C4
Å | C5
Æ | C6
```
Then create a connecting table on MariaDB and attempt the same query:
```
CREATE TABLE t1 (
letter VARCHAR(4000))
ENGINE=CONNECT
DEFAULT CHARSET=utf8mb4
CONNECTION='DSN=YOUR_DSN'
TABLE_TYPE = 'ODBC'
DATA_CHARSET = latin1
TABNAME = 'YOUR_SCHEMA.T1';
SELECT letter, HEX(letter) FROM t1;
+--------+-------------+
| letter | HEX(letter) |
+--------+-------------+
| A | 41 |
| ? | 3F |
| ? | 3F |
+--------+-------------+
```
While the character set is defined in a way that satisfies MariaDB, it has not been defined for Oracle, that is, by setting the NLS\_LANG environment variable. As a result, Oracle is not providing the characters you want to MariaDB and CONNECT. The specific method of setting the NLS\_LANG variable can vary depending on your operating system or distribution. If you're experiencing this issue, check your OS documentation for more details on how to properly set environment variables.
### Using systemd
With Linux distributions that use [systemd](../systemd/index), you need to set the environment variable in the service file, (systemd doesn't read from the /etc/environment file).
This is done by setting the Environment option in the [Service] section. For instance:
```
# systemctl edit mariadb.service
[Service]
Environment=NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1
```
Then restart MariaDB,
```
# systemctl restart mariadb.service
```
You can now retrieve the appropriate characters from Oracle tables:
```
SELECT letter, HEX(letter) FROM t1;
+--------+-------------+
| letter | HEX(letter) |
+--------+-------------+
| Ä | C384 |
| Å | C385 |
| Æ | C386 |
+--------+-------------+
```
### Using Windows
Microsoft Windows doesn't ignore environment variables the way systemd does on Linux, but it does require that you set the NLS\_LANG environment variable on your system. In order to do so, you need to open an elevated command-prompt, (that is, Cmd.exe with administrative privileges).
From here, you can use the Setx command to set the variable. For instance,
```
Setx NLS_LANG GERMAN_GERMANY.WE8ISO8859P1 /m
```
Note: For more detail about this, see [MDEV-17501](https://jira.mariadb.org/browse/MDEV-17501).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Aggregate Functions
====================
The following functions (also called aggregate functions) can be used with the [GROUP BY](../group-by/index) clause:
| Title | Description |
| --- | --- |
| [Stored Aggregate Functions](../stored-aggregate-functions/index) | Custom aggregate functions. |
| [AVG](../avg/index) | Returns the average value. |
| [BIT\_AND](../bit_and/index) | Bitwise AND. |
| [BIT\_OR](../bit_or/index) | Bitwise OR. |
| [BIT\_XOR](../bit_xor/index) | Bitwise XOR. |
| [COUNT](../count/index) | Returns count of non-null values. |
| [COUNT DISTINCT](../count-distinct/index) | Returns count of number of different non-NULL values. |
| [GROUP\_CONCAT](../group_concat/index) | Returns string with concatenated values from a group. |
| [JSON\_ARRAYAGG](../json_arrayagg/index) | Returns a JSON array containing an element for each value in a given set of JSON or SQL values. |
| [JSON\_OBJECTAGG](../json_objectagg/index) | Returns a JSON object containing key-value pairs. |
| [MAX](../max/index) | Returns the maximum value. |
| [MIN](../min/index) | Returns the minimum value. |
| [STD](../std/index) | Population standard deviation. |
| [STDDEV](../stddev/index) | Population standard deviation. |
| [STDDEV\_POP](../stddev_pop/index) | Returns the population standard deviation. |
| [STDDEV\_SAMP](../stddev_samp/index) | Standard deviation. |
| [SUM](../sum/index) | Sum total. |
| [VARIANCE](../variance/index) | Population variance. |
| [VAR\_POP](../var_pop/index) | Population variance. |
| [VAR\_SAMP](../var_samp/index) | Returns the sample variance. |
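For instance, several of these functions can be combined in a single grouped query (the employees table here is hypothetical):

```sql
SELECT dept,
       COUNT(*)           AS headcount,
       AVG(salary)        AS avg_salary,
       MIN(salary)        AS min_salary,
       MAX(salary)        AS max_salary,
       GROUP_CONCAT(name) AS members
FROM employees
GROUP BY dept;
```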
JSON\_OBJECT
============
**MariaDB starting with [10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)**JSON functions were added in [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/).
Syntax
------
```
JSON_OBJECT([key, value[, key, value] ...])
```
Description
-----------
Returns a JSON object containing the given key/value pairs. The key/value list can be empty.
An error will occur if there are an odd number of arguments, or any key name is NULL.
Example
-------
```
SELECT JSON_OBJECT("id", 1, "name", "Monty");
+---------------------------------------+
| JSON_OBJECT("id", 1, "name", "Monty") |
+---------------------------------------+
| {"id": 1, "name": "Monty"} |
+---------------------------------------+
```
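Note that while a NULL key is an error, NULL values are allowed, and an empty argument list yields an empty object:

```sql
SELECT JSON_OBJECT();
-- {}

SELECT JSON_OBJECT('id', 1, 'name', NULL);
-- {"id": 1, "name": null}
```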
See also
--------
* [JSON\_MAKE\_OBJECT](../connect-json-table-type/index#json_make_object), the CONNECT storage engine function
MBR Definition
==============
Description
-----------
The MBR (Minimum Bounding Rectangle), or Envelope, is the bounding geometry formed by the minimum and maximum (X,Y) coordinates:
Examples
--------
```
((MINX MINY, MAXX MINY, MAXX MAXY, MINX MAXY, MINX MINY))
```
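The ENVELOPE() function returns the MBR of a geometry as a polygon in this form. A minimal sketch:

```sql
SELECT AsText(Envelope(GeomFromText('LINESTRING(1 1, 4 5)')));
-- POLYGON((1 1,4 1,4 5,1 5,1 1))
```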
mysqlcheck
==========
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-check` is a symlink to `mysqlcheck`.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mariadb-check` is the name of the tool, with `mysqlcheck` a symlink.
`mysqlcheck` is a maintenance tool that allows you to check, repair, analyze and optimize multiple tables from the command line.
It is essentially a commandline interface to the [CHECK TABLE](../check-table/index), [REPAIR TABLE](../repair-table/index), [ANALYZE TABLE](../analyze-table/index) and [OPTIMIZE TABLE](../optimize-table/index) commands, and so, unlike [myisamchk](../myisamchk/index) and [aria\_chk](../aria_chk/index), requires the server to be running.
This tool does not work with partitioned tables.
Using mysqlcheck
----------------
```
./client/mysqlcheck [OPTIONS] database [tables]
```
OR
```
./client/mysqlcheck [OPTIONS] --databases DB1 [DB2 DB3...]
```
OR
```
./client/mysqlcheck [OPTIONS] --all-databases
```
`mysqlcheck` can be used to CHECK (-c, -m, -C), REPAIR (-r), ANALYZE (-a), or OPTIMIZE (-o) tables. Some of the options (like -e or -q) can be used at the same time. Not all options are supported by all storage engines.
The -c, -r, -a and -o options are mutually exclusive.
The `--check` option is used by default if no other options are specified. You can change the default behavior by making a symbolic link to the binary, or by copying it somewhere under another name. The alternatives are:
| Name | Default behavior |
| --- | --- |
| `mysqlrepair` | The default option will be `-r` (`--repair`) |
| `mysqlanalyze` | The default option will be `-a` (`--analyze`) |
| `mysqloptimize` | The default option will be `-o` (`--optimize`) |
### Options
`mysqlcheck` supports the following options:
| Option | Description |
| --- | --- |
| `-A`, `--all-databases` | Check all the databases. This is the same as `--databases` with all databases selected. |
| `-1`, `--all-in-1` | Instead of issuing one query for each table, use one query per database, naming all tables in the database in a comma-separated list. |
| `-a`, `--analyze` | Analyze given tables. |
| `--auto-repair` | If a checked table is corrupted, automatically fix it. Repairing will be done after all tables have been checked. |
| `--character-sets-dir=name` | Directory where [character set](../data-types-character-sets-and-collations/index) files are installed. |
| `-c`, `--check` | Check table for errors. |
| `-C`, `--check-only-changed` | Check only tables that have changed since last check or haven't been closed properly. |
| `-g`, `--check-upgrade` | Check tables for version-dependent changes. May be used with `--auto-repair` to correct tables requiring version-dependent updates. Automatically enables the `--fix-db-names` and `--fix-table-names` options. Used [when upgrading](../upgrading-to-mariadb-from-mysql/index) |
| `--compress` | Compress all information sent between the client and server if both support compression. |
| `-B`, `--databases` | Check several databases. Note that normally *mysqlcheck* treats the first argument as a database name, and following arguments as table names. With this option, no tables are given, and all name arguments are regarded as database names. |
| `-#` , `--debug[=name]` | Output debug log. Often this is 'd:t:o,filename'. |
| `--debug-check` | Check memory and open file usage at exit. |
| `--debug-info` | Print some debug info at exit. |
| `--default-auth=plugin` | Default authentication client-side plugin to use. |
| `--default-character-set=name` | Set the default [character set](../data-types-character-sets-and-collations/index). |
| `-e`, `--extended` | If you are using this option with `--check`, it will ensure that the table is 100 percent consistent, but will take a long time. If you are using this option with `--repair`, it will force using the old, slow, repair with keycache method, instead of the much faster repair by sorting. |
| `-F`, `--fast` | Check only tables that haven't been closed properly. |
| `--fix-db-names` | Convert database names to the format used since MySQL 5.1. Only database names that contain special characters are affected. Used [when upgrading](../upgrading-to-mariadb-from-mysql/index) from an old MySQL version. |
| `--fix-table-names` | Convert table names (including [views](../views/index)) to the format used since MySQL 5.1. Only table names that contain special characters are affected. Used [when upgrading](../upgrading-to-mariadb-from-mysql/index) from an old MySQL version. |
| `--flush` | Flush each table after check. This is useful if you don't want to have the checked tables take up space in the caches after the check. |
| `-f`, `--force` | Continue even if we get an SQL error. |
| `-?`, `--help` | Display this help message and exit. |
| `-h name`, `--host=name` | Connect to the given host. |
| `-m`, `--medium-check` | Faster than extended-check, but only finds 99.99 percent of all errors. Should be good enough for most cases. |
| `-o`, `--optimize` | Optimize tables. |
| `-p`, `--password[=name]` | Password to use when connecting to the server. If you use the short option form (`-p`), you cannot have a space between the option and the password. If you omit the password value following the `--password` or `-p` option on the command line, mysqlcheck prompts for one. Specifying a password on the command line should be considered insecure. You can use an option file to avoid giving the password on the command line. |
| `-Z`, `--persistent` | When using ANALYZE TABLE (`--analyze`), uses the PERSISTENT FOR ALL option, which forces [Engine-independent Statistics](../engine-independent-table-statistics/index) for this table to be updated. Added in [MariaDB 10.1.10](https://mariadb.com/kb/en/mariadb-10110-release-notes/) |
| `-W`, `--pipe` | On Windows, connect to the server via a named pipe. This option applies only if the server supports named-pipe connections. |
| `--plugin-dir` | Directory for client-side plugins. |
| `-P num`, `--port=num` | Port number to use for connection or 0 for default to, in order of preference, my.cnf, $MYSQL\_TCP\_PORT, /etc/services, built-in default (3306). |
| `--process-tables` | Perform the requested operation (check, repair, analyze, optimize) on tables. Enabled by default. Use `--skip-process-tables` to disable. |
| `--process-views[=val]` | Perform the requested operation (only [CHECK VIEW](../check-view/index) or [REPAIR VIEW](../repair-view/index)). Possible values are NO, YES (correct the checksum and, if necessary, add the mariadb-version field), and UPGRADE\_FROM\_MYSQL (same as YES, and toggles the algorithm MERGE<->TEMPTABLE). |
| `--protocol=name` | The connection protocol (tcp, socket, pipe, memory) to use for connecting to the server. Useful when other connection parameters would cause a protocol to be used other than the one you want. |
| `-q`, `--quick` | If you are using this option with CHECK TABLE, it prevents the check from scanning the rows to check for wrong links. This is the fastest check. If you are using this option with REPAIR TABLE, it will try to repair only the index tree. This is the fastest repair method for a table. |
| `-r`, `--repair` | Can fix almost anything except unique keys that aren't unique. |
| `--shared-memory-base-name` | Shared-memory name to use for Windows connections using shared memory to a local server (started with the `--shared-memory` option). Case-sensitive. |
| `-s`, `--silent` | Print only error messages. |
| `--skip-database` | Don't process the database (case-sensitive) specified as argument. |
| `-S name`, `--socket=name` | For connections to localhost, the Unix socket file to use, or, on Windows, the name of the named pipe to use. |
| `--ssl` | Enables [TLS](../data-in-transit-encryption/index). TLS is also enabled even without setting this option when certain other TLS options are set. Starting with [MariaDB 10.2](../what-is-mariadb-102/index), the `--ssl` option will not enable [verifying the server certificate](../secure-connections-overview/index#server-certificate-verification) by default. In order to verify the server certificate, the user must specify the `--ssl-verify-server-cert` option. |
| `--ssl-ca=name` | Defines a path to a PEM file that should contain one or more X509 certificates for trusted Certificate Authorities (CAs) to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. See [Secure Connections Overview: Certificate Authorities (CAs)](../secure-connections-overview/index#certificate-authorities-cas) for more information. This option implies the `--ssl` option. |
| `--ssl-capath=name` | Defines a path to a directory that contains one or more PEM files that should each contain one X509 certificate for a trusted Certificate Authority (CA) to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. The directory specified by this option needs to be run through the [openssl rehash](https://www.openssl.org/docs/man1.1.1/man1/rehash.html) command. See [Secure Connections Overview: Certificate Authorities (CAs)](../secure-connections-overview/index#certificate-authorities-cas) for more information. This option is only supported if the client was built with OpenSSL or yaSSL. If the client was built with GnuTLS or Schannel, then this option is not supported. See [TLS and Cryptography Libraries Used by MariaDB](../tls-and-cryptography-libraries-used-by-mariadb/index) for more information about which libraries are used on which platforms. This option implies the `--ssl` option. |
| `--ssl-cert=name` | Defines a path to the X509 certificate file to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. This option implies the `--ssl` option. |
| `--ssl-cipher=name` | List of permitted ciphers or cipher suites to use for [TLS](../data-in-transit-encryption/index). This option implies the `--ssl` option. |
| `--ssl-crl=name` | Defines a path to a PEM file that should contain one or more revoked X509 certificates to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. See [Secure Connections Overview: Certificate Revocation Lists (CRLs)](../secure-connections-overview/index#certificate-revocation-lists-crls) for more information. This option is only supported if the client was built with OpenSSL or Schannel. If the client was built with yaSSL or GnuTLS, then this option is not supported. See [TLS and Cryptography Libraries Used by MariaDB](../tls-and-cryptography-libraries-used-by-mariadb/index) for more information about which libraries are used on which platforms. |
| `--ssl-crlpath=name` | Defines a path to a directory that contains one or more PEM files that should each contain one revoked X509 certificate to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. The directory specified by this option needs to be run through the [openssl rehash](https://www.openssl.org/docs/man1.1.1/man1/rehash.html) command. See [Secure Connections Overview: Certificate Revocation Lists (CRLs)](../secure-connections-overview/index#certificate-revocation-lists-crls) for more information. This option is only supported if the client was built with OpenSSL. If the client was built with yaSSL, GnuTLS, or Schannel, then this option is not supported. See [TLS and Cryptography Libraries Used by MariaDB](../tls-and-cryptography-libraries-used-by-mariadb/index) for more information about which libraries are used on which platforms. |
| `--ssl-key=name` | Defines a path to a private key file to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. This option implies the `--ssl` option. |
| `--ssl-verify-server-cert` | Enables [server certificate verification](../secure-connections-overview/index#server-certificate-verification). This option is disabled by default. |
| `--tables` | Overrides the `--databases` or `-B` option such that all name arguments following the option are regarded as table names. |
| `--use-frm` | For repair operations on MyISAM tables, get table structure from .frm file, so the table can be repaired even if the .MYI header is corrupted. |
| `-u`, `--user=name` | User for login if not current user. |
| `-v`, `--verbose` | Print info about the various stages. You can give this option several times to get even more information. See [mysqlcheck and verbose](#mysqlcheck-and-verbose), below. |
| `-V`, `--version` | Output version information and exit. |
| `--write-binlog` | Write ANALYZE, OPTIMIZE and REPAIR TABLE commands to the [binary log](../binary-log/index). Enabled by default; use `--skip-write-binlog` when commands should not be sent to replication slaves. |
### Option Files
In addition to reading options from the command-line, `mysqlcheck` can also read options from [option files](../configuring-mariadb-with-option-files/index). If an unknown option is provided to `mysqlcheck` in an option file, then it is ignored.
The following options relate to how MariaDB command-line tools handles option files. They must be given as the first argument on the command-line:
| Option | Description |
| --- | --- |
| `--print-defaults` | Print the program argument list and exit. |
| `--no-defaults` | Don't read default options from any option file. |
| `--defaults-file=#` | Only read default options from the given file #. |
| `--defaults-extra-file=#` | Read this file after the global files are read. |
| `--defaults-group-suffix=#` | In addition to the default option groups, also read option groups with this suffix. |
In [MariaDB 10.2](../what-is-mariadb-102/index) and later, `mysqlcheck` is linked with [MariaDB Connector/C](../about-mariadb-connector-c/index). However, MariaDB Connector/C does not yet handle the parsing of option files for this client. That is still performed by the server option file parsing code. See [MDEV-19035](https://jira.mariadb.org/browse/MDEV-19035) for more information.
#### Option Groups
`mysqlcheck` reads options from the following [option groups](../configuring-mariadb-with-option-files/index#option-groups) from [option files](../configuring-mariadb-with-option-files/index):
| Group | Description |
| --- | --- |
| `[mysqlcheck]` | Options read by `mysqlcheck`, which includes both MariaDB Server and MySQL Server. |
| `[mariadb-check]` | Options read by `mysqlcheck`. Available starting with [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/). |
| `[client]` | Options read by all MariaDB and MySQL [client programs](../clients-utilities/index), which includes both MariaDB and MySQL clients. For example, `mysqldump`. |
| `[client-server]` | Options read by all MariaDB [client programs](../clients-utilities/index) and the MariaDB Server. This is useful for options like socket and port, which is common between the server and the clients. |
| `[client-mariadb]` | Options read by all MariaDB [client programs](../clients-utilities/index). |
Notes
-----
### Default Values
To see the default values for the options and also to see the arguments you get from configuration files you can do:
```
./client/mysqlcheck --print-defaults
./client/mysqlcheck --help
```
### mysqlcheck and auto-repair
When running `mysqlcheck` with `--auto-repair` (as done by [mysql\_upgrade](../mysql_upgrade/index)), `mysqlcheck` will first check all tables and then in a separate phase repair those that failed the check.
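For instance, to check every database and then repair whatever failed the check (the connection options here are illustrative):

```
mysqlcheck --all-databases --check --auto-repair -u root -p
```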
### mysqlcheck and all-databases
`mysqlcheck --all-databases` will ignore the internal log tables [general\_log](../mysqlgeneral_log-table/index) and [slow\_log](../mysqlslow_log-table/index) as these can't be checked, repaired or optimized.
### mysqlcheck and verbose
Using one `--verbose` option will give you more information about what mysqlcheck is doing.
Using two `--verbose` options will also give you connection information.
If you use three `--verbose` options you will also get, on stdout, all [ALTER](../alter-table/index), [RENAME](../rename-table/index), and [CHECK](../check-table/index) commands that mysqlcheck executes.
MPolyFromText
=============
Syntax
------
```
MPolyFromText(wkt[,srid])
MultiPolygonFromText(wkt[,srid])
```
Description
-----------
Constructs a [MULTIPOLYGON](../multipolygon/index) value using its [WKT](../wkt-definition/index) representation and [SRID](../srid/index).
`MPolyFromText()` and `MultiPolygonFromText()` are synonyms.
Examples
--------
```
CREATE TABLE gis_multi_polygon (g MULTIPOLYGON);
SHOW FIELDS FROM gis_multi_polygon;
INSERT INTO gis_multi_polygon VALUES
(MultiPolygonFromText('MULTIPOLYGON(
((28 26,28 0,84 0,84 42,28 26),(52 18,66 23,73 9,48 6,52 18)),
((59 18,67 18,67 13,59 13,59 18)))')),
(MPolyFromText('MULTIPOLYGON(
((28 26,28 0,84 0,84 42,28 26),(52 18,66 23,73 9,48 6,52 18)),
((59 18,67 18,67 13,59 13,59 18)))')),
(MPolyFromWKB(AsWKB(MultiPolygon(Polygon(
LineString(Point(0, 3), Point(3, 3), Point(3, 0), Point(0, 3)))))));
```
Migrating to MariaDB
=====================
Migrating to MariaDB from another DBMS.
| Title | Description |
| --- | --- |
| [Migrating to MariaDB from MySQL](../moving-from-mysql/index) | Help with moving from MySQL to MariaDB, features and compatibility |
| [Migrating to MariaDB from SQL Server](../migrating-to-mariadb-from-sql-server/index) | Guide to help you migrate from SQL Server to MariaDB. |
| [Migrating to MariaDB from PostgreSQL](../migrating-to-mariadb-from-postgresql/index) | Information on migrating from PostgreSQL to MariaDB. |
| [Migrating to MariaDB from Oracle](../migrating-to-mariadb-from-oracle/index) | Help with migrating to MariaDB from Oracle |
CREATE TRIGGER
==============
Syntax
------
```
CREATE [OR REPLACE]
[DEFINER = { user | CURRENT_USER | role | CURRENT_ROLE }]
TRIGGER [IF NOT EXISTS] trigger_name trigger_time trigger_event
ON tbl_name FOR EACH ROW
[{ FOLLOWS | PRECEDES } other_trigger_name ]
trigger_stmt;
```
Description
-----------
This statement creates a new [trigger](../triggers/index). A trigger is a named database object that is associated with a table, and that activates when a particular event occurs for the table. The trigger becomes associated with the table named `tbl_name`, which must refer to a permanent table. You cannot associate a trigger with a `TEMPORARY` table or a view.
`CREATE TRIGGER` requires the [TRIGGER](../grant/index#table-privileges) privilege for the table associated with the trigger.
**MariaDB starting with [10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)**You can have multiple triggers for the same `trigger_time` and `trigger_event`.
For valid identifiers to use as trigger names, see [Identifier Names](../identifier-names/index).
### OR REPLACE
**MariaDB starting with [10.1.4](https://mariadb.com/kb/en/mariadb-1014-release-notes/)**If used and the trigger already exists, instead of an error being returned, the existing trigger will be dropped and replaced by the newly defined trigger.
### DEFINER
The `DEFINER` clause determines the security context to be used when checking access privileges at trigger activation time. Usage requires the [SUPER](../grant/index#super) privilege, or, from [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), the [SET USER](../grant/index#set-user) privilege.
### IF NOT EXISTS
**MariaDB starting with [10.1.4](https://mariadb.com/kb/en/mariadb-1014-release-notes/)**If the `IF NOT EXISTS` clause is used, the trigger will only be created if a trigger of the same name does not exist. If the trigger already exists, by default a warning will be returned.
### trigger\_time
`trigger_time` is the trigger action time. It can be `BEFORE` or `AFTER` to indicate that the trigger activates before or after each row to be modified.
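A `BEFORE` trigger is commonly used to adjust or validate a value before the row is written, via the `NEW` pseudo-row. A minimal sketch (the `sale` table and its columns are hypothetical):

```
CREATE TABLE sale (amount DECIMAL(10,2), created_at DATETIME);

-- Fill in a timestamp before the row is inserted
CREATE TRIGGER sale_bi BEFORE INSERT ON sale FOR EACH ROW
  SET NEW.created_at = IFNULL(NEW.created_at, NOW());
```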
### trigger\_event
`trigger_event` indicates the kind of statement that activates the trigger. The `trigger_event` can be one of the following:
* `INSERT`: The trigger is activated whenever a new row is inserted into the table; for example, through [INSERT](../insert-commands/index), [LOAD DATA](../load-data-infile/index), and [REPLACE](../replace/index) statements.
* `UPDATE`: The trigger is activated whenever a row is modified; for example, through [UPDATE](../update/index) statements.
* `DELETE`: The trigger is activated whenever a row is deleted from the table; for example, through [DELETE](../delete/index) and [REPLACE](../replace/index) statements. However, `DROP TABLE` and `TRUNCATE` statements on the table do not activate this trigger, because they do not use `DELETE`. Dropping a partition does not activate `DELETE` triggers, either.
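A consequence of the rules above: a `REPLACE` that overwrites an existing row activates both the `DELETE` and the `INSERT` triggers, while `TRUNCATE TABLE` activates neither. A sketch with illustrative table and trigger names:

```
CREATE TABLE t (id INT PRIMARY KEY, v INT);
CREATE TABLE log (msg VARCHAR(20));

CREATE TRIGGER t_ad AFTER DELETE ON t FOR EACH ROW
  INSERT INTO log VALUES ('delete');
CREATE TRIGGER t_ai AFTER INSERT ON t FOR EACH ROW
  INSERT INTO log VALUES ('insert');

INSERT INTO t VALUES (1, 10);   -- activates the INSERT trigger
REPLACE INTO t VALUES (1, 20);  -- activates both DELETE and INSERT triggers
TRUNCATE TABLE t;               -- activates no triggers
```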
#### FOLLOWS/PRECEDES other\_trigger\_name
**MariaDB starting with [10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)**The `FOLLOWS other_trigger_name` and `PRECEDES other_trigger_name` options were added in [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/) as part of supporting multiple triggers per action time, using the same syntax as MySQL 5.7.
`FOLLOWS` adds the new trigger after another trigger, while `PRECEDES` adds it before. If neither option is used, the new trigger is added last for the given event and action time.
`FOLLOWS` and `PRECEDES` are not stored in the trigger definition. However, the trigger order is guaranteed not to change over time. [mariadb-dump/mysqldump](../mysqldump/index) and other backup methods will not change trigger order. You can verify the trigger order from the `ACTION_ORDER` column in the [INFORMATION\_SCHEMA.TRIGGERS](../information-schema-triggers-table/index) table.
```
SELECT trigger_name, action_order FROM information_schema.triggers
WHERE event_object_table='t1';
```
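For example, to place a new validation trigger ahead of an existing trigger for the same event and action time (both trigger names here are hypothetical):

```
CREATE TRIGGER validate_animal
  BEFORE INSERT ON animals FOR EACH ROW
  PRECEDES audit_animal
  SET NEW.name = TRIM(NEW.name);

SELECT trigger_name, action_order FROM information_schema.triggers
  WHERE event_object_table='animals';
```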
### Atomic DDL
**MariaDB starting with [10.6.1](https://mariadb.com/kb/en/mariadb-1061-release-notes/)**[MariaDB 10.6.1](https://mariadb.com/kb/en/mariadb-1061-release-notes/) supports [Atomic DDL](../atomic-ddl/index) and `CREATE TRIGGER` is atomic.
Examples
--------
```
CREATE DEFINER=`root`@`localhost` TRIGGER increment_animal
AFTER INSERT ON animals FOR EACH ROW
UPDATE animal_count SET animal_count.animals = animal_count.animals+1;
```
OR REPLACE and IF NOT EXISTS
```
CREATE DEFINER=`root`@`localhost` TRIGGER increment_animal
AFTER INSERT ON animals FOR EACH ROW
UPDATE animal_count SET animal_count.animals = animal_count.animals+1;
ERROR 1359 (HY000): Trigger already exists
CREATE OR REPLACE DEFINER=`root`@`localhost` TRIGGER increment_animal
AFTER INSERT ON animals FOR EACH ROW
UPDATE animal_count SET animal_count.animals = animal_count.animals+1;
Query OK, 0 rows affected (0.12 sec)
CREATE DEFINER=`root`@`localhost` TRIGGER IF NOT EXISTS increment_animal
AFTER INSERT ON animals FOR EACH ROW
UPDATE animal_count SET animal_count.animals = animal_count.animals+1;
Query OK, 0 rows affected, 1 warning (0.00 sec)
SHOW WARNINGS;
+-------+------+------------------------+
| Level | Code | Message |
+-------+------+------------------------+
| Note | 1359 | Trigger already exists |
+-------+------+------------------------+
1 row in set (0.00 sec)
```
See Also
--------
* [Identifier Names](../identifier-names/index)
* [Trigger Overview](../trigger-overview/index)
* [DROP TRIGGER](../drop-trigger/index)
* [Information Schema TRIGGERS Table](../information-schema-triggers-table/index)
* [SHOW TRIGGERS](../show-triggers/index)
* [SHOW CREATE TRIGGER](../show-create-trigger/index)
* [Trigger Limitations](../trigger-limitations/index)
10.3.6-gamma Release Upgrade Tests
==================================
### Tested revision
560743198604caf677c543db9719cef871df09ce
### Test date
2018-04-17 10:50:21
### Summary
One new (previously unknown) bug: [MDEV-15912](https://jira.mariadb.org/browse/MDEV-15912). Known bugs: [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103), [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094). One failure was identified as a deadlock of as-yet-unknown origin; it could be a side effect of the known bugs. A few upgrades from MySQL and old MariaDB versions fail because the old versions hang on shutdown.
### Details
| type | pagesize | OLD version | file format | encrypted | compressed | | NEW version | file format | encrypted | compressed | readonly | result | notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| recovery | 16 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 16 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 4 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 4 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 32 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 32 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 64 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 64 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 8 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| recovery | 8 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| recovery | 16 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 16 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 4 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 4 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 32 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 32 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 64 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 64 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| recovery | 8 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| recovery | 8 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 16 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 4 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 32 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 64 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 8 | 10.3.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo-recovery | 16 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 4 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 32 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 64 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 8 | 10.3.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo-recovery | 16 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| undo-recovery | 4 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| undo-recovery | 32 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| undo-recovery | 64 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| undo-recovery | 8 | 10.3.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| undo-recovery | 16 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 4 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 32 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 64 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo-recovery | 8 | 10.3.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| crash | 16 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 16 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 4 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 4 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 8 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 8 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 16 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 16 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 4 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 4 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 8 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 8 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.3.5 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.3.5 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.3.5 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.3.5 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| crash | 16 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 16 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 4 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 4 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 8 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 8 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 16 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 16 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 4 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 4 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 8 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 8 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.2.14 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.2.14 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.2.14 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.2.14 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 16 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 4 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 32 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 64 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| normal | 8 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13094](https://jira.mariadb.org/browse/MDEV-13094)(1) |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | KNOWN\_BUGS [MDEV-13103](https://jira.mariadb.org/browse/MDEV-13103)(1) |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 16 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 4 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 32 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 64 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| crash | 8 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.2.6 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 16 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 4 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 8 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | **FAIL** | SERVER\_DEADLOCKED |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 64 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.1.32 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 16 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 16 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 4 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 4 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 32 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 64 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 8 | 10.1.13 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| normal | 8 | 10.1.13 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| normal | 16 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 16 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 4 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 4 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 32 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 64 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 8 | 10.1.10 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| normal | 8 | 10.1.10 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | on | - | => | 10.3.6 (inbuilt) | Barracuda | on | - | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | - | - | => | 10.3.6 (inbuilt) | Barracuda | - | - | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | on | zlib | => | 10.3.6 (inbuilt) | Barracuda | on | zlib | - | OK | |
| undo | 16 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 4 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 32 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | **FAIL** | UPGRADE\_FAILURE |
| undo | 64 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| undo | 8 | 10.1.22 (inbuilt) | Barracuda | - | zlib | => | 10.3.6 (inbuilt) | Barracuda | - | zlib | - | OK | |
| normal | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| undo | 4 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| undo | 8 | 10.0.34 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 10.0.14 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 10.0.14 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.14 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 10.0.14 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 4 | 10.0.14 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 10.0.14 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.18 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | **FAIL** | TEST\_FAILURE |
| undo | 4 | 10.0.18 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | **FAIL** | TEST\_FAILURE |
| undo | 8 | 10.0.18 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 10.0.18 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | **FAIL** | TEST\_FAILURE |
| undo | 4 | 10.0.18 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | **FAIL** | TEST\_FAILURE |
| undo | 8 | 10.0.18 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| undo | 4 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| undo | 32 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| undo | 64 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| undo | 8 | 5.7.21 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
| normal | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| normal | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | OK | |
| undo | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | on | - | - | **FAIL** | TEST\_FAILURE |
| undo | 16 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | **FAIL** | TEST\_FAILURE |
| undo | 4 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | **FAIL** | TEST\_FAILURE |
| undo | 8 | 5.6.39 (inbuilt) | | - | - | => | 10.3.6 (inbuilt) | | - | - | - | OK | |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb-embedded
================
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-embedded` is a symlink to `mysql_embedded`, the embedded server.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mariadb-embedded` is the name of the tool, with `mysql_embedded` a symlink.
See [mysql\_embedded](../mysql_embedded/index) for details.
ps\_is\_thread\_instrumented
============================
Syntax
------
```
sys.ps_is_thread_instrumented(connection_id)
```
Description
-----------
`ps_is_thread_instrumented` is a [stored function](../stored-functions/index) available with the [Sys Schema](../sys-schema/index) that returns whether or not Performance Schema instrumentation for the given *connection\_id* is enabled.
* `YES` - instrumentation is enabled
* `NO` - instrumentation is not enabled
* `UNKNOWN` - the connection ID is unknown
* `NULL` - the *connection\_id* argument is NULL
Examples
--------
```
SELECT sys.ps_is_thread_instrumented(CONNECTION_ID());
+------------------------------------------------+
| sys.ps_is_thread_instrumented(CONNECTION_ID()) |
+------------------------------------------------+
| YES |
+------------------------------------------------+
SELECT sys.ps_is_thread_instrumented(2042);
+-------------------------------------+
| sys.ps_is_thread_instrumented(2042) |
+-------------------------------------+
| UNKNOWN |
+-------------------------------------+
SELECT sys.ps_is_thread_instrumented(NULL);
+-------------------------------------+
| sys.ps_is_thread_instrumented(NULL) |
+-------------------------------------+
| NULL |
+-------------------------------------+
```
ColumnStore Window Functions
============================
Introduction
============
MariaDB ColumnStore provides support for window functions broadly following the SQL 2003 specification. A window function allows for calculations relating to a window of data surrounding the current row in a result set. This capability provides for simplified queries in support of common business questions such as cumulative totals, rolling averages, and top 10 lists.
Aggregate functions can be used as window functions; however, they differ in behavior from a GROUP BY query because the rows remain ungrouped. This makes, for example, cumulative sums and rolling averages possible.
Two key concepts for window functions are Partition and Frame:
* A Partition is a group of rows, or window, that have the same value for a specific column. For example, a Partition can be created over a time period such as a quarter, or over lookup values.
* The Frame for each row is a subset of the row's Partition. The Frame is typically dynamic, allowing for a sliding frame of rows within the Partition. The Frame determines the range of rows for the window function. A Frame could be defined as anything from the last X rows and next Y rows all the way up to the entire Partition.
Window functions are applied after joins, group by, and having clauses are calculated.
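As an illustrative sketch of this difference (using the `opportunities` table defined in the Examples section of this article), a GROUP BY query collapses rows while a window function keeps every detail row:

```
-- GROUP BY collapses the result to one row per owner
select owner, sum(amount) total
from opportunities
group by owner;

-- A window function keeps every row, adding the owner total alongside
select owner, accountName, amount,
       sum(amount) over (partition by owner) ownerTotal
from opportunities;
```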
Syntax
======
A window function is applied in the select clause using the following syntax:
```
function_name ([expression [, expression ... ]]) OVER ( window_definition )
```
where *window\_definition* is defined as:
```
[ PARTITION BY expression [, ...] ]
[ ORDER BY expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [, ...] ]
[ frame_clause ]
```
PARTITION BY:
* Divides the window result set into groups based on one or more *expressions*.
* An expression may be a constant, column, or non-window-function expression.
* A query is not limited to a single partition by clause. Different partition clauses can be used across different window function applications.
* The partition by columns do not need to be in the select list but do need to be available from the query result set.
* If there is no PARTITION BY clause, all rows of the result set define the group.
ORDER BY
* Defines the ordering of values within the partition.
* Values can be ordered by multiple keys, each of which may be a constant, column, or non-window-function expression.
* The order by columns do not need to be in the select list but need to be available from the query result set.
* Use of a select column alias from the query is not supported.
* ASC (default) and DESC options allow for ordering ascending or descending.
* NULLS FIRST and NULLS LAST options specify whether null values come first or last in the ordering sequence. NULLS FIRST is the default for ASC order, and NULLS LAST is the default for DESC order.
and the optional *frame\_clause* is defined as:
```
{ RANGE | ROWS } frame_start
{ RANGE | ROWS } BETWEEN frame_start AND frame_end
```
and the optional *frame\_start* and *frame\_end* are defined as (value being a numeric expression):
```
UNBOUNDED PRECEDING
value PRECEDING
CURRENT ROW
value FOLLOWING
UNBOUNDED FOLLOWING
```
RANGE/ROWS:
* Defines the windowing clause: the set of rows used to calculate a given row's window function result.
* Requires an ORDER BY clause to define the row order for the window.
* ROWS specifies the window in physical units, i.e. result-set rows; the value must be a constant or expression evaluating to a positive numeric value.
* RANGE specifies the window as a logical offset. If the expression evaluates to a numeric value, then the ORDER BY expression must be a numeric or DATE type. If the expression evaluates to an interval value, then the ORDER BY expression must be a DATE data type.
* UNBOUNDED PRECEDING indicates the window starts at the first row of the partition.
* UNBOUNDED FOLLOWING indicates the window ends at the last row of the partition.
* CURRENT ROW specifies that the window starts or ends at the current row or value.
* If omitted, the default is ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
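For example, a sliding frame of the current row and the two preceding rows yields a three-row moving average. This is a sketch against the `opportunities` table defined in the Examples section below:

```
select closeDate, amount,
       avg(amount) over (order by closeDate
                         rows between 2 preceding and current row) movingAvg
from opportunities
where stageName = 'ClosedWon';
```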
Supported Functions
===================
| Function | Description |
| --- | --- |
| AVG() | The average of all input values. |
| COUNT() | Number of input rows. |
| CUME\_DIST() | Calculates the cumulative distribution, or relative rank, of the current row to other rows in the same partition. Number of peer or preceding rows / number of rows in partition. |
| DENSE\_RANK() | Ranks items in a group leaving no gaps in ranking sequence when there are ties. |
| FIRST\_VALUE() | The value evaluated at the row that is the first row of the window frame (counting from 1); null if no such row. |
| LAG() | The value evaluated at the row that is offset rows before the current row within the partition; if there is no such row, instead return default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null. LAG provides access to more than one row of a table at the same time without a self-join. Given a series of rows returned from a query and a position of the cursor, LAG provides access to a row at a given physical offset prior to that position. |
| LAST\_VALUE() | The value evaluated at the row that is the last row of the window frame (counting from 1); null if no such row. |
| LEAD() | Provides access to a row at a given physical offset beyond that position. Returns value evaluated at the row that is offset rows after the current row within the partition; if there is no such row, instead return default. Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null. |
| MAX() | Maximum value of expression across all input values. |
| MEDIAN() | An inverse distribution function that assumes a continuous distribution model. It takes a numeric or datetime value and returns the middle value or an interpolated value that would be the middle value once the values are sorted. Nulls are ignored in the calculation. |
| MIN() | Minimum value of expression across all input values. |
| NTH\_VALUE() | The value evaluated at the row that is the nth row of the window frame (counting from 1); null if no such row. |
| NTILE() | Divides an ordered data set into a number of buckets indicated by expr and assigns the appropriate bucket number to each row. The buckets are numbered 1 through expr. The expr value must resolve to a positive constant for each partition. Integer ranging from 1 to the argument value, dividing the partition as equally as possible. |
| PERCENT\_RANK() | Relative rank of the current row: (rank - 1) / (total rows - 1). |
| PERCENTILE\_CONT() | An inverse distribution function that assumes a continuous distribution model. It takes a percentile value and a sort specification, and returns an interpolated value that would fall into that percentile value with respect to the sort specification. Nulls are ignored in the calculation. |
| PERCENTILE\_DISC() | An inverse distribution function that assumes a discrete distribution model. It takes a percentile value and a sort specification and returns an element from the set. Nulls are ignored in the calculation. |
| RANK() | Rank of the current row with gaps; same as the ROW\_NUMBER of its first peer. |
| ROW\_NUMBER() | Number of the current row within its partition, counting from 1. |
| STDDEV() STDDEV\_POP() | Computes the population standard deviation and returns the square root of the population variance. |
| STDDEV\_SAMP() | Computes the cumulative sample standard deviation and returns the square root of the sample variance. |
| SUM() | Sum of expression across all input values. |
| VARIANCE() VAR\_POP() | Population variance of the input values (square of the population standard deviation). |
| VAR\_SAMP() | Sample variance of the input values (square of the sample standard deviation). |
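As a sketch of LAG() and LEAD() (column names taken from the example schema below), each row can be compared with its neighbors without a self-join:

```
select closeDate, amount,
       lag(amount)  over (order by closeDate) prevAmount,
       lead(amount) over (order by closeDate) nextAmount
from opportunities
where stageName = 'ClosedWon';
```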
Examples
========
Example Schema
--------------
The examples are all based on the following simplified sales opportunity table:
```
create table opportunities (
id int,
accountName varchar(20),
name varchar(128),
owner varchar(7),
amount decimal(10,2),
closeDate date,
stageName varchar(11)
) engine=columnstore;
```
Some example values are (thanks to <https://www.mockaroo.com> for sample data generation):
| id | accountName | name | owner | amount | closeDate | stageName |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Browseblab | Multi-lateral executive function | Bob | 26444.86 | 2016-10-20 | Negotiating |
| 2 | Mita | Organic demand-driven benchmark | Maria | 477878.41 | 2016-11-28 | ClosedWon |
| 3 | Miboo | De-engineered hybrid groupware | Olivier | 80181.78 | 2017-01-05 | ClosedWon |
| 4 | Youbridge | Enterprise-wide bottom-line Graphic Interface | Chris | 946245.29 | 2016-07-02 | ClosedWon |
| 5 | Skyba | Reverse-engineered fresh-thinking standardization | Maria | 696241.82 | 2017-02-17 | Negotiating |
| 6 | Eayo | Fundamental well-modulated artificial intelligence | Bob | 765605.52 | 2016-08-27 | Prospecting |
| 7 | Yotz | Extended secondary infrastructure | Chris | 319624.20 | 2017-01-06 | ClosedLost |
| 8 | Oloo | Configurable web-enabled data-warehouse | Chris | 321016.26 | 2017-03-08 | ClosedLost |
| 9 | Kaymbo | Multi-lateral web-enabled definition | Bob | 690881.01 | 2017-01-02 | Developing |
| 10 | Rhyloo | Public-key coherent infrastructure | Chris | 965477.74 | 2016-11-07 | Prospecting |
The schema, sample data, and queries are available as an attachment to this article.
Cumulative Sum and Running Max Example
--------------------------------------
Window functions can be used to achieve cumulative / running calculations on a detail report. In this case, a won-opportunity report for a 7-day period adds columns to show the accumulated won amount as well as the current highest opportunity amount among preceding rows.
```
select owner,
accountName,
CloseDate,
amount,
sum(amount) over (order by CloseDate rows between unbounded preceding and current row) cumeWon,
max(amount) over (order by CloseDate rows between unbounded preceding and current row) runningMax
from opportunities
where stageName='ClosedWon'
and closeDate >= '2016-10-02' and closeDate <= '2016-10-09'
order by CloseDate;
```
with example results:
| owner | accountName | CloseDate | amount | cumeWon | runningMax |
| --- | --- | --- | --- | --- | --- |
| Bill | Babbleopia | 2016-10-02 | 437636.47 | 437636.47 | 437636.47 |
| Bill | Thoughtworks | 2016-10-04 | 146086.51 | 583722.98 | 437636.47 |
| Olivier | Devpulse | 2016-10-05 | 834235.93 | 1417958.91 | 834235.93 |
| Chris | Linkbridge | 2016-10-07 | 539977.45 | 2458738.65 | 834235.93 |
| Olivier | Trupe | 2016-10-07 | 500802.29 | 1918761.20 | 834235.93 |
| Bill | Latz | 2016-10-08 | 857254.87 | 3315993.52 | 857254.87 |
| Chris | Avamm | 2016-10-09 | 699566.86 | 4015560.38 | 857254.87 |
Partitioned Cumulative Sum and Running Max Example
--------------------------------------------------
The above example can be partitioned, so that the window functions operate over a particular field grouping, such as owner, and accumulate within that grouping. This is achieved by adding "partition by <columns>" to the window function clause.
```
select owner,
accountName,
CloseDate,
amount,
sum(amount) over (partition by owner order by CloseDate rows between unbounded preceding and current row) cumeWon,
max(amount) over (partition by owner order by CloseDate rows between unbounded preceding and current row) runningMax
from opportunities
where stageName='ClosedWon'
and closeDate >= '2016-10-02' and closeDate <= '2016-10-09'
order by owner, CloseDate;
```
with example results:
| owner | accountName | CloseDate | amount | cumeWon | runningMax |
| --- | --- | --- | --- | --- | --- |
| Bill | Babbleopia | 2016-10-02 | 437636.47 | 437636.47 | 437636.47 |
| Bill | Thoughtworks | 2016-10-04 | 146086.51 | 583722.98 | 437636.47 |
| Bill | Latz | 2016-10-08 | 857254.87 | 1440977.85 | 857254.87 |
| Chris | Linkbridge | 2016-10-07 | 539977.45 | 539977.45 | 539977.45 |
| Chris | Avamm | 2016-10-09 | 699566.86 | 1239544.31 | 699566.86 |
| Olivier | Devpulse | 2016-10-05 | 834235.93 | 834235.93 | 834235.93 |
| Olivier | Trupe | 2016-10-07 | 500802.29 | 1335038.22 | 834235.93 |
Ranking / Top Results
---------------------
The rank window function allows for ranking, or assigning a numeric order value, based on the window function definition. The Rank() function assigns the same value to ties (equal values) and skips the next rank value. The Dense\_Rank() function behaves similarly, except that the next consecutive number is used after a tie rather than being skipped. The Row\_Number() function provides a unique ordering value for every row. The example query shows the Rank() function being applied to rank sales reps by the number of opportunities won in Q4 2016.
```
select owner,
wonCount,
rank() over (order by wonCount desc) rank
from (
select owner,
count(*) wonCount
from opportunities
where stageName='ClosedWon'
and closeDate >= '2016-10-01' and closeDate < '2016-12-31'
group by owner
) t
order by rank;
```
with example results (note that the query is technically incorrect in using closeDate < '2016-12-31', which excludes the last day of the quarter; however, this creates a tie scenario for illustrative purposes):
| owner | wonCount | rank |
| --- | --- | --- |
| Bill | 19 | 1 |
| Chris | 15 | 2 |
| Maria | 14 | 3 |
| Bob | 14 | 3 |
| Olivier | 10 | 5 |
If the dense\_rank function were used, the rank values would be 1, 2, 3, 3, 4; with the row\_number function, the values would be 1, 2, 3, 4, 5.
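To see all three ranking functions side by side, the same subquery can be reused — a sketch against the sample data (alias names are illustrative):

```
select owner,
       wonCount,
       rank() over (order by wonCount desc) rnk,
       dense_rank() over (order by wonCount desc) denseRnk,
       row_number() over (order by wonCount desc) rowNum
from (
  select owner,
         count(*) wonCount
  from opportunities
  where stageName='ClosedWon'
  and closeDate >= '2016-10-01' and closeDate < '2016-12-31'
  group by owner
) t
order by rnk;
```

With the tie between Maria and Bob, the three columns would differ only on the tied rows and the row that follows them.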
First and Last Values
---------------------
The first\_value and last\_value functions allow determining the first and last values of a given range. Combined with a group by, this allows summarizing opening and closing values. The example shows a more complex case, where detailed information is presented for the first and last opportunity of each quarter.
```
select a.year,
a.quarter,
f.accountName firstAccountName,
f.owner firstOwner,
f.amount firstAmount,
l.accountName lastAccountName,
l.owner lastOwner,
l.amount lastAmount
from (
select year,
quarter,
min(firstId) firstId,
min(lastId) lastId
from (
select year(closeDate) year,
quarter(closeDate) quarter,
first_value(id) over (partition by year(closeDate), quarter(closeDate) order by closeDate rows between unbounded preceding and current row) firstId,
last_value(id) over (partition by year(closeDate), quarter(closeDate) order by closeDate rows between current row and unbounded following) lastId
from opportunities where stageName='ClosedWon'
) t
group by year, quarter order by year,quarter
) a
join opportunities f on a.firstId = f.id
join opportunities l on a.lastId = l.id
order by year, quarter;
```
with example results:
| year | quarter | firstAccountName | firstOwner | firstAmount | lastAccountName | lastOwner | lastAmount |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2016 | 3 | Skidoo | Bill | 523295.07 | Skipstorm | Bill | 151420.86 |
| 2016 | 4 | Skimia | Chris | 961513.59 | Avamm | Maria | 112493.65 |
| 2017 | 1 | Yombu | Bob | 536875.51 | Skaboo | Chris | 270273.08 |
Prior and Next Example
----------------------
Sometimes it is useful to understand the previous and next values in the context of a given row. The lag and lead window functions provide this capability. By default the offset is one, providing the prior or next value, but a larger offset can also be specified. The example query is a report of opportunities by account name showing the opportunity amount, along with the prior and next opportunity amounts for that account by close date.
```
select accountName,
closeDate,
amount currentOppAmount,
lag(amount) over (partition by accountName order by closeDate) priorAmount, lead(amount) over (partition by accountName order by closeDate) nextAmount
from opportunities
order by accountName, closeDate
limit 9;
```
with example results:
| accountName | closeDate | currentOppAmount | priorAmount | nextAmount |
| --- | --- | --- | --- | --- |
| Abata | 2016-09-10 | 645098.45 | NULL | 161086.82 |
| Abata | 2016-10-14 | 161086.82 | 645098.45 | 350235.75 |
| Abata | 2016-12-18 | 350235.75 | 161086.82 | 878595.89 |
| Abata | 2016-12-31 | 878595.89 | 350235.75 | 922322.39 |
| Abata | 2017-01-21 | 922322.39 | 878595.89 | NULL |
| Abatz | 2016-10-19 | 795424.15 | NULL | NULL |
| Agimba | 2016-07-09 | 288974.84 | NULL | 914461.49 |
| Agimba | 2016-09-07 | 914461.49 | 288974.84 | 176645.52 |
| Agimba | 2016-09-20 | 176645.52 | 914461.49 | NULL |
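To look more than one row away, an explicit offset can be passed as a second argument — a sketch against the same table, comparing each amount with the one two opportunities earlier for the same account (the alias is illustrative):

```
select accountName,
       closeDate,
       amount currentOppAmount,
       lag(amount, 2) over (partition by accountName order by closeDate) twoPriorAmount
from opportunities
order by accountName, closeDate;
```

Rows without two preceding opportunities for the account would show NULL in the twoPriorAmount column.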
Quartiles Example
-----------------
The NTile window function breaks a data set up into portions, assigning a numeric value to each portion of the range. NTile(4) breaks the data up into quartiles (4 sets). The example query produces a report over all opportunities, summarizing the quartile boundaries of the amount values.
```
select t.quartile,
min(t.amount) min,
max(t.amount) max
from (
select amount,
ntile(4) over (order by amount asc) quartile
from opportunities
where closeDate >= '2016-10-01' and closeDate <= '2016-12-31'
) t
group by quartile
order by quartile;
```
With example results:
| quartile | min | max |
| --- | --- | --- |
| 1 | 6337.15 | 287634.01 |
| 2 | 288796.14 | 539977.45 |
| 3 | 540070.04 | 748727.51 |
| 4 | 753670.77 | 998864.47 |
Percentile Example
------------------
The percentile functions have a slightly different syntax from other window functions, as can be seen in the example below. These functions can only be applied to numeric values. The argument to the function is the percentile to evaluate. Following 'within group' is the sort expression, which indicates the sort column and, optionally, the order. Finally, after 'over' is an optional partition by clause; if no partitioning is wanted, use 'over ()'. The example below uses the value 0.5 to calculate the median opportunity amount in the rows. The two functions sometimes return different values because percentile\_cont returns the average of the two middle rows for an even data set, while percentile\_disc returns the first value encountered in the sort.
```
select owner,
accountName,
CloseDate,
amount,
percentile_cont(0.5) within group (order by amount) over (partition by owner) pct_cont,
percentile_disc(0.5) within group (order by amount) over (partition by owner) pct_disc
from opportunities
where stageName='ClosedWon'
and closeDate >= '2016-10-02' and closeDate <= '2016-10-09'
order by owner, CloseDate;
```
With example results:
| owner | accountName | CloseDate | amount | pct\_cont | pct\_disc |
| --- | --- | --- | --- | --- | --- |
| Bill | Babbleopia | 2016-10-02 | 437636.47 | 437636.4700000000 | 437636.47 |
| Bill | Thoughtworks | 2016-10-04 | 146086.51 | 437636.4700000000 | 437636.47 |
| Bill | Latz | 2016-10-08 | 857254.87 | 437636.4700000000 | 437636.47 |
| Chris | Linkbridge | 2016-10-07 | 539977.45 | 619772.1550000000 | 539977.45 |
| Chris | Avamm | 2016-10-09 | 699566.86 | 619772.1550000000 | 539977.45 |
| Olivier | Devpulse | 2016-10-05 | 834235.93 | 667519.1100000000 | 500802.29 |
| Olivier | Trupe | 2016-10-07 | 500802.29 | 667519.1100000000 | 500802.29 |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Character Sets and Collations
==============================
Simply put, a character set defines how and which characters are stored to support a particular language or languages. A collation, on the other hand, defines the order used when comparing strings (i.e. the position of any given character within the alphabet of that language).
| Title | Description |
| --- | --- |
| [Character Set and Collation Overview](../character-set-and-collation-overview/index) | Introduction to character sets and collations. |
| [Supported Character Sets and Collations](../supported-character-sets-and-collations/index) | MariaDB supports the following character sets and collations. |
| [Setting Character Sets and Collations](../setting-character-sets-and-collations/index) | Changing from the default character set and collation. |
| [Unicode](../unicode/index) | Unicode support. |
| [SHOW CHARACTER SET](../show-character-set/index) | Available character sets. |
| [SHOW COLLATION](../show-collation/index) | Supported collations. |
| [Information Schema CHARACTER\_SETS Table](../information-schema-character_sets-table/index) | Supported character sets. |
| [Information Schema COLLATIONS Table](../information-schema-collations-table/index) | Supported collations. |
| [Internationalization and Localization](../internationalization-and-localization/index) | Character sets, collations, time zones and locales. |
| [SET CHARACTER SET](../set-character-set/index) | Maps all strings sent between the current client and the server with the given mapping. |
| [SET NAMES](../set-names/index) | The character set used to send statements to the server, and results back to the client. |
Understanding the Network Database Model
========================================
The network database model was a progression from the [hierarchical database model](../understanding-the-hierarchical-database-model/index) and was designed to solve some of that model's problems, specifically the lack of flexibility. Instead of only allowing each child to have one parent, this model allows each child to have multiple parents (it calls the children *members* and the parents *owners*). It addresses the need to model more complex relationships such as the orders/parts many-to-many relationship mentioned in the [hierarchical article](../understanding-the-hierarchical-database-model/index). As you can see in the figure below, *A1* has two members, *B1* and *B2*. *B1* is the owner of *C1*, *C2*, *C3* and *C4*. However, in this model, *C4* has two owners, *B1* and *B2*.
Of course, this model has its problems, or everyone would still be using it. It is more difficult to implement and maintain, and, although more flexible than the hierarchical model, it still has flexibility problems. Not all relations can be satisfied by assigning another owner, and the programmer still has to understand the data structure well in order to make the model efficient.
Information Schema XTRADB\_RSEG Table
=====================================
**MariaDB starting with [10.0.9](https://mariadb.com/kb/en/mariadb-1009-release-notes/)**The `XTRADB_RSEG` table was added in [MariaDB 10.0.9](https://mariadb.com/kb/en/mariadb-1009-release-notes/).
The [Information Schema](../information_schema/index) `XTRADB_RSEG` table contains information about the XtraDB rollback segments.
The `PROCESS` [privilege](../grant/index) is required to view the table.
It has the following columns:
| Column | Description |
| --- | --- |
| `rseg_id` | Rollback segment id. |
| `space_id` | Space where the segment is placed. |
| `zip_size` | Size in bytes of the compressed page size, or zero if uncompressed. |
| `page_no` | Page number of the segment header. |
| `max_size` | Maximum size in pages. |
| `curr_size` | Current size in pages. |
The number of records will match the value set in the `[innodb\_undo\_logs](../xtradbinnodb-server-system-variables/index#innodb_undo_logs)` variable (by default 128).
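For example, the table can be inspected with an ordinary query against the Information Schema (the column values shown will vary by server):

```
SELECT rseg_id, space_id, page_no, max_size, curr_size
FROM information_schema.XTRADB_RSEG
ORDER BY rseg_id
LIMIT 5;
```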
Operating System Optimizations
===============================
Between the hardware and MariaDB sits the operating system, and there are a number of optimizations that can be made at this level.
| Title | Description |
| --- | --- |
| [Configuring Linux for MariaDB](../configuring-linux-for-mariadb/index) | Linux kernel settings IO scheduler For optimal IO performance running a da... |
| [Configuring Swappiness](../configuring-swappiness/index) | Setting Linux swappiness. |
| [Filesystem Optimizations](../filesystem-optimizations/index) | Which filesystem is best? The filesystem is not the most important aspect ... |
Building MariaDB From Source Using musl-based GNU/Linux
=======================================================
Instructions on compiling MariaDB on musl-based operating systems (Alpine)
--------------------------------------------------------------------------
The instructions on this page will help you compile [MariaDB](../mariadb/index) from source. Links to more complete instructions for specific platforms can be found on the [source](../source/index) page.
* First, [get a copy of the MariaDB source](../getting-the-mariadb-source-code/index).
* Next, [prepare your system to be able to compile the source](../build-environment-setup-for-linux/index).
Using cmake
-----------
[MariaDB 10.1](../what-is-mariadb-101/index) and above is compiled using *cmake*. You can configure your build simply by running *cmake* with special options, i.e.
```
cmake . -DWITHOUT_TOKUDB=1
```
To build and install MariaDB after running *cmake* use
```
make
sudo make install
```
Note that building MariaDB this way disables TokuDB until TokuDB becomes fully supported on musl.
Installing and Configuring a Multi Server ColumnStore System - 1.2.X
====================================================================
Preparing to Install
--------------------
Review the [Preparing for ColumnStore Installation 1.2.x](../preparing-for-columnstore-installation-12x/index) document and ensure that any necessary pre-requisites have been completed on all target servers for the ColumnStore cluster including installing the ColumnStore software packages.
Validating Pre-Requisites are Complete
--------------------------------------
The [ColumnStore Cluster Tester Tool](../mariadb-columnstore-cluster-test-tool/index) can be used to verify that the target servers are set up correctly, and it reports specific known errors that can cause failures or timeouts in the cluster setup scripts.
The tool should be run from the same server as the subsequent quick install and postConfigure scripts. With no arguments, the script tests only the current server. Specify the other servers in the cluster using the *--ippaddr* argument to validate that those servers are reachable and configured correctly.
MariaDB ColumnStore Multi-Server Quick Installer
------------------------------------------------
The script *quick\_installer\_multi\_server.sh* provides a simple one-step install of MariaDB ColumnStore, bypassing the interactive wizard-style interface, and works for both root and non-root installs.
The script has 4 parameters.
* --pm-ip-addresses=xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx : IP Addresses of PM nodes, specify current node IP as first value.
* --um-ip-addresses=xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx : IP Addresses of the UM nodes; optional if a combined-node install is desired.
* --dist-install : Optional; overrides the default to perform a legacy distributed installation, which also performs remote ColumnStore software installation.
* --system-name=<name> : ColumnStore System Name, defaults to 'columnstore-1'
The script then performs an installation equivalent to running postConfigure with these defaults:
* System-Name : Argument given or defaults to 'columnstore-1'
* Multi-Server Install
+ if only *--pm-ip-addresses* is specified, then a *combined* install is performed, with as many nodes as IP addresses given.
+ if both *--pm-ip-addresses* and *--um-ip-addresses* are specified, then a *separate* install is performed, with the PM IPs as Performance Module nodes and the UM IPs as User Module nodes.
+ A non-distributed install is performed by default, i.e. the ColumnStore software must be pre-installed on the other nodes.
- A legacy distributed install is used if *--dist-install* is specified, which performs a remote install of the ColumnStore software on the other nodes. SSH keys must be set up to allow passwordless login to the other nodes as the OS installation user.
* Storage : Internal
* DBRoot : 1 DBroot per 1 Performance Module
* Local Query is disabled on um/pm install
* MariaDB Replication is enabled
NOTE: The Multi-Server Quick Installer defaults to a non-distributed install, meaning the user is required to install MariaDB ColumnStore on all nodes and start the ColumnStore server on all non-PM1 nodes before running the script.
### Example: 1 UM 1 PM Deployment as Root
```
# /usr/local/mariadb/columnstore/bin/quick_installer_multi_server.sh --um-ip-addresses=10.128.0.4 --pm-ip-addresses=10.128.0.3
```
### Example : 2 PM Combo Non Root Distributed Install
```
# /home/guest/mariadb/columnstore/bin/quick_installer_multi_server.sh --pm-ip-addresses=10.128.0.3,10.128.0.4 --dist-install
```
MariaDB ColumnStore Custom Installation
---------------------------------------
If you choose not to do the quick install and instead want to customize the various installation options using a wizard, you may use the MariaDB ColumnStore postConfigure script. Please see [Custom Installation of Multi-Server Cluster](../custom-installation-of-multi-server-columnstore-cluster/index).
ColumnStore Alter View
======================
Alters the definition of a view. CREATE OR REPLACE VIEW may also be used to alter the definition of a view.
Syntax
------
```
CREATE
[OR REPLACE]
VIEW view_name [(column_list)]
AS select_statement
```
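For example, a hypothetical view could be redefined with an explicit column list and a new filter (all object names here are illustrative):

```
CREATE OR REPLACE VIEW v_won_opportunities (accountName, owner, amount)
AS SELECT accountName, owner, amount
FROM opportunities
WHERE stageName = 'ClosedWon';
```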
REGEXP\_INSTR
=============
Syntax
------
```
REGEXP_INSTR(subject, pattern)
```
Returns the position of the first occurrence of the regular expression `pattern` in the string `subject`, or 0 if pattern was not found.
The positions start with 1 and are measured in characters (i.e. not in bytes), which is important for multi-byte character sets. You can cast a multi-byte character set to [BINARY](../binary/index) to get offsets in bytes.
The function follows the case sensitivity rules of the effective [collation](../data-types-character-sets-and-collations/index). Matching is performed case insensitively for case insensitive collations, and case sensitively for case sensitive collations and for binary data.
The collation case sensitivity can be overwritten using the (?i) and (?-i) PCRE flags.
MariaDB uses the [PCRE regular expression](../pcre-regular-expressions/index) library for enhanced regular expression performance, and REGEXP\_INSTR was introduced as part of this enhancement.
Examples
--------
```
SELECT REGEXP_INSTR('abc','b');
-> 2
SELECT REGEXP_INSTR('abc','x');
-> 0
SELECT REGEXP_INSTR('BJÖRN','N');
-> 5
```
Casting a multi-byte character set as BINARY to get offsets in bytes:
```
SELECT REGEXP_INSTR(BINARY 'BJÖRN','N') AS cast_utf8_to_binary;
-> 6
```
Case sensitivity:
```
SELECT REGEXP_INSTR('ABC','b');
-> 2
SELECT REGEXP_INSTR('ABC' COLLATE utf8_bin,'b');
-> 0
SELECT REGEXP_INSTR(BINARY'ABC','b');
-> 0
SELECT REGEXP_INSTR('ABC','(?-i)b');
-> 0
SELECT REGEXP_INSTR('ABC' COLLATE utf8_bin,'(?i)b');
-> 2
```
Limitations/Differences with a MariaDB Server Compiled for Debugging
====================================================================
A MariaDB server configured with `--with-debug=full` has the following differences from a normal MariaDB server:
* You can have a maximum of 1000 tables locked at the same time in one statement. (Defined as `MAX_LOCKS` in mysys/thrlock.c.) This is to detect loops in the used lists.
* You can have a maximum of 1000 threads locking the same table. (Defined as `MAX_THREADS` in mysys/thrlock.c.) This is to detect loops in the used lists.
* Mutex deadlock detection is done at runtime. If incorrect mutex handling is found, an error is printed to the error log. (Define `SAFE_MUTEX`.)
* Memory overruns/underruns and memory that is not freed are reported to the error log. (Define `SAFEMALLOC`.)
* You can get a trace of what `mysqld` (and most other binaries) is doing by starting it with the `--debug` option. The trace is usually written to `/tmp` or `C:\`.
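As a sketch, the trace destination can be named in the DBUG option string passed to `--debug` (the flag combination and path here are illustrative, not the only valid form):

```
mysqld --debug=d:t:i:o,/tmp/mysqld.trace
```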
Running mysqld as root
======================
MariaDB should never normally be run as the system's root user (this is unrelated to the MariaDB root user). If it is, any user with the FILE privilege can create or modify any files on the server as root.
MariaDB will normally return the error **Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!** if you attempt to run mysqld as root. If you need to override this restriction for some reason, start mysqld with the `[user=root](../mysqld-options/index#-user)` option.
Better practice, and the default in most situations, is to use a separate user, exclusively used for MariaDB. In most distributions, this user is called `mysql`.
Checking MariaDB RPM Package Signatures
=======================================
MariaDB RPM packages since [MariaDB 5.1.55](https://mariadb.com/kb/en/mariadb-5155-release-notes/) are signed.
The key we use has an id of `1BB943DB` and the key fingerprint is:
```
1993 69E5 404B D5FC 7D2F E43B CBCB 082A 1BB9 43DB
```
To check the signature you first need to import the public part of the key like so:
```
gpg --keyserver hkp://pgp.mit.edu --recv-keys 1BB943DB
```
Next you need to let rpm know about the key like so:
```
gpg --export --armour 1BB943DB > mariadb-signing-key.asc
sudo rpm --import mariadb-signing-key.asc
```
You can check to see if the key was imported with:
```
rpm -qa gpg-pubkey*
```
Once the key is imported, you can check the signature of the MariaDB RPM files by running something like the following in your download directory:
```
rpm --checksig $(find . -name '*.rpm')
```
The output of the above will look something like this (make sure gpg shows up on each OK line):
```
me@desktop:~$ rpm --checksig $(find . -name '*.rpm')
./kvm-rpm-centos5-amd64/rpms/MariaDB-test-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-server-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-client-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-shared-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-devel-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/rpms/MariaDB-debuginfo-5.1.55-98.el5.x86_64.rpm: (sha1) dsa sha1 md5 gpg OK
./kvm-rpm-centos5-amd64/srpms/MariaDB-5.1.55-98.el5.src.rpm: (sha1) dsa sha1 md5 gpg OK
```
See Also
--------
* [Installing MariaDB RPM Files](../installing-mariadb-rpm-files/index)
* [Troubleshooting MariaDB Installs on RedHat/CentOS](../troubleshooting-mariadb-installs-on-redhatcentos/index)
Upgrading from MariaDB 10.2 to MariaDB 10.3 with Galera Cluster
===============================================================
**MariaDB starting with [10.1](../what-is-mariadb-101/index)**Since [MariaDB 10.1](../what-is-mariadb-101/index), the [MySQL-wsrep](https://github.com/codership/mysql-wsrep) patch has been merged into MariaDB Server. Therefore, in [MariaDB 10.1](../what-is-mariadb-101/index) and above, the functionality of MariaDB Galera Cluster can be obtained by installing the standard MariaDB Server packages and the Galera wsrep provider library package.
Beginning in [MariaDB 10.1](../what-is-mariadb-101/index), [Galera Cluster](../what-is-mariadb-galera-cluster/index) ships with the MariaDB Server. Upgrading a Galera Cluster node is very similar to upgrading a server from [MariaDB 10.2](../what-is-mariadb-102/index) to [MariaDB 10.3](../what-is-mariadb-103/index). For more information on that process as well as incompatibilities between versions, see the [Upgrade Guide](../upgrading-from-mariadb-102-to-mariadb-103/index).
Performing a Rolling Upgrade
----------------------------
The following steps can be used to perform a rolling upgrade from [MariaDB 10.2](../what-is-mariadb-102/index) to [MariaDB 10.3](../what-is-mariadb-103/index) when using Galera Cluster. In a rolling upgrade, each node is upgraded individually, so the cluster is always operational. There is no downtime from the application's perspective.
First, before you get started:
1. First, take a look at [Upgrading from MariaDB 10.2 to MariaDB 10.3](../upgrading-from-mariadb-102-to-mariadb-103/index) to see what has changed between the major versions.
1. Check whether any system variables or options have been changed or removed. Make sure that your server's configuration is compatible with the new MariaDB version before upgrading.
2. Check whether replication has changed in the new MariaDB version in any way that could cause issues while the cluster contains upgraded and non-upgraded nodes.
3. Check whether any new features have been added to the new MariaDB version. If a new feature in the new MariaDB version cannot be replicated to the old MariaDB version, then do not use that feature until all cluster nodes have been upgraded to the new MariaDB version.
2. Next, make sure that the Galera version numbers are compatible.
1. If you are upgrading from the most recent [MariaDB 10.2](../what-is-mariadb-102/index) release to [MariaDB 10.3](../what-is-mariadb-103/index), then the versions will be compatible. Both [MariaDB 10.2](../what-is-mariadb-102/index) and [MariaDB 10.3](../what-is-mariadb-103/index) use Galera 3 (i.e. Galera wsrep provider versions 25.3.x), so they should be compatible.
2. See [What is MariaDB Galera Cluster?: Galera wsrep provider Versions](../what-is-mariadb-galera-cluster/index#galera-wsrep-provider-versions) for information on which MariaDB releases uses which Galera wsrep provider versions.
3. Ideally, you want to have a large enough gcache to avoid a [State Snapshot Transfer (SST)](../introduction-to-state-snapshot-transfers-ssts/index) during the rolling upgrade. The gcache size can be configured by setting `[gcache.size](../wsrep_provider_options/index#gcachesize)`. For example:
`wsrep_provider_options="gcache.size=2G"`
Before you upgrade, it would be best to take a backup of your database. This is always a good idea to do before an upgrade. We would recommend [Mariabackup](../mariabackup/index).
Then, for each node, perform the following steps:
1. Modify the repository configuration, so the system's package manager installs [MariaDB 10.3](../what-is-mariadb-103/index). For example,
* On Debian, Ubuntu, and other similar Linux distributions, see [Updating the MariaDB APT repository to a New Major Release](../installing-mariadb-deb-files/index#updating-the-mariadb-apt-repository-to-a-new-major-release) for more information.
* On RHEL, CentOS, Fedora, and other similar Linux distributions, see [Updating the MariaDB YUM repository to a New Major Release](../yum/index#updating-the-mariadb-yum-repository-to-a-new-major-release) for more information.
* On SLES, OpenSUSE, and other similar Linux distributions, see [Updating the MariaDB ZYpp repository to a New Major Release](../installing-mariadb-with-zypper/index#updating-the-mariadb-zypp-repository-to-a-new-major-release) for more information.
2. If you use a load balancing proxy such as MaxScale or HAProxy, make sure to drain the server from the pool so it does not receive any new connections.
3. [Stop MariaDB](../starting-and-stopping-mariadb-starting-and-stopping-mariadb/index).
4. Uninstall the old version of MariaDB and the Galera wsrep provider.
* On Debian, Ubuntu, and other similar Linux distributions, execute the following:
`sudo apt-get remove mariadb-server galera`
* On RHEL, CentOS, Fedora, and other similar Linux distributions, execute the following:
`sudo yum remove MariaDB-server galera`
* On SLES, OpenSUSE, and other similar Linux distributions, execute the following:
`sudo zypper remove MariaDB-server galera`
5. Install the new version of MariaDB and the Galera wsrep provider.
* On Debian, Ubuntu, and other similar Linux distributions, see [Installing MariaDB Packages with APT](../installing-mariadb-deb-files/index#installing-mariadb-packages-with-apt) for more information.
* On RHEL, CentOS, Fedora, and other similar Linux distributions, see [Installing MariaDB Packages with YUM](../yum/index#installing-mariadb-packages-with-yum) for more information.
* On SLES, OpenSUSE, and other similar Linux distributions, see [Installing MariaDB Packages with ZYpp](../installing-mariadb-with-zypper/index#installing-mariadb-packages-with-zypp) for more information.
6. Make any desired changes to configuration options in [option files](../configuring-mariadb-with-option-files/index), such as `my.cnf`. This includes removing any system variables or options that are no longer supported.
7. On Linux distributions that use `systemd` you may need to increase the service startup timeout as the default timeout of 90 seconds may not be sufficient. See [Systemd: Configuring the Systemd Service Timeout](../systemd/index#configuring-the-systemd-service-timeout) for more information.
8. [Start MariaDB](../starting-and-stopping-mariadb-starting-and-stopping-mariadb/index).
9. Run `[mysql\_upgrade](../mysql_upgrade/index)` with the `--skip-write-binlog` option.
* `mysql_upgrade` does two things:
1. Ensures that the system tables in the `[mysql](../the-mysql-database-tables/index)` database are fully compatible with the new version.
2. Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
When this process is done for one node, move onto the next node.
Note that when upgrading the Galera wsrep provider, sometimes the Galera protocol version can change. The Galera wsrep provider should not start using the new protocol version until all cluster nodes have been upgraded to the new version, so this is not generally an issue during a rolling upgrade. However, this can cause issues if you restart a non-upgraded node in a cluster where the rest of the nodes have been upgraded.
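During the rolling upgrade, the provider and protocol versions in use can be compared across nodes from the client. For example, run the following on each node:

```
SHOW GLOBAL STATUS LIKE 'wsrep_provider_version';
SHOW GLOBAL STATUS LIKE 'wsrep_protocol_version';
```

A node still reporting the old provider version has not yet been upgraded.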
MariaDB Galera Cluster - Known Limitations
==========================================
This article contains information on known problems and limitations of MariaDB Galera Cluster.
Limitations from codership.com:
-------------------------------
* Currently replication works only with the [InnoDB storage engine](../xtradb-and-innodb/index). Writes to tables of other types, including system (mysql.\*) tables, are not replicated. (This limitation excludes DDL statements such as [CREATE USER](../create-user/index), which implicitly modify the mysql.\* tables; those are replicated.) There is, however, experimental support for [MyISAM](../myisam/index); see the [wsrep\_replicate\_myisam](../galera-cluster-system-variables/index#wsrep_replicate_myisam) system variable.
* Unsupported explicit locking includes [LOCK TABLES](../transactions-lock/index), [FLUSH TABLES {explicit table list} WITH READ LOCK](../flush/index), and [GET\_LOCK()](../get_lock/index), [RELEASE\_LOCK()](../release_lock/index), etc. Using transactions properly should be able to overcome these limitations. Global locking operators like [FLUSH TABLES WITH READ LOCK](../flush/index) are supported.
* All tables should have a primary key (multi-column primary keys are supported). [DELETE](../delete/index) operations are unsupported on tables without a primary key. Also, rows in tables without a primary key may appear in a different order on different nodes.
* The [general query log](../general-query-log/index) and the [slow query log](../slow-query-log/index) cannot be directed to a table. If you enable these logs, then you must forward the log to a file by setting `[log\_output=FILE](../server-system-variables/index#log_output)`.
* [XA transactions](../xa-transactions/index) are not supported.
* Transaction size. While Galera does not explicitly limit the transaction size, a writeset is processed as a single memory-resident buffer and as a result, extremely large transactions (e.g. [LOAD DATA](../load-data/index)) may adversely affect node performance. To avoid that, the [wsrep\_max\_ws\_rows](../galera-cluster-system-variables/index#wsrep_max_ws_rows) and [wsrep\_max\_ws\_size](../galera-cluster-system-variables/index#wsrep_max_ws_size) system variables limit transaction rows to 128K and the transaction size to 2Gb by default. If necessary, users may want to increase those limits. Future versions will add support for transaction fragmentation.
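For example, these limits can be adjusted in the server's [option file](../configuring-mariadb-with-option-files/index) (the values below are illustrative only, not recommendations):

```
[mysqld]
# Illustrative values; the defaults quoted above are 128K rows and 2Gb
wsrep_max_ws_rows=1048576    # allow up to 1M rows per writeset
wsrep_max_ws_size=1073741824 # cap writesets at 1GB
```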
Other observations, in no particular order:
-------------------------------------------
* If you are using [mysqldump](../mysqldump/index) for state transfer, and it failed for whatever reason (e.g. you do not have the database account it attempts to connect with, or it does not have necessary permissions), you will see an SQL SYNTAX error in the server [error log](../error-log/index). Don't let it fool you, this is just a fancy way to deliver a message (the pseudo-statement inside of the bogus SQL will actually contain the error message).
* Do not use transactions of any essential size. Just to insert 100K rows, the server might require an additional 200-300 Mb. In a less fortunate scenario it can be 1.5 Gb for 500K rows, or 3.5 Gb for 1M rows. See [MDEV-466](https://jira.mariadb.org/browse/MDEV-466) for some numbers (note that the issue is closed, but not because it was fixed).
* Locking is lax when DDL is involved. For example, if your DML transaction uses a table and a parallel DDL statement is started, in a normal MySQL setup it would wait for the metadata lock, but in a Galera context it is executed right away. This happens even if you are running a single node, as long as it is configured as a cluster node. See also [MDEV-468](https://jira.mariadb.org/browse/MDEV-468). This behavior might cause various side effects; the consequences have not been investigated yet. Try to avoid such parallelism.
* Do not rely on auto-increment values to be sequential. Galera uses a mechanism based on autoincrement increment to produce unique non-conflicting sequences, so on every single node the sequence will have gaps. See <http://codership.blogspot.com/2009/02/managing-auto-increments-with-multi.html>
* A command may fail with `ER_UNKNOWN_COM_ERROR`, producing a 'WSREP has not yet prepared node for application use' (or 'Unknown command' in older versions) error message. This happens when a cluster is suspected to be split and the node is in the smaller part (for example, during a network glitch, when nodes temporarily lose each other). It can also occur during state transfer. The node takes this measure to prevent data inconsistency. It is usually a temporary state, which can be detected by checking the [wsrep\_ready](../galera-cluster-status-variables/index#wsrep_ready) value. The node does, however, allow SHOW and SET commands during this period.
* After a temporary split, if the 'good' part of the cluster was still reachable and its state was modified, resynchronization occurs. As part of it, nodes of the 'bad' part of the cluster drop all client connections. This can be quite unexpected, especially if the client was idle and did not even know anything was wrong. Note also that after the connection to the isolated node is restored, if there is write activity on the cluster, it takes a long time for the node to synchronize, during which the "good" node says that the cluster is already of the normal size and synced, while the rejoining node says it has only joined (but not synced). Connections to it keep getting 'unknown command' errors. This should pass eventually.
* While [binlog\_format](../replication-and-binary-log-server-system-variables/index#binlog_format) is checked on startup and can only be ROW (see [Binary Log Formats](../binary-log-formats/index)), it can be changed at runtime. Do NOT change binlog\_format at runtime; it is likely not only to cause replication failure, but to make all other nodes crash.
* If you are using rsync for state transfer, and a node crashes before the state transfer is over, the rsync process might hang forever, occupying the port and preventing the node from restarting. The problem will show up as 'port in use' in the server error log. Find the orphaned rsync process and kill it manually.
* Performance: by design, the performance of the cluster cannot be higher than the performance of its slowest node; however, even if you have only one node, its performance can be considerably lower compared to running the same server in standalone mode (without the wsrep provider). This is particularly true for big enough transactions (even those well within the current limitations on transaction size quoted above).
* Windows is not supported.
* Replication filters: When using Galera cluster, replication filters should be used with caution. See [Configuring MariaDB Galera Cluster: Replication Filters](../configuring-mariadb-galera-cluster/index#replication-filters) for more details. See also [MDEV-421](https://jira.mariadb.org/browse/MDEV-421) and [MDEV-6229](https://jira.mariadb.org/browse/MDEV-6229).
* Flashback isn't supported in Galera due to incompatible binary log format.
* `FLUSH PRIVILEGES` is not replicated.
* The [query cache](../query-cache/index) needed to be disabled by setting `[query\_cache\_size=0](../server-system-variables/index#query_cache_size)` prior to MariaDB Galera Cluster 5.5.40, MariaDB Galera Cluster 10.0.14, and [MariaDB 10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/).
* In an asynchronous replication setup where a master replicates to a galera node acting as slave, parallel replication (slave-parallel-threads > 1) on slave is currently not supported (see [MDEV-6860](https://jira.mariadb.org/browse/MDEV-6860)).
* The disk-based [Galera gcache](https://galeracluster.com/library/documentation/state-transfer.html#write-set-cache-gcache) is not encrypted ([MDEV-8072](https://jira.mariadb.org/browse/MDEV-8072)).
* Nodes may have different table definitions, especially temporarily during [rolling schema upgrade](../galera-cluster-system-variables/index#wsrep_osu_method) operations, but the same [schema compatibility restrictions](../replication-when-the-master-and-slave-have-different-table-definitions/index) apply as they do for ROW-based replication.
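Several of the node states described above (a node refusing queries after a split, a node joined but not yet synced) can be observed from the client via the wsrep status variables, for example:

```
SHOW STATUS LIKE 'wsrep_ready';               -- ON once the node accepts application queries
SHOW STATUS LIKE 'wsrep_local_state_comment'; -- e.g. 'Joined' vs 'Synced'
SHOW STATUS LIKE 'wsrep_cluster_size';        -- number of nodes in this component
```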
\_rowid
=======
Syntax
------
\_rowid
Description
-----------
The `_rowid` pseudo column is mapped to the primary key in the related table. It can be used as a replacement for the `rowid` pseudo column found in other databases. Another use is to simplify SQL queries, as one doesn't have to know the name of the primary key.
Examples
--------
```
create table t1 (a int primary key, b varchar(80));
insert into t1 values (1,"one"),(2,"two");
select * from t1 where _rowid=1;
```
```
+---+------+
| a | b |
+---+------+
| 1 | one |
+---+------+
```
```
update t1 set b="three" where _rowid=2;
select * from t1 where _rowid>=1 and _rowid<=10;
```
```
+---+-------+
| a | b |
+---+-------+
| 1 | one |
| 2 | three |
+---+-------+
```
UTC\_TIMESTAMP
==============
Syntax
------
```
UTC_TIMESTAMP
UTC_TIMESTAMP([precision])
```
Description
-----------
Returns the current [UTC](../coordinated-universal-time/index) date and time as a value in 'YYYY-MM-DD HH:MM:SS' or YYYYMMDDHHMMSS.uuuuuu format, depending on whether the function is used in a string or numeric context.
The optional *precision* determines the microsecond precision. See [Microseconds in MariaDB](../microseconds-in-mariadb/index).
Examples
--------
```
SELECT UTC_TIMESTAMP(), UTC_TIMESTAMP() + 0;
+---------------------+-----------------------+
| UTC_TIMESTAMP() | UTC_TIMESTAMP() + 0 |
+---------------------+-----------------------+
| 2010-03-27 17:33:16 | 20100327173316.000000 |
+---------------------+-----------------------+
```
With precision:
```
SELECT UTC_TIMESTAMP(4);
+--------------------------+
| UTC_TIMESTAMP(4) |
+--------------------------+
| 2018-07-10 07:51:09.1019 |
+--------------------------+
```
See Also
--------
* [Time Zones](../time-zones/index)
* [Microseconds in MariaDB](../microseconds-in-mariadb/index)
mariadb-dump
============
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-dump` is a symlink to `mysqldump`, the backup tool.
See [mysqldump](../mysqldump/index) for details.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mariadb-dump` is the name of the tool, with `mysqldump` a symlink.
Sys Schema sys\_config Table
============================
**MariaDB starting with [10.6.0](https://mariadb.com/kb/en/mariadb-1060-release-notes/)**The Sys Schema *sys\_config* table was added in [MariaDB 10.6.0](https://mariadb.com/kb/en/mariadb-1060-release-notes/).
The *sys\_config* table holds configuration options for the [Sys Schema](../sys-schema/index).
This is a persistent table (using the [InnoDB](../innodb/index) storage engine), with the configuration persisting across upgrades (new options are added with [INSERT IGNORE](../insert-ignore/index)).
The table also has two related triggers, sys\_config\_insert\_set\_user and sys\_config\_update\_set\_user, which record the user that INSERTs or UPDATEs the configuration.
Its structure is as follows:
```
+----------+--------------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+-------------------+-----------------------------+
| variable | varchar(128) | NO | PRI | NULL | |
| value | varchar(128) | YES | | NULL | |
| set_time | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| set_by | varchar(128) | YES | | NULL | |
+----------+--------------+------+-----+-------------------+-----------------------------+
```
Note that when functions check for configuration options, they first check whether a similarly named user variable exists and has a value; if it does not, they pull the configuration option from this table into that user variable. This is done for performance reasons (to avoid continually SELECTing from the table), but it has the side effect that, once initialized, the values last for the session, somewhat like how session variables are initialized from global variables. If the values within this table are changed, they will not take effect until the user logs in again.
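As a sketch of this behavior: a session-level override can be made through the corresponding user variable, while a persistent change requires updating the table and starting a new session:

```
-- Session-only override; takes effect immediately for this session
SET @sys.statement_truncate_len = 32;

-- Persistent change; takes effect in new sessions
UPDATE sys.sys_config
  SET value = 32
  WHERE variable = 'statement_truncate_len';
```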
### Options Included
| Variable | Default Value | Description |
| --- | --- | --- |
| statement\_truncate\_len | 64 | Sets the size to truncate statements to, for the [format\_statement](../format_statement/index) function. |
| statement\_performance\_analyzer.limit | 100 | The maximum number of rows to include for the views that do not have a built-in limit (e.g. the 95th percentile view). If not set, the limit is 100. |
| statement\_performance\_analyzer.view | NULL | Used together with the 'custom' view. If the value contains a space, it is considered a query, otherwise it must be an existing view querying the performance\_schema.events\_statements\_summary\_by\_digest table. |
| diagnostics.allow\_i\_s\_tables | OFF | Specifies whether it is allowed to do table scan queries on information\_schema.TABLES for the diagnostics procedure. |
| diagnostics.include\_raw | OFF | Set to 'ON' to include the raw data (e.g. the original output of "SELECT \* FROM sys.metrics") for the diagnostics procedure. |
| ps\_thread\_trx\_info.max\_length | 65535 | Sets the maximum output length for JSON object output by the ps\_thread\_trx\_info() function. |
Upgrading from MariaDB 5.3 to MariaDB 5.5
=========================================
What you need to know
---------------------
There are no changes in table or index formats between [MariaDB 5.3](../what-is-mariadb-53/index) and [MariaDB 5.5](../what-is-mariadb-55/index), so on most servers the upgrade should be painless.
### How to upgrade
The suggested upgrade procedure is:
1. For Windows, see [Upgrading MariaDB on Windows](../upgrading-mariadb-on-windows/index) instead.
2. Shut down [MariaDB 5.3](../what-is-mariadb-53/index)
3. Take a backup (this is the perfect time to take a backup of your databases)
4. Uninstall [MariaDB 5.3](../what-is-mariadb-53/index)
5. Install [MariaDB 5.5](../what-is-mariadb-55/index) [[1](#_note-0)]
6. Run [mysql\_upgrade](../mysql_upgrade/index)
* Ubuntu and Debian packages do this automatically when they are installed; Red Hat, CentOS, and Fedora packages do not
* `mysql_upgrade` does two things:
1. Upgrades the permission tables in the `mysql` database with some new fields
2. Does a very quick check of all tables and marks them as compatible with [MariaDB 5.5](../what-is-mariadb-55/index)
* In most cases this should be a fast operation (depending of course on the number of tables)
7. Add new options to [my.cnf](../configuring-mariadb-with-mycnf/index) to enable features
* If you change `my.cnf` then you need to restart `mysqld`
### Incompatible changes between 5.3 and 5.5
As mentioned previously, on most servers upgrading from 5.3 to 5.5 should be painless. However, some things have changed which could affect an upgrade:
#### XtraDB options that have changed default values
| Option | Old value | New value |
| --- | --- | --- |
| [innodb\_change\_buffering](../xtradbinnodb-server-system-variables/index#innodb_change_buffering) | inserts | all |
| [innodb\_flush\_neighbor\_pages](../xtradbinnodb-server-system-variables/index#innodb_flush_neighbor_pages) | 1 | area |
#### Options that have been removed or renamed
Percona, the provider of [XtraDB](../xtradb-and-innodb/index), does not provide all earlier XtraDB features in the 5.5 code base. Because of that, [MariaDB 5.5](../what-is-mariadb-55/index) can't provide them either. The following options are not supported by XtraDB 5.5. If you are using them in any of your my.cnf files, you should remove them before upgrading to 5.5.
* [innodb\_adaptive\_checkpoint](../xtradbinnodb-server-system-variables/index#innodb_adaptive_checkpoint); Use [innodb\_adaptive\_flushing\_method](../xtradbinnodb-server-system-variables/index#innodb_adaptive_flushing_method) instead.
* [innodb\_auto\_lru\_dump](../xtradbinnodb-server-system-variables/index#innodb_auto_lru_dump); Use [innodb\_buffer\_pool\_restore\_at\_startup](../xtradbinnodb-server-system-variables/index#innodb_buffer_pool_restore_at_startup) instead (and [innodb\_buffer\_pool\_load\_at\_startup](../xtradbinnodb-server-system-variables/index#innodb_buffer_pool_load_at_startup) in [MariaDB 10.0](../what-is-mariadb-100/index)).
* [innodb\_blocking\_lru\_restore](../xtradbinnodb-server-system-variables/index#innodb_blocking_lru_restore); Use [innodb\_blocking\_buffer\_pool\_restore](../xtradbinnodb-server-system-variables/index#innodb_blocking_buffer_pool_restore) instead.
* [innodb\_enable\_unsafe\_group\_commit](../xtradbinnodb-server-system-variables/index#innodb_enable_unsafe_group_commit)
* [innodb\_expand\_import](../xtradbinnodb-server-system-variables/index#innodb_expand_import); Use [innodb\_import\_table\_from\_xtrabackup](../xtradbinnodb-server-system-variables/index#innodb_import_table_from_xtrabackup) instead.
* [innodb\_extra\_rsegments](../xtradbinnodb-server-system-variables/index#innodb_extra_rsegments); Use [innodb\_rollback\_segments](../xtradbinnodb-server-system-variables/index#innodb_rollback_segments) instead.
* [innodb\_extra\_undoslots](../xtradbinnodb-server-system-variables/index#innodb_extra_undoslots)
* [innodb\_fast\_recovery](../xtradbinnodb-server-system-variables/index#innodb_fast_recovery)
* [innodb\_flush\_log\_at\_trx\_commit\_session](../xtradbinnodb-server-system-variables/index#innodb_flush_log_at_trx_commit_session)
* [innodb\_overwrite\_relay\_log\_info](../xtradbinnodb-server-system-variables/index#innodb_overwrite_relay_log_info)
* [innodb\_pass\_corrupt\_table](../xtradbinnodb-server-system-variables/index#innodb_pass_corrupt_table); Use [innodb\_corrupt\_table\_action](../xtradbinnodb-server-system-variables/index#innodb_corrupt_table_action) instead.
* [innodb\_use\_purge\_thread](../xtradbinnodb-server-system-variables/index#innodb_use_purge_thread)
* [xtradb\_enhancements](../xtradbinnodb-server-system-variables/index#xtradb_enhancements)
Notes
-----
1. [↑](#_ref-0) If using a MariaDB `apt` or `yum` [repository](https://downloads.mariadb.org/mariadb/repositories/), it is often enough to replace instances of '5.3' with '5.5' and then run an update/upgrade. For example, in Ubuntu/Debian update the MariaDB `sources.list` entry from something that looks similar to this:
```
deb http://ftp.osuosl.org/pub/mariadb/repo/5.3/ubuntu trusty main
```
To something like this:
```
deb http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu trusty main
```
And then run
```
apt-get update && apt-get upgrade
```
And in Red Hat, CentOS, and Fedora, change the `baseurl` line from something that looks like this:
```
baseurl = http://yum.mariadb.org/5.3/centos6-amd64
```
To something that looks like this:
```
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
```
And then run
```
yum update
```
See also
--------
* [The features in MariaDB 5.5](../what-is-mariadb-55/index)
* [Percona's guide on how to upgrade to 5.5](http://www.percona.com/doc/percona-server/5.5/upgrading_guide_51_55.html)
mysql.db Table
==============
The `mysql.db` table contains information about database-level privileges. The table can be queried and although it is possible to directly update it, it is best to use [GRANT](../grant/index) for setting privileges.
Note that the MariaDB privileges occur at many levels. A user may not be granted a privilege at the database level, but may still have permission on a table level, for example. See [privileges](../grant/index) for a more complete view of the MariaDB privilege system.
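For example (using a hypothetical database and account), a database-level GRANT adds a row to this table, which can then be inspected directly:

```
GRANT SELECT, INSERT ON mydb.* TO 'appuser'@'localhost';

SELECT Host, Db, User, Select_priv, Insert_priv
FROM mysql.db
WHERE Db = 'mydb';
```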
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
The `mysql.db` table contains the following fields:
| Field | Type | Null | Key | Default | Description | Introduced |
| --- | --- | --- | --- | --- | --- | --- |
| `Host` | `char(60)` | NO | PRI | | Host (together with `User` and `Db`, this makes up the unique identifier for this record). Until [MariaDB 5.5](../what-is-mariadb-55/index), if the host field was blank, the corresponding record in the [mysql.host](../mysqlhost-table/index) table would be examined. From [MariaDB 10.0](../what-is-mariadb-100/index), a blank host field is the same as the `%` wildcard. | |
| `Db` | `char(64)` | NO | PRI | | Database (together with `User` and `Host`, this makes up the unique identifier for this record). | |
| `User` | `char(80)` | NO | PRI | | User (together with `Host` and `Db`, this makes up the unique identifier for this record). | |
| `Select_priv` | `enum('N','Y')` | NO | | N | Can perform [SELECT](../select/index) statements. | |
| `Insert_priv` | `enum('N','Y')` | NO | | N | Can perform [INSERT](../insert/index) statements. | |
| `Update_priv` | `enum('N','Y')` | NO | | N | Can perform [UPDATE](../update/index) statements. | |
| `Delete_priv` | `enum('N','Y')` | NO | | N | Can perform [DELETE](../delete/index) statements. | |
| `Create_priv` | `enum('N','Y')` | NO | | N | Can [CREATE TABLE's](../create-table/index). | |
| `Drop_priv` | `enum('N','Y')` | NO | | N | Can [DROP DATABASE's](../drop-database/index) or [DROP TABLE's](../drop-table/index). | |
| `Grant_priv` | `enum('N','Y')` | NO | | N | User can [grant](../grant/index) privileges they possess. | |
| `References_priv` | `enum('N','Y')` | NO | | N | Unused | |
| `Index_priv` | `enum('N','Y')` | NO | | N | Can create an index on a table using the [CREATE INDEX](../create-index/index) statement. Without the `INDEX` privilege, a user can still create indexes when creating a table using the [CREATE TABLE](../create-table/index) statement if the user has the `CREATE` privilege, and can create indexes using the [ALTER TABLE](../alter-table/index) statement if they have the `ALTER` privilege. | |
| `Alter_priv` | `enum('N','Y')` | NO | | N | Can perform [ALTER TABLE](../alter-table/index) statements. | |
| `Create_tmp_table_priv` | `enum('N','Y')` | NO | | N | Can create temporary tables with the [CREATE TEMPORARY TABLE](../create-table/index) statement. | |
| `Lock_tables_priv` | `enum('N','Y')` | NO | | N | Acquire explicit locks using the [LOCK TABLES](../transactions-lock/index) statement; user also needs to have the `SELECT` privilege on a table in order to lock it. | |
| `Create_view_priv` | `enum('N','Y')` | NO | | N | Can create a view using the [CREATE\_VIEW](../create-view/index) statement. | |
| `Show_view_priv` | `enum('N','Y')` | NO | | N | Can show the [CREATE VIEW](../create-view/index) statement to create a view using the [SHOW CREATE VIEW](../show-create-view/index) statement. | |
| `Create_routine_priv` | `enum('N','Y')` | NO | | N | Can create stored programs using the [CREATE PROCEDURE](../create-procedure/index) and [CREATE FUNCTION](../create-function/index) statements. | |
| `Alter_routine_priv` | `enum('N','Y')` | NO | | N | Can change the characteristics of a stored function using the [ALTER FUNCTION](../alter-function/index) statement. | |
| `Execute_priv` | `enum('N','Y')` | NO | | N | Can execute [stored procedure](../stored-procedures/index) or functions. | |
| `Event_priv` | `enum('N','Y')` | NO | | N | Create, drop and alter [events](../stored-programs-and-views-events/index). | |
| `Trigger_priv` | `enum('N','Y')` | NO | | N | Can execute [triggers](../triggers/index) associated with tables the user updates, execute the [CREATE TRIGGER](../create-trigger/index) and [DROP TRIGGER](../drop-trigger/index) statements. | |
| `Delete_history_priv` | `enum('N','Y')` | NO | | N | Can delete rows created through [system versioning](../system-versioned-tables/index). | [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/) |
The [Acl\_database\_grants](../server-status-variables/index#acl_database_grants) status variable, added in [MariaDB 10.1.4](https://mariadb.com/kb/en/mariadb-1014-release-notes/), indicates how many rows the `mysql.db` table contains.
SHOW PROCEDURE CODE
===================
Syntax
------
```
SHOW PROCEDURE CODE proc_name
```
Description
-----------
This statement is a MariaDB extension that is available only for servers that have been built with debugging support. It displays a representation of the internal implementation of the named [stored procedure](../stored-procedures/index). A similar statement, `[SHOW FUNCTION CODE](../show-function-code/index)`, displays information about [stored functions](../stored-functions/index).
Both statements require that you be the owner of the routine or have `[SELECT](../grant/index)` access to the `[mysql.proc](../mysqlproc-table/index)` table.
If the named routine is available, each statement produces a result set. Each row in the result set corresponds to one "instruction" in the routine. The first column is Pos, which is an ordinal number beginning with 0. The second column is Instruction, which contains an SQL statement (usually changed from the original source), or a directive which has meaning only to the stored-routine handler.
Examples
--------
```
DELIMITER //
CREATE PROCEDURE p1 ()
BEGIN
DECLARE fanta INT DEFAULT 55;
DROP TABLE t2;
LOOP
INSERT INTO t3 VALUES (fanta);
END LOOP;
END//
Query OK, 0 rows affected (0.00 sec)
SHOW PROCEDURE CODE p1//
+-----+----------------------------------------+
| Pos | Instruction |
+-----+----------------------------------------+
| 0 | set fanta@0 55 |
| 1 | stmt 9 "DROP TABLE t2" |
| 2 | stmt 5 "INSERT INTO t3 VALUES (fanta)" |
| 3 | jump 2 |
+-----+----------------------------------------+
```
See Also
--------
* [Stored Procedure Overview](../stored-procedure-overview/index)
* [CREATE PROCEDURE](../create-procedure/index)
* [ALTER PROCEDURE](../alter-procedure/index)
* [DROP PROCEDURE](../drop-procedure/index)
* [SHOW CREATE PROCEDURE](../show-create-procedure/index)
* [SHOW PROCEDURE STATUS](../show-procedure-status/index)
* [Stored Routine Privileges](../stored-routine-privileges/index)
* [Information Schema ROUTINES Table](../information-schema-routines-table/index)
Building ColumnStore in MariaDB
===============================
This is a description of how to build and start a local ColumnStore installation, for debugging purposes.
Install the dependencies
------------------------
For CentOS:
```
yum -y groupinstall "Development Tools" \
&& yum -y install bison ncurses-devel readline-devel perl-devel openssl-devel cmake libxml2-devel gperf libaio-devel libevent-devel python-devel ruby-devel tree wget pam-devel snappy-devel libicu \
&& yum -y install vim wget strace ltrace gdb rsyslog net-tools openssh-server expect \
&& yum -y install boost perl-DBI
```
Get the source code
-------------------
```
git clone https://github.com/mariadb-corporation/mariadb-columnstore-server.git
cd mariadb-columnstore-server/
git clone https://github.com/mariadb-corporation/mariadb-columnstore-engine.git
```
Compile
-------
```
cmake . -DCMAKE_BUILD_TYPE=Debug \
-DWITHOUT_MROONGA:bool=1 -DWITHOUT_TOKUDB:bool=1 \
-DCMAKE_INSTALL_PREFIX=/usr/local/mariadb/columnstore/mysql
make -j10
sudo make install
```
```
cd mariadb-columnstore-engine/
cmake . -DCMAKE_BUILD_TYPE=Debug
make -j10
sudo make install
cd /usr/local/mariadb/columnstore/bin/
```
Configure
---------
Make sure you do NOT have `/etc/my.cnf` or `~/.my.cnf`.
```
sudo ./postConfigure
```
Press Enter to accept the default for all questions, except:
```
Select the type of System Server install [1=single, 2=multi] (2) >
```
Here, answer `1`.
Access the server
-----------------
```
source /usr/local/mariadb/columnstore/bin/columnstoreAlias
mcsmysql
```
Optimizing for "Latest News"-style Queries
==========================================
The problem space
-----------------
Let's say you have "news articles" (rows in a table) and want a web page showing the latest ten articles about a particular topic.
Variants on "topic":
* Category
* Tag
* Provider (of news article)
* Manufacturer (of item for sale)
* Ticker (financial stock)
Variants on "news article"
* Item for sale
* Blog comment
* Blog thread
Variants on "latest"
* Publication date (unix\_timestamp)
* Most popular (keep the count)
* Most emailed (keep the count)
* Manual ranking (1..10 -- 'top ten')
Variants on "10" - there is nothing sacred about "10" in this discussion.
The performance issues
----------------------
Currently you have a table (or a column) that relates the topic to the article. The SELECT statement to find the latest 10 articles has grown in complexity, and performance is poor. You have focused on what index to add, but nothing seems to work.
* If there are multiple topics for each article, you need a many-to-many table.
* You have a flag "is\_deleted" that needs filtering on.
* You want to "paginate" the list (ten articles per page, for as many pages as necessary).
The solution
------------
First, let me give you the solution, then I will elaborate on why it works well.
* One new table called, say, Lists.
* Lists has \_exactly\_ 3 columns: topic, article\_id, sequence
* Lists has \_exactly\_ 2 indexes: PRIMARY KEY(topic, sequence, article\_id), INDEX(article\_id)
* Only viewable articles are in Lists. (This avoids the filtering on "is\_deleted", etc)
* Lists is [InnoDB](../innodb/index). (This gets "clustering".)
* "sequence" is typically the date of the article, but could be some other ordering.
* "topic" should probably be normalized, but that is not critical to this discussion.
* "article\_id" is a link to the bulky row in another table (or tables) that provides all the details about the article.
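Putting the bullet points together, the Lists table could be created as follows. This is a sketch: the column types (and a VARCHAR topic, rather than a normalized topic id) are illustrative assumptions, not part of the original design:

```
CREATE TABLE Lists (
  topic VARCHAR(50) NOT NULL,        -- or a normalized topic id
  article_id INT UNSIGNED NOT NULL,  -- points to the bulky Articles row
  sequence TIMESTAMP NOT NULL,       -- article date, or another ordering
  PRIMARY KEY (topic, sequence, article_id),
  INDEX (article_id)
) ENGINE=InnoDB;
```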
The queries
-----------
Find the latest 10 articles for a topic:
```
SELECT a.*
FROM Articles a
JOIN Lists s ON s.article_id = a.article_id
WHERE s.topic = ?
ORDER BY s.sequence DESC
LIMIT 10;
```
You must *not* have any WHERE condition touching columns in Articles.
When you mark an article for deletion, you *must* remove it from Lists:
```
DELETE FROM Lists
WHERE article_id = ?;
```
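Pagination fits the same pattern. Rather than using OFFSET (which scans and discards rows), remember the sequence value of the last article shown and continue from there. This is a sketch; the second `?` stands for the remembered sequence value:

```
SELECT a.*
FROM Articles a
JOIN Lists s ON s.article_id = a.article_id
WHERE s.topic = ?
AND s.sequence < ?    -- sequence of the last article on the previous page
ORDER BY s.sequence DESC
LIMIT 10;
```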
I emphasize "must" because flags and other filtering is often the root of performance issues.
Why it works
------------
By now, you may have discovered why it works.
The big goal is to minimize the disk hits. Let's itemize how few disk hits are needed. When finding the latest articles with 'normal' code, you will probably find that it is doing significant scans of the Articles table, failing to quickly home in on the 10 rows you want. With this design, there is only one extra disk hit:
* 1 disk hit: 10 adjacent, narrow, rows in Lists -- probably in a single "block".
* 10 disk hits: The 10 articles. (These hits are unavoidable, but may be cached.) The PRIMARY KEY, and using InnoDB, makes these quite efficient.
You do pay a small extra price for this design when removing an article:
* 1 disk hit: INDEX(article\_id) - finding a few ids
* A few more disk hits to DELETE rows from Lists. This is a small price to pay -- and you are not paying it while the user is waiting for the page to render.
See also
--------
Rick James graciously allowed us to use this article in the Knowledge Base.
[Rick James' site](http://mysql.rjweb.org/) has other useful tips, how-tos, optimizations, and debugging tips.
Original source: <http://mysql.rjweb.org/doc.php/lists>
Information Schema XtraDB Tables
=================================
List of Information Schema tables specifically related to [XtraDB](../xtradb/index). Tables that XtraDB shares with InnoDB are listed in [Information Schema InnoDB Tables](../information-schema-innodb-tables/index).
| Title | Description |
| --- | --- |
| [Information Schema CHANGED\_PAGE\_BITMAPS Table](../information-schema-changed_page_bitmaps-table/index) | Dummy table to allow FLUSH NO\_WRITE\_TO\_BINLOG CHANGED\_PAGE\_BITMAPS |
| [Information Schema INNODB\_BUFFER\_POOL\_PAGES Table](../information-schema-innodb_buffer_pool_pages-table/index) | XtraDB buffer pool page information. |
| [Information Schema INNODB\_BUFFER\_POOL\_PAGES\_BLOB Table](../information-schema-innodb_buffer_pool_pages_blob-table/index) | XtraDB buffer pool blob pages. |
| [Information Schema INNODB\_BUFFER\_POOL\_PAGES\_INDEX Table](../information-schema-innodb_buffer_pool_pages_index-table/index) | XtraDB buffer pool index pages. |
| [Information Schema INNODB\_UNDO\_LOGS Table](../information-schema-innodb_undo_logs-table/index) | XtraDB undo log segments. |
| [Information Schema XTRADB\_INTERNAL\_HASH\_TABLES Table](../information-schema-xtradb_internal_hash_tables-table/index) | InnoDB/XtraDB hash table memory usage information. |
| [Information Schema XTRADB\_READ\_VIEW Table](../information-schema-xtradb_read_view-table/index) | Information about the oldest active transaction in the system. |
| [Information Schema XTRADB\_RSEG Table](../information-schema-xtradb_rseg-table/index) | XtraDB rollback segment information. |
SHOW FUNCTION STATUS
====================
Syntax
------
```
SHOW FUNCTION STATUS
[LIKE 'pattern' | WHERE expr]
```
Description
-----------
This statement is similar to `[SHOW PROCEDURE STATUS](../show-procedure-status/index)` but for [stored functions](../stored-functions/index).
The `LIKE` clause, if present on its own, indicates which function names to match.
The `WHERE` and `LIKE` clauses can be given to select rows using more general conditions, as discussed in [Extended SHOW](../extended-show/index).
The `[information\_schema.ROUTINES](../information-schema-routines-table/index)` table contains more detailed information.
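For example, a query along these lines (the column choice is illustrative) pulls similar information directly from that table:

```
SELECT ROUTINE_SCHEMA, ROUTINE_NAME, DEFINER, CREATED, LAST_ALTERED
FROM information_schema.ROUTINES
WHERE ROUTINE_TYPE = 'FUNCTION';
```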
Examples
--------
Showing all stored functions:
```
SHOW FUNCTION STATUS\G
*************************** 1. row ***************************
Db: test
Name: VatCents
Type: FUNCTION
Definer: root@localhost
Modified: 2013-06-01 12:40:31
Created: 2013-06-01 12:40:31
Security_type: DEFINER
Comment:
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
```
Stored functions whose name starts with 'V':
```
SHOW FUNCTION STATUS LIKE 'V%' \G
*************************** 1. row ***************************
Db: test
Name: VatCents
Type: FUNCTION
Definer: root@localhost
Modified: 2013-06-01 12:40:31
Created: 2013-06-01 12:40:31
Security_type: DEFINER
Comment:
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
```
Stored functions with a security type of 'DEFINER':
```
SHOW FUNCTION STATUS WHERE Security_type LIKE 'DEFINER' \G
*************************** 1. row ***************************
Db: test
Name: VatCents
Type: FUNCTION
Definer: root@localhost
Modified: 2013-06-01 12:40:31
Created: 2013-06-01 12:40:31
Security_type: DEFINER
Comment:
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: latin1_swedish_ci
```
Plans for 5.6
=============
The information on this page is obsolete. Current information can be found on the **[Plans for 10x](../plans-for-10x/index)** page.
Also see the following blog posts:
* <http://blog.mariadb.org/what-comes-in-between-mariadb-now-and-mysql-5-6/>
* <http://blog.mariadb.org/explanation-on-mariadb-10-0/>
The following is a list of features considered for 5.6.
As soon as 5.3/5.5 is declared gamma, we have to decide who will work on which features for 5.6. Any feature that has a designated developer who agrees to get it done in a given timeline will be considered a 5.6 feature. We will then also create a new KB page with the to-be-done features.
The items below that have a name after them are already allocated to a developer.
If you want to be part of developing any of these features, get [an account](http://kb.askmonty.org/v/about) and add your name after the feature you are interested in. You can also add new features to this list or to the [worklog](../worklog/index).
[MariaDB 5.3](../what-is-mariadb-53/index)
------------------------------------------
Features which will be in [MariaDB 5.3](../what-is-mariadb-53/index) (instead of waiting until 5.6 to add them).
* Storage independent test suite (add to 5.3)
* OpenGIS: create required tables: GeometryColumns, related views. (Holyfoot) (move to 5.3)
* OpenGIS: stored procedure AddGeometryColumn (Holyfoot) (move to 5.3)
[MariaDB 5.5](../what-is-mariadb-55/index)
------------------------------------------
Features which will be in [MariaDB 5.5](../what-is-mariadb-55/index) (instead of waiting until 5.6 to add them).
* Present in MySQL 5.5: Performance Schema
+ what do we want to do with it, embrace it, extend it?
+ or is it better to have more SHOW commands and INFORMATION\_SCHEMA tables?
+ are we going to use Facebook's user stats/index stats patch, or create a PERFORMANCE\_SCHEMA-based solution? (MP + Percona)
+ Not much we can do to improve it for 5.6
* FB request: log all SQL errors (Holyfoot to do in 5.5); [MWL#177](http://askmonty.org/worklog/?tid=177)
* FB request: EXPLAIN the \*actual\* plan on a \*running\* statement; no progress indicators and numbers are needed; [MWL#182](http://askmonty.org/worklog/?tid=182) (In Progress) (spetrunia)
* Thread pool (Wlad) (5.5)
* Memory tables: VARCHAR and BLOB support (Have sponsor; Will be implemented for 5.5) (monty)
* Plugins by Sergey: INSTALL PLUGIN \* ([MWL#77](http://askmonty.org/worklog/?tid=77)) (done but not pushed) (will be in 5.5) (sergey)
* LGPL/BSD client library (MP) (done in 5.5)
+ Need to build it outside the tree as a separate package that people can use
[MariaDB 5.6](../what-is-mariadb-56/index) Definite
----------------------------------------------------
Features which will definitely be in [MariaDB 5.6](../what-is-mariadb-56/index)
* FB request: Better monitoring for replication (FB has patch; MP will add) (kristian) (for 5.5 or 5.6)
* Aria: Concurrent UPDATE & DELETE. [MWL#235](http://askmonty.org/worklog/?tid=235) (Have partial sponsor) (monty) (will do before April)
* Aria: Segmented key cache for Aria (igor) (definitely before 5.7)
* Aria: Fast next not same (monty) (will be done for 5.6)
* From MySQL 5.6: Global transaction ID, so the slave state becomes recoverable, and to facilitate automatically moving a slave to a new master across multi-level hierarchies.
[MariaDB 5.6](../what-is-mariadb-56/index) High Probability
------------------------------------------------------------
Features which have a high probability of being in [MariaDB 5.6](../what-is-mariadb-56/index)
* Performance: More scalable query cache under higher concurrency (Sanja) (maybe)
+ Allow stale data (Sanja) (maybe)
[MariaDB 5.6](../what-is-mariadb-56/index) Rolling
---------------------------------------------------
Features which will be added when they are ready.
* Parameterized Views
* Percona patches (Monty & Sanja) (rolling feature)
Will skip for 5.6
-----------------
Features which will not be added to [MariaDB 5.6](../what-is-mariadb-56/index).
* Community Request: prevent full scans from running at all above a certain table size;
+ is the existing `max-join-size` variable sufficient, or is more granular control needed?
* Optimizer: Implement UNION ALL without usage of a temporary table (nice to have) (wait for sponsor)
* Federated: Generic query pushdown
* Federated: Apply it to federated
* Federated: Timour's old list of tasks (Timour)
* Table functions (Timour) (after 5.6)
* Refactoring: do\_select refactoring to remove if's and make each code group (like end\_select) smaller (small speedup and cleaner code)
Uncategorized
-------------
Features which have not been categorized into the above categories.
### OpenGIS compliance
* OpenGIS: prefill the spatial\_ref\_sys table. (Holyfoot)
* OpenGIS: Add possible III-rd coordinate (Altitude). (Holyfoot)
* OpenGIS: Distance3D, related optimization.
* OpenGIS: Precise math coordinates instead of DOUBLE-s.
### GIS-Optimizer
* optimize simple queries with Intersects(), Within, Distance()<X
* add Distance\_sphere() and the related optimization.
### Online operations
* Extension to bigger datatype (var)char(n) (n+x)
* Online extension of any NUMERIC datatype ; Like ALTER tinyint -> smallint
* Extend ENUM (done; needs more visibility)
* Alter comment (Monty)
* Online Add and drop index (Drop is easy to implement MyISAM/ARIA)
* Online OPTIMIZE
* Online ANALYZE
* Add ALTER ONLINE TABLE (Done by Monty: Syntax added in 5.3; When ONLINE is used, one gets an error if the ALTER TABLE can't be done online)
* Look at patch for online backport sent to maria-developers
### COMPATIBILITY & USABILITY
* Date & time embedded timezone (Need sponsor)
* IPV6 native type
+ Functions; functions exist in a public patch (MP)
+ Datatype; old patch exists
* Extended timestamp > 2038
* 1M tables Information schema (MP will investigate)
* 1M users requirements
+ Roles
* mysql.\* in any engine
* LOG tables in a log\_schema schema
* LDAP user authentication, like Drizzle
* Make openssl hash functions available for user. (Bank will sponsor)
* Query logging and summary per query [MWL#179](http://askmonty.org/worklog/?tid=179)
* Auditing for specific user (to general log)
* Flush and reload variables from my.cnf
### Replication
* Replication filters, like `--replicate-do-db` and friends, need to be possible to change dynamically, without having to restart the server. Having to stop the slave should ideally also not be needed, but is less of a problem.
* Transactional storage of slave state, rather than file-based master.info and relay-log.info . So the slave can recover consistently after a crash.
* Support in global transaction ID for master\_pos\_wait()
* Hooks around rotation of the binlog, so the user can configure shell commands when a new log is started and when it is ended. The command must be run asynchronously, and get the old and new log file name as arguments.
* Reduce fsyncs from 3 to 1 in group commit with binary log ([MWL#164](http://askmonty.org/worklog/?tid=164))
* Parallel applying of binary log in slave ([MWL#169](http://askmonty.org/worklog/?tid=169))
* Replication APIs, as per [MWL#107](http://askmonty.org/worklog/?tid=107) (Needs sponsor)
+ Most important [MWL#120](http://askmonty.org/worklog/?tid=120) and [MWL#133](http://askmonty.org/worklog/?tid=133), for obtaining and applying events.
+ Then a mechanism for prioritizing transactions.
* Multi source (Slave can have multiple masters). [MWL#201](http://askmonty.org/worklog/?tid=201). There is a partial sponsorship for this already. (Monty and Kristian)
### Statistics and monitoring
* Strategic direction: Enterprise monitoring
+ graphing and data aggregation tools, server monitoring, etc.
+ customer has reported that Merlin is inadequate, should we enter into this market? (MonYog, SkySQL, Percona, Oli Sennhauser, Open Query etc is doing tools)
* QA request: better EXPLAIN (HIGH priority; MP; Spetrunia)
+ required in order to debug performance issues in queries without knowing the query or the data;
+ the customer will only provide EXPLAIN and SHOW output, we need to debug based on that; (need examples)
+ Perhaps optimizer trace is what we need
* QA request: engine independent PERSISTENT TABLE STATISTICS (Igor)
+ required to ensure repeatable query execution for InnoDB;
+ may allow various statistics to be reported by the server regardless of engine;
+ able to simulate different sized tables
* U/C at Oracle: OPTIMIZER tracing spetrunia: report actual estimates, and all decisions of the optimizer, including why an index was \*not\* picked, etc.
+ want to change for 5.7
* FB request: more options for controlling the slow query log [MWL#181](http://askmonty.org/worklog/?tid=181) (Holyfoot will check)
+ sample one out of every N queries or transactions ; with N ~ 99 (Patch by FB; Will be changed to use AUDIT)
* idea: collect statistics per query text, or normalized query text and report;
* request by community: progress bar for SELECT;
+ how to estimate the total running time of the query;
+ Percona has [support for this](http://www.mysqlperformanceblog.com/2011/03/13/percona-server-and-xtrabackup-weekly-news-march-12th)
+ 5.3 has progress reporting for SHOW PROGRESS PROCESSLIST; SHOW QUERY PROGRESS; and LOAD DATA .
* FB request: limit total temptable size on the server (MP) [MWL#183](http://askmonty.org/worklog/?tid=183)
* FB patch: Admission Control (MP) (seriously considered to be done) (wlad)
+ limit number of concurrently running queries per user;
+ if all user queries are blocked, allow a few more queries to join;
* Integration with log watching tools
+ alter log formats to make them compatible with tools;
+ include logwatch mysql-specific config file in packages/distributions;
* a counter for the total number of bytes read by the I/O thread that does not rotate on log rotation;
* "seconds behind real master" to report the actual time the slave I/O thread is behind (MP will look into this)
* FB patch: report the time spent in individual phases of query processing (MP)
### Optimizer
* Put cost related constants into variables (will be done) (timour)
+ Automatic tuning of cost constants for specific setup (SSD / TAPE)
* Make optimizer switch more user friendly (Sanja and Spetrunia) (will be done)
* Cost model cleanup (Don't assume things are B-trees) (timour)
+ Consistent cost interface through handler methods
* Persistent data statistics (Igor)
* Grace HASH join (Need sponsor)
* Sort merge join (Need sponsor)
* Better item\_equal (Igor & Monty)
+ missing item\_equal for ORDER BY and GROUP BY
* Less fetches of data pages.
+ can use whenever you have many-to-many relationships between two tables
+ use to try to minimize data access
+ it's like INDEX INTERSECTION
### Performance:
* Performance: Better multi CPU performance above 16 cores (Work with Intel)
* Performance: Predictive parser to replace the yacc-based one; 5% speedup for simpler queries
* Performance: Faster WHERE (a,b) in ((1,2),(2,3)...huge list...) (customer request)
* Performance: Faster VIEW (Not open frm & parse query for every access); Speed up simple view handling 2x
### Aria
* MIN/MAX indexes
* Index within key pages to speed up lookups on compressed key pages.
### User friendly features
* Better/safer upgrade (?)
* Better option files in distributions.
* UNIQUE CONSTRAINT for BLOB ([MWL#139](http://askmonty.org/worklog/?tid=139)) (medium)
### Other things
* Enhance RQG to test for query correctness and performance on production workloads during upgrade ([MWL#178](http://askmonty.org/worklog/?tid=178)) (prototype done, need to get outside users to verify the code)
### Plugins by Sergey
* Plugins by Sergey: query rewrite ([MWL#144](http://askmonty.org/worklog/?tid=144))
* Plugins by Sergey: full-text search engine plugin ([MWL#143](http://askmonty.org/worklog/?tid=143))
* Plugins by Sergey: Plugin Loader ([MWL#162](http://askmonty.org/worklog/?tid=162))
* Plugins by Sergey: smaller, nice to have, tasks:
+ show plugins soname ... ([MWL#80](http://askmonty.org/worklog/?tid=80))
+ mutex/condition service ([MWL#83](http://askmonty.org/worklog/?tid=83))
+ duplicate plugin names ([MWL#79](http://askmonty.org/worklog/?tid=79))
+ create a charset service ([MWL#81](http://askmonty.org/worklog/?tid=81))
Setting the Language for Error Messages
=======================================
MariaDB server error messages are by default in English. However, MariaDB server also supports error message localization in many different languages. Each supported language has its own version of the [error message file](../error-log/index#error-messages-file) called `errmsg.sys` in a dedicated directory for that language.
Supported Languages for Error Messages
--------------------------------------
Error message localization is supported for the following languages:
* Bulgarian
* Chinese (from [MariaDB 10.4.25](https://mariadb.com/kb/en/mariadb-10425-release-notes/), [10.5.16](https://mariadb.com/kb/en/mariadb-10516-release-notes/), [10.6.8](https://mariadb.com/kb/en/mariadb-1068-release-notes/), [10.7.4](https://mariadb.com/kb/en/mariadb-1074-release-notes/), [10.8.3](https://mariadb.com/kb/en/mariadb-1083-release-notes/))
* Czech
* Danish
* Dutch
* English
* Estonian
* French
* German
* Greek
* Hindi
* Hungarian
* Italian
* Japanese
* Korean
* Norwegian
* Norwegian-ny (Nynorsk)
* Polish
* Portuguese
* Romanian
* Russian
* Serbian
* Slovak
* Spanish
* Swedish
* Ukrainian
Setting the `lc_messages` and `lc_messages_dir` System Variables
----------------------------------------------------------------
The [lc\_messages](../server-system-variables/index#lc_messages) and [lc\_messages\_dir](../server-system-variables/index#lc_messages_dir) system variables can be used to set the [server locale](../server-locale/index) used for error messages.
The [lc\_messages](../server-system-variables/index#lc_messages) system variable can be specified as a [locale](../server-locale/index) name. The language of the associated [locale](../server-locale/index) will be used for error messages. See [Server Locales](../server-locale/index) for a list of supported locales and their associated languages.
The [lc\_messages](../server-system-variables/index#lc_messages) system variable is set to `en_US` by default, which means that error messages are in English by default.
If the `[lc\_messages](../server-system-variables/index#lc_messages)` system variable is set to a valid [locale](../server-locale/index) name, but the server can't find an [error message file](../error-log/index#error-messages-file) for the language associated with the [locale](../server-locale/index), then the default language will be used instead.
This system variable can be specified as command-line arguments to [mysqld](../mysqld-options/index) or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index). For example:
```
[mariadb]
...
lc_messages=fr_CA
```
The [lc\_messages](../server-system-variables/index#lc_messages) system variable can also be changed dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL lc_messages='fr_CA';
```
If a server has the [lc\_messages](../server-system-variables/index#lc_messages) system variable set to the `fr_CA` locale like the above example, then error messages would be in French. For example:
```
SELECT blah;
ERROR 1054 (42S22): Champ 'blah' inconnu dans field list
```
The [lc\_messages\_dir](../server-system-variables/index#lc_messages_dir) system variable can be specified either as the path to the directory storing the server's [error message files](../error-log/index#error-messages-file) or as the path to the directory storing the specific language's [error message file](../error-log/index#error-messages-file).
The server initially tries to interpret the value of the `[lc\_messages\_dir](../server-system-variables/index#lc_messages_dir)` system variable as a path to the directory storing the server's [error message files](../error-log/index#error-messages-file). Therefore, it constructs the path to the language's [error message file](../error-log/index#error-messages-file) by concatenating the value of the [lc\_messages\_dir](../server-system-variables/index#lc_messages_dir) system variable with the language name of the [locale](../server-locale/index) specified by the [lc\_messages](../server-system-variables/index#lc_messages) system variable.
If the server does not find the [error message file](../error-log/index#error-messages-file) for the language, then it tries to interpret the value of the [lc\_messages\_dir](../server-system-variables/index#lc_messages_dir) system variable as a direct path to the directory storing the specific language's [error message file](../error-log/index#error-messages-file).
This system variable can be specified as command-line arguments to [mysqld](../mysqld-options/index) or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index).
For example, to specify the path to the directory storing the server's [error message files](../error-log/index#error-messages-file):
```
[mariadb]
...
lc_messages_dir=/usr/share/mysql/
```
Or to specify the path to the directory storing the specific language's [error message file](../error-log/index#error-messages-file):
```
[mariadb]
...
lc_messages_dir=/usr/share/mysql/french/
```
The `[lc\_messages\_dir](../server-system-variables/index#lc_messages_dir)` system variable can not be changed dynamically.
Setting the --language Option
-----------------------------
The [--language](../mysqld-options/index#-language) option can also be used to set the server's language for error messages, but it is deprecated. It is recommended to set the [lc\_messages](../server-system-variables/index#lc_messages) system variable instead.
The [--language](../mysqld-options/index#-language) option can be specified either as a language name or as the path to the directory storing the language's [error message file](../error-log/index#error-messages-file). See [Server Locales](../server-locale/index) for a list of supported locales and their associated languages.
This option can be specified as command-line arguments to [mysqld](../mysqld-options/index) or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index).
For example, to specify a language name:
```
[mariadb]
...
language=french
```
Or to specify the path to the directory storing the language's [error message file](../error-log/index#error-messages-file):
```
[mariadb]
...
language=/usr/share/mysql/french/
```
Character Set
-------------
The character set that the error messages are returned in is determined by the [character\_set\_results](../server-system-variables/index#character_set_results) variable, which defaults to UTF8.
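To check the locale and result character set currently in effect, you can query the variables directly:

```
SHOW GLOBAL VARIABLES LIKE 'lc_messages%';
SHOW SESSION VARIABLES LIKE 'character_set_results';
```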
Setting Up Replication
======================
The terms *master* and *slave* have historically been used in replication, but the terms *primary* and *replica* are now preferred. The old terms are still used in parts of the documentation, and in MariaDB commands, although [MariaDB 10.5](../what-is-mariadb-105/index) has begun the process of renaming. The documentation process is ongoing. See [MDEV-18777](https://jira.mariadb.org/browse/MDEV-18777) to follow progress on this effort.
Getting [replication](../replication/index) working involves steps on both the master server(s) and the slave server(s).
[MariaDB 10.0](../what-is-mariadb-100/index) introduced replication with [global transaction IDs](../global-transaction-id/index). These have a number of benefits, and it is generally recommended to use this feature from [MariaDB 10.0](../what-is-mariadb-100/index).
Setting up a Replication Slave with Mariabackup
-----------------------------------------------
If you would like to use [Mariabackup](../mariabackup/index) to set up a replication slave, then you might find the information at [Setting up a Replication Slave with Mariabackup](../setting-up-a-replication-slave-with-mariabackup/index) helpful.
Versions
--------
In general, when replicating across different versions of MariaDB, it is best that the master is an older version than the slave. MariaDB versions are usually backward compatible, while of course older versions cannot always be forward compatible. See also [Replicating from MySQL Master to MariaDB Slave](#replicating-from-mysql-master-to-mariadb-slave).
Configuring the Master
----------------------
* Enable binary logging if it's not already enabled. See [Activating the Binary Log](../activating-the-binary-log/index) and [Binary log formats](../binary-log-formats/index) for details.
* Give the master a unique [server\_id](../replication-and-binary-log-system-variables/index#server_id). All slaves must also be given a server\_id. This can be a number from 1 to 2^32 - 1 (4294967295), and must be unique for each server in the replicating group.
* Specify a unique name for your replication logs with [--log-basename](../mysqld-options-full-list/index#-log-basename). If this is not specified your host name will be used and there will be problems if the hostname ever changes.
* Slaves will need permission to connect and start replicating from a server. Usually this is done by creating a dedicated slave user, and granting that user permission only to replicate (REPLICATION SLAVE permission).
### Example Enabling Replication for MariaDB
Add the following into your [my.cnf](../configuring-mariadb-with-mycnf/index) file and restart the database.
```
[mariadb]
log-bin
server_id=1
log-basename=master1
binlog-format=mixed
```
The server id is a unique number for each MariaDB/MySQL server in your network. [binlog-format](../binary-log-formats/index) specifies how your statements are logged. This mainly affects the size of the [binary log](../binary-log/index) that is sent between the Master and the Slaves.
Then execute the following SQL with the [`mysql`](../mysql-command-line-client/index) command line client:
```
CREATE USER 'replication_user'@'%' IDENTIFIED BY 'bigs3cret';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%';
```
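To confirm that the account was created as intended, you can inspect its privileges (the user name matches the example above):

```
SHOW GRANTS FOR 'replication_user'@'%';
```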
### Example Enabling Replication for MySQL
If you want to enable replication from MySQL to MariaDB, you can do it in almost the same way as between MariaDB servers. The main difference is that MySQL doesn't support `log-basename`.
```
[mysqld]
log-bin
server_id=1
```
Settings to Check
-----------------
There are a number of options that may impact or break replication. Check the following settings to avoid problems.
* [skip-networking](../server-system-variables/index#skip_networking). If `skip-networking=1`, the server will limit connections to localhost only, and prevent all remote slaves from connecting.
* [bind-address](../server-system-variables/index#bind_address). Similarly, if the address on which the server listens for TCP/IP connections is 127.0.0.1 (localhost), remote slave connections will fail.
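Both settings can be checked at runtime on the master, using the extended `SHOW` syntax:

```
SHOW GLOBAL VARIABLES
  WHERE Variable_name IN ('skip_networking', 'bind_address');
```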
Configuring the Slave
---------------------
* Give the slave a unique [server\_id](../replication-and-binary-log-server-system-variables/index#server_id). All servers, whether masters or slaves, are given a server\_id. This can be a number from 1 to 4294967295 (2^32 - 1), and must be unique for each server in the replicating group. The server will need to be restarted in order for a change in this option to take effect.
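A minimal slave configuration might therefore look like this (the value 2 is illustrative; it just needs to differ from the master's server\_id):

```
[mariadb]
server_id=2
```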
Getting the Master's Binary Log Co-ordinates
--------------------------------------------
Now you need to prevent any changes to the data while you view the binary log position. You'll use this to tell the slave at exactly which point it should start replicating from.
* On the master, flush and lock all tables by running `FLUSH TABLES WITH READ LOCK`. Keep this session running - exiting it will release the lock.
* Get the current position in the binary log by running `[SHOW MASTER STATUS](../show-master-status/index)`:
```
SHOW MASTER STATUS;
+--------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+--------------------+----------+--------------+------------------+
| master1-bin.000096 | 568 | | |
+--------------------+----------+--------------+------------------+
```
* Record the *File* and *Position* details. If binary logging has just been enabled, these will be blank.
* Now, with the lock still in place, copy the data from the master to the slave. See [Backup, Restore and Import](../backup-restore-and-import/index) for details on how to do this.
* Note for live databases: You just need to make a local copy of the data, you don't need to keep the master locked until the slave has imported the data.
* Once the data has been copied, you can release the lock on the master by running [UNLOCK TABLES](../transactions-lock/index).
```
UNLOCK TABLES;
```
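If the master uses only transactional (InnoDB) tables, an alternative to the manual lock-and-copy steps above is to let [mysqldump](../mysqldump/index) take a consistent snapshot and record the binary log coordinates for you; the output file name here is illustrative:

```
mysqldump --all-databases --single-transaction --master-data=2 > backup.sql
```

With `--master-data=2`, the matching `CHANGE MASTER TO` coordinates are written into the dump as a comment, so they can be copied into the statement in the next section.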
Start the Slave
---------------
* Once the data has been imported, you are ready to start replicating. Begin by running a [CHANGE MASTER TO](../change-master-to/index), making sure that *MASTER\_LOG\_FILE* matches the file and *MASTER\_LOG\_POS* the position returned by the earlier SHOW MASTER STATUS. For example:
```
CHANGE MASTER TO
MASTER_HOST='master.domain.com',
MASTER_USER='replication_user',
MASTER_PASSWORD='bigs3cret',
MASTER_PORT=3306,
MASTER_LOG_FILE='master1-bin.000096',
MASTER_LOG_POS=568,
MASTER_CONNECT_RETRY=10;
```
If you are starting a slave against a fresh master that was configured for replication from the start, then you don't have to specify `MASTER_LOG_FILE` and `MASTER_LOG_POS`.
### Use Global Transaction Id (GTID)
**MariaDB starting with [10.0](../what-is-mariadb-100/index)**[MariaDB 10.0](../what-is-mariadb-100/index) introduced global transaction IDs (GTIDs) for replication. It is generally recommended to use GTIDs from [MariaDB 10.0](../what-is-mariadb-100/index) onwards, as this has a number of benefits. All that is needed is to add the `MASTER_USE_GTID` option to the `CHANGE MASTER` statement, for example:
```
CHANGE MASTER TO MASTER_USE_GTID = slave_pos
```
See [Global Transaction ID](../global-transaction-id/index) for a full description.
* Now start the slave with the [`START SLAVE`](../start-slave/index) command:
```
START SLAVE;
```
* Check that the replication is working by executing the [`SHOW SLAVE STATUS`](../show-slave-status/index) command:
```
SHOW SLAVE STATUS \G
```
* If replication is working correctly, both the values of `Slave_IO_Running` and `Slave_SQL_Running` should be `Yes`:
```
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
```
Replicating from MySQL Master to MariaDB Slave
----------------------------------------------
* Replicating from MySQL 5.5 to [MariaDB 5.5](../what-is-mariadb-55/index)+ should just work. When using a [MariaDB 10.2](../what-is-mariadb-102/index)+ as a slave, it may be necessary to set [binlog\_checksum](../replication-and-binary-log-server-system-variables/index#binlog_checksum) to NONE.
* Replicating from MySQL 5.6 without GTID to MariaDB 10+ should work.
* Replication from MySQL 5.6 with GTID, binlog\_rows\_query\_log\_events and ignorable events works starting from [MariaDB 10.0.22](https://mariadb.com/kb/en/mariadb-10022-release-notes/) and [MariaDB 10.1.8](https://mariadb.com/kb/en/mariadb-1018-release-notes/). In this case MariaDB will remove the MySQL GTIDs and other unneeded events and instead add its own GTIDs.
See Also
--------
* [Differences between Statement-based, mixed and row logging](../binary-log-formats/index)
* [Replication and Foreign Keys](../replication-and-foreign-keys/index)
* [Replication as a Backup Solution](../replication-as-a-backup-solution/index)
* [Multi-source Replication](../multi-source-replication/index)
* [Global Transaction ID](../global-transaction-id/index)
* [Parallel Replication](../parallel-replication/index)
* [Replication and Binary Log System Variables](../replication-and-binary-log-server-system-variables/index)
* [Replication and Binary Log Status Variables](../replication-and-binary-log-status-variables/index)
* [Semisynchronous Replication](../semisynchronous-replication/index)
* [Delayed Replication](../delayed-replication/index)
* [Replication Compatibility](../mariadb-vs-mysql-compatibility/index#replication-compatibility)
Understanding MariaDB Architecture
==================================
MariaDB architecture is partly different from the architecture of traditional DBMSs, like SQL Server. Here we will examine the main components that a new MariaDB DBA needs to know. We will also discuss a bit of history, because this may help understand MariaDB philosophy and certain design choices.
This section is an overview of the most important components. More information is included in specific sections of this migration guide, or in other pages of the MariaDB Knowledge Base (see the links scattered over the text).
Storage Engines
---------------
MariaDB was born from the source code of MySQL, in 2008. Therefore, its history begins with MySQL.
MySQL was born at the beginning of the 90s. Back in the day, compared to its existing competitors, MySQL was lightweight, simple to install, and easy to learn. While it had a very limited feature set, it was also fast at certain common operations. And it was open source. These characteristics made it suitable for backing the simple websites that existed at that time.
The web evolved rapidly, and the same happened to MySQL. Being open source helped a lot in this respect, because the community needed functionalities that weren’t supported at that time.
MySQL was probably the first database system to support a [pluggable storage engine architecture](../storage-engines/index). Basically, this means that MySQL knows very little about creating or populating a table, reading from it, or building proper indexes and caches: it delegates all these operations to a special plugin type called a storage engine.
One of the first plugins developed by third parties was [InnoDB](index#innodb). It is very fast, and it adds two important features that are not otherwise supported: transactions and [foreign keys](../foreign-keys/index).
Note that when MariaDB asks a storage engine to write or read a row, the storage engine could theoretically do anything. This led to the creation of very interesting alternative engines, like [BLACKHOLE](../blackhole/index) (which doesn’t write or read any data, acting like the /dev/null file in Linux), or [CONNECT](../connect/index) (which can read and write to files written in many different formats, or remote DBMSs, or some other special data sources).
Nowadays InnoDB is the default MariaDB storage engine, and it is the best choice for most use cases. But for particular needs, sometimes using a different storage engine is desirable. In case of doubts about the best storage engine to use for a specific case, check the [Choosing the Right Storage Engine](../choosing-the-right-storage-engine/index) page.
When we create a table, we specify its storage engine or use the default one. It is possible to convert an existing table to another storage engine, though this is a blocking operation which requires a complete table copy. Third-party storage engines can also be installed while MariaDB is running.
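For example, the conversion is a plain `ALTER TABLE` (the table and engine names here are illustrative):

```
-- Blocking operation: the whole table is copied.
ALTER TABLE my_table ENGINE = Aria;

-- Verify which engine a table currently uses:
SHOW TABLE STATUS LIKE 'my_table';
```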
Note that it is perfectly possible to use tables with different storage engines in the same transaction (even if some engines are not transactional). It is even possible to use different engines in the same query, for example with JOINs and subqueries.
The default storage engine can be changed by changing the [default\_storage\_engine](../server-system-variables/index#default_storage_engine) variable. A different default can be specified for temporary tables by setting [default\_tmp\_storage\_engine](../server-system-variables/index#default_tmp_storage_engine). MariaDB uses [Aria](../aria-storage-engine/index) for system tables and temporary tables created internally to store the intermediate results of a query.
### InnoDB
It is worth spending some more words here about [InnoDB](../innodb/index), the default storage engine.
#### Primary Key and Indexes
InnoDB primary keys are always the equivalent of SQL Server clustered indexes. In other words, an InnoDB table is always ordered by the primary key.
If an InnoDB table doesn't have a user-defined primary key, the first `UNIQUE` index whose columns are all `NOT NULL` is used as the primary key. If there is no such index, InnoDB adds a hidden 6-byte row ID to the table and builds the clustered index on it. The terminology here can be a bit confusing for SQL Server and other DBMS users. This hidden column and its values are completely invisible to users. It's important to note that these hidden row IDs are generated from a global counter protected by a mutex, which greatly reduces their scalability.
Secondary indexes are ordered by the columns that are part of the index, and contain a reference to each entry's corresponding primary key value.
Some consequences of these design choices are the following:
* For performance reasons, a primary key value should be inserted in order. In other words, the last inserted value should be the highest. This order is normally followed when inserting values into an `AUTO_INCREMENT` primary key. The reason is that inserting values in the middle of an ordered data structure is slower, unless they fit into existing holes. If we insert primary key values randomly, InnoDB often has to rearrange pages to make some room for the new data.
* A big primary key means that all secondary indexes are also big.
* A query by primary key will require a single search. A query on a secondary index that also reads columns not contained in the index will require one search on the index, plus one more search for each row that satisfies the index condition.
* We shouldn't explicitly include the primary key in a secondary index. If we do so, the primary key column will be duplicated in the index.
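The last point can be sketched as follows (table and index names are hypothetical):

```
CREATE TABLE t (
  id INT AUTO_INCREMENT PRIMARY KEY,
  a INT,
  KEY redundant_idx (a, id),  -- 'id' is stored twice: once explicitly,
                              -- once as the implicit primary key reference
  KEY good_idx (a)            -- 'id' is still reachable via the
                              -- implicit primary key reference
);
```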
#### Tablespaces
For InnoDB, a *tablespace* is a file containing data (not a file group as in SQL Server). The types of tablespaces are:
* [System tablespace](../innodb-system-tablespaces/index).
* [File-per-table tablespaces](../innodb-file-per-table-tablespaces/index).
* [Temporary tablespaces](../innodb-temporary-tablespaces/index).
The system tablespace is stored in the file `ibdata1`. It contains information used by InnoDB internally, like rollback segments, as well as some system tables. Historically, the system tablespace also contained all tables created by the user. In modern MariaDB versions, a table is created in the system tablespace only if the [innodb\_file\_per\_table](../innodb-system-variables/index#innodb_file_per_table) system variable is set to 0 at the moment of the table's creation. By default, innodb\_file\_per\_table is 1.
Tables created while `innodb_file_per_table=1` are written into their own tablespace. These are `.ibd` files.
Starting from [MariaDB 10.2](../what-is-mariadb-102/index), temporary tables are written into temporary tablespaces, which means `ibtmp*` files. Previously, they were created in the system tablespace or in file-per-table tablespaces according to the value of `innodb_file_per_table`, just like regular tables. Temporary tablespaces, if present, are deleted when MariaDB starts.
**It is important to remember that tablespaces can never shrink**. If a file-per-table tablespace grows too much, deleting data won't recover space. Instead, a new table must be created and data needs to be copied. Finally, the old table will be deleted. If the system tablespace grows too much, the only solution is to move data into a new MariaDB installation.
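In practice, the create-copy-delete cycle for a file-per-table tablespace can be triggered with a single statement (the table name is illustrative; for InnoDB, `OPTIMIZE TABLE` is implemented as a table rebuild):

```
OPTIMIZE TABLE my_table;
-- or, equivalently for InnoDB:
ALTER TABLE my_table ENGINE=InnoDB;
```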
#### Transaction Logs
In SQL Server, the transaction log contains both the undo log and the redo log. Usually we have only one transaction log.
In MariaDB the undo log and the redo log are stored separately. By default, the [redo log](../innodb-redo-log/index) is written to two files, called `ib_logfile0` and `ib_logfile1`. The [undo log](../innodb-undo-log/index) by default is written to the *system tablespace*, which is in the `ibdata1` file. However, it is possible to write it in separate files in a specified directory.
MariaDB provides no way to inspect the contents of the transaction logs. However, it is possible to inspect the [binary log](index#the-binary-log).
InnoDB transaction logs are written in a circular fashion: their size is normally fixed, and when the end is reached, InnoDB continues to write from the beginning. However, if very long transactions are running, InnoDB cannot overwrite the oldest data, so it has to expand the log size instead.
#### InnoDB Buffer Pool
MariaDB doesn't have a central buffer pool. Each storage engine may or may not have a buffer pool. The [InnoDB buffer pool](../innodb-buffer-pool/index) is typically assigned a big amount of memory. See [MariaDB Memory Allocation](../mariadb-memory-allocation/index).
MariaDB has no extension like the SQL Server buffer pool extension.
A part of the buffer pool is called the [change buffer](../innodb-change-buffering/index). It buffers changes to secondary index pages that are not currently in the buffer pool, so they can be applied later, when those pages are read into memory.
#### InnoDB Background Threads
InnoDB has background threads that take care of flushing dirty pages from the buffer pool to the tablespaces. They don't directly affect the latency of queries, but they are very important for performance.
[SHOW ENGINE InnoDB STATUS](../show-engine-innodb-status/index) shows information about them in the `BACKGROUND THREAD` section. They can also be seen using the [threads](../performance-schema-threads-table/index) table, in the [performance\_schema](../performance-schema/index).
InnoDB flushing is similar to *lazy writes* and *checkpoints* in SQL Server. It has no equivalent for *eager writing*.
For more information, see [InnoDB Page Flushing](../innodb-page-flushing/index) and [InnoDB Purge](../innodb-purge/index).
#### Checksums and Doublewrite Buffer
InnoDB pages have checksums. After writing pages to disk, InnoDB verifies that the checksums match. The checksum algorithm is determined by [innodb\_checksum\_algorithm](../innodb-system-variables/index#innodb_checksum_algorithm). Check the variable documentation for its consequences on performance, backward compatibility and encryption.
In the case of a system crash, hardware failure or power outage, a page could be left half-written on disk. For some pages, this would be a disaster. Therefore, InnoDB writes essential pages to disk twice. A backup copy of the new page version is written first; then the old page is overwritten. The backup copies are written into a file called the *doublewrite buffer*.
* If an event prevents the first page from being written, the old version of the page will still be available.
* If an event prevents the old page from being completely overwritten by its new version, the page can still be recovered using the doublewrite buffer.
The doublewrite buffer can be disabled using the [innodb\_doublewrite](../innodb-system-variables/index#innodb_doublewrite) variable, but this usually doesn't bring big performance benefits. The doublewrite buffer location can be changed with [innodb\_doublewrite\_file](../innodb-system-variables/index#innodb_doublewrite_file).
### Aria
Even if we only create InnoDB tables, we use Aria indirectly, in two ways:
* For system tables.
* For internal temporary tables.
Aria is a non-transactional storage engine. By default it is crash-safe, meaning that all changes to data are written and fsynced to a write-ahead log and can always be recovered in case of a crash.
Aria caches indexes into the pagecache. Data are not directly cached by Aria, so it's important that the underlying filesystem caches reads and writes.
The pagecache size is determined by the [aria\_pagecache\_buffer\_size](../aria-system-variables/index#aria_pagecache_buffer_size) system variable. To know if it is big enough we can check the proportion of free pages (the ratio between [Aria\_pagecache\_blocks\_used](../aria-status-variables/index#aria_pagecache_blocks_used) and [Aria\_pagecache\_blocks\_unused](../aria-status-variables/index#aria_pagecache_blocks_unused)) and the proportion of cache misses (the ratio between [Aria\_pagecache\_read\_requests](../aria-status-variables/index#aria_pagecache_read_requests) and [Aria\_pagecache\_reads](../aria-status-variables/index#aria_pagecache_reads)).
The proportion of dirty pages (the ratio between [Aria\_pagecache\_blocks\_not\_flushed](../aria-status-variables/index#aria_pagecache_blocks_not_flushed) and [Aria\_pagecache\_blocks\_used](../aria-status-variables/index#aria_pagecache_blocks_used)) tells us if the log file is big enough.
The size of the Aria log is determined by [aria\_log\_file\_size](../aria-system-variables/index#aria_log_file_size).
Databases
---------
MariaDB does not have a concept of schema separate from that of a database: in MariaDB SQL, *schema* and *schemas* are synonyms for *database* and *databases*.
When a user connects to MariaDB, they don't connect to a specific database. Instead, they can access any table they have permissions for. There is however a concept of *default database*, see below.
A database is a container for database objects like tables and views. A database serves the following purposes:
* A database is a namespace.
* A database is a logical container to separate objects.
* A database has a default [character set](../character-sets/index) and collation, which are inherited by its tables.
* Permissions can be assigned on a whole database, to make permission maintenance simpler.
* Physical data files are stored in a directory which has the same name as the database to which they belong.
### System Databases
MariaDB has the following system databases:
* [mysql](../the-mysql-database-tables/index) is for internal use only, and should not be read or written directly.
* [information\_schema](../information-schema/index) contains all information that can be found in SQL Server's information\_schema and more. However, while SQL Server's `information_schema` is a schema containing information about the local database, MariaDB's `information_schema` is a database that contains information about all databases.
* [performance\_schema](../performance-schema/index) contains information about MariaDB runtime. It is disabled by default. Enabling it requires setting the [performance\_schema](../performance-schema-system-variables/index#performance_schema) system variable to 1 and restarting MariaDB.
### Default Database
When a user connects to MariaDB, they can optionally specify a default database. A default database can also be specified or changed later, with the [USE](../use/index) command.
Having a default database specified allows one to specify tables without specifying the name of the database where they are located. If no default database is specified, all table names must be fully qualified.
For example, the two following snippets are equivalent:
```
SELECT * FROM my_database.my_table;
-- is equivalent to:
USE my_database;
SELECT * FROM my_table;
```
Even if a default database is specified, tables from other databases can be accessed by specifying their fully qualified names:
```
-- this query joins my_database.my_table to your_database.your_table
USE my_database;
SELECT m.*
FROM my_table m
JOIN your_database.your_table y
ON m.xyz = y.xyz;
```
MariaDB has the [DATABASE()](../database/index) function to determine the current database:
```
SELECT DATABASE();
```
Stored procedures and triggers don't inherit the default database from the session, nor from a calling procedure. In that context, the default database is the database that contains the procedure. `USE` can be used to change it; the new default database is only valid for the rest of the procedure.
The Binary Log
--------------
Different tables can be built using different storage engines. It is important to note that not all engines are transactional, and that different engines implement the transaction logs in different ways. For this reason, MariaDB cannot replicate data from a master to a slave using an equivalent of SQL Server transactional replication.
Instead, it needs a global mechanism to log the changes that are applied to data. This mechanism is the [binary log](../binary-log/index), often abbreviated to binlog.
The binary log can be written in the following formats:
* STATEMENT logs the SQL statements that modify data.
* ROW logs a reference to each modified row (usually the primary key) and the new values that have been added or changed, in a binary format.
* MIXED is a combination of the above formats: STATEMENT is used where it is safe, and ROW is used for statements that could produce a different result on the slave (see below). This is the default format from [MariaDB 10.2](../what-is-mariadb-102/index).
In most cases, STATEMENT is slower because the SQL statement needs to be re-executed by the slave, and because certain statements may produce a different result in the slave (think about queries that use LIMIT without ORDER BY, or the CURRENT\_TIMESTAMP() function). But there are exceptions, and besides, DDL statements are always logged as STATEMENT to avoid flooding the binary log. Therefore, the binary log may well contain both ROW and STATEMENT entries.
See [Binary Log Formats](../binary-log-formats/index).
The binary log allows:
* replication, if enabled on the master;
* promoting a slave to a master, if enabled on that slave;
* incremental backups;
* seeing data as they were in a point of time in the past ([flashback](../flashback/index));
* restoring a backup and re-applying the binary log, skipping a data change that caused problems (human mistake, application bug, SQL injection);
* Change Data Capture (CDC), by streaming the binary log to technologies like Apache Kafka.
If you don't plan to use any of these features on a server, it is possible to [disable](../replication-and-binary-log-system-variables/index#log_bin) the binary log to slightly improve the performance.
The binary log can be inspected using the [mysqlbinlog](../mysqlbinlog/index) utility, which comes with MariaDB. Enabling or disabling the binary log requires restarting MariaDB.
See also [MariaDB Replication Overview for SQL Server Users](../mariadb-replication-overview-for-sql-server-users/index) and [MariaDB Backups Overview for SQL Server Users](../mariadb-backups-overview-for-sql-server-users/index) for a better understanding of how the binary log is used.
Plugins
-------
Storage engines are a special type of [plugin](../plugins/index). But others exist. For example, plugins can add authentication methods, new features, SQL syntax, functions, informative tables, and more.
A plugin may add some server variables and some status variables. Server variables can be used to configure the plugin, and status variables can be used to monitor its activities and status. These variables generally use the plugin's name as a prefix. For example InnoDB has a server variable called innodb\_buffer\_pool\_size to configure the size of its buffer pool, and a status variable called Innodb\_pages\_read which indicates the number of memory pages read from the buffer pool. The category [system variables](../system-variables/index) of the MariaDB Knowledge Base has specific pages for system and status variables associated with various plugins.
Many plugins are installed by default, or available but not installed by default. They can be installed or uninstalled at runtime with SQL statements, like `INSTALL PLUGIN`, `UNINSTALL PLUGIN` and others; see [Plugin SQL Statements](../plugin-sql-statements/index). 3rd party plugins can be made available for installation by simply copying them to the [plugin\_dir](../server-system-variables/index#plugin_dir).
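For example, a plugin distributed with MariaDB can be installed by the name of its shared library (using METADATA\_LOCK\_INFO, mentioned below, as the example):

```
INSTALL SONAME 'metadata_lock_info';
-- List installed plugins and their status:
SHOW PLUGINS;
-- Remove it again:
UNINSTALL SONAME 'metadata_lock_info';
```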
It is important to note that different plugins may have different maturity levels. It is possible to prevent the installation of plugins we don’t consider production-ready by setting the [plugin\_maturity](../server-system-variables/index#plugin_maturity) system variable. For plugins that are distributed with MariaDB, the maturity level is determined by the MariaDB team based on the bugs reported and fixed.
Some plugins are developed by 3rd parties, and some 3rd party plugins are even included in the official MariaDB distributions - the ones available on mariadb.org.
In MariaDB every authentication method (including the default one) is provided by an [authentication plugin](../authentication-plugins/index). A user can be required to use a certain authentication plugin. This gives us much flexibility and control. Windows users may be interested in [gssapi](../authentication-plugin-gssapi/index) (which supports Windows authentication, Kerberos and NTLM) and [named\_pipe](../authentication-plugin-named-pipe/index) (which uses named pipe impersonation).
Other plugins that can be very useful include [userstat](../user-statistics/index), which includes statistics about resources and table usage, and [METADATA\_LOCK\_INFO](../metadata_lock_info/index), which provides information about metadata locks.
Thread Pool
-----------
MariaDB supports a [thread pool](../thread-pool/index). It works differently on UNIX and on Windows. On Windows, it is enabled by default and its implementation is quite similar to SQL Server's, using the native CreateThreadpool API.
If we don't use the thread pool, MariaDB will use its traditional method to handle connections. It consists of using a dedicated thread for each client connection. Creating a new thread has a cost in terms of CPU time. To mitigate this cost, after a client disconnects, the thread may be preserved for a certain time in the [thread cache](../server-system-variables/index#thread_cache_size).
Whichever connection method we use, MariaDB has a maximum number of simultaneous connections, which can be changed at runtime. When the limit is reached, if more clients try to connect they will receive an error. This prevents MariaDB from consuming all the server resources and freezing or crashing. See [Handling Too Many Connections](../handling-too-many-connections/index).
Configuration
-------------
MariaDB has many settings that control the server's behavior. These can be set when starting mysqld ([mysqld options](../mysqld-options/index)), and the vast majority are also accessible as [server system variables](../server-system-variables/index). Variables can be classified in two ways:
* **Dynamic** or **static**;
* **Global**, **session**, or both.
Note that server system variables are not to be confused with [user-defined variables](../user-defined-variables/index). The latter are not used for MariaDB configuration.
### Configuration Files
MariaDB can use several [configuration files](../configuring-mariadb-with-option-files/index). Configuration files are searched in several locations, including in the user directory, and if present they all are read and used. They are read in a consistent order. These locations depend on the operating system; see [Default Option File Locations](../configuring-mariadb-with-option-files/index#default-option-file-locations). It is possible to tell MariaDB which files it should read; see [Global Options Related to Option Files](../configuring-mariadb-with-option-files/index#global-options-related-to-option-files).
On Linux, by default the configuration files are called `my.cnf`. On Windows, by default the configuration files can be called `my.ini` or `my.cnf`. The former is more common.
If a variable is mentioned multiple times in different files, the occurrence that is read last will overwrite the others. Similarly, if a variable is mentioned several times in a single file, the occurrence that is read last overwrites the others.
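The "last read wins" rule can be illustrated with a toy parser. This uses Python's `configparser` purely as an illustration; it is not how MariaDB actually reads option files:

```python
from configparser import ConfigParser

# Two option files setting the same variable; MariaDB would read the
# global file first and the user's file later.
global_cnf = "[mysqld]\nmax_connections = 100\n"
user_cnf = "[mysqld]\nmax_connections = 500\n"

cfg = ConfigParser()
cfg.read_string(global_cnf)  # read first, e.g. /etc/my.cnf
cfg.read_string(user_cnf)    # read later, e.g. ~/.my.cnf

# The occurrence read last overwrites the earlier one.
print(cfg["mysqld"]["max_connections"])  # -> 500
```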
The contents of each configuration file are organized by *option groups*. MariaDB Server and client programs read different groups. The read groups also depend on the MariaDB version. See [Option Groups](../configuring-mariadb-with-option-files/index#option-groups) for the details. Most commonly, the `[server]` or `[mysqld]` groups are used to contain all server configuration. The `[client-server]` group can be used for options that are shared by the server and the clients (like the port to use), to avoid repeating those variables multiple times.
### Dynamic and Static Variables
Dynamic variables have a value that can be changed at runtime, using the [SET](../set/index) SQL statement. Static variables have a value that is decided at startup (see below) and cannot be changed without a restart.
The [Server System Variables](../server-system-variables/index) page states if variables are dynamic or static.
### Scope
A global system variable is one that affects the general behavior of MariaDB. For example [innodb\_buffer\_pool\_size](../innodb-system-variables/index#innodb_buffer_pool_size) determines the size of the InnoDB buffer pool, which is used by read and write operations, no matter which user issued them. A session system variable is one that affects MariaDB behavior for the current connection; changing it will not affect other connected users, or future connections from the current user.
A variable could exist in both the global and session scopes. In this case, the session value is what affects the current connection. When a user connects, the current global value is copied to the session scope. Changing the global value afterward will not change existing connections.
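The copy-on-connect behavior can be modeled with a toy snapshot (this is an illustration, not MariaDB internals; the variable name is just an example):

```python
import copy

# Toy model of scope copying: each new session takes a snapshot
# of the current global values at connect time.
global_vars = {"wait_timeout": 600}

def connect():
    return copy.deepcopy(global_vars)

session_a = connect()
global_vars["wait_timeout"] = 60   # like SET GLOBAL wait_timeout = 60
session_b = connect()

print(session_a["wait_timeout"])   # -> 600: existing session unaffected
print(session_b["wait_timeout"])   # -> 60: new connections see the new value
```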
The [Server System Variables](../server-system-variables/index) page states the scope of each variable.
Global variables and some session variables can only be modified by a user with the [SUPER](../grant/index#global-privileges) privilege (typically root).
### Syntax
To see the value of a system variable:
```
-- global variables:
SELECT @@global.variable_name;
-- session variables:
SELECT @@session.variable_name;
-- or just use the shortcut:
SELECT @@variable_name;
```
A longer syntax, which is mostly useful to get multiple variables, makes use of the same pattern syntax that is used by the [LIKE](../like/index) operator:
```
-- global variables whose name starts with 'innodb':
SHOW GLOBAL VARIABLES LIKE 'innodb%';
-- session variables whose name starts with 'innodb':
SHOW SESSION VARIABLES LIKE 'innodb%';
SHOW VARIABLES LIKE 'innodb%';
```
To modify the global or session value of a dynamic variable:
```
SET @@global.variable_name = 'new value';
SET @@session.variable_name = 'new value';
```
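For example, to raise the server-wide connection limit at runtime (the value 200 here is just an illustration), we could set [max\_connections](../server-system-variables/index#max_connections) and read it back:

```
SET @@global.max_connections = 200;
SELECT @@global.max_connections;
```

The change takes effect immediately for new connections, but it does not survive a server restart.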
Notice that if we modify a global variable in this way, the new value will be lost at server restart. For this reason, if we want the change to persist, we should also update the configuration file.
For further information see:
* The [SET](../set/index) statement.
* The [SHOW VARIABLES](../show-variables/index) statement.
### Setting System Variables with Startup Parameters
System variables can be set at server startup without writing their values into a configuration file. This is useful if we want a value to be set once, until we change it or restart MariaDB. Values passed in this way override values written in the configuration files.
The general rule is that every global variable can be passed as an argument of `mysqld` by prefixing its name with `--` and by replacing every occurrence of `_` with `-` in its name.
For example, to pass `bind_address` as a startup argument:
```
mysqld --bind-address=127.0.0.1
```
### Debugging Configuration
Mistyping a variable can prevent MariaDB from starting. We cannot set a variable that doesn't exist in the MariaDB version in use. In these cases, an error is written in the [error log](../error-log/index).
Having several configuration files and configuration groups, as well as being able to pass variables as command-line arguments, brings a lot of flexibility but can sometimes be confusing. When we are unsure about which values will be used, we can run:
```
mysqld --print-defaults
```
Status Variables
----------------
MariaDB status variables and some system tables allow external tools to monitor a server, building graphs on how they change over time, and allow the user to inspect what is happening inside the server.
[Status variables](../server-status-variables/index) cannot be directly modified by the user. Their values indicate how MariaDB is operating. Their scope can be:
* **Global**, meaning that the value measures activity across the server as a whole.
* **Session**, meaning that the value measures activity taking place in the current session.
Many status variables exist in both scopes. For example, [Cpu\_time](../server-status-variables/index#cpu_time) at the global level indicates how much CPU time was used by the MariaDB process (including all user sessions and all background threads). At the session level, it indicates how much CPU time was used by the current session.
Status variables created by a plugin usually use the plugin name as a prefix.
The [SHOW STATUS](../show-status/index) statement prints the values of the status variables that match a certain pattern.
```
-- Show all InnoDB global status variables
SHOW GLOBAL STATUS LIKE 'innodb%';
-- Show all InnoDB session status variables
SHOW SESSION STATUS LIKE 'innodb%';
SHOW STATUS LIKE 'innodb%';
-- Show global variables that contain the "size" substring:
SHOW GLOBAL STATUS LIKE '%size%';
```
Some status variable values are reset when [FLUSH STATUS](../flush/index#flush-status) is executed. A possible use:
```
DELIMITER ||
BEGIN NOT ATOMIC
  SET @i = 0;
  WHILE @i < 60 DO
    SHOW GLOBAL STATUS LIKE 'Com_select';
    FLUSH STATUS;
    DO SLEEP(1);
    SET @i = @i + 1;
  END WHILE;
END ||
DELIMITER ;
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Identifier Qualifiers Identifier Qualifiers
=====================
Qualifiers are used within SQL statements to reference data structures, such as databases, tables, or columns. For example, typically a SELECT query contains references to some columns and at least one table.
Qualifiers can be composed by one or more [identifiers](../identifier-names/index), where the initial parts affect the context within which the final identifier is interpreted:
* For a database, only the database identifier needs to be specified.
* For objects which are contained in a database (like tables, views, functions, etc) the database identifier can be specified. If no database is specified, the current database is assumed (see [USE](../use/index) and [DATABASE()](../database/index) for more details). If there is no default database and no database is specified, an error is issued.
* For column names, the table and the database are generally obvious from the context of the statement. It is however possible to specify the table identifier, or the database identifier plus the table identifier.
* An identifier is fully-qualified if it contains all possible qualifiers, for example, the following column is fully qualified: `db_name.tbl_name.col_name`.
If a qualifier is composed by more than one identifier, a dot (.) must be used as a separator. All identifiers can be quoted individually. Extra spacing (including new lines and tabs) is allowed.
All the following examples are valid:
* db\_name.tbl\_name.col\_name
* tbl\_name
* `db_name`.`tbl_name`.`col_name`
* `db_name` . `tbl_name`
* db\_name. tbl\_name
If a table identifier is prefixed with a dot (.), the default database is assumed. This syntax is supported for ODBC compliance, but has no practical effect on MariaDB. These qualifiers are equivalent:
* tbl\_name
* . tbl\_name
* .`tbl_name`
* . `tbl_name`
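Putting it together, the qualifier forms above can appear in an ordinary query; the database, table and column names here are purely illustrative:

```
SELECT db_name.tbl_name.col_name
FROM `db_name`.`tbl_name`
WHERE tbl_name.col_name IS NOT NULL;
```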
For DML statements, it is possible to specify a list of the partitions using the PARTITION clause. See [Partition Pruning and Selection](../partition-pruning-and-selection/index) for details.
See Also
--------
* [Identifier Names](../identifier-names/index)
* [USE](../use/index)
* [DATABASE()](../database/index)
mariadb LEFT LEFT
====
Syntax
------
```
LEFT(str,len)
```
Description
-----------
Returns the leftmost `len` characters from the string `str`, or NULL if any argument is NULL.
Examples
--------
```
SELECT LEFT('MariaDB', 5);
+--------------------+
| LEFT('MariaDB', 5) |
+--------------------+
| Maria |
+--------------------+
```
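As stated in the description, the result is NULL if any argument is NULL:

```
SELECT LEFT('MariaDB', NULL), LEFT(NULL, 5);
+-----------------------+---------------+
| LEFT('MariaDB', NULL) | LEFT(NULL, 5) |
+-----------------------+---------------+
| NULL                  | NULL          |
+-----------------------+---------------+
```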
mariadb Stored Functions Stored Functions
=================
A stored function is a defined function that is called from within an SQL statement like a regular function, and returns a single value.
| Title | Description |
| --- | --- |
| [Stored Function Overview](../stored-function-overview/index) | Function called from within an SQL statement, returning a single value |
| [Stored Routine Privileges](../stored-routine-privileges/index) | Privileges associated with stored functions and stored procedures. |
| [CREATE FUNCTION](../create-function/index) | Creates a stored function. |
| [ALTER FUNCTION](../alter-function/index) | Change the characteristics of a stored function. |
| [DROP FUNCTION](../drop-function/index) | Drop a stored function. |
| [SHOW CREATE FUNCTION](../show-create-function/index) | Statement that created the function. |
| [SHOW FUNCTION STATUS](../show-function-status/index) | Stored function characteristics |
| [SHOW FUNCTION CODE](../show-function-code/index) | Representation of the internal implementation of the stored function |
| [Stored Aggregate Functions](../stored-aggregate-functions/index) | Custom aggregate functions. |
| [Binary Logging of Stored Routines](../binary-logging-of-stored-routines/index) | Stored routines require extra consideration when binary logging. |
| [Stored Function Limitations](../stored-function-limitations/index) | Restrictions applying to stored functions |
| [Information Schema ROUTINES Table](../information-schema-routines-table/index) | Stored procedures and stored functions information |
mariadb FROM_DAYS FROM\_DAYS
==========
Syntax
------
```
FROM_DAYS(N)
```
Description
-----------
Given a day number N, returns a DATE value. The day count is based on the number of days from the start of the standard calendar (0000-00-00).
The function is not designed for use with dates before the advent of the Gregorian calendar in October 1582. Results will not be reliable since it doesn't account for the lost days when the calendar changed from the Julian calendar.
This is the converse of the [TO\_DAYS()](../to_days/index) function.
Examples
--------
```
SELECT FROM_DAYS(730669);
+-------------------+
| FROM_DAYS(730669) |
+-------------------+
| 2000-07-03 |
+-------------------+
```
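Since FROM\_DAYS() is the converse of [TO\_DAYS()](../to_days/index), a round trip returns the original day number:

```
SELECT TO_DAYS(FROM_DAYS(730669));
+----------------------------+
| TO_DAYS(FROM_DAYS(730669)) |
+----------------------------+
|                     730669 |
+----------------------------+
```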
mariadb ST_TOUCHES ST\_TOUCHES
===========
Syntax
------
```
ST_TOUCHES(g1,g2)
```
Description
-----------
Returns `1` or `0` to indicate whether geometry *`g1`* spatially touches geometry *`g2`*. Two geometries spatially touch if the interiors of the geometries do not intersect, but the boundary of one of the geometries intersects either the boundary or the interior of the other.
ST\_TOUCHES() uses object shapes, while [TOUCHES()](../touches/index), based on the original MySQL implementation, uses object bounding rectangles.
Examples
--------
```
SET @g1 = ST_GEOMFROMTEXT('POINT(2 0)');
SET @g2 = ST_GEOMFROMTEXT('LINESTRING(2 0, 0 2)');
SELECT ST_TOUCHES(@g1,@g2);
+---------------------+
| ST_TOUCHES(@g1,@g2) |
+---------------------+
| 1 |
+---------------------+
SET @g1 = ST_GEOMFROMTEXT('POINT(2 1)');
SELECT ST_TOUCHES(@g1,@g2);
+---------------------+
| ST_TOUCHES(@g1,@g2) |
+---------------------+
| 0 |
+---------------------+
```
mariadb Storage Engine Index Types Storage Engine Index Types
==========================
This refers to the index\_type definition when creating an index, i.e. BTREE, HASH or RTREE.
For more information on general types of indexes, such as primary keys, unique indexes etc, go to [Getting Started with Indexes](../getting-started-with-indexes/index).
| Storage Engine | Permitted Indexes |
| --- | --- |
| [Aria](../aria/index) | BTREE, RTREE |
| [MyISAM](../myisam/index) | BTREE, RTREE |
| [InnoDB](../innodb/index) | BTREE |
| [MEMORY/HEAP](../memory-storage-engine/index) | HASH, BTREE |
BTREE is generally the default index type. For [MEMORY](../memory-storage-engine/index) tables, HASH is the default. [TokuDB](../tokudb/index) uses a particular data structure called *fractal trees*, which is optimized for data that does not entirely fit in memory.
Understanding the B-tree and hash data structures can help predict how different queries perform on different storage engines that use these data structures in their indexes, particularly for the MEMORY storage engine, which lets you choose B-tree or hash indexes.
B-tree Indexes
--------------
B-tree indexes are used for column comparisons using the >, >=, =, <=, < or BETWEEN operators, as well as for LIKE comparisons that begin with a constant.
For example, the query `SELECT * FROM Employees WHERE First_Name LIKE 'Maria%';` can make use of a B-tree index, while `SELECT * FROM Employees WHERE First_Name LIKE '%aria';` cannot.
B-tree indexes also permit leftmost prefixing for searching of rows.
If the number of rows doesn't change, hash indexes occupy a fixed amount of memory, which is lower than the memory occupied by B-tree indexes.
Hash Indexes
------------
Hash indexes, in contrast, can only be used for equality comparisons, that is, those using the = or <=> operators. They cannot be used for ordering, and provide no information to the optimizer on how many rows exist between two values.
Hash indexes do not permit leftmost prefixing - only the whole index can be used.
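For example, a MEMORY table can declare the index type explicitly with `USING`; the table and column names below are illustrative:

```
CREATE TABLE lookup (
  id INT,
  name VARCHAR(20),
  INDEX (id) USING HASH,   -- equality lookups only
  INDEX (name) USING BTREE -- supports ranges and leftmost prefixes
) ENGINE=MEMORY;
```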
R-tree Indexes
--------------
See [SPATIAL](../spatial/index) for more information.
mariadb Installing Sphinx Installing Sphinx
=================
In order to use the [Sphinx Storage Engine](../sphinxse/index), it is necessary to install the Sphinx daemon.
Many Linux distributions have Sphinx in their repositories. These can be used to install Sphinx instead of following the instructions below, but they are usually quite old versions and don't all include APIs for easy integration. Ubuntu users can use the updated repository at <https://launchpad.net/~builds/+archive/sphinxsearch-rel21> (see instructions below). Alternatively, download from <http://sphinxsearch.com/downloads/release/>
Debian and Ubuntu
-----------------
Ubuntu users can make use of the repository, as follows:
```
sudo add-apt-repository ppa:builds/sphinxsearch-rel21
sudo apt-get update
sudo apt-get install sphinxsearch
```
Alternatively, install as follows:
* The Sphinx package and daemon are named `sphinxsearch`.
* `sudo apt-get install unixodbc libpq5 mariadb-client`
* `sudo dpkg -i sphinxsearch*.deb`
* [Configure Sphinx](../configuring-sphinx/index) as required
* You may need to check `/etc/default/sphinxsearch` to see that `START=yes`
* Start with `sudo service sphinxsearch start` (and stop with `sudo service sphinxsearch stop`)
Red Hat and CentOS
------------------
* The package name is `sphinx` and the daemon `searchd`.
* `sudo yum install postgresql-libs unixODBC`
* `sudo rpm -Uhv sphinx*.rpm`
* [Configure Sphinx](../configuring-sphinx/index) as required
* `service searchd start`
Windows
-------
* Unzip and extract the downloaded zip file
* Move the extracted directory to `C:\Sphinx`
* [Configure Sphinx](../configuring-sphinx/index) as required
* Install as a service:
+ `C:\Sphinx\bin> C:\Sphinx\bin\searchd --install --config C:\Sphinx\sphinx.conf.in --servicename SphinxSearch`
Once Sphinx has been installed, it will need to be [configured](../configuring-sphinx/index).
Full instructions, including details on compiling Sphinx yourself, are available at <http://sphinxsearch.com/docs/current.html>.
mariadb Table Pullout Optimization Table Pullout Optimization
==========================
Table pullout is an optimization for [Semi-join subqueries](../semi-join-subquery-optimizations/index).
The idea of Table Pullout
-------------------------
Sometimes, a subquery can be re-written as a join. For example:
```
select *
from City
where City.Country in (select Country.Code
from Country
where Country.Population < 100*1000);
```
If we know that there can be, at most, one country with a given value of `Country.Code` (we can tell that if we see that table Country has a primary key or unique index over that column), we can re-write this query as:
```
select City.*
from
City, Country
where
City.Country=Country.Code AND Country.Population < 100*1000;
```
Table pullout in action
-----------------------
If one runs [EXPLAIN](../explain/index) for the above query in MySQL 5.1-5.6 or [MariaDB 5.1](../what-is-mariadb-51/index)-5.2, they'll get this plan:
```
MySQL [world]> explain select * from City where City.Country in (select Country.Code from Country where Country.Population < 100*1000);
+----+--------------------+---------+-----------------+--------------------+---------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+---------+-----------------+--------------------+---------+---------+------+------+-------------+
| 1 | PRIMARY | City | ALL | NULL | NULL | NULL | NULL | 4079 | Using where |
| 2 | DEPENDENT SUBQUERY | Country | unique_subquery | PRIMARY,Population | PRIMARY | 3 | func | 1 | Using where |
+----+--------------------+---------+-----------------+--------------------+---------+---------+------+------+-------------+
2 rows in set (0.00 sec)
```
It shows that the optimizer is going to do a full scan on table `City`, and for each city it will do a lookup in table `Country`.
If one runs the same query in [MariaDB 5.3](../what-is-mariadb-53/index), they will get this plan:
```
MariaDB [world]> explain select * from City where City.Country in (select Country.Code from Country where Country.Population < 100*1000);
+----+-------------+---------+-------+--------------------+------------+---------+--------------------+------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------+-------+--------------------+------------+---------+--------------------+------+-----------------------+
| 1 | PRIMARY | Country | range | PRIMARY,Population | Population | 4 | NULL | 37 | Using index condition |
| 1 | PRIMARY | City | ref | Country | Country | 3 | world.Country.Code | 18 | |
+----+-------------+---------+-------+--------------------+------------+---------+--------------------+------+-----------------------+
2 rows in set (0.00 sec)
```
The interesting parts are:
* Both tables have `select_type=PRIMARY`, and `id=1` as if they were in one join.
* The `Country` table is first, followed by the `City` table.
Indeed, if one runs `EXPLAIN EXTENDED` followed by `SHOW WARNINGS`, they will see that the subquery is gone and was replaced with a join:
```
MariaDB [world]> show warnings\G
*************************** 1. row ***************************
Level: Note
Code: 1003
Message: select `world`.`City`.`ID` AS `ID`,`world`.`City`.`Name` AS
`Name`,`world`.`City`.`Country` AS `Country`,`world`.`City`.`Population` AS
`Population`
from `world`.`City` join `world`.`Country` where
((`world`.`City`.`Country` = `world`.`Country`.`Code`) and (`world`.`Country`.
`Population` < (100 * 1000)))
1 row in set (0.00 sec)
```
Changing the subquery into a join allows feeding the join to the join optimizer, which can make a choice between two possible join orders:
1. City -> Country
2. Country -> City
as opposed to the single choice of
1. City->Country
which we had before the optimization.
In the above example, the choice produces a better query plan. Without pullout, the query plan with a subquery would read `(4079 + 1*4079)=8158` table records. With table pullout, the join plan would read `(37 + 37 * 18) = 703` rows. Not all row reads are equal, but generally, reading `10` times fewer table records is faster.
Table pullout fact sheet
------------------------
* Table pullout is possible only in semi-join subqueries.
* Table pullout is based on `UNIQUE`/`PRIMARY` key definitions.
* Doing table pullout does not cut off any possible query plans, so MariaDB will always try to pull out as much as possible.
* Table pullout is able to pull individual tables out of subqueries to their parent selects. If all tables in a subquery have been pulled out, the subquery (i.e. its semi-join) is removed completely.
* One common bit of advice for optimizing MySQL has been "If possible, rewrite your subqueries as joins". Table pullout does exactly that, so manual rewrites are no longer necessary.
Controlling table pullout
-------------------------
There is no separate @@optimizer\_switch flag for table pullout. Table pullout can be disabled by switching off all semi-join optimizations with `SET @@optimizer_switch='semijoin=off'` command.
mariadb Binary Literals Binary Literals
===============
Binary literals can be written in one of the following formats: `b'value'`, `B'value'` or `0bvalue`, where `value` is a string composed by `0` and `1` digits.
Binary literals are interpreted as binary strings, and are convenient to represent [VARBINARY](../varbinary/index), [BINARY](../binary/index) or [BIT](../bit/index) values.
To convert a binary literal into an integer, just add 0.
Examples
--------
Printing the value as a binary string:
```
SELECT 0b1000001;
+-----------+
| 0b1000001 |
+-----------+
| A |
+-----------+
```
Converting the same value into a number:
```
SELECT 0b1000001+0;
+-------------+
| 0b1000001+0 |
+-------------+
| 65 |
+-------------+
```
See Also
--------
* [BIN()](../bin/index)
mariadb mysql.servers Table mysql.servers Table
===================
The `mysql.servers` table contains information about servers as used by the [Spider](../spider/index), [FEDERATED](../federated-storage-engine/index) or [FederatedX](../federatedx/index), [Connect](../connect/index) storage engines (see [CREATE SERVER](../create-server/index)).
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
The `mysql.servers` table contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `Server_name` | `char(64)` | NO | PRI | | |
| `Host` | `char(64)` | NO | | | |
| `Db` | `char(64)` | NO | | | |
| `Username` | `char(80)` | NO | | | |
| `Password` | `char(64)` | NO | | | |
| `Port` | `int(4)` | NO | | 0 | |
| `Socket` | `char(64)` | NO | | | |
| `Wrapper` | `char(64)` | NO | | | `mysql` or `mariadb` |
| `Owner` | `char(64)` | NO | | | |
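Rows are added to this table with [CREATE SERVER](../create-server/index). For example (the server name, host and credentials below are placeholders):

```
CREATE SERVER backend_srv
FOREIGN DATA WRAPPER mysql
OPTIONS (
  HOST '192.168.1.100',
  PORT 3306,
  USER 'remote_user',
  PASSWORD 'secret',
  DATABASE 'remote_db'
);
```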
mariadb Geographic & Geometric Features Geographic & Geometric Features
================================
MariaDB supports spatial extensions that enable the creation, storage and analysis of geographic features. These can be used in the [Aria](../aria/index), [MyISAM](../myisam/index), [InnoDB/XtraDB](../innodb/index) and [ARCHIVE](../archive/index) engines in MariaDB.
[Partitioned tables](../managing-mariadb-partitioning/index) do not support geometric types.
| Title | Description |
| --- | --- |
| [GIS Resources](../gis-resources/index) | Resources for those interested in GIS |
| [GIS features in 5.3.3](../gis-features-in-533/index) | Basic information about the existing spatial features can be found in the G... |
| [Geometry Types](../geometry-types/index) | Supported geometry types. |
| [Geometry Hierarchy](../geometry-hierarchy/index) | The base Geometry class has subclasses for Point, Curve, Surface and GeometryCollection |
| [Geometry Constructors](../geometry-constructors/index) | Geometry constructors |
| [Geometry Properties](../geometry-properties/index) | Geometry properties |
| [Geometry Relations](../geometry-relations/index) | Geometry relations |
| [LineString Properties](../linestring-properties/index) | LineString properties |
| [MBR (Minimum Bounding Rectangle)](../mbr-minimum-bounding-rectangle/index) | |
| [Point Properties](../point-properties/index) | Point properties |
| [Polygon Properties](../polygon-properties/index) | Polygon properties |
| [WKB](../wkb/index) | Well-Known Binary format for geometric data |
| [WKT](../wkt/index) | Well-Known Text geometry representation |
| [MySQL/MariaDB Spatial Support Matrix](../mysqlmariadb-spatial-support-matrix/index) | Table comparing when different spatial features were introduced into MySQL and MariaDB |
| [SPATIAL INDEX](../spatial-index/index) | An index type used for geometric columns. |
| [MariaDB Plans - GIS](../mariadb-plans-gis/index) | Old GIS plans |
| [The maria/5.3-gis tree on Launchpad.](../the-maria53-gis-tree-on-launchpad/index) | Note: This page is obsolete. The information is old, outdated, or otherwise... |
| [GeoJSON](../geojson/index) | GeoJSON functions |
mariadb Labels Labels
======
Syntax
------
```
label: <construct>
[label]
```
Labels are MariaDB [identifiers](../identifier-names/index) which can be used to identify a [BEGIN ... END](../begin-end/index) construct or a loop. They have a maximum length of 16 characters and can be quoted with backticks.
Labels have a start part and an end part. The start part must precede the portion of code it refers to, must be followed by a colon (`:`) and can be on the same or different line. The end part is optional and adds nothing, but can make the code more readable. If used, the end part must precede the construct's delimiter (`;`). Constructs identified by a label can be nested. Each construct can be identified by only one label.
Labels need not be unique in the stored program they belong to. However, a label for an inner loop cannot be identical to a label for an outer loop. In this case, the following error would be produced:
```
ERROR 1309 (42000): Redefining label <label_name>
```
[LEAVE](../leave/index) and [ITERATE](../iterate/index) statements can be used to exit or repeat a portion of code identified by a label. They must be in the same [Stored Routine](../stored-programs-and-views/index), [Trigger](../triggers/index) or [Event](../events/index) which contains the target label.
Below is an example using a simple label that is used to exit a [LOOP](../loop/index):
```
CREATE PROCEDURE `test_sp`()
BEGIN
`my_label`:
LOOP
SELECT 'looping';
LEAVE `my_label`;
END LOOP;
SELECT 'out of loop';
END;
```
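A label can also be the target of [ITERATE](../iterate/index), which restarts the labelled loop. In the following sketch (the procedure and label names are illustrative), the loop body repeats until the counter reaches 3:

```
CREATE PROCEDURE `test_iterate`()
BEGIN
  SET @i = 0;
  `my_loop`:
  LOOP
    SET @i = @i + 1;
    IF @i < 3 THEN
      ITERATE `my_loop`; -- jump back to the start of the labelled loop
    END IF;
    LEAVE `my_loop`;
  END LOOP;
END;
```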
The following label is used to exit a procedure, and has an end part:
```
CREATE PROCEDURE `test_sp`()
`my_label`:
BEGIN
IF @var = 1 THEN
LEAVE `my_label`;
END IF;
DO something();
END `my_label`;
```
mariadb Performance Schema replication_applier_status_by_coordinator Table Performance Schema replication\_applier\_status\_by\_coordinator Table
======================================================================
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**The `replication_applier_status_by_coordinator` table was added in [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/).
The [Performance Schema](../performance-schema/index) replication\_applier\_status\_by\_coordinator table displays the status of the coordinator thread used in multi-threaded replicas to manage multiple worker threads.
It contains the following fields.
| Column | Type | Null | Description |
| --- | --- | --- | --- |
| CHANNEL\_NAME | varchar(256) | NO | Replication channel name. |
| THREAD\_ID | bigint(20) unsigned | YES | The SQL/coordinator thread ID. |
| SERVICE\_STATE | enum('ON','OFF') | NO | ON (thread exists and is active or idle) or OFF (thread no longer exists). |
| LAST\_ERROR\_NUMBER | int(11) | NO | Last error number that caused the SQL/coordinator thread to stop. |
| LAST\_ERROR\_MESSAGE | varchar(1024) | NO | Last error message that caused the SQL/coordinator thread to stop. |
| LAST\_ERROR\_TIMESTAMP | timestamp | NO | Timestamp that shows when the most recent SQL/coordinator error occurred. |
| LAST\_SEEN\_TRANSACTION | char(57) | NO | The transaction the worker has last seen. |
| LAST\_TRANS\_RETRY\_COUNT | int(11) | NO | Total number of retries attempted by last transaction. |
mariadb mysql_find_rows mysql\_find\_rows
=================
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-find-rows` is a symlink to `mysql_find_rows`.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mariadb-find-rows` is the name of the binary, with `mysql_find_rows` a symlink.
`mysql_find_rows` reads files containing SQL statements and extracts statements that match a given regular expression or that contain [USE db\_name](../use/index) or [SET](../set/index) statements. The utility was written for use with update log files (as used prior to MySQL 5.0) and as such expects statements to be terminated with semicolon (;) characters. It may be useful with other files that contain SQL statements as long as statements are terminated with semicolons.
Usage
-----
```
mysql_find_rows [options] [file_name ...]
```
Each file\_name argument should be the name of a file containing SQL statements. If no file names are given, *mysql\_find\_rows* reads the standard input.
Options
-------
mysql\_find\_rows supports the following options:
| Option | Description |
| --- | --- |
| `--help`, `--Information` | Display help and exit. |
| `--regexp=pattern` | Display queries that match the pattern. |
| `--rows=N` | Quit after displaying N queries. |
| `--skip-use-db` | Do not include [USE db\_name](../use/index) statements in the output. |
| `--start_row=N` | Start output from this row (first row is 1). |
Examples
--------
```
mysql_find_rows --regexp=problem_table --rows=20 < update.log
mysql_find_rows --regexp=problem_table update-log.1 update-log.2
```
mariadb Information Schema INNODB_SYS_SEMAPHORE_WAITS Table Information Schema INNODB\_SYS\_SEMAPHORE\_WAITS Table
======================================================
The [Information Schema](../information_schema/index) INNODB\_SYS\_SEMAPHORE\_WAITS table is meant to contain information about current semaphore waits. At present it is not correctly populated. See [MDEV-21330](https://jira.mariadb.org/browse/MDEV-21330).
The [PROCESS privilege](../grant/index#process) is required to view the table.
It contains the following columns:
| Column | Description |
| --- | --- |
| THREAD\_ID | Thread id waiting for semaphore |
| OBJECT\_NAME | Semaphore name |
| FILE | File name where semaphore was requested |
| LINE | Line number on above file |
| WAIT\_TIME | Wait time |
| WAIT\_OBJECT | |
| WAIT\_TYPE | Object type (mutex, rw-lock) |
| HOLDER\_THREAD\_ID | Holder thread id |
| HOLDER\_FILE | File name where semaphore was acquired |
| HOLDER\_LINE | Line number for above |
| CREATED\_FILE | Creation file name |
| CREATED\_LINE | Line number for above |
| WRITER\_THREAD | Last write request thread id |
| RESERVATION\_MODE | Reservation mode (shared, exclusive) |
| READERS | Number of readers if only shared mode |
| WAITERS\_FLAG | Flags |
| LOCK\_WORD | Lock word (for developers) |
| LAST\_READER\_FILE | Removed |
| LAST\_READER\_LINE | Removed |
| LAST\_WRITER\_FILE | Last writer file name |
| LAST\_WRITER\_LINE | Above line number |
| OS\_WAIT\_COUNT | Wait count |
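When the table is populated (see the MDEV issue above for its current status), it can be queried like any other Information Schema table. A minimal sketch, assuming the PROCESS privilege:

```sql
-- Sketch: inspect current semaphore waits, longest waits first.
-- Note: at present this table may not be correctly populated (MDEV-21330).
SELECT THREAD_ID, OBJECT_NAME, FILE, LINE, WAIT_TIME, WAIT_TYPE
FROM information_schema.INNODB_SYS_SEMAPHORE_WAITS
ORDER BY WAIT_TIME DESC;
```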
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb SHOW CREATE TRIGGER SHOW CREATE TRIGGER
===================
Syntax
------
```
SHOW CREATE TRIGGER trigger_name
```
Description
-----------
This statement shows the `[CREATE TRIGGER](../create-trigger/index)` statement that creates the given trigger, as well as the `[SQL\_MODE](../sql-mode/index)` that was used when the trigger was created and the character set used by the connection.
The output of this statement is unreliably affected by the `[sql\_quote\_show\_create](../server-system-variables/index#sql_quote_show_create)` server system variable; see <http://bugs.mysql.com/bug.php?id=12719>.
Examples
--------
```
SHOW CREATE TRIGGER example\G
*************************** 1. row ***************************
Trigger: example
sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,STRICT_ALL_TABLES
,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_
ENGINE_SUBSTITUTION
SQL Original Statement: CREATE DEFINER=`root`@`localhost` TRIGGER example BEFORE
INSERT ON t FOR EACH ROW
BEGIN
SET NEW.c = NEW.c * 2;
END
character_set_client: cp850
collation_connection: cp850_general_ci
Database Collation: utf8_general_ci
Created: 2016-09-29 13:53:34.35
```
**MariaDB starting with [10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)**The `Created` column was added in MySQL 5.7 and [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/) as part of introducing multiple trigger events per action.
See also
--------
* [Trigger Overview](../trigger-overview/index)
* `[CREATE TRIGGER](../create-trigger/index)`
* `[DROP TRIGGER](../drop-trigger/index)`
* `[information\_schema.TRIGGERS Table](../information-schema-triggers-table/index)`
* `[SHOW TRIGGERS](../show-triggers/index)`
* [Trigger Limitations](../trigger-limitations/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Merging from MySQL (obsolete) Merging from MySQL (obsolete)
=============================
**Note:** This page is obsolete. The information is old, outdated, or otherwise currently incorrect. We are keeping the page for historical reasons only. **Do not** rely on the information in this article.
Merging from MySQL into MariaDB
-------------------------------
### Merging code changes from MySQL bzr repository
We generally merge only released versions of MySQL into the MariaDB trunk. This makes it possible to release a well-working version of MariaDB at any time, without having to worry about including half-finished changes from MySQL. Merges of MySQL revisions in-between MySQL releases can still be done (e.g. to reduce the merge task to smaller pieces), but should then be pushed to the maria-5.1-merge branch, not to the main lp:maria branch.
The merge command should thus generally be of this form:
```
bzr merge -rtag:mysql-<MYSQL-VERSION> lp:mysql-server/5.1
```
As a general rule, when the MySQL and MariaDB sides have changes with the same meaning but differing text, pick the MySQL variant when resolving the conflict. This will help reduce the number of conflicts in subsequent merges.
### Buildbot testing
To assist in understanding test failures that arise during the merge, we pull the same revision to be merged into the lp:maria-captains/maria/mysql-5.1-testing tree for buildbot testing. This makes it easy to check whether any failures introduced are also present in the vanilla MySQL tree being merged.
### Helpful tags and diffs
To help keep track of merges, we tag the result of a merge:
```
mariadb-merge-mysql-<MYSQL-VERSION>
```
For example, when merging MySQL 5.1.39, the commit of the merge would be tagged like this:
```
mariadb-merge-mysql-5.1.39
```
The right-hand parent of tag:mariadb-merge-mysql-5.1.39 will be the revision tag:mysql-5.1.39. The left-hand parent will be a revision on the MariaDB trunk.
When merging, these tags and associated revisions can be used to generate some diffs, which are useful when resolving conflicts. Here is a diagram of the history in a merge:
```
B----maria------A0-------A1
\ / /
\ / /
---mysql---Y0------Y1
```
Here,
* `'B'` is the base revision when MariaDB was originally branched from MySQL.
* `'A0'` is the result of the last MySQL merge, eg. `tag:mariadb-merge-mysql-5.1.38`.
* `'Y0'` is the MySQL revision that was last merged, eg. `tag:mysql-5.1.38`.
* `'Y1'` is the MySQL revision to be merged in the new merge, eg. `tag:mysql-5.1.39`.
* `'A1'` is the result of committing the new merge, to be tagged as eg. `tag:mariadb-merge-mysql-5.1.39`.
Then, these diffs can be useful:
* `'bzr diff -rY0..before:A1'` - this is the MariaDB side of changes to be merged.
* `'bzr diff -rY0..Y1'` - this is the MySQL side of changes to be merged.
* `'bzr diff -rA0..before:A1'` - these are the new changes on the MariaDB side to be merged; this can be useful to separate them from other MariaDB-specific changes that have already been resolved against conflicting MySQL changes.
### Merging documentation from MySQL source tarballs
The documentation for MySQL is not maintained in the MySQL source bzr repository. Therefore, changes to MySQL documentation need to be merged separately.
Only some of the MySQL documentation is available under the GPL (man pages, help tables, installation instructions). Notably the MySQL manual is not available under the GPL, and so is not included in MariaDB in any form.
The man pages, help tables, and installation instruction READMEs are obtained from MySQL source tarballs and manually merged into the MariaDB source trees. The procedure for this is as follows:
There is a tree on Launchpad used for tracking merges:
```
lp:~maria-captains/maria/mysql-docs-merge-base
```
(At the time of writing, this procedure only exists for the 5.1 series of MySQL and MariaDB. Additional merge base trees will be needed for other release series.)
This tree must **only** be used to import new documentation files from new MySQL upstream source tarballs. The procedure to import a new set of files when a new MySQL release happens is as follows:
* Download the new MySQL source tarball and unpack it, say to mysql-5.1.38
* run these commands:
```
T=../mysql-5.1.38
bzr branch lp:~maria-captains/maria/mysql-docs-merge-base
cd mysql-docs-merge-base
for i in Docs/INSTALL-BINARY INSTALL-SOURCE INSTALL-WIN-SOURCE support-files/MacOSX/ReadMe.txt scripts/fill_help_tables.sql $(cd "$T" && find man -type f | grep '\.[0-9]$' | grep -v '^man/ndb_' | grep -v '^man/mysqlman.1$') ; do cp "$T/$i" $i; bzr add $i ; done
bzr commit -m"Imported MySQL documentation files from $T"
bzr push lp:~maria-captains/maria/mysql-docs-merge-base
```
* Now do a normal merge from `lp:maria-captains/maria/mysql-docs-merge-base` into `lp:maria`
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Testing HandlerSocket in a Source Distribution Testing HandlerSocket in a Source Distribution
==============================================
[MariaDB 5.5](../what-is-mariadb-55/index)
------------------------------------------
In [MariaDB 5.5](../what-is-mariadb-55/index), which is built using `cmake`, `Makefile.PL` is not generated automatically. If you want to run the perl tests, you will need to create it manually from `Makefile.PL.in`. This is fairly easy to do by replacing the `LIB` and `INC` values with the correct ones. Also, `libhsclient.so` is not built by default; `libhsclient.a` can be found in the `plugin/handler_socket` directory.
[MariaDB 5.3](../what-is-mariadb-53/index)
------------------------------------------
If you want to test or use handlersocket with a source installation of [MariaDB 5.3](../what-is-mariadb-53/index), here is one way to do this:
1. Compile with one of the build scripts that has the `-max` option, like `BUILD/compile-pentium64-max` or `BUILD/compile-pentium64-debug-max`
2. Start mysqld with the test framework
```
cd mysql-test
LD_LIBRARY_PATH=../plugin/handler_socket/libhsclient/.libs \
MTR_VERSION=1 perl mysql-test-run.pl --start-and-exit 1st \
--mysqld=--plugin-dir=../plugin/handler_socket/handlersocket/.libs \
--mysqld=--loose-handlersocket_port=9998 \
--mysqld=--loose-handlersocket_port_wr=9999 \
--master_port=9306 --mysqld=--innodb
```
3. This will end with:
```
Servers started, exiting
```
4. Load handlersocket
```
client/mysql -uroot --protocol=tcp --port=9306 \
-e 'INSTALL PLUGIN handlersocket soname "handlersocket.so"'
```
5. Configure and compile the handlersocket perl module
```
cd plugin/handler_socket/perl-Net-HandlerSocket
perl Makefile.PL
make
```
6. If you would like to install the handlersocket perl module permanently, you should do:
```
make install
```
If you do this, you don't have to set `PERL5LIB` below.
7. Run the handlersocket test suite
```
cd plugin/handler_socket/regtest/test_01_lib
MYHOST=127.0.0.1 MYPORT=9306 LD_LIBRARY_PATH=../../libhsclient/.libs/ \
PERL5LIB=../common:../../perl-Net-HandlerSocket/lib:../../perl-Net-HandlerSocket/blib/arch/auto/Net/HandlerSocket/ ./run.sh
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb JSON_SET JSON\_SET
=========
**MariaDB starting with [10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)**JSON functions were added in [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/).
Syntax
------
```
JSON_SET(json_doc, path, val[, path, val] ...)
```
Description
-----------
Updates or inserts data into a JSON document, returning the result, or NULL if any of the arguments are NULL or the optional path fails to find an object.
An error will occur if the JSON document is invalid, the path is invalid, or if the path contains a `*` or `**` wildcard.
JSON\_SET can update or insert data, while [JSON\_REPLACE](../json_replace/index) can only update, and [JSON\_INSERT](../json_insert/index) only insert.
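The difference between the three functions can be seen on a small literal document:

```sql
SET @doc = '{"a": 1}';
-- JSON_SET updates existing paths and inserts missing ones
SELECT JSON_SET(@doc, '$.a', 10, '$.b', 2);     -- {"a": 10, "b": 2}
-- JSON_REPLACE only updates paths that already exist
SELECT JSON_REPLACE(@doc, '$.a', 10, '$.b', 2); -- {"a": 10}
-- JSON_INSERT only inserts paths that do not yet exist
SELECT JSON_INSERT(@doc, '$.a', 10, '$.b', 2);  -- {"a": 1, "b": 2}
```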
Examples
--------
```
SELECT JSON_SET(Priv, '$.locked', 'true') FROM mysql.global_priv
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb sysbench v0.5 - Three Times Five Minutes Runs on work with 5.1.42 sysbench v0.5 - Three Times Five Minutes Runs on work with 5.1.42
=================================================================
MariaDB/MySQL sysbench benchmark comparison in %.
Each test was run three times for 5 minutes.
```
Number of threads
1 4 8 16 32 64 128
sysbench test
delete 98.99 86.56 97.42 102.60 101.25 98.91 99.99
insert 99.20 97.52 98.18 99.01 99.32 99.76 99.36
oltp_complex_ro 100.34 99.60 98.97 100.34 99.37 99.98 100.25
oltp_complex_rw 115.90 101.87 101.93 100.78 100.45 95.67 105.08
oltp_simple 100.09 99.82 99.73 99.57 99.57 101.48 100.59
select 99.72 99.83 98.85 99.92 101.29 99.34 100.11
update_index 112.62 101.40 99.31 100.21 98.15 99.12 99.98
update_non_index 99.36 99.28 100.20 87.68 97.09 102.04 99.91
(MariaDB q/s / MySQL q/s * 100)
```
Benchmark was run on work: Linux openSUSE 11.1 (x86\_64), dual-socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data\_dir on a single disk.
MariaDB and MySQL were compiled with
```
BUILD/compile-amd64-max
```
MariaDB revision was:
```
-rtag:5.1.42
```
MySQL revision was:
```
-rtag:5.1.42
```
sysbench was run with these parameters:
```
--oltp-table-size=2000000 \
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=InnoDB \
--mysql-user=root \
--mysql-engine-trx=yes
```
and this variable part of parameters
```
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
```
Configuration used for MariaDB and MySQL:
```
--no-defaults \
--skip-grant-tables \
--language=./sql/share/english \
--datadir=$DATA_DIR \
--tmpdir=$TEMP_DIR \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--query_cache_size=0 \
--query_cache_type=0 \
--innodb_data_home_dir=$DATA_DIR \
--innodb_data_file_path=ibdata1:128M:autoextend \
--innodb_log_group_home_dir=$DATA_DIR \
--innodb_buffer_pool_size=1024M \
--innodb_additional_mem_pool_size=32M \
--innodb_log_file_size=256M \
--innodb_log_buffer_size=16M \
--innodb_flush_log_at_trx_commit=1 \
--innodb_lock_wait_timeout=50 \
--innodb_doublewrite=0 \
--innodb_flush_method=O_DIRECT \
--innodb_thread_concurrency=0 \
--innodb_max_dirty_pages_pct=80"
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Trigger Overview Trigger Overview
================
A trigger, as its name suggests, is a set of statements that run, or are triggered, when an event occurs on a table.
Events
------
The event can be an INSERT, an UPDATE or a DELETE. The trigger can be executed BEFORE or AFTER the event. Until [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/), a table could have only one trigger defined for each event/timing combination: for example, a table could only have one BEFORE INSERT trigger.
The [LOAD DATA INFILE](../load-data-infile/index) and [LOAD XML](../load-xml/index) statements invoke INSERT triggers for each row that is being inserted.
The [REPLACE](../replace/index) statement is executed with the following workflow:
* BEFORE INSERT;
* BEFORE DELETE (only if a row is being deleted);
* AFTER DELETE (only if a row is being deleted);
* AFTER INSERT.
The [INSERT ... ON DUPLICATE KEY UPDATE](../insert-on-duplicate-key-update/index) statement, when a row already exists, follows the following workflow:
* BEFORE INSERT;
* BEFORE UPDATE;
* AFTER UPDATE.
Otherwise, it works like a normal INSERT statement.
Note that [TRUNCATE TABLE](../truncate-table/index) does not activate any triggers.
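The REPLACE firing order described above can be observed with a hypothetical log table (a sketch; the table and trigger names are examples):

```sql
-- Hypothetical tables to observe trigger firing order for REPLACE
CREATE TABLE t (id INT PRIMARY KEY, v INT) ENGINE=InnoDB;
CREATE TABLE trigger_log (event VARCHAR(20));
CREATE TRIGGER t_bi BEFORE INSERT ON t FOR EACH ROW
  INSERT INTO trigger_log VALUES ('BEFORE INSERT');
CREATE TRIGGER t_ai AFTER INSERT ON t FOR EACH ROW
  INSERT INTO trigger_log VALUES ('AFTER INSERT');
CREATE TRIGGER t_bd BEFORE DELETE ON t FOR EACH ROW
  INSERT INTO trigger_log VALUES ('BEFORE DELETE');
CREATE TRIGGER t_ad AFTER DELETE ON t FOR EACH ROW
  INSERT INTO trigger_log VALUES ('AFTER DELETE');

INSERT INTO t VALUES (1, 10);
REPLACE INTO t VALUES (1, 20);
-- The REPLACE of the existing row logs, in order:
-- BEFORE INSERT, BEFORE DELETE, AFTER DELETE, AFTER INSERT
SELECT event FROM trigger_log;
```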
Triggers and Errors
-------------------
With non-transactional storage engines, if a BEFORE statement produces an error, the statement will not be executed. Statements that affect multiple rows will fail before inserting the current row.
With transactional engines, triggers are executed in the same transaction as the statement that invoked them.
If a warning is issued with the SIGNAL or RESIGNAL statement (that is, a condition with an SQLSTATE starting with '01'), it will be treated like an error.
Creating a Trigger
------------------
Here's a simple example to demonstrate a trigger in action. Using these two tables as an example:
```
CREATE TABLE animals (id mediumint(9)
NOT NULL AUTO_INCREMENT,
name char(30) NOT NULL,
PRIMARY KEY (`id`));
CREATE TABLE animal_count (animals int);
INSERT INTO animal_count (animals) VALUES(0);
```
We want to increment a counter each time a new animal is added. Here's what the trigger will look like:
```
CREATE TRIGGER increment_animal
AFTER INSERT ON animals
FOR EACH ROW
UPDATE animal_count SET animal_count.animals = animal_count.animals+1;
```
The trigger has:
* a *name* (in this case `increment_animal`)
* a trigger time (in this case *after* the specified trigger event)
* a trigger event (an `INSERT`)
* a table with which it is associated (`animals`)
* a set of statements to run (here, just the one UPDATE statement)
`AFTER INSERT` specifies that the trigger will run *after* an `INSERT`. The trigger could also be set to run *before*, and the statement causing the trigger could be a `DELETE` or an `UPDATE` as well.
Now, if we insert a record into the `animals` table, the trigger will run, incrementing the `animal_count` table:
```
SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 0 |
+---------+
INSERT INTO animals (name) VALUES('aardvark');
INSERT INTO animals (name) VALUES('baboon');
SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 2 |
+---------+
```
For more details on the syntax, see [CREATE TRIGGER](../create-trigger/index).
Dropping Triggers
-----------------
To drop a trigger, use the [DROP TRIGGER](../drop-trigger/index) statement. Triggers are also dropped if the table with which they are associated is also dropped.
```
DROP TRIGGER increment_animal;
```
Triggers Metadata
-----------------
The [Information Schema TRIGGERS Table](../information-schema-triggers-table/index) stores information about triggers.
The [SHOW TRIGGERS](../show-triggers/index) statement returns similar information.
The [SHOW CREATE TRIGGER](../show-create-trigger/index) statement returns a CREATE TRIGGER statement that creates the given trigger.
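For example, to list the triggers defined on the `animals` table from the example above (the LIKE pattern matches the table name):

```sql
SHOW TRIGGERS LIKE 'animals'\G
```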
More Complex Triggers
---------------------
Triggers can consist of multiple statements enclosed by a [BEGIN and END](../begin-end/index). If you're entering multiple statements on the command line, you'll want to temporarily set a new delimiter so that you can use a semicolon to delimit the statements inside your trigger. See [Delimiters in the mysql client](../delimiters-in-the-mysql-client/index) for more.
```
DROP TABLE animals;
UPDATE animal_count SET animals=0;
CREATE TABLE animals (id mediumint(9) NOT NULL AUTO_INCREMENT,
name char(30) NOT NULL,
PRIMARY KEY (`id`))
ENGINE=InnoDB;
DELIMITER //
CREATE TRIGGER the_mooses_are_loose
AFTER INSERT ON animals
FOR EACH ROW
BEGIN
IF NEW.name = 'Moose' THEN
UPDATE animal_count SET animal_count.animals = animal_count.animals+100;
ELSE
UPDATE animal_count SET animal_count.animals = animal_count.animals+1;
END IF;
END; //
DELIMITER ;
INSERT INTO animals (name) VALUES('Aardvark');
SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 1 |
+---------+
INSERT INTO animals (name) VALUES('Moose');
SELECT * FROM animal_count;
+---------+
| animals |
+---------+
| 101 |
+---------+
```
Trigger Errors
--------------
If a trigger contains an error and the engine is transactional, or it is a BEFORE trigger, the trigger will not run, and will prevent the original statement from running as well. If the engine is non-transactional, and it is an AFTER trigger, the trigger will not run, but the original statement will.
Here, we'll drop the above examples, and then recreate the trigger with an error, a field that doesn't exist, first using the default [InnoDB](../innodb/index), a transactional engine, and then again using [MyISAM](../myisam/index), a non-transactional engine.
```
DROP TABLE animals;
CREATE TABLE animals (id mediumint(9) NOT NULL AUTO_INCREMENT,
name char(30) NOT NULL,
PRIMARY KEY (`id`))
ENGINE=InnoDB;
CREATE TRIGGER increment_animal
AFTER INSERT ON animals
FOR EACH ROW
UPDATE animal_count SET animal_count.id = animal_count_id+1;
INSERT INTO animals (name) VALUES('aardvark');
ERROR 1054 (42S22): Unknown column 'animal_count.id' in 'field list'
SELECT * FROM animals;
Empty set (0.00 sec)
```
And now the identical procedure, but with a MyISAM table.
```
DROP TABLE animals;
CREATE TABLE animals (id mediumint(9) NOT NULL AUTO_INCREMENT,
name char(30) NOT NULL,
PRIMARY KEY (`id`))
ENGINE=MyISAM;
CREATE TRIGGER increment_animal
AFTER INSERT ON animals
FOR EACH ROW
UPDATE animal_count SET animal_count.id = animal_count_id+1;
INSERT INTO animals (name) VALUES('aardvark');
ERROR 1054 (42S22): Unknown column 'animal_count.id' in 'field list'
SELECT * FROM animals;
+----+----------+
| id | name |
+----+----------+
| 1 | aardvark |
+----+----------+
```
The following example shows how to use a trigger to validate data. The [SIGNAL](../signal/index) statement is used to intentionally produce an error if the email field is not a valid email. As the example shows, in that case the new row is not inserted (because it is a BEFORE trigger).
```
CREATE TABLE user (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
first_name CHAR(20),
last_name CHAR(20),
email CHAR(100)
)
ENGINE = MyISAM;
DELIMITER //
CREATE TRIGGER bi_user
BEFORE INSERT ON user
FOR EACH ROW
BEGIN
IF NEW.email NOT LIKE '_%@_%.__%' THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Email field is not valid';
END IF;
END; //
DELIMITER ;
INSERT INTO user (first_name, last_name, email) VALUES ('John', 'Doe', 'john_doe.example.net');
ERROR 1644 (45000): Email field is not valid
SELECT * FROM user;
Empty set (0.00 sec)
```
See Also
--------
* [CREATE TRIGGER](../create-trigger/index)
* [DROP TRIGGER](../drop-trigger/index)
* [Information Schema TRIGGERS Table](../information-schema-triggers-table/index)
* [SHOW TRIGGERS](../show-triggers/index)
* [SHOW CREATE TRIGGER](../show-create-trigger/index)
* [Trigger Limitations](../trigger-limitations/index)
* [Creative uses of triggers: Things you people wouldn't believe](https://www.youtube.com/watch?v=-O2up6Fr9M0) (video)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb InnoDB COMPACT Row Format InnoDB COMPACT Row Format
=========================
**MariaDB until [10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/)**In [MariaDB 10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/) and before, the default row format is `COMPACT`.
The `COMPACT` row format is similar to the `REDUNDANT` row format, but it stores data in a more compact manner that requires about 20% less storage.
Using the `COMPACT` Row Format
------------------------------
**MariaDB starting with [10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/)**In [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/) and later, the easiest way to create an InnoDB table that uses the `COMPACT` row format is by setting the [ROW\_FORMAT](../create-table/index#row_format) table option to `COMPACT` in a [CREATE TABLE](../create-table/index) or [ALTER TABLE](../alter-table/index) statement.
It is recommended to set the [innodb\_strict\_mode](../innodb-system-variables/index#innodb_strict_mode) system variable to `ON` when using this row format.
The `COMPACT` row format is supported by both the `Antelope` and the `Barracuda` [file formats](../xtradbinnodb-file-format/index), so tables with this row format can be created regardless of the value of the [innodb\_file\_format](../innodb-system-variables/index#innodb_file_format) system variable.
For example:
```
SET SESSION innodb_strict_mode=ON;
CREATE TABLE tab (
id int,
str varchar(50)
) ENGINE=InnoDB ROW_FORMAT=COMPACT;
```
**MariaDB until [10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/)**In [MariaDB 10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/) and before, the default row format is `COMPACT`. Therefore, in these versions, the easiest way to create an InnoDB table that uses the `COMPACT` row format is by **not** setting the [ROW\_FORMAT](../create-table/index#row_format) table option at all in the [CREATE TABLE](../create-table/index) or [ALTER TABLE](../alter-table/index) statement.
It is recommended to set the [innodb\_strict\_mode](../innodb-system-variables/index#innodb_strict_mode) system variable to `ON` when using this row format.
The `COMPACT` row format is supported by both the `Antelope` and the `Barracuda` [file formats](../innodb-file-format/index), so tables with this row format can be created regardless of the value of the [innodb\_file\_format](../innodb-system-variables/index#innodb_file_format) system variable.
For example:
```
SET SESSION innodb_strict_mode=ON;
CREATE TABLE tab (
id int,
str varchar(50)
) ENGINE=InnoDB;
```
Index Prefixes with the `COMPACT` Row Format
--------------------------------------------
The `COMPACT` row format supports index prefixes up to 767 bytes.
Overflow Pages with the `COMPACT` Row Format
--------------------------------------------
All InnoDB row formats can store certain kinds of data in overflow pages. This allows for the maximum row size of an InnoDB table to be larger than the maximum amount of data that can be stored in the row's main data page. See [Maximum Row Size](#maximum-row-size) for more information about the other factors that can contribute to the maximum row size for InnoDB tables.
In the `COMPACT` row format variable-length columns, such as columns using the [VARBINARY](../varbinary/index), [VARCHAR](../varchar/index), [BLOB](../blob/index) and [TEXT](../text/index) data types, can be partially stored in overflow pages.
InnoDB only considers using overflow pages if the table's row size is greater than half of [innodb\_page\_size](../innodb-system-variables/index#innodb_page_size). If the row size is greater than this, then InnoDB chooses variable-length columns to be stored on overflow pages until the row size is less than half of [innodb\_page\_size](../innodb-system-variables/index#innodb_page_size).
For [VARBINARY](../varbinary/index), [VARCHAR](../varchar/index), [BLOB](../blob/index) and [TEXT](../text/index) columns, only values longer than 767 bytes are considered for storage on overflow pages. Bytes that are stored to track a value's length do not count towards this limit. This limit is only based on the length of the actual column's data.
Fixed-length columns greater than 767 bytes are encoded as variable-length columns, so they can also be stored in overflow pages if the table's row size is greater than half of [innodb\_page\_size](../innodb-system-variables/index#innodb_page_size). Even though a column using the [CHAR](../char/index) data type can hold at most 255 characters, a [CHAR](../char/index) column can still exceed 767 bytes in some cases. For example, a `char(255)` column can exceed 767 bytes if the [character set](../character-sets/index) is `utf8mb4`.
If a column is chosen to be stored on overflow pages, then the first 767 bytes of the column's value and a 20-byte pointer to the column's first overflow page are stored on the main page. Each overflow page is the size of [innodb\_page\_size](../innodb-system-variables/index#innodb_page_size). If a column is too large to be stored on a single overflow page, then it is stored on multiple overflow pages. Each overflow page contains part of the data and a 20-byte pointer to the next overflow page, if a next page exists.
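To confirm which row format a given InnoDB table actually uses, the Information Schema can be queried (a sketch; the `test/tab` name is an example and the PROCESS privilege is required):

```sql
-- Sketch: check the row format InnoDB recorded for a table
SELECT NAME, ROW_FORMAT
FROM information_schema.INNODB_SYS_TABLES
WHERE NAME = 'test/tab';
```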
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb mysql.gtid_slave_pos Table mysql.gtid\_slave\_pos Table
============================
The `mysql.gtid_slave_pos` table is used in [replication](../replication/index) by replica servers to keep track of their current position (the [global transaction ID](../gtid/index) of the last transaction applied). Using the table allows the replica to maintain a consistent value for the [gtid\_slave\_pos](../global-transaction-id/index#gtid_slave_pos) system variable across server restarts. See [Global Transaction ID](../global-transaction-id/index).
You should never attempt to modify the table directly. If you do need to change the global gtid\_slave\_pos value, use `SET GLOBAL gtid_slave_pos = ...` instead.
The table is updated with the new position as part of each transaction committed during replication. This makes it preferable for the table to use the same storage engine as the other tables modified in the transaction; otherwise, a multi-engine transaction is needed, which can reduce performance.
Starting from [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), multiple versions of this table are supported, each using a different storage engine. This is selected with the [gtid\_pos\_auto\_engines option](../global-transaction-id/index#gtid_pos_auto_engines), by giving a comma-separated list of engine names. The server will then on-demand create an extra version of the table using the appropriate storage engine, and select the table version using the same engine as the rest of the transaction, avoiding multi-engine transactions.
For example, when `gtid_pos_auto_engines=innodb,rocksdb`, tables `mysql.gtid_slave_pos_InnoDB` and `mysql.gtid_slave_pos_RocksDB` will be created and used, if needed. If there is no match to the storage engine, the default `mysql.gtid_slave_pos` table will be used; this also happens if non-transactional updates (like MyISAM) are replicated, since there is then no active transaction at the time of the `mysql.gtid_slave_pos` table update.
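For example, per-engine position tables can be enabled as follows (a sketch, MariaDB 10.3.1 or later; the variable can also be set as a server option in the configuration file):

```sql
-- Sketch: let the replica create and use engine-specific
-- gtid_slave_pos tables on demand
SET GLOBAL gtid_pos_auto_engines = 'innodb,rocksdb';
```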
Prior to [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), only the default `mysql.gtid_slave_pos` table is available. In these versions, the table should preferably be using the storage engine that is used for most replicated transactions.
The default `mysql.gtid_slave_pos` table will be initially created using the default storage engine set for the server (which itself defaults to InnoDB). If the application load is primarily non-transactional MyISAM or Aria tables, it can be beneficial to change the storage engine to avoid including an InnoDB update with every operation:
```
ALTER TABLE mysql.gtid_slave_pos ENGINE=MyISAM;
```
The `mysql.gtid_slave_pos` table should not be changed manually in any other way. From [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), it is preferable to use the `gtid_pos_auto_engines` server variable to get the GTID position updates to use the TokuDB or RocksDB storage engine.
Note that for scalability reasons, the automatic creation of a new `mysql.gtid_slave_posXXX` table happens asynchronously when the first transaction with the new storage engine is committed. So the very first few transactions will update the old version of the table, until the new version is created and available.
The table `mysql.gtid_slave_pos` contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `domain_id` | `int(10) unsigned` | NO | PRI | `NULL` | Domain id (see [Global Transaction ID domain ID](../global-transaction-id/index#the-domain-id). |
| `sub_id` | `bigint(20) unsigned` | NO | PRI | `NULL` | Enables multiple parallel transactions within the same `domain_id` to update this table without contention. At any instant, the replication state corresponds to the record with the largest `sub_id` for each `domain_id`. |
| `server_id` | `int(10) unsigned` | NO | | `NULL` | [Server id](../global-transaction-id/index#server_id). |
| `seq_no` | `bigint(20) unsigned` | NO | | `NULL` | Sequence number, an integer that is monotonically increasing for each new event group logged into the binlog. |
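The role of `sub_id` can be sketched in Python (a hypothetical illustration; the rows and the `gtid_position` helper are ours, not MariaDB code):

```python
# Hypothetical rows from mysql.gtid_slave_pos: (domain_id, sub_id, server_id, seq_no).
rows = [
    (0, 101, 1, 4711),
    (0, 102, 1, 4712),  # larger sub_id: this row is domain 0's current state
    (1, 55, 2, 900),
]

def gtid_position(rows):
    """For each domain_id, keep only the row with the largest sub_id."""
    latest = {}
    for domain_id, sub_id, server_id, seq_no in rows:
        if domain_id not in latest or sub_id > latest[domain_id][0]:
            latest[domain_id] = (sub_id, server_id, seq_no)
    # Render as domain-server-seq_no GTIDs.
    return ",".join(f"{d}-{s}-{q}" for d, (_, s, q) in sorted(latest.items()))

print(gtid_position(rows))  # -> 0-1-4712,1-2-900
```

Parallel transactions can each insert their own row concurrently; the reader only has to pick the maximum `sub_id` per domain, so no row-level contention arises.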
From [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), some status variables are available to monitor the use of the different `gtid_slave_pos` table versions:
[Transactions\_gtid\_foreign\_engine](../replication-and-binary-log-status-variables/index#transactions_gtid_foreign_engine)
Number of replicated transactions where the update of the `gtid_slave_pos` table had to use a storage engine that did not otherwise participate in the transaction. A high value can indicate that setting `gtid_pos_auto_engines` might be useful.
[Rpl\_transactions\_multi\_engine](../replication-and-binary-log-status-variables/index#rpl_transactions_multi_engine)
Number of replicated transactions that involved changes in multiple (transactional) storage engines, before considering the update of `gtid_slave_pos`. These are transactions that were already cross-engine, independent of the GTID position update introduced by replication.
[Transactions\_multi\_engine](../replication-and-binary-log-status-variables/index#transactions_multi_engine)
Number of transactions that changed data in multiple (transactional) storage engines. If this is significantly larger than `Rpl_transactions_multi_engine`, it indicates that setting `gtid_pos_auto_engines` could reduce the need for cross-engine transactions.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb FROM_BASE64 FROM\_BASE64
============
Syntax
------
```
FROM_BASE64(str)
```
Description
-----------
Decodes the given base-64 encoded string, returning the result as a binary string. Returns `NULL` if the given string is `NULL` or is not a valid base-64 string.
It is the reverse of the `[TO\_BASE64](../to_base64/index)` function.
There are numerous methods to base-64 encode a string. MariaDB uses the following:
* It encodes alphabet value 62 as '`+`'.
* It encodes alphabet value 63 as '`/`'.
* It codes output in groups of four printable characters: each three bytes of data are encoded using four characters. If the final group is incomplete, it pads the difference with the '`=`' character.
* It divides long output, adding a newline every 76 characters.
* When decoding, it recognizes and ignores newline, carriage return, tab, and space characters.
```
SELECT TO_BASE64('Maria') AS 'Input';
+-----------+
| Input |
+-----------+
| TWFyaWE= |
+-----------+
SELECT FROM_BASE64('TWFyaWE=') AS 'Output';
+--------+
| Output |
+--------+
| Maria |
+--------+
```
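These conventions line up with Python's standard `base64` module, which can be used to illustrate them (a sketch of the encoding rules, not MariaDB's implementation):

```python
import base64

# Alphabet values 62 and 63 map to '+' and '/': the bytes 0xFB 0xEF 0xBE
# are four 6-bit groups of 62, and 0xFF 0xFF 0xFF four groups of 63.
assert base64.b64encode(b"\xfb\xef\xbe") == b"++++"
assert base64.b64encode(b"\xff\xff\xff") == b"////"

# An incomplete final group is padded with '='.
assert base64.b64encode(b"Maria") == b"TWFyaWE="

# Whitespace is ignored when decoding.
assert base64.b64decode(b"TW Fy\naWE=") == b"Maria"

# encodebytes() wraps long output with a newline every 76 characters,
# matching the line-splitting rule above.
assert base64.encodebytes(b"x" * 100).index(b"\n") == 76
```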
mariadb mysqlimport mysqlimport
===========
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-import` is a symlink to `mysqlimport`.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mariadb-import` is the name of the script, with `mysqlimport` a symlink.
`mysqlimport` loads tables from text files in various formats. The base name of each text file must be the name of the table it should be loaded into. If sockets are used to connect to the MariaDB server, the server opens and reads the text file directly; otherwise, the client opens the text file. The SQL command [LOAD DATA INFILE](../load-data-infile/index) is used to import the rows.
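The base-name rule can be sketched as follows (the `table_for` helper is hypothetical, not part of mysqlimport):

```python
from pathlib import Path

def table_for(path):
    """Derive the target table from a data file's base name,
    stripping any extension, as mysqlimport does."""
    return Path(path).name.split(".", 1)[0]

# patient.txt, patient.text and plain patient all load into table `patient`.
assert table_for("/var/tmp/patient.txt") == "patient"
assert table_for("patient.text") == "patient"
assert table_for("patient") == "patient"
```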
Using mysqlimport
-----------------
The command to use `mysqlimport` and the general syntax is:
```
mysqlimport [OPTIONS] database textfile1 [textfile2 ...]
```
### Options
`mysqlimport` supports the following options:
| variable | Description |
| --- | --- |
| `--character-sets-dir=name` | Directory for character set files. |
| `-c cols`, `--columns=cols` | Import data into only these columns. Give the column names in a comma-separated list. This is the same as giving a column list to [LOAD DATA INFILE](../load-data-infile/index). |
| `-C`, `--compress` | Use compression in server/client protocol. |
| `-# [options]` , `--debug[=options]` | Output debug log. Often this is `d:t:o,filename`. The default is `d:t:o`. |
| `--debug-check` | Check memory and open file usage at exit. |
| `--debug-info` | Print some debug info at exit. |
| `--default-auth=plugin` | Default authentication client-side plugin to use. |
| `--default-character-set=name` | Set the default [character set](../data-types-character-sets-and-collations/index). |
| `--defaults-extra-file=name` | Read this file after the global files are read. Must be given as the first option. |
| `--defaults-file=name` | Only read default options from the given file *name*. Must be given as the first option. |
| `--defaults-group-suffix=name` | In addition to the given groups, also read groups with this suffix. |
| `-d`, `--delete` | First delete all rows from table. |
| `--fields-terminated-by=name` | Fields in the input file are terminated by the given string. |
| `--fields-enclosed-by=name` | Fields in the import file are enclosed by the given character. |
| `--fields-optionally-enclosed-by=name` | Fields in the input file are optionally enclosed by the given character. |
| `--fields-escaped-by=name` | Fields in the input file are escaped by the given character. |
| `-f`, `--force` | Continue even if we get an SQL error. |
| `-?`, `--help` | Displays this help and exits. |
| `-h name`, `--host=name` | Connect to host. |
| `-i`, `--ignore` | If duplicate unique key was found, keep old row. |
| `-k`, `--ignore-foreign-keys` | Disable foreign key checks while importing the data. From [MariaDB 10.3.16](https://mariadb.com/kb/en/mariadb-10316-release-notes/), [MariaDB 10.2.25](https://mariadb.com/kb/en/mariadb-10225-release-notes/) and [MariaDB 10.1.41](https://mariadb.com/kb/en/mariadb-10141-release-notes/). |
| `--ignore-lines=n` | Ignore first *n* lines of data infile. |
| `--lines-terminated-by=name` | Lines in the input file are terminated by the given string. |
| `-L`, `--local` | Read all files through the client. |
| `-l`, `--lock-tables` | Lock all tables for write (this disables threads). |
| `--low-priority` | Use LOW\_PRIORITY when updating the table. |
| `--no-defaults` | Don't read default options from any option file. Must be given as the first option. |
| `-p[passwd]`, `--password[=passwd]` | Password to use when connecting to server. If password is not given it's asked from the terminal. Specifying a password on the command line should be considered insecure. You can use an option file to avoid giving the password on the command line. |
| `--pipe`, `-W` | On Windows, connect to the server via a named pipe. This option applies only if the server supports named-pipe connections. |
| `--plugin-dir` | Directory for client-side plugins. |
| `-P num`, `--port=num` | Port number to use for the connection, or 0 for the default, which is determined, in order of preference, from my.cnf, the MYSQL\_TCP\_PORT [environment variable](../mariadb-environment-variables/index), /etc/services, and the built-in default (3306). |
| `--print-defaults` | Print the program argument list and exit. Must be given as the first option. |
| `--protocol=name` | The protocol to use for connection (tcp, socket, pipe, memory). |
| `-r`, `--replace` | If duplicate unique key was found, replace old row. |
| `--shared-memory-base-name` | Shared-memory name to use for Windows connections using shared memory to a local server (started with the `--shared-memory` option). Case-sensitive. |
| `-s`, `--silent` | Silent mode. Produce output only when errors occur. |
| `-S`, `--socket=name` | For connections to localhost, the Unix socket file to use, or, on Windows, the name of the named pipe to use. |
| `--ssl` | Enables [TLS](../data-in-transit-encryption/index). TLS is also enabled even without setting this option when certain other TLS options are set. Starting with [MariaDB 10.2](../what-is-mariadb-102/index), the `--ssl` option will not enable [verifying the server certificate](../secure-connections-overview/index#server-certificate-verification) by default. In order to verify the server certificate, the user must specify the `--ssl-verify-server-cert` option. |
| `--ssl-ca=name` | Defines a path to a PEM file that should contain one or more X509 certificates for trusted Certificate Authorities (CAs) to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. See [Secure Connections Overview: Certificate Authorities (CAs)](../secure-connections-overview/index#certificate-authorities-cas) for more information. This option implies the `--ssl` option. |
| `--ssl-capath=name` | Defines a path to a directory that contains one or more PEM files that should each contain one X509 certificate for a trusted Certificate Authority (CA) to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. The directory specified by this option needs to be run through the `[openssl rehash](https://www.openssl.org/docs/man1.1.1/man1/rehash.html)` command. See [Secure Connections Overview: Certificate Authorities (CAs)](../secure-connections-overview/index#certificate-authorities-cas) for more information. This option is only supported if the client was built with OpenSSL or yaSSL. If the client was built with GnuTLS or Schannel, then this option is not supported. See [TLS and Cryptography Libraries Used by MariaDB](../tls-and-cryptography-libraries-used-by-mariadb/index) for more information about which libraries are used on which platforms. This option implies the `--ssl` option. |
| `--ssl-cert=name` | Defines a path to the X509 certificate file to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. This option implies the `--ssl` option. |
| `--ssl-cipher=name` | List of permitted ciphers or cipher suites to use for [TLS](../data-in-transit-encryption/index). This option implies the `--ssl` option. |
| `--ssl-crl=name` | Defines a path to a PEM file that should contain one or more revoked X509 certificates to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. See [Secure Connections Overview: Certificate Revocation Lists (CRLs)](../secure-connections-overview/index#certificate-revocation-lists-crls) for more information. This option is only supported if the client was built with OpenSSL or Schannel. If the client was built with yaSSL or GnuTLS, then this option is not supported. See [TLS and Cryptography Libraries Used by MariaDB](../tls-and-cryptography-libraries-used-by-mariadb/index) for more information about which libraries are used on which platforms. |
| `--ssl-crlpath=name` | Defines a path to a directory that contains one or more PEM files that should each contain one revoked X509 certificate to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. The directory specified by this option needs to be run through the `[openssl rehash](https://www.openssl.org/docs/man1.1.1/man1/rehash.html)` command. See [Secure Connections Overview: Certificate Revocation Lists (CRLs)](../secure-connections-overview/index#certificate-revocation-lists-crls) for more information. This option is only supported if the client was built with OpenSSL. If the client was built with yaSSL, GnuTLS, or Schannel, then this option is not supported. See [TLS and Cryptography Libraries Used by MariaDB](../tls-and-cryptography-libraries-used-by-mariadb/index) for more information about which libraries are used on which platforms. |
| `--ssl-key=name` | Defines a path to a private key file to use for [TLS](../data-in-transit-encryption/index). This option requires that you use the absolute path, not a relative path. This option implies the `--ssl` option. |
| `--ssl-verify-server-cert` | Enables [server certificate verification](../secure-connections-overview/index#server-certificate-verification). This option is disabled by default. |
| `--tls-version=name` | This option accepts a comma-separated list of TLS protocol versions. A TLS protocol version will only be enabled if it is present in this list. All other TLS protocol versions will not be permitted. See [Secure Connections Overview: TLS Protocol Versions](../secure-connections-overview/index#tls-protocol-versions) for more information. This option was added in [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/). |
| `--use-threads=num` | Load files in parallel. The argument is the number of threads to use for loading data. |
| `-u name`, `--user=name` | User for login if not current user. |
| `-v`, `--verbose` | Print info about the various stages. |
| `-V`, `--version` | Output version information and exit. |
### Option Files
In addition to reading options from the command-line, `mysqlimport` can also read options from [option files](../configuring-mariadb-with-option-files/index). If an unknown option is provided to `mysqlimport` in an option file, then it is ignored.
The following options relate to how MariaDB command-line tools handles option files. They must be given as the first argument on the command-line:
| Option | Description |
| --- | --- |
| `--print-defaults` | Print the program argument list and exit. |
| `--no-defaults` | Don't read default options from any option file. |
| `--defaults-file=#` | Only read default options from the given file #. |
| `--defaults-extra-file=#` | Read this file after the global files are read. |
In [MariaDB 10.2](../what-is-mariadb-102/index) and later, `mysqlimport` is linked with [MariaDB Connector/C](../about-mariadb-connector-c/index). Therefore, it may be helpful to see [Configuring MariaDB Connector/C with Option Files](../configuring-mariadb-connectorc-with-option-files/index) for more information on how MariaDB Connector/C handles option files.
#### Option Groups
`mysqlimport` reads options from the following [option groups](../configuring-mariadb-with-option-files/index#option-groups) from [option files](../configuring-mariadb-with-option-files/index):
| Group | Description |
| --- | --- |
| `[mysqlimport]` | Options read by `mysqlimport`, which includes both MariaDB Server and MySQL Server. |
| `[mariadb-import]` | Options read by `mysqlimport`. Available starting with [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/). |
| `[client]` | Options read by all MariaDB and MySQL [client programs](../clients-utilities/index), which includes both MariaDB and MySQL clients. For example, `mysqldump`. |
| `[client-server]` | Options read by all MariaDB [client programs](../clients-utilities/index) and the MariaDB Server. This is useful for options like socket and port, which are common to the server and the clients. |
| `[client-mariadb]` | Options read by all MariaDB [client programs](../clients-utilities/index). |
### Default Values
| Variable (`--variable-name=value`) and boolean options (`FALSE`\|`TRUE`) | Value (after reading options) |
| --- | --- |
| `character-sets-dir` | *(No default value)* |
| `default-character-set` | latin1 |
| `columns` | *(No default value)* |
| `compress` | FALSE |
| `debug-check` | FALSE |
| `debug-info` | FALSE |
| `delete` | FALSE |
| `fields-terminated-by` | *(No default value)* |
| `fields-enclosed-by` | *(No default value)* |
| `fields-optionally-enclosed-by` | *(No default value)* |
| `fields-escaped-by` | *(No default value)* |
| `force` | FALSE |
| `host` | *(No default value)* |
| `ignore` | FALSE |
| `ignore-lines` | 0 |
| `lines-terminated-by` | *(No default value)* |
| `local` | FALSE |
| `lock-tables` | FALSE |
| `low-priority` | FALSE |
| `port` | 3306 |
| `replace` | FALSE |
| `silent` | FALSE |
| `socket` | /var/run/mysqld/mysqld.sock |
| `ssl` | FALSE |
| `ssl-ca` | *(No default value)* |
| `ssl-capath` | *(No default value)* |
| `ssl-cert` | *(No default value)* |
| `ssl-cipher` | *(No default value)* |
| `ssl-key` | *(No default value)* |
| `ssl-verify-server-cert` | FALSE |
| `use-threads` | 0 |
| `user` | *(No default value)* |
| `verbose` | FALSE |
mariadb MariaDB ColumnStore MariaDB ColumnStore
====================
MariaDB ColumnStore is a columnar storage engine that utilizes a massively parallel distributed data architecture. It's a columnar storage system built by porting InfiniDB 4.6.7 to MariaDB, and released under the GPL license.
From [MariaDB 10.5.4](https://mariadb.com/kb/en/mariadb-1054-release-notes/), it is available as a storage engine for MariaDB Server. Before then, it is only available as a separate download.
MariaDB ColumnStore is designed for big data scaling to process petabytes of data, linear scalability and exceptional performance with real-time response to analytical queries. It leverages the I/O benefits of columnar storage, compression, just-in-time projection, and horizontal and vertical partitioning to deliver tremendous performance when analyzing large data sets.
Documentation for the latest release of ColumnStore is not available on the Knowledge Base. Instead, see:
* [Release Notes](https://mariadb.com/docs/release-notes/mariadb-columnstore-1-5-2-release-notes/)
* [Deployment Instructions](https://mariadb.com/docs/deploy/community-single-columnstore/)
| Title | Description |
| --- | --- |
| [About MariaDB ColumnStore](../about-mariadb-columnstore/index) | About MariaDB ColumnStore. |
| [MariaDB ColumnStore Release Notes](https://mariadb.com/kb/en/columnstore-release-notes/) | MariaDB ColumnStore Release Notes |
| [ColumnStore Getting Started](../columnstore-getting-started/index) | Quick summary of steps needed to install MariaDB ColumnStore |
| [ColumnStore Upgrade Guides](../mariadb-columnstore-columnstore/index) | Documentation on upgrading from prior versions and InfiniDB migration. |
| [ColumnStore Architecture](../columnstore-architecture/index) | MariaDB ColumnStore Architecture |
| [Managing ColumnStore](../managing-columnstore/index) | Managing MariaDB ColumnStore System Environment and Database |
| [ColumnStore Data Ingestion](../columnstore-data-ingestion/index) | How to load and manipulate data into MariaDB ColumnStore |
| [ColumnStore SQL Structure and Commands](../columnstore-sql-structure-and-commands/index) | SQL syntax supported by MariaDB ColumnStore |
| [ColumnStore Performance Tuning](../columnstore-performance-tuning/index) | Information relating to configuring and analyzing the ColumnStore system for optimal performance. |
| [ColumnStore System Variables](../columnstore-system-variables/index) | ColumnStore System Variables |
| [ColumnStore Security Vulnerabilities](../columnstore-security-vulnerabilities/index) | Security vulnerabilities affecting MariaDB ColumnStore |
| [ColumnStore Troubleshooting](../columnstore-troubleshooting/index) | Articles on troubleshooting tips and techniques |
| [StorageManager](../storagemanager/index) | Articles on StorageManager and S3 configuration |
| [Using MariaDB ColumnStore](../using-mariadb-columnstore/index) | Provides details on using third party products and tools with MariaDB ColumnStore |
| [Building ColumnStore in MariaDB](../building-columnstore-in-mariadb/index) | This is a description of how to build and start a local ColumnStore install... |
mariadb SHOW GRANTS SHOW GRANTS
===========
Syntax
------
```
SHOW GRANTS [FOR user|role]
```
Description
-----------
The `SHOW GRANTS` statement lists privileges granted to a particular user or role.
### Users
The statement lists the [GRANT](../grant/index) statement or statements that must be issued to duplicate the privileges that are granted to a MariaDB user account. The account is named using the same format as for the `GRANT` statement; for example, `'jeffrey'@'localhost'`. If you specify only the user name part of the account name, a host name part of `'%'` is used. For additional information about specifying account names, see [GRANT](../grant/index).
```
SHOW GRANTS FOR 'root'@'localhost';
+---------------------------------------------------------------------+
| Grants for root@localhost |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+
```
To list the privileges granted to the account that you are using to connect to the server, you can use any of the following statements:
```
SHOW GRANTS;
SHOW GRANTS FOR CURRENT_USER;
SHOW GRANTS FOR CURRENT_USER();
```
If `SHOW GRANTS FOR CURRENT_USER` (or any of the equivalent syntaxes) is used in `DEFINER` context (such as within a stored procedure that is defined with `SQL SECURITY DEFINER`), the grants displayed are those of the definer and not the invoker.
Note that the `DELETE HISTORY` privilege, introduced in [MariaDB 10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/), was displayed as `DELETE VERSIONING ROWS` when running `SHOW GRANTS` until [MariaDB 10.3.15](https://mariadb.com/kb/en/mariadb-10315-release-notes/) ([MDEV-17655](https://jira.mariadb.org/browse/MDEV-17655)).
### Roles
`SHOW GRANTS` can also be used to view the privileges granted to a [role](../roles/index).
#### Example
```
SHOW GRANTS FOR journalist;
+------------------------------------------+
| Grants for journalist |
+------------------------------------------+
| GRANT USAGE ON *.* TO 'journalist' |
| GRANT DELETE ON `test`.* TO 'journalist' |
+------------------------------------------+
```
See Also
--------
* [Authentication from MariaDB 10.4](../authentication-from-mariadb-104/index)
* [SHOW CREATE USER](../show-create-user/index) shows how the user was created.
* [SHOW PRIVILEGES](../show-privileges/index) shows the privileges supported by MariaDB.
* [Roles](../roles/index)
mariadb Aria Two-step Deadlock Detection Aria Two-step Deadlock Detection
================================
Description
-----------
The [Aria](../aria/index) storage engine can automatically detect and deal with deadlocks (see the [Wikipedia deadlocks article](http://en.wikipedia.org/wiki/Deadlock)).
This feature is controlled by four configuration variables, two that control the search depth and two that control the timeout.
* [deadlock\_search\_depth\_long](../aria-server-system-variables/index#deadlock_search_depth_long)
* [deadlock\_search\_depth\_short](../aria-server-system-variables/index#deadlock_search_depth_short)
* [deadlock\_timeout\_long](../aria-server-system-variables/index#deadlock_timeout_long)
* [deadlock\_timeout\_short](../aria-server-system-variables/index#deadlock_timeout_short)
How it Works
------------
If Aria is ever unable to obtain a lock, we might have a deadlock. There are two primary ways for detecting if a deadlock has actually occurred. First is to search a wait-for graph (see the [wait-for graph on Wikipedia](http://en.wikipedia.org/wiki/Wait-for_graph)) and the second is to just wait and let the deadlock exhibit itself. Aria Two-step Deadlock Detection does a combination of both.
First, if the lock request cannot be granted immediately, we do a short search of the wait-for graph with a small search depth as configured by the `deadlock_search_depth_short` variable. We have a depth limit because the graph can (theoretically) be arbitrarily big and we don't want to recursively search the graph arbitrarily deep. This initial, short search is very fast and most deadlocks will be detected right away. If no deadlock cycles are found with the short search the system waits for the amount of time configured in `deadlock_timeout_short` to see if the lock conflicts will be removed and the lock can be granted. Assuming this did not happen and the lock request still waits, the system then moves on to step two, which is a repeat of the process but this time searching deeper using the `deadlock_search_depth_long`. If no deadlock has been detected, it waits `deadlock_timeout_long` and times out.
When a deadlock is detected the system uses a weighting algorithm to determine which thread in the deadlock should be killed and then kills it.
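The depth-limited search at the heart of both steps can be sketched in Python (our illustration of the idea, not Aria's actual code):

```python
def waits_for_cycle(graph, start, max_depth):
    """Return True if `start` waits (transitively) on itself
    within `max_depth` edges of the wait-for graph."""
    def dfs(node, depth):
        if depth > max_depth:
            return False  # give up: search depth exhausted
        for blocker in graph.get(node, ()):
            if blocker == start or dfs(blocker, depth + 1):
                return True
        return False
    return dfs(start, 1)

# T1 waits for T2, T2 for T3, T3 for T1: a three-transaction deadlock.
graph = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}
print(waits_for_cycle(graph, "T1", max_depth=2))   # False: short search too shallow
print(waits_for_cycle(graph, "T1", max_depth=32))  # True: deeper search finds the cycle
```

When the shallow pass misses a cycle, waiting `deadlock_timeout_short` and then repeating the search with the larger depth catches the remaining cases, at the cost of a slower second pass.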
mariadb ACID: Concurrency Control with Transactions ACID: Concurrency Control with Transactions
===========================================
Database requests happen in linear fashion, one after another. When many users are accessing a database, or one user has a related set of requests to run, it becomes important to ensure that the results remain consistent. To achieve this, you use *transactions*, which are groups of database requests that are processed as a whole. Put another way, they are logical units of work.
To ensure data integrity, transactions need to adhere to four conditions: atomicity, consistency, isolation and durability (ACID).
### Atomicity
*Atomicity* means the entire transaction must complete. If this is not the case, the entire transaction is aborted. This ensures that the database can never be left with partially completed transactions, which lead to poor data integrity. If you remove money out of one bank account, for example, but the second request fails and the system cannot place the money in another bank, both requests must fail. The money cannot simply be lost, or taken from one account without going into the other.
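The bank-transfer example can be demonstrated with any transactional store; here is a minimal sketch using SQLite through Python's `sqlite3` module (not MariaDB, but the same all-or-nothing behavior):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES (1, 100), (2, 0)")
con.commit()

try:
    with con:  # the with-block is one transaction: commit or roll back as a whole
        con.execute("UPDATE account SET balance = balance - 100 WHERE id = 1")
        raise RuntimeError("crash before the deposit")  # second step fails
except RuntimeError:
    pass

# The withdrawal was rolled back together with the failed deposit:
# no money was lost or left in limbo.
print(dict(con.execute("SELECT id, balance FROM account")))  # -> {1: 100, 2: 0}
```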
### Consistency
*Consistency* refers to the state the data is in when certain conditions are met. For example, one rule may be that each invoice must relate to a customer in the customer table. These rules may be broken during the course of a transaction if, for example, the invoice is inserted without a related customer, with the customer only added at a later stage of the transaction. These temporary violations are not visible outside of the transaction, and will always be resolved by the time the transaction is complete.
### Isolation
*Isolation* means that any data being used during the processing of one transaction cannot be used by another transaction until the first transaction is complete. For example, if two people each deposit $100 into an account with a balance of $900, the first transaction must add $100 to $900, and the second must add $100 to the resulting $1000. If the second transaction reads the $900 before the first transaction has completed, both transactions will seem to succeed, but $100 will have gone missing. The second transaction must wait until the first has finished with the data.
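The missing-$100 scenario reduces to a few lines (a plain-Python sketch of the lost-update problem, not real concurrent code):

```python
balance = 900

# Without isolation: both deposits read the balance before either writes.
read_t1 = balance
read_t2 = balance
balance = read_t1 + 100
balance = read_t2 + 100  # clobbers the first deposit
assert balance == 1000   # $100 has gone missing

# With isolation: the second deposit reads only after the first completes.
balance = 900
balance = balance + 100  # transaction 1
balance = balance + 100  # transaction 2 sees the committed 1000
assert balance == 1100
```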
### Durability
*Durability* refers to the fact that once data from a transaction has been committed, its effects will remain, even after a system failure. While a transaction is under way, the effects are not persistent. If the database crashes, backups will always restore it to a consistent state prior to the transaction commencing. Nothing a transaction does should be able to change this fact.
mariadb Table Elimination User Interface Table Elimination User Interface
================================
One can check that table elimination is working by looking at the output of `EXPLAIN [EXTENDED]`, where eliminated tables no longer appear:
```
explain select ACRAT_rating from actors where ACNAM_name='Gary Oldman';
+----+--------------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| 1 | PRIMARY | ac_anchor | index | PRIMARY | PRIMARY | 4 | NULL | 2 | Using index |
| 1 | PRIMARY | ac_name | eq_ref | PRIMARY | PRIMARY | 4 | test.ac_anchor.AC_ID | 1 | Using where |
| 1 | PRIMARY | ac_rating | ref | PRIMARY | PRIMARY | 4 | test.ac_anchor.AC_ID | 1 | |
| 3 | DEPENDENT SUBQUERY | sub | ref | PRIMARY | PRIMARY | 4 | test.ac_rating.AC_ID | 1 | Using index |
+----+--------------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
```
Note that the `ac_dob` table is not in the output. Now let's query the birthdate instead:
```
explain select ACDOB_birthdate from actors where ACNAM_name='Gary Oldman';
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| 1 | PRIMARY | ac_anchor | index | PRIMARY | PRIMARY | 4 | NULL | 2 | Using index |
| 1 | PRIMARY | ac_name | eq_ref | PRIMARY | PRIMARY | 4 | test.ac_anchor.AC_ID | 1 | Using where |
| 1 | PRIMARY | ac_dob | eq_ref | PRIMARY | PRIMARY | 4 | test.ac_anchor.AC_ID | 1 | |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
3 rows in set (0.01 sec)
```
The `ac_dob` table is there, while `ac_rating` and the subquery are gone. Now, if we just want to check whether the actor exists:
```
explain select count(*) from actors where ACNAM_name='Gary Oldman';
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| 1 | PRIMARY | ac_anchor | index | PRIMARY | PRIMARY | 4 | NULL | 2 | Using index |
| 1 | PRIMARY | ac_name | eq_ref | PRIMARY | PRIMARY | 4 | test.ac_anchor.AC_ID | 1 | Using where |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
2 rows in set (0.01 sec)
```
In this case it will eliminate both the `ac_dob` and `ac_rating` tables.
Removing tables from a query does not make the query slower, and it does not cut off any optimization opportunities, so table elimination is unconditional and there are no plans to add any kind of query hints for it.
For debugging purposes there is a `table_elimination=on|off` switch in debug builds of the server.
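In a debug build, this switch can be toggled through the `optimizer_switch` system variable, for example (the flag is absent in release builds):

```
SET optimizer_switch='table_elimination=off';
-- re-run the EXPLAIN statements above: the eliminated tables reappear in the plan
SET optimizer_switch='table_elimination=on';
```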
See Also
--------
* This page is based on the following blog post about table elimination: <http://s.petrunia.net/blog/?p=58>
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Upgrading Between Minor Versions with Galera Cluster Upgrading Between Minor Versions with Galera Cluster
====================================================
Performing a Rolling Upgrade
----------------------------
The following steps can be used to perform a rolling upgrade between minor versions of MariaDB (for example from [MariaDB 10.3.12](https://mariadb.com/kb/en/mariadb-10312-release-notes/) to [MariaDB 10.3.13](https://mariadb.com/kb/en/mariadb-10313-release-notes/)) when Galera Cluster is being used. In a rolling upgrade, each node is upgraded individually, so the cluster is always operational. There is no downtime from the application's perspective.
Before you upgrade, take a backup of your database; this is always a good idea before an upgrade. We recommend [Mariabackup](../mariabackup/index).
For each node, perform the following steps:
1. [Stop MariaDB](../starting-and-stopping-mariadb-starting-and-stopping-mariadb/index).
2. Install the new version of MariaDB and the Galera wsrep provider.
* On Debian, Ubuntu, and other similar Linux distributions, see [Installing MariaDB Packages with APT](../installing-mariadb-deb-files/index#installing-mariadb-packages-with-apt) for more information.
* On RHEL, CentOS, Fedora, and other similar Linux distributions, see [Installing MariaDB Packages with YUM](../yum/index#installing-mariadb-packages-with-yum) for more information.
* On SLES, OpenSUSE, and other similar Linux distributions, see [Installing MariaDB Packages with ZYpp](../installing-mariadb-with-zypper/index#installing-mariadb-packages-with-zypp) for more information.
3. Make any desired changes to configuration options in [option files](../configuring-mariadb-with-option-files/index), such as `my.cnf`. This includes removing any system variables or options that are no longer supported.
4. [Start MariaDB](../starting-and-stopping-mariadb-starting-and-stopping-mariadb/index).
5. Run `[mysql\_upgrade](../mysql_upgrade/index)` with the `--skip-write-binlog` option.
* `mysql_upgrade` does two things:
1. Ensures that the system tables in the `[mysql](../the-mysql-database-tables/index)` database are fully compatible with the new version.
2. Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
When this process is done for one node, move on to the next node.
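As a concrete sketch, on a Debian or Ubuntu node managed by systemd the steps above might look like the following (package and service names are assumptions and vary by distribution and MariaDB series):

```
# 1. Stop MariaDB on this node
sudo systemctl stop mariadb

# 2. Install the new minor version and the matching Galera wsrep provider
sudo apt update
sudo apt install mariadb-server galera

# 3. Review option files (e.g. /etc/mysql/my.cnf) for removed options

# 4. Start MariaDB again
sudo systemctl start mariadb

# 5. Upgrade the system tables without writing to the binary log
mysql_upgrade --skip-write-binlog
```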
Note that when upgrading the Galera wsrep provider, sometimes the Galera protocol version can change. The Galera wsrep provider should not start using the new protocol version until all cluster nodes have been upgraded to the new version, so this is not generally an issue during a rolling upgrade. However, this can cause issues if you restart a non-upgraded node in a cluster where the rest of the nodes have been upgraded.
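After restarting an upgraded node, its wsrep status variables can be used to confirm that it has rejoined and synced with the cluster before you move on, for example:

```
SHOW STATUS LIKE 'wsrep_ready';                -- expect ON
SHOW STATUS LIKE 'wsrep_local_state_comment';  -- expect Synced
SHOW STATUS LIKE 'wsrep_cluster_size';         -- expect the full node count
```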
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb dbForge Fusion: MySQL & MariaDB Plugin for VS dbForge Fusion: MySQL & MariaDB Plugin for VS
=============================================
[**dbForge Fusion**](https://www.devart.com/dbforge/mysql/fusion/) is a powerful add-in for Visual Studio. It simplifies MariaDB database development and boosts data management capacity. With this tool integrated, it is easy to handle database development and administration tasks from within Visual Studio.
dbForge Fusion add-in Key Features:
-----------------------------------
### 1. MySQL & MariaDB Data Import to Visual Studio
Import table data from multiple tables in various formats with the Data Import Wizard
Save import configuration templates for future use
### 2. Data Export from Visual Studio
Export data from multiple tables to various formats
Create templates with export settings for later use
### 3. Visual Studio Schema Compare
Generate schema synchronization scripts
Filter the results of a comparison process
Generate comparison reports
### 4. Data Comparison
Sync data with a command-line interface
Adjust and export comparison results
Generate a schema synchronization script
Filter objects during data comparison
### 5. Integration with Devart dotConnect
Complete codes easily
Enjoy advanced formatting
### 6. Drag&Drop from Database Explorer to WinForms
Drag database objects easily
Execute automatically generated scripts of components
See the result in a convenient grid
### 7. Code Formatter and Syntax Checker
Code snippets
Keyword and object suggestion
Error highlighting
### 8. Code formatting
Embedded formatting profiles that can be swapped
Command-line support for automatic and scheduled formatting tasks
Bulk formatting
### 9. Routine Debugger
Automate the debugging process
Simplify your work with stored routines and triggers
### 10. Object editors
Enjoy visual creation, modification and management of any database and table objects
Download a free 30-day trial of dbForge Fusion for MariaDB and MySQL [here](https://www.devart.com/dbforge/mysql/fusion/download.html).
[Documentation](https://docs.devart.com/fusion-for-mysql)
[Editions](https://www.devart.com/dbforge/mysql/fusion/editions.html)
| Version | Introduced |
| --- | --- |
| dbForge Fusion for MariaDB and MySQL 6.6 | [MariaDB 10.1](../what-is-mariadb-101/index)-10.5 |
| dbForge Fusion for MariaDB and MySQL 6.1 | [MariaDB 10.0](../what-is-mariadb-100/index), [MariaDB 5.5](../what-is-mariadb-55/index) |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Upgrading MariaDB on Windows Upgrading MariaDB on Windows
============================
Minor upgrades
--------------
To install a minor upgrade, e.g. 10.1.27 on top of an existing 10.1.26, with MSI, just download the 10.1.27 MSI and start it. It will automatically do everything needed for a minor upgrade: shut down MariaDB service(s), replace executables and DLLs, and start the service(s) again.
The rest of the article is dedicated to \*major\* upgrades, e.g. 10.1.x to 10.2.y.
General information on upgrade and version coexistence
------------------------------------------------------
This section assumes MSI installations.
First, check everything listed in the Incompatibilities section of the article relating to the version you are upgrading, for example, [Upgrading from MariaDB 10.1 to MariaDB 10.2](../upgrading-from-mariadb-101-to-mariadb-102/index), to make sure you are prepared for the upgrade.
MariaDB (and also MySQL) allows different versions of the product to co-exist on the same machine, as long as these versions differ in either major or minor version numbers. For example, it is possible to have, say, [MariaDB 5.1.51](https://mariadb.com/kb/en/mariadb-5151-release-notes/) and 5.2.6 installed on the same machine.
However, only a single instance of 5.2 can exist. If, for example, 5.2.7 is installed on a machine where 5.2.6 is already installed, the installer will just replace the 5.2.6 executables with the 5.2.7 ones.
Now imagine that both 5.1 and 5.2 are installed on the same machine and we want to upgrade the database instance running on 5.1 to the new version. In this case, special tools are required. Traditionally, `[mysql\_upgrade](../mysql_upgrade/index)` is used to accomplish this. On Windows, the [MySQL upgrade](http://dev.mysql.com/doc/refman/5.5/en/windows-upgrading.html) is a complicated multi-step manual process.
Since [MariaDB 5.2.6](https://mariadb.com/kb/en/mariadb-526-release-notes/), the Windows distribution includes tools that simplify migration between different versions and also allow migration between MySQL and MariaDB.
**Note**. Automatic upgrades are only possible for DB instances that run as a Windows service.
General recommendations
-----------------------
**Important:** Ignore any statement that tells you to *"just uninstall MySQL and install MariaDB"*. This does not work on Windows, never has, and never will. Keep your MySQL installation until after the database has been converted.
The following install/upgrade sequence is recommended for "major" upgrades, such as going from 5.3 to 5.5:
* Install new version, while still retaining the old one
* Upgrade services one by one, as described later in this document (e.g. with mysql\_upgrade\_service). It is recommended to shut down services cleanly before the upgrade.
* Uninstall the old version once the previous step is done.
**Note**. This recommendation differs from the procedure on Unix-like systems, where the upgrade sequence is "uninstall old version, install new version".
Upgrade Wizard
--------------
This is a GUI tool that is typically invoked at the end of a MariaDB installation if upgradable services are found. The UI allows you to select instances you want to upgrade.

mysql\_upgrade\_service
-----------------------
This is a command line tool that performs upgrades. The tool requires full administrative privileges (it has to start and stop services).
Example usage:
```
mysql_upgrade_service --service=MySQL
```
`mysql_upgrade_service` accepts a single parameter — the name of the MySQL or MariaDB service. It performs all the steps to convert a MariaDB/MySQL instance running as the service to the current version.
Migration to 64 bit MariaDB from 32 bit
---------------------------------------
Earlier we said that only a single instance of a "MariaDB <major>.<minor>" version can be installed on the same machine. This is almost correct: MariaDB MSI installations allow 32-bit and 64-bit versions to be installed on the same machine, and in this case it is possible to have two instances of, say, 5.2 installed at the same time, an x86 one and an x64 one. One can use the x64 Upgrade Wizard to upgrade an instance running as a 32-bit process to run as 64-bit.
Upgrading ZIP-based installations
----------------------------------
Both UpgradeWizard and mysql\_upgrade\_service can also be used to upgrade database instances that were installed with the [ZIP installation](../installing-mariadb-windows-zip-packages/index).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb HBase Storage Engine HBase Storage Engine
====================
Data mapping from HBase to SQL
The Jira task for this is [MDEV-122](https://jira.mariadb.org/browse/MDEV-122)
Nobody is working on this feature at the moment. See [Cassandra Storage Engine](../cassandra-storage-engine/index) for a related development that has reached the release stage.
This page describes a feature that's under development. The feature has not been released (even in beta), its interface and function may change, etc.
HBase data model and operations
-------------------------------
### 1.1 HBase data model
* An HBase table consists of rows, which are identified by row key.
* Each row has an arbitrary (potentially, very large) number of columns.
* Columns are split into column groups, column groups define how the columns are stored (not reading some column groups is an optimization).
* Each (row, column) combination can have multiple versions of the data, identified by timestamp.
### 1.2 HBase read operations
HBase API defines two ways to read data:
* Point lookup: get record for a given row\_key.
* Range scan: read all records in the [startRow, stopRow) range.
Both kinds of reads allow one to specify:
* A column family we're interested in
* A particular column we're interested in
The default behavior for versioned columns is to return only the most recent version. The HBase API also allows one to ask for:
* versions of columns that were valid at some specific timestamp value;
* all versions that were valid within a specified [minStamp, maxStamp) interval;
* the N most recent versions.

We'll refer to the above as [VersionedDataConds].
One can see two ways to map HBase tables to SQL tables:
* Per-cell mapping
* Per-row mapping
2. Per-cell mapping
-------------------
HBase shell has 'scan' command, here's an example of its output:
```
hbase(main):007:0> scan 'testtable'
ROW COLUMN+CELL
myrow-1 column=colfam1:q1, timestamp=1297345476469, value=value-1
myrow-2 column=colfam1:q2, timestamp=1297345495663, value=value-2
myrow-2 column=colfam1:q3, timestamp=1297345508999, value=value-3
```
Here, one HBase row produces multiple rows in the query output. Each output row represents one (row\_id, column) combination, so rows with multiple columns (and multiple revisions of column data) can be easily represented.
### 2.1 Table definition
Mapping could be defined as follows:
```
CREATE TABLE hbase_tbl_cells (
row_id binary(MAX_HBASE_ROWID_LEN),
column_family binary(MAX_HBASE_COLFAM_LEN),
column_name binary(MAX_HBASE_NAME_LEN),
timestamp TIMESTAMP,
value BLOB,
PRIMARY KEY (row_id, column_family, column_name, timestamp)
) ENGINE=hbase_cell;
```
There is no need for dynamic columns in this mapping.
* NOTE: It is nice to have SQL table DDLs independent of the content of the backend hbase table. This saves us from the need to synchronize table DDLs between hbase and mysql (NDB cluster had to do this, and they ended up implementing a very complex system for it).
### 2.2 Queries in per-cell mapping
```
# Point-select:
SELECT value
FROM hbase_cell
WHERE
row_id='hbase_row_id' AND
column_family='hbase_column_family' AND column_name='hbase_column'
...
```
```
# Range select:
# (the example uses BETWEEN but we will support arbitrary predicates)
SELECT value
FROM hbase_cell
WHERE
row_id BETWEEN 'hbase_row_id1' AND 'hbase_row_id2' AND
column_family='hbase_column_family' AND column_name='hbase_column'
```
```
# Update a value for {row, column}
UPDATE hbase_cell SET value='value'
WHERE row_id='hbase_row' AND
column_family='col_family' AND column_name='col_name'
```
```
# Add a column into row
INSERT INTO hbase_cell values ('hbase_row', 'col_family','col_name','value');
```
Note that
* accessing versioned data is easy: one can read some particular version, versions within a date range, etc
* it is also easy to select all columns from a certain column family.
### 2.3 Mapping of SQL statements
#### Mapping for SELECT
The table is defined as having a
```
PRIMARY KEY (row_id, column_family, column_name, timestamp)
```
which allows us to make use of the range optimizer to get ranges on
* rowid
* rowid, column\_family
* rowid, column\_family, column\_name
* ...
If a range specifies one row, we can read it with HTable.get(), otherwise we'll have to use HTable.getScanner() and make use of the obtained scanner.
##### Multiple non-equality conditions
The HBase API allows scanning a range of rows, retrieving only certain column names or certain column families. In our SQL mapping, this can be written as:
```
SELECT value
FROM hbase_cell
WHERE
row_id BETWEEN 'hbase_row_id1' AND 'hbase_row_id2' AND
column_family='hbase_column_family' (*)
```
If we feed this into the range optimizer, it will produce a range:
```
('hbase_row_id1', 'hbase_column_family') <= (row_id, column_family) <=
('hbase_row_id2', 'hbase_column_family')
```
which includes all column families for records which satisfy
```
'hbase_row_id1' < rowid < 'hbase_row_id2'
```
This will cause extra data to be read.
Possible solutions:
* Extend the multi-range-read interface to walk the 'SEL\_ARG graph' instead of a list of ranges. This will allow capturing the exact form of conditions like (\*).
* Implement table condition pushdown and perform independent condition analysis.
* Define more indexes, so that ranges are "dense". But what about (row\_id BETWEEN $X AND $Y) AND (timestamp BETWEEN $T1 AND $T2)? No matter which index you define, the range list will not be identical to the WHERE clause.
#### Mapping for INSERT
INSERT will be translated into HTable.checkAndPut(..., value=NULL) call. That way, attempt to insert a {rowid, column} that already exists will fail.
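In SQL terms, this would give INSERT its usual fail-on-duplicate semantics. A sketch, using the per-cell table defined above (the engine was never released, so the exact behavior shown is an assumption):

```
INSERT INTO hbase_tbl_cells (row_id, column_family, column_name, value)
VALUES ('myrow-1', 'colfam1', 'q1', 'value-1'); -- ok: the cell did not exist

INSERT INTO hbase_tbl_cells (row_id, column_family, column_name, value)
VALUES ('myrow-1', 'colfam1', 'q1', 'value-X'); -- fails: {rowid, column} already exists
```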
#### Mapping for DELETE
MySQL/MariaDB's storage engine API handles DELETEs like this:
* Use some way to read the record that should be deleted
* call handler->ha\_delete\_row(). It will delete the row that was last read.
ha\_hbase\_cell can remember {rowid, column\_name} of the record, and then use HBase.checkAndDelete() call, so that we're sure we're deleting what we've read.
If we get a statement in form of
```
DELETE FROM hbase_cell
WHERE rowid='hbase_row_id' AND column_family='...' AND column_name='...';
```
then reading the record is redundant (we could just make one HBase.checkAndDelete() call). This will require some form of query pushdown, though.
#### Mapping for UPDATE
UPDATEs are similar to deletes as long as row\_id, column\_family, and column\_name fields are not changed (that is, only column\_value changes). Like with DELETEs:
* HBase.checkAndPut() call can be used to make sure we're updating what we've read
* one-point UPDATEs may need a shortcut so that we don't have to read the value before we make an update.
If an UPDATE statement changes the row\_id, column\_family, or column\_name field, it becomes a totally different operation. HBase doesn't allow changing the rowid of a record; we can only remove the record with the old rowid and insert a record with the new rowid. HBase doesn't support multi-row transactions, so we'll want to insert the new variant of the record before we delete the old one (I assume that data duplication is better than data loss).
For the first milestone, we could disallow UPDATEs that change row\_id, column\_family or column\_name.
3. Per-row mapping
------------------
Let each row in HBase table be mapped into a row from SQL point of view:
```
SELECT * FROM hbase_table;
row-id column1 column2 column3 column4 ...
------ ------- ------- ------- -------
row1 data1 data2
row2 data3
row3 data4 data5
```
The problem is that the set of columns in an HBase table is not fixed and is potentially very large. The solution is to put all columns into one blob column and use Dynamic Columns (<http://kb.askmonty.org/en/dynamic-columns>) functions to pack/extract values of individual columns:
```
row-id dyn_columns
------ ------------------------------
row1 {column1=data1,column2=data2}
row2 {column3=data3}
row3 {column1=data4,column4=data5}
```
### 3.2 Mapping definition
Table DDL could look like this:
```
CREATE TABLE hbase_tbl_rows (
row_id BINARY(MAX_HBASE_ROWID_LEN),
columns BLOB, -- All columns/values packed in dynamic column format
PRIMARY KEY (row_id)
) ENGINE=hbase_row;
```
(TODO: Does HBase have a MAX\_HBASE\_ROWID\_LEN limit? What is it? We can ignore this: let the user define the 'row\_id' column with whatever limit they desire, and don't do operations with rows that have a row\_id longer than the limit.)
Functions for reading data:
```
COLUMN_GET(dynamic_column, column_nr as type)
COLUMN_EXISTS(dynamic_column, column_nr);
COLUMN_LIST(dynamic_column);
```
Functions for data modification:
```
COLUMN_ADD(dynamic_column, column_nr, value [as type], ...)
COLUMN_DELETE(dynamic_column, column_nr, column_nr, ...);
```
### 3.2.1 Required improvements in Dynamic Columns
Dynamic column functions cannot be used as-is:
* **HBase columns have string names, Dynamic Columns have numbers** (see the column\_nr parameter of the above functions). The set of column names in HBase is potentially very large, and there is no way to get a list of all names: we won't be able to solve this with an enum-style mapping; we'll need real support for string names.
* **HBase has column families, Dynamic Columns do not** . Column family is not just a ':' in the column name. For example, HBase API allows to request "all columns from within a certain column family".
* **HBase supports versioned data, Dynamic Columns do not**. A possible limited solution is to have global/session @@hbase\_timestamp variable which will globally specify the required data version.
* (See also note below about efficient execution)
Names for dynamic columns are covered in [MDEV-377](https://jira.mariadb.org/browse/MDEV-377)
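For illustration, with string names (as they were eventually implemented for dynamic columns in MariaDB 10.0), the per-row mapping could address HBase columns directly by name, with the column family encoded in the name; a sketch:

```
# Pack two HBase cells into the blob, naming columns as 'family:qualifier'
UPDATE hbase_tbl_rows
SET columns = COLUMN_CREATE('colfam1:q1', 'value-1', 'colfam1:q2', 'value-2')
WHERE row_id = 'myrow-1';

# Extract one cell by name
SELECT COLUMN_GET(columns, 'colfam1:q1' AS CHAR)
FROM hbase_tbl_rows
WHERE row_id = 'myrow-1';
```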
### 3.3 Queries in per-row mapping
```
# Point-select, get value of one column
SELECT COLUMN_GET(hbase_tbl.columns, 'column_name' AS INTEGER)
FROM hbase_tbl
WHERE
row_id='hbase_row_id';
```
```
# Range select:
# (the example uses BETWEEN but we will support arbitrary predicates)
SELECT COLUMN_GET(hbase_tbl.columns, 'column_name' AS INTEGER)
FROM hbase_tbl
WHERE
row_id BETWEEN 'hbase_row_id1' AND 'hbase_row_id2';
```
```
# Update or add a column for a row
UPDATE hbase_tbl SET columns=COLUMN_ADD(columns, 'column_name', 'value') WHERE row_id='hbase_row_id1';
```
Using COLUMN\_ADD as above performs no check of whether column\_name=X already existed for that row. If it did, it will be silently overwritten.
ATTENTION: There seems to be no easy way to do something that would be like SQL's INSERT statement, i.e. which would fail if the data you're changing already exists.
One can write a convoluted IF(..., ...) expression that will do the store-if-not-exist operation, but it's bad when basic operations require convoluted expressions.
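For illustration, such a convoluted expression might look like this (a sketch against the per-row table above):

```
# Store 'value' under 'column_name' only if that column does not exist yet
UPDATE hbase_tbl
SET columns = IF(COLUMN_EXISTS(columns, 'column_name'),
                 columns, # column already present: leave the row unchanged
                 COLUMN_ADD(columns, 'column_name', 'value'))
WHERE row_id='hbase_row_id1';
```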
ATTENTION: One could also question whether a statement with semantics of "store this data irrespectively of what was there before" has any value for "remote" storage engine, where you're not the only one who's modifying the data.
```
# Set all columns at once, overwriting the content that was there
UPDATE hbase_tbl SET columns=... WHERE row_id='hbase_row_id1';
UPDATE hbase_tbl SET columns=COLUMN_CREATE('column1', 'foo') WHERE row_id='row1';
```
Note that the last statement will cause all columns except 'column1' to be deleted for row 'row1'. This seems logical for SQL, but there is no such operation in HBase.
```
# Insert a new row with column(s)
INSERT INTO hbase_tbl (row_id, columns) VALUES
('hbase_row_id', COLUMN_CREATE('column_name', 'column-value'));
```
Q: It's not clear how to access versioned data? Can we go without versioned data for the first milestone? (and then, use @@hbase\_timestamp for the second milestone?)
Q: It's not clear how to select "all columns from column family X".
### 3.4 Efficient execution for per-row mapping
#### 3.4.1 Predicate analysis
The table declares:
```
row_id BINARY(MAX_HBASE_ROWID_LEN),
...
PRIMARY KEY (row_id)
```
which allows the range/ref optimizer to be used to extract ranges over the row\_id column.
One can also imagine a realistic query which uses conditions on hbase column names:
```
SELECT column_get(columns, 'some_data') FROM hbase_tbl
WHERE
row_id BETWEEN 'first_interesting_row' and 'last_interesting_row' AND
column_get(columns, 'attribute' as string)='eligible';
```
The range optimizer is unable to catch conditions of the form
```
column_get(columns, 'attribute' as string)='eligible'
```
We'll need to either extend it, or create another condition analyzer.
#### 3.4.2 Dynamic columns optimizations
Currently, MariaDB works with Dynamic Columns with this scenario:
1. When the record is read, the entire blob (=all columns) is read into memory
2. The query operates on the blob with Dynamic Columns Functions (reads and updates values for some columns, etc)
3. [If this is an UPDATE] the entire blob is written back to the table
If we use this approach with HBase, we will cause a lot of overhead with reading/writing of unneeded data.
#### Solution #1: on-demand reads
* When table record is read, don't read any columns, return a blob handle.
* Dynamic Column functions will use the handle to read particular columns. The column is read from HBase only when its value is requested.
This scheme ensures there are no redundant data reads, at the cost of making extra mysqld<->HBase roundtrips (which are likely to be expensive)
#### Solution #2: List of reads
* Walk through the query and find all references to hbase\_table.columns.
* Collect the names of columns that are read, and retrieve only these columns.
This may cause redundant data reads, for example for
```
SELECT COLUMN_GET(hbase_tbl, 'column1' AS INTEGER)
FROM hbase_tbl
WHERE
row_id BETWEEN 'hbase_row_id1' AND 'hbase_row_id2' AND
COLUMN_GET(hbase_tbl, 'column2' AS INTEGER)=1
```
column1 will be read for rows which have column2!=1. This still seems to be better than making extra roundtrips.
There is a question of what should be done when the query has references like
```
COLUMN_GET(hbase_tbl, {non-const-item} AS ...)
```
where it is not possible to tell in advance which columns must be read. Possible approaches are
* retrieve all columns
* fetch columns on demand
* stop the query with an error.
### 3.5 Mapping of SQL statements
#### SELECT
See above sections: we'll be able to analyze condition on row\_id, and a list of columns we need to read. That will give sufficient info to do either an HTable.get() call, or call HTable.getScanner() and use the scanner.
#### INSERT
INSERT should make sure it actually creates the row, it should not overwrite existing rows. This is not trivial in HBase. The closest we can get is to make a number of HTable.checkAndPut() calls, with the checks that we're not overwriting the data.
This will cause INSERT ('row\_id', COLUMN\_CREATE('column1', 'data')) to succeed even if the table already had a row with ('row\_id', COLUMN\_CREATE('column2', 'data')).
Another possible problem is that INSERT can fail mid-way (we will insert only some columns of the record).
#### DELETE
DELETE seems to be ok: we can delete all {rowid, column\_name} combinations for the given row\_id. I'm not sure, perhaps this will require multiple HBase calls.
#### UPDATE
Just like with per-cell mapping, UPDATEs that change the row\_id are actually deletions followed by inserts. We can disallow them in the first milestone.
The most frequent form of UPDATE is expected to be one that changes the value of a column:
```
UPDATE hbase_tbl SET columns=COLUMN_ADD(columns, 'column_name', 'value')
WHERE
row_id='hbase_row_id1' AND COLUMN_GET(columns, 'column_name')='foo';
```
For that one, we need modified Dynamic Column Functions that will represent \*changes\* in the set of columns (and not \*state\*), so that we can avoid reading columns and writing them back.
4. Select-columns mapping
-------------------------
This is a simplification of the per-row mapping. Suppose, the user is only interested in particular columns with names `column1` and `column2`. They create a table with this definition:
```
CREATE TABLE hbase_tbl_cells (
row_id binary(MAX_HBASE_ROWID_LEN),
column1 TYPE,
column2 TYPE,
PRIMARY KEY (row_id),
KEY(column1),
KEY(column2)
) ENGINE=hbase_columns;
```
and then access it. Access is done like in per-row mapping, but without use of dynamic columns.
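For example (a sketch; the same semantics as the per-row queries above, but with ordinary columns):

```
# Point-select over fixed columns
SELECT column1, column2 FROM hbase_tbl_cells WHERE row_id='hbase_row_id1';

# Update one column's value
UPDATE hbase_tbl_cells SET column1='value' WHERE row_id='hbase_row_id1';
```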
This mapping imposes lots of restrictions: it is only possible to select a fixed set of columns, there is no way to specify version of the data, etc.
5. Comparison of the mappings
-----------------------------
If we select two columns from a certain row, per-cell mapping produces "vertical" result, while per-row mapping produces "horizontal" result.
```
# Per-cell:
SELECT column_name, value
FROM hbase_cell
WHERE
row_id='hbase_row_id1' AND
column_family='col_fam' AND column_name IN ('column1','column2')
+-------------+-------+
| column_name | value |
+-------------+-------+
| column1 | val1 |
| column2 | val2 |
+-------------+-------+
```
```
# Per row:
SELECT
COLUMN_GET(columns, 'col_fam:column1') as column1,
COLUMN_GET(columns, 'col_fam:column2') as column2,
FROM hbase_row
WHERE
row_id='hbase_row_id1'
+---------+---------+
| column1 | column2 |
+---------+---------+
| val1 | val2 |
+---------+---------+
```
Per-cell mapping:
* Allows a finer control over selection of versioned data (easy to specify [range of] versions to select), column families, etc.
* Is more suitable for cases when one needs to select an arbitrarily-long list of columns.
Per-row (or select-columns) mapping is easier when:
* one is accessing a limited set of columns
* one needs to access multiple columns from multiple rows (in per-cell mapping this will require an [inefficient?] self-join).
6. Interfacing with HBase
-------------------------
HBase is in Java, and its native client API is a java library. We need to interface with it from C++ storage engine code. Possible options are:
### 6.1 Use Thrift
This requires the HBase installation to run a Thrift server.
### 6.2 Re-implement HBase's network protocol
* It seems to be a custom-made RPC protocol.
* There is an independent re-implementation here: <https://github.com/stumbleupon/asynchbase>. It is 10K lines of Java code, which gives an idea about HBase's protocol complexity
+ It seems to support only a subset of features? I.e. I was unable to find mention of pushed down conditions support?
+ Look in `HBaseRpc.java` for `"Unofficial Hadoop / HBase RPC protocol documentation"`
### 6.3 Use JNI+HBase client protocol
* not sure how complex this is
* Mark has mentioned this has an unacceptable overhead?
7. Consistency, transactions, etc
---------------------------------
* HBase has single-record transactions. Does this mean that HBase storage engine will have MyISAM-like characteristics? e.g. if we fail in the middle of a multi-row UPDATE, there is no way to go back.
* Q: Are the writes important at all? (e.g. if we've had the first version with provide read-only access, would that be useful?) A: Yes?
8. Batching
-----------
Q: will we need joins, i.e. do I need to implement Multi-Range-Read and support Batched Key Access right away?
9. Results of discussion with Monty
-----------------------------------
* Per-row mapping seems to be much more useful than per-cell mapping, because a lot of users have queries that retrieve lots of columns for lots of rows (is this so?)
* Dynamic column format will support string column names (see [MDEV-377](https://jira.mariadb.org/browse/MDEV-377))
* For the first milestone, forget about dynamic column concerns mentioned in "Efficient execution for per-row mapping". It is sufficient that all columns are returned as one blob that physically contains all columns.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Subquery Optimizations Subquery Optimizations
=======================
Articles about subquery optimizations in MariaDB.
| Title | Description |
| --- | --- |
| [Subquery Optimizations Map](../subquery-optimizations-map/index) | Map showing types of subqueries and the optimizer strategies available to handle them |
| [Semi-join Subquery Optimizations](../semi-join-subquery-optimizations/index) | MariaDB has a set of optimizations specifically targeted at semi-join subqueries |
| [Table Pullout Optimization](../table-pullout-optimization/index) | Table pullout is an optimization for Semi-join subqueries. |
| [Non-semi-join Subquery Optimizations](../non-semi-join-subquery-optimizations/index) | Alternative strategies for IN-subqueries that cannot be flattened into semi-joins |
| [Subquery Cache](../subquery-cache/index) | Subquery cache for optimizing the evaluation of correlated subqueries. |
| [Condition Pushdown Into IN subqueries](../condition-pushdown-into-in-subqueries/index) | This article describes Condition Pushdown into IN subqueries as implemented... |
| [Conversion of Big IN Predicates Into Subqueries](../conversion-of-big-in-predicates-into-subqueries/index) | The optimizer will convert big IN predicates into subqueries. |
| [EXISTS-to-IN Optimization](../exists-to-in-optimization/index) | Optimizations for IN subqueries. |
| [Optimizing GROUP BY and DISTINCT Clauses in Subqueries](../optimizing-group-by/index) | MariaDB removes DISTINCT and GROUP BY without HAVING in certain cases |
mariadb Performance Schema socket_summary_by_instance Table Performance Schema socket\_summary\_by\_instance Table
======================================================
The `socket_summary_by_instance` table aggregates timer and byte count statistics for all socket I/O operations by socket instance.
| Column | Description |
| --- | --- |
| `EVENT_NAME` | Socket instrument. |
| `OBJECT_INSTANCE_BEGIN` | Address in memory. |
| `COUNT_STAR` | Number of summarized events. |
| `SUM_TIMER_WAIT` | Total wait time of the summarized events that are timed. |
| `MIN_TIMER_WAIT` | Minimum wait time of the summarized events that are timed. |
| `AVG_TIMER_WAIT` | Average wait time of the summarized events that are timed. |
| `MAX_TIMER_WAIT` | Maximum wait time of the summarized events that are timed. |
| `COUNT_READ` | Number of all read operations, including `RECV`, `RECVFROM`, and `RECVMSG`. |
| `SUM_TIMER_READ` | Total wait time of all read operations that are timed. |
| `MIN_TIMER_READ` | Minimum wait time of all read operations that are timed. |
| `AVG_TIMER_READ` | Average wait time of all read operations that are timed. |
| `MAX_TIMER_READ` | Maximum wait time of all read operations that are timed. |
| `SUM_NUMBER_OF_BYTES_READ` | Bytes read by read operations. |
| `COUNT_WRITE` | Number of all write operations, including `SEND`, `SENDTO`, and `SENDMSG`. |
| `SUM_TIMER_WRITE` | Total wait time of all write operations that are timed. |
| `MIN_TIMER_WRITE` | Minimum wait time of all write operations that are timed. |
| `AVG_TIMER_WRITE` | Average wait time of all write operations that are timed. |
| `MAX_TIMER_WRITE` | Maximum wait time of all write operations that are timed. |
| `SUM_NUMBER_OF_BYTES_WRITE` | Bytes written by write operations. |
| `COUNT_MISC` | Number of all miscellaneous operations not counted above, including `CONNECT`, `LISTEN`, `ACCEPT`, `CLOSE`, and `SHUTDOWN`. |
| `SUM_TIMER_MISC` | Total wait time of all miscellaneous operations that are timed. |
| `MIN_TIMER_MISC` | Minimum wait time of all miscellaneous operations that are timed. |
| `AVG_TIMER_MISC` | Average wait time of all miscellaneous operations that are timed. |
| `MAX_TIMER_MISC` | Maximum wait time of all miscellaneous operations that are timed. |
The corresponding row in the table is deleted when a connection terminates.
You can [TRUNCATE](../truncate-table/index) the table, which will reset all counters to zero.
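For example, a query along these lines (an illustrative column subset) shows which socket instruments have accumulated the most wait time:

```
SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.socket_summary_by_instance
ORDER BY SUM_TIMER_WAIT DESC;
```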
mariadb ColumnStore Bulk Data Loading ColumnStore Bulk Data Loading
=============================
Overview
--------
cpimport is a high-speed bulk load utility that imports data into ColumnStore tables quickly and efficiently. It accepts as input any flat file in which fields of data (i.e. columns in a table) are separated by a delimiter. The default delimiter is the pipe (‘|’) character, but other delimiters, such as commas, may be used as well. The data values must be in the same order as in the create table statement, i.e. column 1 matches the first column in the table, and so on. Date values must be specified in the format 'yyyy-mm-dd'.
cpimport performs the following operations when importing data into a MariaDB ColumnStore database:
* Data is read from specified flat files.
* Data is transformed to fit ColumnStore’s column-oriented storage design.
* Redundant data is tokenized and logically compressed.
* Data is written to disk.
It is important to note that:
* The bulk loads are an append operation to a table so they allow existing data to be read and remain unaffected during the process.
* The bulk loads do not write their data operations to the transaction log; they are not transactional in nature but are considered an atomic operation at this time. Information markers, however, are placed in the transaction log so the DBA is aware that a bulk operation did occur.
* Upon completion of the load operation, a high water mark in each column file is moved in an atomic operation that allows for any subsequent queries to read the newly loaded data. This append operation provides for consistent read but does not incur the overhead of logging the data.
There are two primary steps to using the cpimport utility:
1. Optionally create a job file that is used to load data from a flat file into multiple tables.
2. Run the cpimport utility to perform the data import.
Syntax
------
The simplest form of cpimport command is
```
cpimport dbName tblName [loadFile]
```
The full syntax is like this:
```
cpimport dbName tblName [loadFile]
[-h] [-m mode] [-f filepath] [-d DebugLevel]
[-c readBufferSize] [-b numBuffers] [-r numReaders]
[-e maxErrors] [-B libBufferSize] [-s colDelimiter] [-E EnclosedByChar]
[-C escChar] [-j jobID] [-p jobFilePath] [-w numParsers]
[-n nullOption] [-P pmList] [-i] [-S] [-q batchQty]
positional parameters:
dbName Name of the database to load
tblName Name of table to load
loadFile Optional input file name in current directory,
unless a fully qualified name is given.
If not given, input read from STDIN.
Options:
-b Number of read buffers
-c Application read buffer size(in bytes)
-d Print different level(1-3) debug message
-e Max number of allowable error per table per PM
-f Data file directory path.
Default is current working directory.
In Mode 1, -f represents the local input file path.
In Mode 2, -f represents the PM based input file path.
In Mode 3, -f represents the local input file path.
-l Name of import file to be loaded, relative to -f path. (Cannot be used with -p)
-h Print this message.
-q Batch Quantity, Number of rows distributed per batch in Mode 1
-i Print extended info to console in Mode 3.
-j Job ID. In simple usage, default is the table OID.
unless a fully qualified input file name is given.
-n NullOption (0-treat the string NULL as data (default);
1-treat the string NULL as a NULL value)
-p Path for XML job description file.
-r Number of readers.
-s The delimiter between column values.
-B I/O library read buffer size (in bytes)
-w Number of parsers.
-E Enclosed by character if field values are enclosed.
-C Escape character used in conjunction with 'enclosed by'
character, or as part of NULL escape sequence ('\N');
default is '\'
-I Import binary data; how to treat NULL values:
1 - import NULL values
2 - saturate NULL values
-P List of PMs ex: -P 1,2,3. Default is all PMs.
-S Treat string truncations as errors.
-m mode
1 - rows will be loaded in a distributed manner across PMs.
2 - PM based input files loaded onto their respective PM.
3 - input files will be loaded on the local PM.
```
cpimport modes
--------------
### Mode 1: Bulk Load from a central location with single data source file
In this mode, you run the cpimport from your primary node (mcs1). The source file is located at this primary location and the data from cpimport is distributed across all the nodes. If no mode is specified, then this is the default.
Example:
```
cpimport -m1 mytest mytable mytable.tbl
```
### Mode 2: Bulk load from central location with distributed data source files
In this mode, you run cpimport from your primary node (mcs1). The source data is in already-partitioned data files residing on the PMs. Each PM should have a source data file of the same name, each containing the partitioned data for that PM.
Example:
```
cpimport -m2 mytest mytable -l /home/mydata/mytable.tbl
```
### Mode 3: Parallel distributed bulk load
In this mode, you run cpimport from the individual nodes independently, which will import the source file that exists on that node. Concurrent imports can be executed on every node for the same table.
Example:
```
cpimport -m3 mytest mytable /home/mydata/mytable.tbl
```
Bulk loading data from STDIN
----------------------------
Data can be loaded from STDIN into ColumnStore by simply omitting the loadFile parameter.
Example:
```
cpimport db1 table1
```
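Any command that writes correctly delimited rows to standard output can feed cpimport this way. For example (an illustrative pipeline; the data, database, and table names are hypothetical):

```
printf '1|widget\n2|gadget\n' | cpimport db1 table1
```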
Bulk loading from AWS S3
------------------------
Similarly, the AWS CLI utility can be used to read data from an S3 bucket and pipe the output into cpimport, allowing direct loading from S3. This assumes the aws CLI program has been installed and configured on the host:
Example:
```
aws s3 cp --quiet s3://dthompson-test/trades_bulk.csv - | cpimport test trades -s ","
```
To troubleshoot connectivity problems, remove the --quiet option, which suppresses client logging, including permission errors.
Bulk loading data from S3 bucket directly into SkySQL
-----------------------------------------------------
Since SkySQL is a managed service, the normal command-line utility (cpimport) is not exposed to end users. However, cpimport is still invoked on the database when using LOAD DATA LOCAL INFILE. The following example shows a method for pulling data from an S3 bucket and pushing it to a SkySQL ColumnStore table.
Example:
```
aws s3 cp --quiet s3://my-s3-bucket/flights.csv - | mariadb -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE bts.flights FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n';"
```
Bulk loading output of SELECT FROM Table(s)
-------------------------------------------
Standard input can also be used to pipe the output from an arbitrary SELECT statement directly into cpimport. The SELECT statement may select from non-ColumnStore tables such as [MyISAM](../myisam/index) or [InnoDB](../innodb/index). In the example below, the db2.source\_table is selected from, using the -N flag to remove non-data formatting. The -q flag tells the client not to cache results, which avoids possible timeouts that could cause the load to fail.
Example:
```
mariadb -q -e 'select * from source_table;' -N <source-db> | cpimport -s '\t' <target-db> <target-table>
```
Bulk loading from JSON
----------------------
Let's create a sample ColumnStore table:
```
CREATE DATABASE `json_columnstore`;
USE `json_columnstore`;
CREATE TABLE `products` (
`product_name` varchar(11) NOT NULL DEFAULT '',
`supplier` varchar(128) NOT NULL DEFAULT '',
`quantity` varchar(128) NOT NULL DEFAULT '',
`unit_cost` varchar(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
```
Now let's create a sample products.json file like this:
```
[{
"_id": {
"$oid": "5968dd23fc13ae04d9000001"
},
"product_name": "Sildenafil Citrate",
"supplier": "Wisozk Inc",
"quantity": 261,
"unit_cost": "$10.47"
}, {
"_id": {
"$oid": "5968dd23fc13ae04d9000002"
},
"product_name": "Mountain Juniperus Ashei",
"supplier": "Keebler-Hilpert",
"quantity": 292,
"unit_cost": "$8.74"
}, {
"_id": {
"$oid": "5968dd23fc13ae04d9000003"
},
"product_name": "Dextromethorphan HBR",
"supplier": "Schmitt-Weissnat",
"quantity": 211,
"unit_cost": "$20.53"
}]
```
We can then bulk load data from JSON into ColumnStore by first piping the data to [jq](https://stedolan.github.io/jq/manual/v1.6/) and then to [cpimport](index) using a one-line command.
Example:
```
cat products.json | jq -r '.[] | [.product_name,.supplier,.quantity,.unit_cost] | @csv' | cpimport json_columnstore products -s ',' -E '"'
```
In this example, the JSON data comes from a static JSON file, but the same method will work for output streamed from any data source that produces JSON, such as an API or NoSQL database. For more information on jq, please see the manual [here](https://stedolan.github.io/jq/manual/v1.6/).
Bulk loading into multiple tables
---------------------------------
There are two ways multiple tables can be loaded:
1. Run multiple cpimport jobs simultaneously. Tables per import should be unique or [PMs](../columnstore-performance-module/index) for each import should be unique if using mode 3.
2. Use the colxml utility: colxml creates an XML job file for your database schema before you import data. Multiple tables may be imported by either importing all tables within a schema or listing specific tables using the -t option in colxml. Then run cpimport, which uses the job file generated by colxml. Here is an example of how to use colxml and cpimport to import data into all the tables in a database schema:
```
colxml mytest -j299
cpimport -m1 -j299
```
### colxml syntax
```
Usage: colxml [options] dbName
Options:
-d Delimiter (default '|')
-e Maximum allowable errors (per table)
-h Print this message
-j Job id (numeric)
-l Load file name
-n "name in quotes"
-p Path for XML job description file that is generated
-s "Description in quotes"
-t Table name
-u User
-r Number of read buffers
-c Application read buffer size (in bytes)
-w I/O library buffer size (in bytes), used to read files
-x Extension of file name (default ".tbl")
-E EnclosedByChar (if data has enclosed values)
-C EscapeChar
-b Debug level (1-3)
```
### Example usage of colxml
The following tables comprise a database named ‘tpch2’:
```
MariaDB [tpch2]> show tables;
+-----------------+
| Tables_in_tpch2 |
+-----------------+
| customer        |
| lineitem        |
| nation          |
| orders          |
| part            |
| partsupp        |
| region          |
| supplier        |
+-----------------+
8 rows in set (0.00 sec)
```
1. First, put a delimited input data file for each table in /usr/local/mariadb/columnstore/data/bulk/data/import. Each file should be named <tblname>.tbl.
2. Run colxml for the load job for the ‘tpch2’ database as shown here:
```
/usr/local/mariadb/columnstore/bin/colxml tpch2 -j500
Running colxml with the following parameters:
2015-10-07 15:14:20 (9481) INFO :
Schema: tpch2
Tables:
Load Files:
-b 0
-c 1048576
-d |
-e 10
-j 500
-n
-p /usr/local/mariadb/columnstore/data/bulk/job/
-r 5
-s
-u
-w 10485760
-x tbl
File completed for tables:
tpch2.customer
tpch2.lineitem
tpch2.nation
tpch2.orders
tpch2.part
tpch2.partsupp
tpch2.region
tpch2.supplier
Normal exit.
```
Now run cpimport, using the job file generated by the colxml execution:
```
/usr/local/mariadb/columnstore/bin/cpimport -j 500
Bulkload root directory : /usr/local/mariadb/columnstore/data/bulk
job description file : Job_500.xml
2015-10-07 15:14:59 (9952) INFO : successfully load job file /usr/local/mariadb/columnstore/data/bulk/job/Job_500.xml
2015-10-07 15:14:59 (9952) INFO : PreProcessing check starts
2015-10-07 15:15:04 (9952) INFO : PreProcessing check completed
2015-10-07 15:15:04 (9952) INFO : preProcess completed, total run time : 5 seconds
2015-10-07 15:15:04 (9952) INFO : No of Read Threads Spawned = 1
2015-10-07 15:15:04 (9952) INFO : No of Parse Threads Spawned = 3
2015-10-07 15:15:06 (9952) INFO : For table tpch2.customer: 150000 rows processed and 150000 rows inserted.
2015-10-07 15:16:12 (9952) INFO : For table tpch2.nation: 25 rows processed and 25 rows inserted.
2015-10-07 15:16:12 (9952) INFO : For table tpch2.lineitem: 6001215 rows processed and 6001215 rows inserted.
2015-10-07 15:16:31 (9952) INFO : For table tpch2.orders: 1500000 rows processed and 1500000 rows inserted.
2015-10-07 15:16:33 (9952) INFO : For table tpch2.part: 200000 rows processed and 200000 rows inserted.
2015-10-07 15:16:44 (9952) INFO : For table tpch2.partsupp: 800000 rows processed and 800000 rows inserted.
2015-10-07 15:16:44 (9952) INFO : For table tpch2.region: 5 rows processed and 5 rows inserted.
2015-10-07 15:16:45 (9952) INFO : For table tpch2.supplier: 10000 rows processed and 10000 rows inserted.
```
Handling Differences in Column Order and Values
-----------------------------------------------
If there are some differences between the input file and the table definition, the colxml utility can be used to handle these cases:
* Different order of columns in the input file from table order
* Input file column values to be skipped / ignored.
* Target table columns to be defaulted.
In this case, run the colxml utility (the -t argument can be useful for producing a job file for a single table if preferred) to produce the job XML file, use it as a template for editing, and then use the edited job file when running cpimport.
Consider the following simple table example:
```
create table emp (
emp_id int,
dept_id int,
name varchar(30),
salary int,
hire_date date) engine=columnstore;
```
This would produce a colxml file with the following table element:
```
<Table tblName="test.emp"
loadName="emp.tbl" maxErrRow="10">
<Column colName="emp_id"/>
<Column colName="dept_id"/>
<Column colName="name"/>
<Column colName="salary"/>
<Column colName="hire_date"/>
</Table>
```
If your input file has the data arranged so that hire\_date comes before salary, the following modification will allow correct loading of that data into the original table definition (note the last two Column elements are swapped):
```
<Table tblName="test.emp"
loadName="emp.tbl" maxErrRow="10">
<Column colName="emp_id"/>
<Column colName="dept_id"/>
<Column colName="name"/>
<Column colName="hire_date"/>
<Column colName="salary"/>
</Table>
```
The following example would ignore the last entry in the file and default salary to its default value (in this case null):
```
<Table tblName="test.emp"
loadName="emp.tbl" maxErrRow="10">
<Column colName="emp_id"/>
<Column colName="dept_id"/>
<Column colName="name"/>
<Column colName="hire_date"/>
<IgnoreField/>
<DefaultColumn colName="salary"/>
</Table>
```
* IgnoreField instructs cpimport to ignore and skip the particular value at that position in the file.
* DefaultColumn instructs cpimport to default the current table column and not move the column pointer forward to the next delimiter.
Both instructions can be used independently and as many times as makes sense for your data and table definition.
Binary Source Import
--------------------
It is possible to import from a binary file instead of a CSV file, using fixed-length rows of binary data. This is done with the '-I' flag, which has two modes:
* -I1 - binary mode with NULLs accepted. Numeric fields containing NULL will be treated as NULL unless the column has a default value.
* -I2 - binary mode with NULLs saturated. NULLs in numeric fields will be saturated.
Example:

```
cpimport -I1 mytest mytable /home/mydata/mytable.bin
```
The following table shows how to represent the data in the binary format:
| Datatype | Description |
| --- | --- |
| INT/TINYINT/SMALLINT/BIGINT | Little-endian format for the numeric data |
| FLOAT/DOUBLE | IEEE format native to the computer |
| CHAR/VARCHAR | Data padded with '\0' for the length of the field. An entry that is all '\0' is treated as NULL |
| DATE | Using the Date struct below |
| DATETIME | Using the DateTime struct below |
| DECIMAL | Stored using an integer representation of the DECIMAL without the decimal point. With a precision/width of 2 or less, 2 bytes should be used; 3-4 should use 3 bytes; 4-9 should use 4 bytes; and 10+ should use 8 bytes |
For NULL values the following table should be used:
| Datatype | Signed NULL | Unsigned NULL |
| --- | --- | --- |
| BIGINT | 0x8000000000000000ULL | 0xFFFFFFFFFFFFFFFEULL |
| INT | 0x80000000 | 0xFFFFFFFE |
| SMALLINT | 0x8000 | 0xFFFE |
| TINYINT | 0x80 | 0xFE |
| DECIMAL | As equiv. INT | As equiv. INT |
| FLOAT | 0xFFAAAAAA | N/A |
| DOUBLE | 0xFFFAAAAAAAAAAAAAULL | N/A |
| DATE | 0xFFFFFFFE | N/A |
| DATETIME | 0xFFFFFFFFFFFFFFFEULL | N/A |
| CHAR/VARCHAR | Fill with '\0' | N/A |
### Date Struct
```
struct Date
{
unsigned spare : 6;
unsigned day : 6;
unsigned month : 4;
unsigned year : 16;
};
```
The spare bits in the Date struct must be set to 0x3E.
### DateTime Struct
```
struct DateTime
{
unsigned msecond : 20;
unsigned second : 6;
unsigned minute : 6;
unsigned hour : 6;
unsigned day : 6;
unsigned month : 4;
unsigned year : 16;
};
```
Working Folders & Logging
-------------------------
As of version 1.4, **cpimport** uses the `/var/lib/columnstore/bulk` folder for all work being done. This folder contains:
1. Logs
2. Rollback info
3. Job info
4. A staging folder
The log folder typically contains:
```
-rw-r--r--. 1 root root 0 Dec 29 06:41 cpimport_1229064143_21779.err
-rw-r--r--. 1 root root 1146 Dec 29 06:42 cpimport_1229064143_21779.log
```
A typical log might look like this:
```
2020-12-29 06:41:44 (21779) INFO : Running distributed import (mode 1) on all PMs...
2020-12-29 06:41:44 (21779) INFO2 : /usr/bin/cpimport.bin -s , -E " -R /tmp/columnstore_tmp_files/BrmRpt112906414421779.rpt -m 1 -P pm1-21779 -T SYSTEM -u388952c1-4ab8-46d6-9857-c44827b1c3b9 bts flights
2020-12-29 06:41:58 (21779) INFO2 : Received a BRM-Report from 1
2020-12-29 06:41:58 (21779) INFO2 : Received a Cpimport Pass from PM1
2020-12-29 06:42:03 (21779) INFO2 : Received a BRM-Report from 2
2020-12-29 06:42:03 (21779) INFO2 : Received a Cpimport Pass from PM2
2020-12-29 06:42:03 (21779) INFO2 : Received a BRM-Report from 3
2020-12-29 06:42:03 (21779) INFO2 : BRM updated successfully
2020-12-29 06:42:03 (21779) INFO2 : Received a Cpimport Pass from PM3
2020-12-29 06:42:04 (21779) INFO2 : Released Table Lock
2020-12-29 06:42:04 (21779) INFO2 : Cleanup succeed on all PMs
2020-12-29 06:42:04 (21779) INFO : For table bts.flights: 374573 rows processed and 374573 rows inserted.
2020-12-29 06:42:04 (21779) INFO : Bulk load completed, total run time : 20.3052 seconds
2020-12-29 06:42:04 (21779) INFO2 : Shutdown of all child threads Finished!!
```
*Prior to version 1.4, this folder was located at* `/usr/local/mariadb/columnstore/bulk`*.*
mariadb Slave I/O Thread States Slave I/O Thread States
=======================
This article documents thread states that are related to [replication](../replication/index) slave I/O threads. These correspond to the `Slave_IO_State` shown by [SHOW SLAVE STATUS](../show-slave-status/index) and the `STATE` values listed by the [SHOW PROCESSLIST](../show-processlist/index) statement or in the [Information Schema PROCESSLIST Table](../information-schema-processlist-table/index) as well as the `PROCESSLIST_STATE` value listed in the [Performance Schema threads Table](../performance-schema-threads-table/index).
| Value | Description |
| --- | --- |
| Checking master version | Checking the master's version, which only occurs very briefly after establishing a connection with the master. |
| Connecting to master | Attempting to connect to master. |
| Queueing master event to the relay log | Event is being copied to the [relay log](../relay-log/index) after being read, where it can be processed by the SQL thread. |
| Reconnecting after a failed binlog dump request | Attempting to reconnect to the master after a previously failed binary log dump request. |
| Reconnecting after a failed master event read | Attempting to reconnect to the master after a previously failed request. After successfully connecting, the state will change to `Waiting for master to send event`. |
| Registering slave on master | Registering the slave on the master, which only occurs very briefly after establishing a connection with the master. |
| Requesting binlog dump | Requesting the contents of the binary logs from the given log file name and position. Only occurs very briefly after establishing a connection with the master. |
| Waiting for master to send event | Waiting for [binary log](../binary-log/index) events to arrive after successfully connecting. If there are no new events on the master, this state can persist for as many seconds as specified by the [slave\_net\_timeout](../replication-and-binary-log-server-system-variables/index#slave_net_timeout) system variable, after which the thread will reconnect. |
| Waiting for slave mutex on exit | Waiting for slave mutex while the thread is stopping. Only occurs very briefly. |
| Waiting for the slave SQL thread to free enough relay log space. | [Relay log](../relay-log/index) has reached its maximum size, determined by [relay\_log\_space\_limit](../replication-and-binary-log-server-system-variables/index#relay_log_space_limit) (no limit by default), so waiting for the SQL thread to free up space by processing enough relay log events. |
| Waiting for master update | State before connecting to master. |
| Waiting to reconnect after a failed binlog dump request | Waiting to reconnect after a binary log dump request has failed due to disconnection. The length of time in this state is determined by the `MASTER_CONNECT_RETRY` clause of the [CHANGE MASTER TO](../change-master-to/index) statement. |
| Waiting to reconnect after a failed master event read | Sleeping while waiting to reconnect after a disconnection error. The time in seconds is determined by the `MASTER_CONNECT_RETRY` clause of the [CHANGE MASTER TO](../change-master-to/index) statement. |
mariadb CONCAT_WS CONCAT\_WS
==========
Syntax
------
```
CONCAT_WS(separator,str1,str2,...)
```
Description
-----------
`CONCAT_WS()` stands for Concatenate With Separator and is a special form of `[CONCAT()](../concat/index)`. The first argument is the separator for the rest of the arguments. The separator is added between the strings to be concatenated. The separator can be a string, as can the rest of the arguments.
If the separator is `NULL`, the result is `NULL`; all other `NULL` values are skipped. This makes `CONCAT_WS()` suitable when you want to concatenate some values and avoid losing all information if one of them is `NULL`.
Examples
--------
```
SELECT CONCAT_WS(',','First name','Second name','Last Name');
+-------------------------------------------------------+
| CONCAT_WS(',','First name','Second name','Last Name') |
+-------------------------------------------------------+
| First name,Second name,Last Name |
+-------------------------------------------------------+
SELECT CONCAT_WS('-','Floor',NULL,'Room');
+------------------------------------+
| CONCAT_WS('-','Floor',NULL,'Room') |
+------------------------------------+
| Floor-Room |
+------------------------------------+
```
In some cases, remember to include a space in the separator string:
```
SET @a = 'gnu', @b = 'penguin', @c = 'sea lion';
Query OK, 0 rows affected (0.00 sec)
SELECT CONCAT_WS(', ', @a, @b, @c);
+-----------------------------+
| CONCAT_WS(', ', @a, @b, @c) |
+-----------------------------+
| gnu, penguin, sea lion |
+-----------------------------+
```
Using `CONCAT_WS()` to handle `NULL`s:
```
SET @a = 'a', @b = NULL, @c = 'c';
SELECT CONCAT_WS('', @a, @b, @c);
+---------------------------+
| CONCAT_WS('', @a, @b, @c) |
+---------------------------+
| ac |
+---------------------------+
```
See Also
--------
* [GROUP\_CONCAT()](../group_concat/index)
mariadb ColumnStore Utility Functions ColumnStore Utility Functions
=============================
MariaDB ColumnStore Utility Functions are a set of simple functions that return useful information about the system, such as whether it is ready for queries. These functions were added in version 1.1.3.
| Function | Description |
| --- | --- |
| mcsSystemReady() | Returns 1 if the system can accept queries, 0 if it's not ready yet. |
| mcsSystemReadOnly() | Returns 1 if ColumnStore is in a writes-suspended mode, that is, a user executed SuspendDatabaseWrites. Returns 2 if in a read-only state; ColumnStore puts itself into a read-only state if it detects a logic error that may have corrupted data, generally meaning a ROLLBACK operation failed. Returns 0 if the system is writable. |
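Both functions take no arguments and can be called from a plain SELECT; for example (the values returned depend on the state of your system):

```
SELECT mcsSystemReady(), mcsSystemReadOnly();
```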
mariadb CURRENT_TIMESTAMP CURRENT\_TIMESTAMP
==================
Syntax
------
```
CURRENT_TIMESTAMP
CURRENT_TIMESTAMP([precision])
```
Description
-----------
`CURRENT_TIMESTAMP` and `CURRENT_TIMESTAMP()` are synonyms for `[NOW()](../now/index)`.
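Examples
--------

For illustration (the values returned depend on when the statement runs; the optional argument sets the fractional-second precision):

```
SELECT CURRENT_TIMESTAMP, CURRENT_TIMESTAMP(3);
```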
See Also
--------
* [Microseconds in MariaDB](../microseconds-in-mariadb/index)
* The [TIMESTAMP](../timestamp/index) data type
mariadb Getting Data from MariaDB Getting Data from MariaDB
=========================
The simplest way to retrieve data from MariaDB is to use the [SELECT](../select/index) statement. Since the [SELECT](../select/index) statement is an essential SQL statement, it has many options available with it. It's not necessary to know or use them all; you could execute very basic [SELECT](../select/index) statements if that satisfies your needs. However, as you use MariaDB more, you may need more powerful [SELECT](../select/index) statements. In this article we will go through the basics of [SELECT](../select/index) and progress to more involved [SELECT](../select/index) statements; we will move from the beginner level to the more intermediate, and hopefully you will find some benefit from this article regardless of your skill level. For absolute beginners who are just starting with MariaDB, you may want to read the [MariaDB Basics article](../mariadb-basics/index).
#### Basic Elements
The basic, minimal elements of the [SELECT](../select/index) statement call for the keyword `SELECT`, of course, the columns to select or to retrieve, and the table from which to retrieve rows of data. Actually, for the columns to select, we can use the asterisk as a wildcard to select all columns in a particular table. Using a database from a fictitious bookstore, we might enter the following SQL statement to get a list of all columns and rows in a table containing information on books:
```
SELECT * FROM books;
```
This will retrieve all of the data contained in the `books` table. If we want to retrieve only certain columns, we would list them in place of the asterisk in a comma-separated list like so:
```
SELECT isbn, title, author_id
FROM books;
```
This narrows the width of the results set by retrieving only three columns, but it still retrieves all of the rows in the table. If the table contains thousands of rows of data, this may be more data than we want. If we want to limit the results to just a few books, say five, we would include what is known as a [LIMIT](../select/index#limit) clause:
```
SELECT isbn, title, author_id
FROM books
LIMIT 5;
```
This will give us the first five rows found in the table. If we want to get the next ten found, we would add a starting point parameter just before the number of rows to display, separated by a comma:
```
SELECT isbn, title, author_id
FROM books
LIMIT 5, 10;
```
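As a side note, MariaDB also accepts the equivalent `LIMIT ... OFFSET ...` form, which some find more readable. A sketch against the same table:

```
-- identical to LIMIT 5, 10: skip the first 5 rows, return the next 10
SELECT isbn, title, author_id
FROM books
LIMIT 10 OFFSET 5;
```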
#### Selectivity and Order
The previous statements have narrowed the number of columns and rows retrieved, but they haven't been very selective. Suppose that we want only books written by a certain author, say Dostoevsky. Looking in the authors table we find that his author identification number is 4729. Using a `WHERE` clause, we can retrieve a list of books from the database for this particular author like so:
```
SELECT isbn, title
FROM books
WHERE author_id = 4729
LIMIT 5;
```
We removed author\_id from the list of columns to select, but left the basic [LIMIT](../select/index#limit) clause in place to point out that the syntax is fairly strict about the ordering of clauses and flags. You can't enter them in an arbitrary order; you'll get an error in return.
The SQL statements we've looked at thus far will display the titles of books in the order in which they're found in the database. If we want to put the results in alphanumeric order based on the values of the title column, for instance, we would add an [ORDER BY](../select/index#order-by) clause like this:
```
SELECT isbn, title
FROM books
WHERE author_id = 4729
ORDER BY title ASC
LIMIT 5;
```
Notice that the [ORDER BY](../select/index#order-by) clause goes after the `WHERE` clause and before the [LIMIT](../select/index#limit) clause. Not only will this statement display the rows in order by book title, but it will retrieve only the first five based on the ordering. That is to say, MariaDB will first retrieve all of the rows based on the `WHERE` clause, order the data based on the [ORDER BY](../select/index#order-by) clause, and then display a limited number of rows based on the [LIMIT](../select/index#limit) clause. Hence the reason for the order of clauses. You may have noticed that we slipped in the ASC flag. It tells MariaDB to order the rows in ascending order for the column name it follows. It's not necessary, though, since ascending order is the default. However, if we want to display data in descending order, we would replace the flag with `DESC`. To order by more than one column, additional columns may be given in the [ORDER BY](../select/index#order-by) clause in a comma separated list, each with the `ASC` or `DESC` flags if preferred.
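For instance, ordering by more than one column might look like this sketch, which sorts primarily by title and breaks any ties by ISBN in descending order:

```
SELECT isbn, title
FROM books
WHERE author_id = 4729
ORDER BY title ASC, isbn DESC
LIMIT 5;
```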
#### Friendlier and More Complicated
So far we've been working with one table of data containing information on books for a fictitious bookstore. A database will usually have more than one table, of course. In this particular database, there's also one called authors in which the name and other information on authors is contained. To be able to select data from two tables in one [SELECT](../select/index) statement, we will have to tell MariaDB that we want to join the tables and will need to provide a join point. This can be done with a [JOIN](../join-syntax/index) clause as shown in the following SQL statement, with the results following it:
```
SELECT isbn, title,
CONCAT(name_first, ' ', name_last) AS author
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Dostoevsky'
ORDER BY title ASC
LIMIT 5;
+-------------+------------------------+-------------------+
| isbn | title | author |
+-------------+------------------------+-------------------+
| 0553212168 | Brothers Karamozov | Fyodor Dostoevsky |
| 0679420290 | Crime & Punishment | Fyodor Dostoevsky |
| 0553211757 | Crime & Punishment | Fyodor Dostoevsky |
| 0192834118 | Idiot | Fyodor Dostoevsky |
| 067973452X | Notes from Underground | Fyodor Dostoevsky |
+-------------+------------------------+-------------------+
5 rows in set (0.00 sec)
```
Our [SELECT](../select/index) statement is getting hefty, but it's the same one to which we've been adding. Don't let the clutter fluster you. Looking for the new elements, let's focus on the [JOIN](../join-syntax/index) clause first. There are a few possible ways to construct a join. This method works if you're using a newer version of MariaDB and if both tables contain a column of the same name and value. Otherwise you'll have to redo the [JOIN](../join-syntax/index) clause to look something like this:
```
...
JOIN authors ON author_id = row_id
...
```
This excerpt is based on the assumption that the key field in the authors table is not called author\_id, but row\_id instead. There's much more that can be said about joins, but that would make for a much longer article. If you want to learn more on joins, look at MariaDB's documentation page on [JOIN](../join-syntax/index) syntax.
Looking again at the last full SQL statement above, you must have spotted the [CONCAT()](../concat/index) function that we added to the ongoing example statement. This string function takes the values of the columns and strings given and pastes them together, to give one neat field in the results. We also employed the AS keyword to change the heading of that field in the results set to author. This is much tidier. Since we joined the books and the authors tables together, we were able to search for books based on the author's last name rather than having to look up the author ID first. This is a much friendlier method, albeit more complicated. Incidentally, we can have MariaDB check columns from both tables to narrow our search: we would just add more *column = value* pairs to the WHERE clause, joined with AND. Notice that the string containing the author's name is wrapped in quotes; otherwise, the string would be considered a column name and we'd get an error.
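For example, a sketch of narrowing the search with conditions on both joined tables (assuming the authors table has a name\_first column, as suggested by the earlier CONCAT() call):

```
SELECT isbn, title
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Dostoevsky'
  AND name_first = 'Fyodor'
ORDER BY title;
```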
The name Dostoevsky is sometimes spelled Dostoevskii, as well as a few other ways. If we're not sure how it's spelled in the authors table, we could use the [LIKE](../like/index) operator instead of the equal-sign, along with a wildcard. If we think the author's name is probably spelled either of the two ways mentioned, we could enter something like this:
```
SELECT isbn, title,
CONCAT(name_first, ' ', name_last) AS author
FROM books
JOIN authors USING (author_id)
WHERE name_last LIKE 'Dostoevsk%'
ORDER BY title ASC
LIMIT 5;
```
This will match any author last name starting with Dostoevsk. Notice that the wildcard here is not an asterisk, but a percent-sign.
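Besides the percent-sign, LIKE also supports the underscore wildcard, which matches exactly one character. A sketch:

```
-- '_' matches exactly one character, so this matches 'Dostoevsky'
-- but not 'Dostoevskii' (which has two characters after the 'k')
SELECT name_last
FROM authors
WHERE name_last LIKE 'Dostoevsk_';
```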
#### Some Flags
There are many flags or parameters that can be used in a [SELECT](../select/index) statement. To list and explain all of them with examples would make this a very lengthy article. The reality is that most people never use some of them anyway. So, let's take a look at a few that you may find useful as you get more involved with MariaDB or if you work with large tables on very active servers.
The first flag that may be given, immediately after the `SELECT` keyword, is `ALL`. By default, all rows that meet the requirements of the various clauses given are selected, so this flag isn't necessary. If instead we want only the first occurrence matching a particular set of criteria to be displayed, we can add the [DISTINCT](../select/index#distinct) flag. For instance, for authors like Dostoevsky there will be several printings of a particular title. In the results shown earlier you may have noticed that there were two copies of *Crime & Punishment* listed; however, they have different ISBNs and different publishers. Suppose that for our search we only want one row displayed for each title. We could do that like so:
```
SELECT DISTINCT isbn, title
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Dostoevsky'
ORDER BY title;
```
We've thinned out the ongoing SQL statement a bit for clarity. This statement will result in only one row displayed for *Crime & Punishment* and it will be the first one found.
If we're retrieving data from an extremely busy database, by default any other SQL statements entered simultaneously which are changing or updating data will be executed before a [SELECT](../select/index) statement. [SELECT](../select/index) statements are considered to be of lower priority. However, if we would like a particular [SELECT](../select/index) statement to be given a higher priority, we can add the keyword HIGH\_PRIORITY. Modifying the previous SQL statement for this factor, we would enter it like this:
```
SELECT DISTINCT HIGH_PRIORITY isbn, title
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Dostoevsky'
ORDER BY title;
```
You may have noticed, in the one example earlier in which the results are shown, that there's a status line displayed which specifies the number of rows in the results set. This is less than the number of rows found in the database that met the statement's criteria, because we used a [LIMIT](../select/index#limit) clause. If we add the [SQL\_CALC\_FOUND\_ROWS](../select/index#sql_calc_found_rows) flag just before the column list, MariaDB will calculate the number of rows found even when there is a [LIMIT](../select/index#limit) clause.
```
SELECT SQL_CALC_FOUND_ROWS isbn, title
FROM books
JOIN authors USING (author_id)
WHERE name_last = 'Dostoevsky'
LIMIT 5;
```
To retrieve this information, though, we will have to use the [FOUND\_ROWS()](../found_rows/index) function like so:
```
SELECT FOUND_ROWS();
+--------------+
| FOUND_ROWS() |
+--------------+
| 26 |
+--------------+
```
This value is temporary and will be lost if the connection is terminated. It cannot be retrieved by any other client session. It relates only to the current session and the value for the variable when it was last calculated.
#### Conclusion
There are several more parameters and possibilities for the [SELECT](../select/index) statement that we had to skip to keep this article a reasonable length. A popular one that we left out is the [GROUP BY](../select/index#group-by) clause for calculating aggregate data for columns (e.g., an average). There are several flags for caching results and a clause for exporting a results set to a text file. If you would like to learn more about [SELECT](../select/index) and all of the options available, look at the on-line documentation for [SELECT](../select/index) statements.
Data Warehousing Techniques
===========================
Preface
-------
This document discusses techniques for improving performance for data-warehouse-like tables in MariaDB and MySQL.
* How to load large tables.
* [Normalization](../database-normalization/index).
* Developing 'summary tables' to make 'reports' efficient.
* Purging old data.
Details on summary tables are covered in the companion document: [Summary Tables](../data-warehousing-summary-tables/index).
Terminology
-----------
This list mirrors "Data Warehouse" terminology.
* Fact table -- The one huge table with the 'raw' data.
* Summary table -- a redundant table of summarized data, maintained for efficiency
* Dimension -- columns that identify aspects of the dataset (region, country, user, SKU, zipcode, ...)
* Normalization table (dimension table) -- mapping between strings and ids; used for space and speed.
* Normalization -- The process of building the mapping ('New York City' <-> 123)
Fact table
----------
Techniques that should be applied to the huge Fact table.
* id INT/BIGINT UNSIGNED NOT NULL AUTO\_INCREMENT
* PRIMARY KEY (id)
* Probably no other INDEXes
* Accessed only via id
* All VARCHARs are "normalized"; ids are stored instead
* ENGINE = InnoDB
* All "reports" use summary tables, not the Fact table
* Summary tables may be populated from ranges of id (other techniques described below)
There are exceptions where the Fact table must be accessed to retrieve multiple rows. However, you should minimize the number of INDEXes on the table because they are likely to be costly on INSERT.
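Putting those points together, a minimal Fact table might be sketched like this (the column names other than id are hypothetical placeholders):

```
CREATE TABLE Fact (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  dt DATETIME NOT NULL,                 -- when the event happened
  foo_id MEDIUMINT UNSIGNED NOT NULL,   -- normalized id, not a VARCHAR
  blah INT UNSIGNED NOT NULL,           -- the raw metric to be summarized
  PRIMARY KEY (id)                      -- probably the only index
) ENGINE=InnoDB;
```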
Why keep the Fact table?
------------------------
Once you have built the Summary table(s), there is not much need for the Fact table. One option that you should seriously consider is to not have a Fact table. Or, at least, you could purge old data from it sooner than you purge the Summary tables. Maybe even keep the Summary tables forever.
Case 1: You need to find the raw data involved in some event. But how will you find those row(s)? This is where a secondary index may be required.
If a secondary index is bigger than can be cached in RAM, and if the column(s) being indexed is random, then each row inserted may cause a disk hit to update the index. This limits insert speed to something like 100 rows per second (on ordinary disks). Multiple random indexes slow down insertion further. RAID striping and/or SSDs speed up insertion. Write caching helps, but only for bursts.
Case 2: You need some event, but you did not plan ahead with the optimal INDEX. Well, if the data is PARTITIONed on date, then even if you have only a rough idea of when the event occurred, "partition pruning" will keep the query from being too terribly slow.
Case 3: Over time, the application is likely to need new 'reports', which may lead to a new Summary table. At this point, it would be handy to scan through the old data to fill up the new table.
Case 4: You find a flaw in the summarization, and need to rebuild an existing Summary table.
Cases 3 and 4 both need the "raw" data. But they don't necessarily need the data sitting in a database table. It could be in the pre-database format (such as log files). So, consider not building the Fact table, but simply keep the raw data, compressed, on some file system.
Batching the load of the Fact table
-----------------------------------
When talking about billions of rows in the Fact table, it is essentially mandatory that you "batch" the inserts. There are two main ways:
* INSERT INTO Fact (.,.,.) VALUES (.,.,.), (.,.,.), ...; -- "Batch insert"
* LOAD DATA ...;
A third way is to INSERT or LOAD into a Staging table, then:
* INSERT INTO Fact SELECT \* FROM Staging;
This INSERT..SELECT allows you to do other things, such as normalization. More later.
Batched INSERT Statement
------------------------
Chunk size should usually be 100-1000 rows.
* At 100-1000 rows, an insert will run 10 times as fast as single-row inserts.
* Beyond 100, you may be interfering with replication and SELECTs.
* Beyond 1000, you are into diminishing returns -- virtually no further performance gains.
* Don't go past, say, 1MB for the constructed INSERT statement. This deals with packet sizes, etc. (1MB is unlikely to be hit for a Fact table.)

Decide whether your application should lean toward the 100 or the 1000.
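A batched INSERT is just one statement carrying many row constructors; a sketch with three rows (the table and column names are hypothetical):

```
INSERT INTO Fact (dt, foo_id, blah) VALUES
  ('2024-01-01 00:00:01', 123, 7),
  ('2024-01-01 00:00:01', 456, 2),
  ('2024-01-01 00:00:02', 123, 9);
```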
If your data is coming in continually, and you are adding a batching layer, let's do some math. Compute your ingestion rate -- R rows per second.
* If R < 10 (= 1M/day = 300M/year) -- single-row INSERTs would probably work fine (that is, batching is optional)
* If R < 100 (3B records per year) -- secondary indexes on Fact table may be ok
* If R < 1000 (100M records/day) -- avoid secondary indexes on Fact table.
* If R > 1000 -- Batching may not work. Decide how long (S seconds) you can stall loading the data in order to collect a batch of rows.
* If S < 0.1s -- May not be able to keep up
If batching seems viable, then design the batching layer to gather for S seconds or 100-1000 rows, whichever comes first.
(Note: Similar math applies to rapid UPDATEs of a table.)
Normalization (Dimension) table
-------------------------------
Normalization is important in Data Warehouse applications because it significantly cuts down on the disk footprint and improves performance. There are other reasons for normalizing, but space is the important one for DW.
Here is a typical pattern for a Dimension table:
```
CREATE TABLE Emails (
email_id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT, -- don't make bigger than needed
email VARCHAR(...) NOT NULL,
PRIMARY KEY (email), -- for looking up one way
INDEX(email_id) -- for looking up the other way (UNIQUE is not needed)
) ENGINE = InnoDB; -- to get clustering
```
Notes:
* MEDIUMINT is 3 bytes with UNSIGNED range of 0..16M; pick SMALLINT, INT, etc, based on a conservative estimate of how many 'foo's you will eventually have.
* datatype sizes
* There may be more than one VARCHAR in the table. Example: For cities, you might have City and Country.
* InnoDB is better than MyISAM because of the way the two keys are structured.
* The secondary key is effectively (email\_id, email), hence 'covering' for certain queries.
* It is OK to not specify an AUTO\_INCREMENT to be UNIQUE.
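With that structure, both directions of the mapping are cheap index lookups; a sketch (the address and id are made up):

```
-- string -> id (uses the PRIMARY KEY on email)
SELECT email_id FROM Emails WHERE email = 'someone@example.com';

-- id -> string (the secondary index is covering, so no extra table lookup)
SELECT email FROM Emails WHERE email_id = 12345;
```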
Batched normalization
---------------------
I bring this up as a separate topic because of some of the subtle issues that can happen.
You may be tempted to do
```
INSERT IGNORE INTO Foos
SELECT DISTINCT foo FROM Staging; -- not wise
```
It has the problem of "burning" AUTO\_INCREMENT ids. This is because MariaDB pre-allocates ids before getting to "IGNORE". That could rapidly increase the AUTO\_INCREMENT values beyond what you expected.
Better is this...
```
INSERT IGNORE INTO Foos
SELECT DISTINCT foo
FROM Staging
LEFT JOIN Foos ON Foos.foo = Staging.foo
WHERE Foos.foo_id IS NULL;
```
Notes:
* The LEFT JOIN .. IS NULL finds the `foo`s that are not yet in Foos.
* This INSERT..SELECT must not be done inside the transaction with the rest of the processing. Otherwise, you add to deadlock risks, leading to burned ids.
* IGNORE is used in case you are doing the INSERT from multiple processes simultaneously.
Once that INSERT is done, this will find all the foo\_ids it needs:
```
INSERT INTO Fact (..., foo_id, ...)
SELECT ..., Foos.foo_id, ...
FROM Staging
JOIN Foos ON Foos.foo = Staging.foo;
```
An advantage of "Batched Normalization" is that you can summarize directly from the Staging table. Two approaches:
Case 1: PRIMARY KEY (dy, foo) and summarization is in lock step with, say, changes in `dy`.
* This approach can have troubles if new data arrives after you have summarized the day's data.
```
INSERT INTO Summary (dy, foo, ct, blah_total)
SELECT DATE(dt) as dy, foo,
COUNT(*) as ct, SUM(blah) as blah_total
FROM Staging
GROUP BY 1, 2;
```
Case 2: (dy, foo) is a non-UNIQUE INDEX.
* Same code as Case 1.
* By having the index be non-UNIQUE, delayed data simply shows up as extra rows.
* You need to take care to avoid summarizing the data twice. (The id on the Fact table may be a good tool for that.)
Case 3: PRIMARY KEY (dy, foo) and summarization can happen anytime.
```
INSERT INTO Summary (dy, foo, ct, blah_total)
SELECT DATE(dt) as dy, foo,
COUNT(*) as ct, SUM(blah) as blah_total
FROM Staging
GROUP BY 1, 2
ON DUPLICATE KEY UPDATE
ct = ct + VALUES(ct),
blah_total = blah_total + VALUES(blah_total);
```
Too many choices?
-----------------
This document lists a number of ways to do things. Your situation may lead to one approach being more/less acceptable. But, if you are thinking "Just tell me what to do!", then here:
* Batch load the raw data into a temporary table (`Staging`).
* Normalize from `Staging` -- use code in Case 3.
* INSERT .. SELECT to move the data from `Staging` into the Fact table
* Summarize from `Staging` to Summary table(s) via IODKU (Insert ... On Duplicate Key Update).
* Drop the Staging
Those techniques should perform well and scale well in most cases. As you develop your situation, you may discover why I described alternative solutions.
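Under those recommendations, one load cycle might be sketched as follows. Every table, column, and file name here is illustrative, and the summarization step uses the Case 3 IODKU pattern:

```
-- 1. Batch load the raw data
LOAD DATA INFILE '/tmp/batch.csv' INTO TABLE Staging;

-- 2. Normalize: add only the foo values not yet in Foos
INSERT IGNORE INTO Foos (foo)
SELECT DISTINCT Staging.foo
FROM Staging
LEFT JOIN Foos ON Foos.foo = Staging.foo
WHERE Foos.foo_id IS NULL;

-- 3. Move the rows into the Fact table, swapping strings for ids
INSERT INTO Fact (dt, foo_id, blah)
SELECT s.dt, f.foo_id, s.blah
FROM Staging s
JOIN Foos f ON f.foo = s.foo;

-- 4. Summarize via IODKU
INSERT INTO Summary (dy, foo_id, ct, blah_total)
SELECT DATE(s.dt), f.foo_id, COUNT(*), SUM(s.blah)
FROM Staging s
JOIN Foos f ON f.foo = s.foo
GROUP BY 1, 2
ON DUPLICATE KEY UPDATE
  ct = ct + VALUES(ct),
  blah_total = blah_total + VALUES(blah_total);

-- 5. Drop the Staging table
DROP TABLE Staging;
```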
Purging old data
----------------
Typically the Fact table is PARTITION BY RANGE (10-60 ranges of days/weeks/etc) and needs purging (DROP PARTITION) periodically. This discusses a safe/clean way to design the partitioning and do the DROPs: Purging PARTITIONs
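A sketch of the pattern, with hypothetical daily partitions. Note that MariaDB requires the partitioning column to be part of every unique key, hence the composite PRIMARY KEY here:

```
CREATE TABLE Fact (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  dt DATETIME NOT NULL,
  foo_id MEDIUMINT UNSIGNED NOT NULL,
  blah INT UNSIGNED NOT NULL,
  PRIMARY KEY (id, dt)   -- partition column must be in every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(dt)) (
  PARTITION p20240101 VALUES LESS THAN (TO_DAYS('2024-01-02')),
  PARTITION p20240102 VALUES LESS THAN (TO_DAYS('2024-01-03')),
  PARTITION pfuture   VALUES LESS THAN MAXVALUE
);

-- purging the oldest day is then a near-instant metadata operation
ALTER TABLE Fact DROP PARTITION p20240101;
```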
Master / slave
--------------
For "read scaling", backup, and failover, use master-slave replication or something fancier. Do ingestion only on a single active master; let it replicate to the slave(s). Generate reports on the slave(s).
Sharding
--------
"Sharding" is the splitting of data across multiple servers. (In contrast, [replication](../replication/index) and [Galera](../galera/index) have the same data on all servers, requiring all data to be written to all servers.)
With the non-sharding techniques described here, terabyte(s) of data can be handled by a single machine. Tens of terabytes probably requires sharding.
Sharding is beyond the scope of this document.
How fast? How big?
------------------
With the techniques described here, you may be able to achieve the following performance numbers. I say "may" because every data warehouse situation is different, and you may require performance-hurting deviations from what I describe here. I give multiple options for some aspects; these may cover some of your deviations.
One big performance killer is UUID/GUID keys. Since they are very 'random', updates of them (at scale) are limited to 1 row = 1 disk hit. Plain disks can handle only 100 hits/second. RAID and/or SSD can increase that to something like 1000 hits/sec. Huge amounts of RAM (for caching the random index) are a costly solution. It is possible to turn type-1 UUIDs into roughly-chronological keys, thereby mitigating the performance problems if the UUIDs are written/read with some chronological clustering. UUID discussion
Hardware, etc:
* Single SATA drive: 100 IOPs (Input/Output operations per second)
* RAID with N physical drives -- 100\*N IOPs (roughly)
* SSD -- 5 times as fast as rotating media (in this context)
* Batch INSERT -- 100-1000 rows is 10 times as fast as INSERTing 1 row at a time (see above)
* Purge "old" data -- Do not use DELETE or TRUNCATE, design so you can use DROP PARTITION (see above)
* Think of each INDEX (except the PRIMARY KEY on InnoDB) as a separate table
* Consider access patterns of each table/index: random vs at-the-end vs something in between
"Count the disk hits" -- back-of-envelope performance analysis
* Random accesses to a table/index -- count each as a disk hit.
* At-the-end accesses (INSERT chronologically or with AUTO\_INCREMENT; range SELECT) -- count as zero hits.
* In between (hot/popular ids, etc) -- count as something in between
* For INSERTs, do the analysis on each index; add them up.
* For SELECTs, do the analysis on the one index used, plus the table. (Use of 2 indexes is rare.)

Insert cost, based on datatype of first column in an index:
* AUTO\_INCREMENT -- essentially 0 IOPs
* DATETIME, TIMESTAMP -- essentially 0 for 'current' times
* UUID/GUID -- 1 per insert (terrible)
* Others -- depends on their patterns

SELECT cost gets a little tricky:
* Range on PRIMARY KEY -- think of it as getting 100 rows per disk hit.
* IN on PRIMARY KEY -- 1 disk hit per item in IN
* "=" -- 1 hit (for 1 row)
* Secondary key -- First compute the hits for the index, then...
* Think of each row as needing 1 disk hit.
* However, if the rows are likely to be 'near' each other (based on the PRIMARY KEY), then it could be < 1 disk hit/row.
More on Count the Disk Hits
How fast?
---------
Look at your data; compute raw rows per second (or hour or day or year). There are about 30M seconds in a year; 86,400 seconds per day. Inserting 30 rows per second becomes a billion rows per year.
10 rows per second is about all you can expect from an ordinary machine (after allowing for various overheads). If you have less than that, you don't have many worries, but still you should probably create Summary tables. If more than 10/sec, then batching, etc, becomes vital. Even on spiffy hardware, 100/sec is about all you can expect without utilizing the techniques here.
Not so fast?
------------
Let's say your insert rate is only one-tenth of your disk IOPs (eg, 10 rows/sec vs 100 IOPs). Also, let's say your data is not "bursty"; that is, the data comes in somewhat smoothly throughout the day.
Note that 10 rows/sec (300M/year) implies maybe 30GB for data + indexes + normalization tables + summary tables for 1 year. I would call this "not so big".
Still, the [normalization](../database-normalization/index) and summarization are important. Normalization keeps the data from being, say, twice as big. Summarization speeds up the reports by orders of magnitude.
Let's design and analyse a "simple ingestion scheme" for 10 rows/second, without 'batching'.
```
# Normalize:
$foo_id = SELECT foo_id FROM Foos WHERE foo = $foo;
if no $foo_id, then
INSERT IGNORE INTO Foos ...
# Inserts:
BEGIN;
INSERT INTO Fact ...;
INSERT INTO Summary ... ON DUPLICATE KEY UPDATE ...;
COMMIT;
# (plus code to deal with errors on INSERTs or COMMIT)
```
Depending on the number and randomness of your indexes, etc, 10 Fact rows may (or may not) take less than 100 IOPs.
Also, note that as the data grows over time, random indexes will become less and less likely to be cached. That is, even if it runs fine with 1 year's worth of data, it may be in trouble with 2 years' worth.
For those reasons, I started this discussion with a wide margin (10 rows versus 100 IOPs).
References
----------
* [sec. 3.3.2: Dimensional Model and "Star schema"](http://www.redbooks.ibm.com/redbooks/pdfs/sg247138.pdf)
* Summary Tables
See also
--------
Rick James graciously allowed us to use this article in the Knowledge Base.
[Rick James' site](http://mysql.rjweb.org/) has other useful tips, how-tos, optimizations, and debugging tips.
Original source: <http://mysql.rjweb.org/doc.php/datawarehouse>
WKT Definition
==============
Description
-----------
The Well-Known Text (WKT) representation of Geometry is designed to exchange geometry data in ASCII form. Examples of the basic geometry types include:
| Geometry Types |
| --- |
| [POINT](../point/index) |
| [LINESTRING](../linestring/index) |
| [POLYGON](../polygon/index) |
| [MULTIPOINT](../multipoint/index) |
| [MULTILINESTRING](../multilinestring/index) |
| [MULTIPOLYGON](../multipolygon/index) |
| [GEOMETRYCOLLECTION](../geometrycollection/index) |
| [GEOMETRY](../geometry/index) |
See Also
--------
* [Geometry Types](../geometry-types/index)
MID
===
Syntax
------
```
MID(str,pos,len)
```
Description
-----------
MID(str,pos,len) is a synonym for [SUBSTRING(str,pos,len)](../substring/index).
Examples
--------
```
SELECT MID('abcd',4,1);
+-----------------+
| MID('abcd',4,1) |
+-----------------+
| d |
+-----------------+
SELECT MID('abcd',2,2);
+-----------------+
| MID('abcd',2,2) |
+-----------------+
| bc |
+-----------------+
```
A negative starting position:
```
SELECT MID('abcd',-2,4);
+------------------+
| MID('abcd',-2,4) |
+------------------+
| cd |
+------------------+
```
list\_add
=========
Syntax
------
```
sys.list_add(list,value)
```
Description
-----------
`list_add` is a [stored function](../stored-functions/index) available with the [Sys Schema](../sys-schema/index).
It takes a *list* to be modified and a *value* to be added to it, returning the resulting list. This can be used, for example, to add a value to a system variable that takes a comma-delimited list of options, such as [sql\_mode](../sql-mode/index).
The related function [list\_drop](../list_drop/index) can be used to drop a value from a list.
Examples
--------
```
SELECT @@sql_mode;
+-----------------------------------------------------------------------+
| @@sql_mode |
+-----------------------------------------------------------------------+
| STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,
NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+-----------------------------------------------------------------------+
SET @@sql_mode = sys.list_add(@@sql_mode, 'NO_ZERO_DATE');
SELECT @@sql_mode;
+-----------------------------------------------------------------------+
| @@sql_mode |
+-----------------------------------------------------------------------+
| STRICT_TRANS_TABLES,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,
NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+-----------------------------------------------------------------------+
```
See Also
--------
* [list\_drop](../list_drop/index)
Defragmenting InnoDB Tablespaces
================================
Overview
--------
When rows are deleted from an [InnoDB](../innodb/index) table, the rows are simply marked as deleted and not physically deleted. The free space is not returned to the operating system for re-use.
The purge thread will physically delete index keys and rows, but the free space introduced is still not returned to operating system. This can lead to gaps in the pages. If you have variable length rows, new rows may be larger than old rows and cannot make use of the available space.
You can run [OPTIMIZE TABLE](../optimize-table/index) or [ALTER TABLE <table> ENGINE=InnoDB](../alter-table/index) to reconstruct the table. Unfortunately running `OPTIMIZE TABLE` against an InnoDB table stored in the shared table-space file `ibdata1` does two things:
* Makes the table’s data and indexes contiguous inside `ibdata1`.
* Increases the size of `ibdata1` because the contiguous data and index pages are appended to `ibdata1`.
InnoDB Defragmentation
----------------------
[MariaDB 10.1](../what-is-mariadb-101/index) merged Facebook's defragmentation code prepared for MariaDB by Matt, Seong Uck Lee from Kakao. The only major difference to Facebook's code and Matt’s patch is that MariaDB does not introduce new literals to SQL and makes no changes to the server code. Instead, [OPTIMIZE TABLE](../optimize-table/index) is used and all code changes are inside the InnoDB/XtraDB storage engines.
The behaviour of `OPTIMIZE TABLE` is unchanged by default, and to enable this new feature, you need to set the [innodb\_defragment](../innodb-system-variables/index#innodb_defragment) system variable to `1`.
```
[mysqld]
...
innodb-defragment=1
```
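The variable is dynamic, so (assuming sufficient privileges) it can also be enabled at runtime without a restart:

```
SET GLOBAL innodb_defragment = ON;
```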
No new tables are created and there is no need to copy data from old tables to new tables. Instead, this feature loads `n` pages (determined by [innodb-defragment-n-pages](../innodb-system-variables/index#innodb_defragment_n_pages)) and tries to move records so that pages would be full of records and then frees pages that are fully empty after the operation.
Note that tablespace files (including ibdata1) will not shrink as the result of defragmentation, but one will get better memory utilization in the InnoDB buffer pool as there are fewer data pages in use.
A number of new system and status variables for controlling and monitoring the feature are introduced.
### System Variables
* [innodb\_defragment](../xtradbinnodb-server-system-variables/index#innodb_defragment): Enable InnoDB defragmentation.
* [innodb\_defragment\_n\_pages](../xtradbinnodb-server-system-variables/index#innodb_defragment_n_pages): Number of pages considered at once when merging multiple pages to defragment.
* [innodb\_defragment\_stats\_accuracy](../xtradbinnodb-server-system-variables/index#innodb_defragment_stats_accuracy): Number of defragment stats changes there are before the stats are written to persistent storage.
* [innodb\_defragment\_fill\_factor\_n\_recs](../xtradbinnodb-server-system-variables/index#innodb_defragment_fill_factor_n_recs): Number of records of space that defragmentation should leave on the page.
* [innodb\_defragment\_fill\_factor](../xtradbinnodb-server-system-variables/index#innodb_defragment_fill_factor): Indicates how full defragmentation should fill a page.
* [innodb\_defragment\_frequency](../xtradbinnodb-server-system-variables/index#innodb_defragment_frequency): Maximum times per second for defragmenting a single index.
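The current values of these variables can be checked with a pattern match, for example:

```
SHOW GLOBAL VARIABLES LIKE 'innodb_defragment%';
```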
### Status Variables
* [Innodb\_defragment\_compression\_failures](../xtradbinnodb-server-status-variables/index#innodb_defragment_compression_failures): Number of defragment re-compression failures
* [Innodb\_defragment\_failures](../xtradbinnodb-server-status-variables/index#innodb_defragment_failures): Number of defragment failures.
* [Innodb\_defragment\_count](../xtradbinnodb-server-status-variables/index#innodb_defragment_count): Number of defragment operations.
Example
-------
```
set @@global.innodb_file_per_table = 1;
set @@global.innodb_defragment_n_pages = 32;
set @@global.innodb_defragment_fill_factor = 0.95;
CREATE TABLE tb_defragment (
pk1 bigint(20) NOT NULL,
pk2 bigint(20) NOT NULL,
fd4 text,
fd5 varchar(50) DEFAULT NULL,
PRIMARY KEY (pk1),
KEY ix1 (pk2)
) ENGINE=InnoDB;
delimiter //
create procedure innodb_insert_proc (repeat_count int)
begin
declare current_num int;
set current_num = 0;
while current_num < repeat_count do
INSERT INTO tb_defragment VALUES (current_num, 1, REPEAT('Abcdefg', 20), REPEAT('12345',5));
INSERT INTO tb_defragment VALUES (current_num+1, 2, REPEAT('HIJKLM', 20), REPEAT('67890',5));
INSERT INTO tb_defragment VALUES (current_num+2, 3, REPEAT('HIJKLM', 20), REPEAT('67890',5));
INSERT INTO tb_defragment VALUES (current_num+3, 4, REPEAT('HIJKLM', 20), REPEAT('67890',5));
set current_num = current_num + 4;
end while;
end//
delimiter ;
commit;
set autocommit=0;
call innodb_insert_proc(50000);
commit;
set autocommit=1;
```
After these CREATE and INSERT operations, the following information can be seen in the INFORMATION\_SCHEMA:
```
select count(*) as Value from information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name = 'PRIMARY';
Value
313
select count(*) as Value from information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name = 'ix1';
Value
72
select count(stat_value) from mysql.innodb_index_stats
where table_name like '%tb_defragment%' and stat_name in ('n_pages_freed');
count(stat_value)
0
select count(stat_value) from mysql.innodb_index_stats
where table_name like '%tb_defragment%' and stat_name in ('n_page_split');
count(stat_value)
0
select count(stat_value) from mysql.innodb_index_stats
where table_name like '%tb_defragment%' and stat_name in ('n_leaf_pages_defrag');
count(stat_value)
0
SELECT table_name, data_free/1024/1024 AS data_free_MB, table_rows FROM information_schema.tables
WHERE engine LIKE 'InnoDB' and table_name like '%tb_defragment%';
table_name data_free_MB table_rows
tb_defragment 4.00000000 50051
SELECT table_name, index_name, sum(number_records), sum(data_size) FROM information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name like 'PRIMARY';
table_name index_name sum(number_records) sum(data_size)
`test`.`tb_defragment` PRIMARY 25873 4739939
SELECT table_name, index_name, sum(number_records), sum(data_size) FROM information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name like 'ix1';
table_name index_name sum(number_records) sum(data_size)
`test`.`tb_defragment` ix1 50071 1051775
```
Deleting three-quarters of the records, leaving gaps, and then optimizing:
```
delete from tb_defragment where pk2 between 2 and 4;
optimize table tb_defragment;
Table Op Msg_type Msg_text
test.tb_defragment optimize status OK
show status like '%innodb_def%';
Variable_name Value
Innodb_defragment_compression_failures 0
Innodb_defragment_failures 1
Innodb_defragment_count 4
```
Now some pages have been freed, and some merged:
```
select count(*) as Value from information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name = 'PRIMARY';
Value
0
select count(*) as Value from information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name = 'ix1';
Value
0
select count(stat_value) from mysql.innodb_index_stats
where table_name like '%tb_defragment%' and stat_name in ('n_pages_freed');
count(stat_value)
2
select count(stat_value) from mysql.innodb_index_stats
where table_name like '%tb_defragment%' and stat_name in ('n_page_split');
count(stat_value)
2
select count(stat_value) from mysql.innodb_index_stats
where table_name like '%tb_defragment%' and stat_name in ('n_leaf_pages_defrag');
count(stat_value)
2
SELECT table_name, data_free/1024/1024 AS data_free_MB, table_rows FROM information_schema.tables
WHERE engine LIKE 'InnoDB';
table_name data_free_MB table_rows
innodb_index_stats 0.00000000 8
innodb_table_stats 0.00000000 0
tb_defragment 4.00000000 12431
SELECT table_name, index_name, sum(number_records), sum(data_size) FROM information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name like 'PRIMARY';
table_name index_name sum(number_records) sum(data_size)
`test`.`tb_defragment` PRIMARY 690 102145
SELECT table_name, index_name, sum(number_records), sum(data_size) FROM information_schema.innodb_buffer_page
where table_name like '%tb_defragment%' and index_name like 'ix1';
table_name index_name sum(number_records) sum(data_size)
`test`.`tb_defragment` ix1 5295 111263
```
See [Defragmenting unused space on InnoDB tablespace](https://blog.mariadb.org/defragmenting-unused-space-on-innodb-tablespace/) on the MariaDB.org blog for more details.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Plans for MariaDB 10.11 Plans for MariaDB 10.11
=======================
[MariaDB 10.11](../what-is-mariadb-1011/index) is an upcoming major development release.
JIRA
----
We manage our development plans in JIRA, so the definitive list will be there. [This search](https://jira.mariadb.org/issues/?jql=project+%3D+MDEV+AND+issuetype+%3D+Task+AND+fixVersion+in+%2810.11%29+ORDER+BY+priority+DESC) shows what we **currently** plan for 10.11. It shows all tasks with the **Fix-Version** being 10.11. Not all these tasks will really end up in 10.11 but tasks with the "red" priorities have a much higher chance of being done in time for 10.11. Practically, you can think of these tasks as "features that **will** be in 10.11". Tasks with the "green" priorities probably won't be in 10.11. Think of them as "bonus features that would be **nice to have** in 10.11".
Contributing
------------
If you want to be part of developing any of these features, see [Contributing to the MariaDB Project](../contributing-to-the-mariadb-project/index). You can also add new features to this list or to [JIRA](../jira-project-planning-and-tracking/index).
See Also
--------
* [Current tasks for 10.11](https://jira.mariadb.org/issues/?jql=project%20%3D%20MDEV%20AND%20issuetype%20%3D%20Task%20AND%20fixVersion%20in%20(10.11)%20ORDER%20BY%20priority%20DESC)
* [10.11 Features/fixes by vote](https://jira.mariadb.org/issues/?jql=project%20%3D%20MDEV%20AND%20issuetype%20%3D%20Task%20AND%20fixVersion%20in%20(10.11)%20ORDER%20BY%20votes%20DESC%2C%20priority%20DESC)
* [What is MariaDB 10.11?](../what-is-mariadb-1011/index)
* [What is MariaDB 10.10?](../what-is-mariadb-1010/index)
* [What is MariaDB 10.9?](../what-is-mariadb-109/index)
* [What is MariaDB 10.8?](../what-is-mariadb-108/index)
mariadb Buildbot runvm Buildbot runvm
==============
One type of build we do in BuildBot is to build and test [MariaDB](../mariadb/index) binary packages for the platforms we release on. We build and test packages for Debian (4 and 5), Ubuntu (8.04 to 10.04), CentOS 5, and generic Linux; amd64 and i386 architectures. This testing is done with virtual machines run in [KVM](http://www.linux-kvm.org/page/Main_Page).
To better control the startup and shutdown of the virtual machines we use a small wrapper around KVM we developed called **runvm**. The purpose of this tool is to make it easy to boot up a virtual machine, run some build commands inside it, and shut it down cleanly.
Special care is taken in the script to ensure that the virtual machine is always shut down after use (gracefully if possible), even in the case of various failures or the loss of the parent process or controlling TTY. And if a conflicting virtual machine somehow manages to escape shutdown, runvm automatically attempts to terminate it before starting a new one. This extra robustness is important for fully automated testing, as in our [Buildbot](../buildbot/index) setup, to ensure that the system can run unattended for long periods of time.
Essentially, instead of a normal Buildbot session which would do something like this on the slave:
```
./configure && make
```
We instead use *runvm* to do the same inside a virtual machine running as a KVM guest with the build slave as host:
```
runvm image.qcow2 "./configure" "make"
```
See the **runvm Usage Examples** or **runvm --help** sections below for more detailed examples, but this is the basic idea.
runvm Usage Examples
--------------------
### Usage Example One
Here is an example command you could use to run a build inside a virtual machine using runvm:
```
runvm --port=2222 ubuntu-hardy-i386.qcow2 \
"= scp -P 2222 mariadb-5.1.41-rc.tar.gz localhost:" \
"tar zxf mariadb-5.1.41-rc.tar.gz" \
"cd mariadb-5.1.41-rc && ./configure" \
"cd mariadb-5.1.41-rc && make"
```
In this example, `ubuntu-hardy-i386.qcow2` is a KVM image already installed with compilers and set up for password-less ssh access (using public key authentication). Port 2222 on the host side is forwarded to the ssh service (port 22) on the guest side (so by specifying different `--port` options it is easy to run multiple `runvm` invocations in parallel; in our Buildbot setup we run 3 builds in parallel this way).
Note the use of the `scp` command, prefixed with an equals sign "`=`". Commands prefixed in this way are run on the host side rather than the guest side; this is a convenient way to copy data into, or results out of, the virtual machine while the `runvm` session is running.
Using `runvm` in this way we are able to easily and flexibly manage a large number of virtual machines for automated builds with very little overhead and complexity. In fact we have around 70 distinct virtual machines! The only resource they take is a little disk space (around 37 GByte). And the virtual machines images are also simple to set up, requiring only a minimal install; no need to set up networking bridges or IP addresses, or to install a Buildbot client. All the complex logic runs on the host system, which only needs to be installed once.
By keeping the virtual images simple, builds and tests run in a minimal environment, which is useful to detect any missing dependencies or other problems that do not show themselves on normal developer machines with a full desktop install (we even do install testing on a separate virtual machine from the one used to build, with compilers etc. not installed on the one used to test installation).
### Usage Example Two
A further refinement of **example one** (above) is to create a new temporary virtual machine image before each step as a copy of a reference image, run the build, and throw away the temporary image after the build. This avoids any possibility of a previous build influencing a following build in any way (and thus also simplifies the build setup, as we can install stuff freely without any need to do cleanup). It also avoids having to fix a broken image, like needing to manually run `fsck` after a crash or similar issue. We use this technique for most of our binary package builds in Buildbot.
To use this copy-and-discard technique with runvm, the --base-image option is useful:
```
runvm --port=2222 --base-image=ubuntu-hardy-i386.qcow2 tmp.qcow2 \
"= scp -P 2222 mariadb-5.1.41-rc.tar.gz localhost:" \
"tar zxf mariadb-5.1.41-rc.tar.gz" \
"cd mariadb-5.1.41-rc && ./configure" \
"cd mariadb-5.1.41-rc && make"
```
This will run the build in a temporary copy `tmp.qcow2` of the reference image `ubuntu-hardy-i386.qcow2`, without modifying the reference image in any way. This uses the copy-on-write feature of the qcow2 image format (see `qemu-img(1)`), so it even takes only very little time (fraction of a second) and minimal space (only changed blocks are written to the new image).
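The copy-on-write step that `--base-image` performs relies on the backing-file feature of `qemu-img create -b`. A hypothetical sketch (the command is only echoed, not executed, so the example works without qemu installed; the image names are placeholders):

```
# Sketch of the copy-on-write image creation behind runvm's --base-image.
# 'qemu-img create -b' makes TMP an overlay that records only changed
# blocks; BASE itself is never written to.
BASE=ubuntu-hardy-i386.qcow2
TMP=tmp.qcow2
echo qemu-img create -f qcow2 -b "$BASE" "$TMP"
```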
### Additional Usage Examples
The above two examples show the basics of how the package testing in our Buildbot setup is done. There are some further details of course, like more options for the build commands and extra care to get logfiles out to debug problems; the full details are available in our [Buildbot configuration file](http://bazaar.launchpad.net/~maria-captains/mariadb-tools/trunk/annotate/head%3A/buildbot/maria-master.cfg). But the basic principle is just a number of `runvm` commands like the examples above.
Getting runvm
-------------
The `runvm` tool is available under the GPL on Launchpad in the project [Tools for MariaDB](https://launchpad.net/mariadb-tools). In the bzr repository it is found as [buildbot/runvm](http://bazaar.launchpad.net/~maria-captains/mariadb-tools/trunk/annotate/head%3A/buildbot/runvm). If someone finds it useful or has suggestions for improvements, please drop us a line on the [maria-developers](https://launchpad.net/~maria-developers) mailing list.
runvm --help
------------
Since it might be useful, here is the output from **runvm --help** (check the latest version of the tool for up-to-date output):
```
Usage: ./runvm <options> image.qcow2 [command ...]
Boot the given KVM virtual machine image and wait for it to come up.
Run the list of commands one at a time, aborting on receiving an error.
When all commands are run (or one of them failed), shutdown the virtual
machine and exit.
Commands are by default run inside the virtual machine using ssh(1). By
prefixing a command with an equals sign '=', it will instead be run on the
host system (for example to copy files into or out of the virtual machine
using scp(1)).
Some care is taken to ensure that the virtual machine is shutdown
gracefully and not left running even in case the controlling tty is
closed or the parent process killed. If a previous virtual machine is
already running on a conflicting port, an attempt is made to shut it
down first. For this purpose, a PID file is created in $HOME/.runvm/
Available options:
-p, --port=N Forward this port on the host side to the ssh port (port
22) on the guest side. Must be different for each runvm
instance running in parallel to avoid conflicts. The
default is 2222.
To copy files in/out of the guest use a command prefixed
with '=' calling scp(1) with the -P option using the port
specified here, like this:
runvm img.qcow2 "=scp -P 2222 file.txt localhost:"
-u, --user=USER Name of the account to ssh into in the guest. Defaults to
the name of the user invoking runvm on the host.
-m, --memory=N Amount of memory (in megabytes) to allocate to the guest.
Defaults to 2047.
--smp=N Number of CPU cores to allocate to the guest.
Defaults to 2.
-c, --cpu=NAME Type of CPU to emulate for KVM, see qemu(1) for details.
For example:
--cpu=qemu64 For 64-bit amd64 emulation
--cpu=qemu32 For 32-bit x86 emulation
--cpu=qemu32,-nx 32-bit and disable "no-execute"
The default is qemu32,-nx
--netdev=NAME Network device to emulate. The 'virtio' device has good
performance but may not have driver support in all
operating systems, if so another can be specified.
The default is virtio.
--kvm=OPT Pass additional option OPT to kvm. Specify multiple times
to pass more than one option. For example
runvm --kvm=-cdrom --kvm=mycd.iso img.qcow2 ...
--initial-sleep=SECS
Wait this many seconds before starting to poll the guest
ssh port for it to be up. Default 15.
--startup-timeout=SECS
Wait at most this many seconds for the guest OS to respond
to ssh. If this time is exceeded assume it has failed to
boot correctly. Default 300.
--shutdown-timeout=SECS
Wait at most this many seconds for the guest OS to
shutdown gracefully after sending a shutdown command. If
this time is exceeded, assume the guest has failed to
shutdown gracefully and kill it forcibly. Default 120.
--kvm-retries=N If the guest fails to come up, retry the boot this many
times before giving up. This helps if the virtual machine
sometimes crashes during boot. Default 3.
-l, --logfile=FILE File to redirect the output from kvm into. This includes
any (error) messages from kvm, and also includes anything
the guest writes to the kvm emulated serial port (it can
be useful to set the guest to send boot loader and kernel
messages to the serial console and log them with this
option). Default is to not log this output anywhere.
-b, --base-image=IMG
Instead of booting an existing image, create a new
copy-on-write image based on IMG. This uses the -b option
of qemu-img(1). IMG is not modified in any way. This way,
the booted image can be discarded after use, so that each
use of IMG is using the same reference image with no risk
of "polution" between different invocations.
Note that this DELETES any existing image of the same
name as the one specified on the command line to boot! It
will be replaced with the image created as a copy of IMG,
with any modifications done during the runvm session.
```
*This page is based on a [blog post](http://kristiannielsen.livejournal.com/11007.html) by Kristian Nielsen, the primary developer of* `runvm`.
mariadb Information Schema APPLICABLE_ROLES Table Information Schema APPLICABLE\_ROLES Table
==========================================
The [Information Schema](../information_schema/index) `APPLICABLE_ROLES` table shows the [role authorizations](../roles/index) that the current user may use.
It contains the following columns:
| Column | Description | Added |
| --- | --- | --- |
| `GRANTEE` | Account that the role was granted to. | |
| `ROLE_NAME` | Name of the role. | |
| `IS_GRANTABLE` | Whether the role can be granted or not. | |
| `IS_DEFAULT` | Whether the role is the user's default role or not. | [MariaDB 10.1.3](https://mariadb.com/kb/en/mariadb-1013-release-notes/) |
The current role is in the [ENABLED\_ROLES](../information-schema-enabled_roles-table/index) Information Schema table.
Example
-------
```
SELECT * FROM information_schema.APPLICABLE_ROLES;
+----------------+-------------+--------------+------------+
| GRANTEE | ROLE_NAME | IS_GRANTABLE | IS_DEFAULT |
+----------------+-------------+--------------+------------+
| root@localhost | journalist | YES | NO |
| root@localhost | staff | YES | NO |
| root@localhost | dd | YES | NO |
| root@localhost | dog | YES | NO |
+----------------+-------------+--------------+------------+
```
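The table can be queried like any other Information Schema table; for example, to see which of the available roles is the current user's default:

```
SELECT ROLE_NAME FROM information_schema.APPLICABLE_ROLES
WHERE IS_DEFAULT = 'YES';
```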
mariadb ST_EQUALS ST\_EQUALS
==========
Syntax
------
```
ST_EQUALS(g1,g2)
```
Description
-----------
Returns `1` or `0` to indicate whether geometry *`g1`* is spatially equal to geometry *`g2`*.
ST\_EQUALS() uses object shapes, while [EQUALS()](../equals/index), based on the original MySQL implementation, uses object bounding rectangles.
Examples
--------
```
SET @g1 = ST_GEOMFROMTEXT('LINESTRING(174 149, 176 151)');
SET @g2 = ST_GEOMFROMTEXT('LINESTRING(176 151, 174 149)');
SELECT ST_EQUALS(@g1,@g2);
+--------------------+
| ST_EQUALS(@g1,@g2) |
+--------------------+
| 1 |
+--------------------+
```
```
SET @g1 = ST_GEOMFROMTEXT('POINT(0 2)');
SET @g2 = ST_GEOMFROMTEXT('POINT(2 0)');
SELECT ST_EQUALS(@g1,@g2);
+--------------------+
| ST_EQUALS(@g1,@g2) |
+--------------------+
| 0 |
+--------------------+
```
mariadb InnoDB Recovery Modes InnoDB Recovery Modes
=====================
The InnoDB recovery mode is a mode used for recovering from emergency situations. You should ensure you have a backup of your database before making changes in case you need to restore it. The [innodb\_force\_recovery](../innodb-system-variables/index#innodb_force_recovery) server system variable sets the recovery mode. A mode of 0 is normal use, while the higher the mode, the more stringent the restrictions. Higher modes incorporate all limitations of the lower modes.
The recovery mode should never be set to a value other than zero except in an emergency situation.
Generally, it is best to start with a recovery mode of 1, and increase in single increments if need be. With a recovery mode < 4, only corrupted pages should be lost. With 4, secondary indexes could be corrupted. With 5, results could be inconsistent and secondary indexes could be corrupted (even if they were not with 4). A value of 6 leaves pages in an obsolete state, which might cause more corruption.
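The mode is normally set in an option file before restarting the server, for example:

```
[mysqld]
innodb_force_recovery = 1
```

Once the data has been dumped, the setting should be removed (or set back to 0) and the server restarted.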
Until [MariaDB 10.2.7](https://mariadb.com/kb/en/mariadb-1027-release-notes/), mode `0` was the only mode permitting changes to the data. From [MariaDB 10.2.7](https://mariadb.com/kb/en/mariadb-1027-release-notes/), write transactions are permitted with mode `3` or less.
To recover the tables, you can execute [SELECTs](../select/index) to dump data, and [DROP TABLE](../drop-table/index) (when write transactions are permitted) to remove corrupted tables.
The following modes are available:
Recovery Modes
--------------
Recovery mode behaviour differs by server version (see server/storage/innobase/include/srv0srv.h):
[MariaDB 10.4](../what-is-mariadb-104/index) and before:
| Mode | Description |
| --- | --- |
| 0 | The default mode while InnoDB is running normally. Until [MariaDB 10.2.7](https://mariadb.com/kb/en/mariadb-1027-release-notes/), it was the only mode permitting changes to the data. From [MariaDB 10.2.7](https://mariadb.com/kb/en/mariadb-1027-release-notes/), write transactions are permitted with innodb\_force\_recovery<=3. |
| 1 | (SRV\_FORCE\_IGNORE\_CORRUPT) allows the server to keep running even if corrupt pages are detected. It does so by making redo log based recovery ignore certain errors, such as missing data files or corrupted data pages. Any redo log for affected files or pages will be skipped. You can facilitate dumping tables by getting the SELECT \* FROM table\_name statement to jump over corrupt indexes and pages. |
| 2 | (SRV\_FORCE\_NO\_BACKGROUND) stops the master thread from running, preventing a crash that occurs during a purge. No purge will be performed, so the undo logs will keep growing. |
| 3 | (SRV\_FORCE\_NO\_TRX\_UNDO) does not roll back transactions after the crash recovery. Does not affect rollback of currently active transactions. Starting with [MariaDB 10.2.7](https://mariadb.com/kb/en/mariadb-1027-release-notes/), will also prevent some undo-generating background tasks from running. These tasks could hit a lock wait due to the recovered incomplete transactions whose rollback is being prevented. |
| 4 | (SRV\_FORCE\_NO\_IBUF\_MERGE) does not calculate tables statistics and prevents insert buffer merges. |
| 5 | (SRV\_FORCE\_NO\_UNDO\_LOG\_SCAN) treats incomplete transactions as committed, and does not look at the [undo logs](../undo-log/index) when starting. |
| 6 | (SRV\_FORCE\_NO\_LOG\_REDO) does not perform redo log roll-forward as part of recovery. Running queries that require indexes are likely to fail with this mode active. However, if a table dump still causes a crash, you can try using a `SELECT * FROM tab ORDER BY primary_key DESC` to dump all the data portion after the corrupted part. |
From [MariaDB 10.5](../what-is-mariadb-105/index) to [MariaDB 10.6.4](https://mariadb.com/kb/en/mariadb-1064-release-notes/):
| Mode | Description |
| --- | --- |
| 0 | The default mode while InnoDB is running normally. Write transactions are permitted with innodb\_force\_recovery<=4. |
| 1 | (SRV\_FORCE\_IGNORE\_CORRUPT) allows the server to keep running even if corrupt pages are detected. It does so by making redo log based recovery ignore certain errors, such as missing data files or corrupted data pages. Any redo log for affected files or pages will be skipped. You can facilitate dumping tables by getting the SELECT \* FROM table\_name statement to jump over corrupt indexes and pages. |
| 2 | (SRV\_FORCE\_NO\_BACKGROUND) stops the master thread from running, preventing a crash that occurs during a purge. No purge will be performed, so the undo logs will keep growing. |
| 3 | (SRV\_FORCE\_NO\_TRX\_UNDO) does not roll back transactions after the crash recovery. Does not affect rollback of currently active transactions. Will also prevent some undo-generating background tasks from running. These tasks could hit a lock wait due to the recovered incomplete transactions whose rollback is being prevented. |
| 4 | (SRV\_FORCE\_NO\_IBUF\_MERGE) The same as 3. |
| 5 | (SRV\_FORCE\_NO\_UNDO\_LOG\_SCAN) treats incomplete transactions as committed, and does not look at the [undo logs](../undo-log/index) when starting. |
| 6 | (SRV\_FORCE\_NO\_LOG\_REDO) does not perform redo log roll-forward as part of recovery. Running queries that require indexes are likely to fail with this mode active. However, if a table dump still causes a crash, you can try using a `SELECT * FROM tab ORDER BY primary_key DESC` to dump all the data portion after the corrupted part. |
From [MariaDB 10.6.5](https://mariadb.com/kb/en/mariadb-1065-release-notes/)
| Mode | Description |
| --- | --- |
| 0 | The default mode while InnoDB is running normally. Write transactions are permitted with innodb\_force\_recovery<=4. |
| 1 | (SRV\_FORCE\_IGNORE\_CORRUPT) allows the server to keep running even if corrupt pages are detected. It does so by making redo log based recovery ignore certain errors, such as missing data files or corrupted data pages. Any redo log for affected files or pages will be skipped. You can facilitate dumping tables by getting the SELECT \* FROM table\_name statement to jump over corrupt indexes and pages. |
| 2 | (SRV\_FORCE\_NO\_BACKGROUND) stops the master thread from running, preventing a crash that occurs during a purge. No purge will be performed, so the undo logs will keep growing. |
| 3 | (SRV\_FORCE\_NO\_TRX\_UNDO) does not roll back DML transactions after the crash recovery. Does not affect rollback of currently active DML transactions. Will also prevent some undo-generating background tasks from running. These tasks could hit a lock wait due to the recovered incomplete transactions whose rollback is being prevented. |
| 4 | (SRV\_FORCE\_NO\_DDL\_UNDO) does not roll back transactions after the crash recovery. Does not affect rollback of currently active transactions. Will also prevent some undo-generating background tasks from running. These tasks could hit a lock wait due to the recovered incomplete transactions whose rollback is being prevented. |
| 5 | (SRV\_FORCE\_NO\_UNDO\_LOG\_SCAN) treats incomplete transactions as committed, and does not look at the [undo logs](../undo-log/index) when starting. Any DDL log for InnoDB tables will be essentially ignored by InnoDB, but the server will start up. |
| 6 | (SRV\_FORCE\_NO\_LOG\_REDO) does not perform redo log roll-forward as part of recovery. Running queries that require indexes are likely to fail with this mode active. However, if a table dump still causes a crash, you can try using a `SELECT * FROM tab ORDER BY primary_key DESC` to dump all the data portion after the corrupted part. |
Note also that XtraDB (<= [MariaDB 10.2.6](https://mariadb.com/kb/en/mariadb-1026-release-notes/)) by default will crash the server when it detects corrupted data in a single-table tablespace. This behaviour can be changed - see the [innodb\_corrupt\_table\_action](../xtradbinnodb-server-system-variables/index#innodb_corrupt_table_action) system variable.
Fixing Things
-------------
Try setting innodb\_force\_recovery to 1 and starting MariaDB. If that fails, try a value of 2. If a value of 2 works, there is a chance that the only corruption you have experienced is within the InnoDB undo logs. If that gets MariaDB started, you should be able to dump your databases with mysqldump. You can check for any other issues with your tables by running "mysqlcheck --all-databases".
If you were able to successfully dump your databases, or have previously known good backups, drop your database(s) from the MariaDB command line with "[DROP DATABASE](../drop-database/index) yourdatabase". Stop MariaDB. Go to /var/lib/mysql (or wherever your MySQL data directory is located) and run "rm -i ib\*". Start MariaDB, re-create the database(s) you dropped ("[CREATE DATABASE](../create-database/index) yourdatabase"), and then import your most recent dumps: "mysql < mydatabasedump.sql"
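Put together, a typical recovery session looks roughly like the following (database names and paths are placeholders; the server must be running with innodb\_force\_recovery set as described above):

```
# With innodb_force_recovery=1 (or 2) in my.cnf and the server started:
mysqldump --all-databases > all-databases.sql   # dump everything
mysqlcheck --all-databases                      # check remaining tables
# After DROP DATABASE and stopping the server:
rm -i /var/lib/mysql/ib*                        # remove InnoDB system files
# Restart with innodb_force_recovery removed, re-create databases, then:
mysql < all-databases.sql
```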
mariadb SET STATEMENT SET STATEMENT
=============
**MariaDB starting with [10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/)**Per-query variables were introduced in [MariaDB 10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/).
`SET STATEMENT` can be used to set the value of a system variable for the duration of the statement. It is also possible to set multiple variables.
Syntax
------
```
SET STATEMENT var1=value1 [, var2=value2, ...]
FOR <statement>
```
where `varN` is a system variable (a list of variables that *cannot* be set this way is provided in the Limitations section below), and `valueN` is a constant literal.
Description
-----------
`SET STATEMENT var1=value1 FOR stmt`
is roughly equivalent to
```
SET @save_value=@@var1;
SET SESSION var1=value1;
stmt;
SET SESSION var1=@save_value;
```
The server parses the whole statement before executing it, so any variables set in this fashion that affect the parser may not have the expected effect. Examples include the charset variables, sql\_mode=ansi\_quotes, etc.
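For instance, `sql_mode='ANSI_QUOTES'` affects parsing, so in the statement below the double-quoted `"name"` is still parsed under the old mode and treated as a string literal rather than an identifier (`t1` is a placeholder table):

```
SET STATEMENT sql_mode='ANSI_QUOTES' FOR SELECT "name" FROM t1;
```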
Examples
--------
One can limit statement execution time `[max\_statement\_time](../server-system-variables/index#max_statement_time)`:
```
SET STATEMENT max_statement_time=1000 FOR SELECT ... ;
```
One can switch on/off individual optimizations:
```
SET STATEMENT optimizer_switch='materialization=off' FOR SELECT ....;
```
It is possible to enable MRR/BKA for a query:
```
SET STATEMENT join_cache_level=6, optimizer_switch='mrr=on' FOR SELECT ...
```
Note that it makes no sense to try to set a session variable inside a `SET STATEMENT`:
```
#USELESS STATEMENT
SET STATEMENT sort_buffer_size = 100000 for SET SESSION sort_buffer_size = 200000;
```
For the above, after setting sort\_buffer\_size to 200000 it will be reset to its original state (the state before the `SET STATEMENT` started) after the statement execution.
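The statement-local scope can be observed directly: reading the variable inside the `FOR` statement should show the temporary value (a sketch, using `sort_buffer_size` as an arbitrary example):

```
SELECT @@sort_buffer_size;   -- session value
SET STATEMENT sort_buffer_size=100000 FOR SELECT @@sort_buffer_size;  -- 100000
SELECT @@sort_buffer_size;   -- session value again
```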
Limitations
-----------
There are a number of variables that cannot be set on per-query basis. These include:
* `autocommit`
* `character_set_client`
* `character_set_connection`
* `character_set_filesystem`
* `collation_connection`
* `default_master_connection`
* `debug_sync`
* `interactive_timeout`
* `gtid_domain_id`
* `last_insert_id`
* `log_slow_filter`
* `log_slow_rate_limit`
* `log_slow_verbosity`
* `long_query_time`
* `min_examined_row_limit`
* `profiling`
* `profiling_history_size`
* `query_cache_type`
* `rand_seed1`
* `rand_seed2`
* `skip_replication`
* `slow_query_log`
* `sql_log_off`
* `tx_isolation`
* `wait_timeout`
Source
------
* The feature was originally implemented as a Google Summer of Code 2009 project by Joseph Lukas.
* Percona Server 5.6 included it as [Per-query variable statement](http://www.percona.com/doc/percona-server/5.6/flexibility/per_query_variable_statement.html)
* MariaDB ported the patch and fixed *many* bugs. The task in MariaDB Jira is [MDEV-5231](https://jira.mariadb.org/browse/MDEV-5231).
mariadb Performance Schema events_stages_history Table Performance Schema events\_stages\_history Table
================================================
The `events_stages_history` table by default contains the ten most recent completed stage events per thread. This number can be adjusted by setting the [performance\_schema\_events\_stages\_history\_size](../performance-schema-system-variables/index#performance_schema_events_stages_history_size) system variable when the server starts up.
The table structure is identical to the [events\_stages\_current](../performance-schema-events_stages_current-table/index) table structure, and contains the following columns:
| Column | Description |
| --- | --- |
| `THREAD_ID` | Thread associated with the event. Together with `EVENT_ID` uniquely identifies the row. |
| `EVENT_ID` | Thread's current event number at the start of the event. Together with `THREAD_ID` uniquely identifies the row. |
| `END_EVENT_ID` | `NULL` when the event starts, set to the thread's current event number at the end of the event. |
| `EVENT_NAME` | Event instrument name, corresponding to a `NAME` value from the `setup_instruments` table. |
| `SOURCE` | Name and line number of the source file containing the instrumented code that produced the event. |
| `TIMER_START` | Value in picoseconds when the event timing started or `NULL` if timing is not collected. |
| `TIMER_END` | Value in picoseconds when the event timing ended, or `NULL` if timing is not collected. |
| `TIMER_WAIT` | Value in picoseconds of the event's duration or `NULL` if timing is not collected. |
| `NESTING_EVENT_ID` | `EVENT_ID` of event within which this event nests. |
| `NESTING_EVENT_TYPE` | Nesting event type. One of `transaction`, `statement`, `stage` or `wait`. |
It is possible to empty this table with a `TRUNCATE TABLE` statement.
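For example, to inspect and then clear the history (the thread id is an arbitrary placeholder; pick a real one from the `threads` table):

```
SELECT event_name, source, timer_wait
FROM performance_schema.events_stages_history
WHERE thread_id = 42
ORDER BY event_id;

TRUNCATE TABLE performance_schema.events_stages_history;
```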
[events\_stages\_current](../performance-schema-events_stages_current-table/index) and [events\_stages\_history\_long](../performance-schema-events_stages_history_long-table/index) are related tables.
mariadb ERBuilder Data Modeler ERBuilder Data Modeler
======================
[ERBuilder Data Modeler](https://soft-builder.com/erbuilder-data-modeler/) is a GUI data modeling tool that allows developers to visualize, design, and model databases by using entity relationship diagrams, and automatically generates SQL for the most popular databases. It lets you generate and share data model documentation with your team, and optimize your data model with advanced features such as test data generation, schema comparison, and schema synchronization.
**Supported DBMS include:**
* MariaDB
* MySQL
* Microsoft SQL Server
* Microsoft Azure SQL database
* Oracle
* PostgreSQL
* SQLite
* Firebird
* Amazon Redshift
* Amazon RDS
**Key features:**
* Visual data modeling
* Forward and Reverse Engineering
* Data Model Validation
* Data model documentation
* Schema Comparison and Synchronization
* Test data generation
* Change Database Platform
* Generate web user interface for CRUD
* Version management
**Freeware version**
A feature-limited free version is available for download. Advanced features are available in the commercial edition. A [feature comparison matrix](https://soft-builder.com/features/) provides more details about features, pricing, and trial versions.
**More information**
See <https://www.soft-builder.com> for more information.
mariadb sysbench v0.5 - 3x Five Minute Runs on work with 5.1 vs. 5.2-wl86 sysbench v0.5 - 3x Five Minute Runs on work with 5.1 vs. 5.2-wl86
=================================================================
3x Five Minute Runs on work with 5.1 vs. 5.2-wl86 key cache partitions off
[MariaDB 5.1](../what-is-mariadb-51/index) vs. 5.2-wl86 sysbench benchmark comparison in %
Each test was run three times for 5 minutes.
```
Number of threads
1 4 8 16 32 64 128
sysbench test
delete 107.28 94.70 98.10 107.12 93.59 89.24 86.89
insert 103.15 105.13 101.75 102.78 101.52 100.29 100.89
oltp_complex_ro 101.31 101.77 100.41 98.72 98.53 101.59 100.44
oltp_complex_rw Dup key errors (due to sysbench)
oltp_simple 102.28 100.76 102.70 100.94 101.05 101.81 102.06
select 100.88 101.05 100.48 101.61 101.48 101.87 101.44
update_index 97.57 96.81 93.58 102.43 89.19 107.63 88.29
update_non_index 101.58 83.24 110.46 94.52 106.33 103.87 115.22
(MariaDB 5.1 key_cache_partitions off q/s /
MariaDB 5.2-wl86 key_cache_partitions off q/s * 100)
key_buffer_size = 512M
```
Benchmark was run on work: Linux openSUSE 11.1 (x86\_64), dual-socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data\_dir on single disk.
MariaDB and MySQL were compiled with
```
BUILD/compile-amd64-max
```
[MariaDB 5.1](../what-is-mariadb-51/index) revision was:
```
revno: 2821
committer: Sergei Golubchik <[email protected]>
branch nick: maria-5.1
timestamp: Tue 2010-02-23 13:04:58 +0100
message:
fix for a possible DoS in the my_net_skip_rest()
```
[MariaDB 5.2](../what-is-mariadb-52/index)-wl86 revision was:
```
lp:~maria-captains/maria/maria-5.2-wl86
revno: 2742
committer: Igor Babaev <[email protected]>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
WL#86: Partitioned key cache for MyISAM.
This is the base patch for the task.
```
sysbench was run with the following parameters:
```
--oltp-table-size=20000000 \ # 20 million rows
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000"
```
and this variable part of the parameters
```
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
```
Configuration used for MariaDB:
```
--no-defaults \
--datadir=$DATA_DIR \
--language=./sql/share/english \
--key_buffer_size=512M \
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=$TEMP_DIR"
# --key_cache_partitions=7 \
```
mariadb ps_thread_trx_info ps\_thread\_trx\_info
=====================
Syntax
------
```
sys.ps_thread_trx_info(thread_id)
```
Description
-----------
`ps_thread_trx_info` is a [stored function](../stored-functions/index) available with the [Sys Schema](../sys-schema/index).
It returns a JSON object with information about the thread specified by the given *thread\_id*. This information includes:
* the current transaction
* executed statements (derived from the [Performance Schema events\_transactions\_current Table](../performance-schema-events_transactions_current-table/index) and the [Performance Schema events\_statements\_history Table](../performance-schema-events_statements_history-table/index)); full data will only be returned if the consumers for those tables are enabled.
The maximum length of the returned JSON object is determined by the value of the [ps\_thread\_trx\_info.max\_length sys\_config option](../sys-schema-sys_config-table/index) (by default 65535). If the returned value exceeds this length, a JSON object error is returned.
Examples
--------
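A minimal invocation might look as follows (a sketch; the thread id `11` is a placeholder, so look up a real one first, for example the one belonging to the current connection):

```
SELECT thread_id FROM performance_schema.threads
WHERE processlist_id = CONNECTION_ID();

SELECT sys.ps_thread_trx_info(11)\G
```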
See Also
--------
* [Sys Schema sys\_config Table](../sys-schema-sys_config-table/index)
mariadb Loading Data Into MyRocks Loading Data Into MyRocks
=========================
Being a write-optimized storage engine, MyRocks has special ways to load data much faster than normal INSERTs would.
See
* <http://myrocks.io/docs/getting-started/>; the section about "Migrating from InnoDB to MyRocks in production" has some clues.
* <https://github.com/facebook/mysql-5.6/wiki/Data-Loading> covers the topic in greater detail.
Note: When one loads data with [rocksdb\_bulk\_load=1](../myrocks-system-variables/index#rocksdb_bulk_load) and the data conflicts with the data already in the database, one may get non-trivial errors, for example:
```
ERROR 1105 (HY000): [./.rocksdb/test.t1_PRIMARY_2_0.bulk_load.tmp] bulk load error:
Invalid argument: External file requires flush
```
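A typical bulk-load session follows this pattern (a sketch; the table and file names are placeholders, and the input should be sorted in primary key order and free of conflicts with existing rows):

```
SET SESSION rocksdb_bulk_load=1;
LOAD DATA INFILE '/tmp/t1.tsv' INTO TABLE test.t1;
SET SESSION rocksdb_bulk_load=0;
```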
mariadb New Features for mysqltest in MariaDB New Features for mysqltest in MariaDB
=====================================
Note that not all MariaDB-enhancements are listed on this page. See [mysqltest and mysqltest-embedded](../mysqltest-and-mysqltest-embedded/index) for a full set of options.
Startup Option --connect-timeout
--------------------------------
```
--connect-timeout=N
```
This can be used to set the MYSQL\_OPT\_CONNECT\_TIMEOUT parameter of mysql\_options, to change the number of seconds before an unsuccessful connection attempt times out.
Test Commands for Handling Warnings During Prepare Statements
-------------------------------------------------------------
* `enable_prepare_warnings;`
* `disable_prepare_warnings;`
Normally, when running with the prepared statement protocol with warnings enabled and executing a statement that returns a result set (like SELECT), warnings that occur during the execute phase are shown, but warnings that occur during the prepare phase are *not* shown. The reason for this is that some warnings are returned both during prepare and execute; if both copies of warnings were shown, then test cases would show a different number of warnings between prepared statement execution and normal execution (where there is no prepare phase).
The `enable_prepare_warnings` command changes this so that warnings from both the prepare and execute phase are shown, regardless of whether the statement produces a result set in the execute phase. The `disable_prepare_warnings` command reverts to the default behaviour.
These commands only have effect when running with the prepared statement protocol (--ps-protocol) *and* with warnings enabled (enable\_warnings). Furthermore, they only have effect for statements that return a result set (for statements without result sets, warnings are always shown when warnings are enabled).
**MariaDB [10.0.13](https://mariadb.com/kb/en/mariadb-10013-release-notes/)**The `replace_regex` command supports paired delimiters (like in perl, etc). If the first non-space character in the `replace_regex` argument is one of `(`, `[`, `{`, `<`, then the pattern should end with `)`, `]`, `}`, `>` accordingly. The replacement string can use its own pair of delimiters, not necessarily the same as the pattern. If the first non-space character in the `replace_regex` argument is not one of the above, then it should also separate the pattern and the replacement string and it should end the replacement string. Backslash can be used to escape the current terminating character as usual. The examples below demonstrate valid usage of `replace_regex`:
```
--replace_regex (/some/path)</another/path>
--replace_regex !/foo/bar!foobar!
--replace_regex {pat\}tern}/replace\/ment/i
```
mariadb Rollup Unique User Counts Rollup Unique User Counts
=========================
The Problem
-----------
The normal way to count "Unique Users" is to take large log files, sort by userid, dedup, and count. This requires a rather large amount of processing. Furthermore, the count derived cannot be rolled up. That is, daily counts cannot be added to get weekly counts -- some users will be counted multiple times.
So, the problem is to store the counts in such a way as to allow rolling up.
The solution
------------
Let's think about what we can do with a hash of the userid. The hash could map to a bit in a bit string. A BIT\_COUNT of the bit string would give the 1-bits, representing the number of users. But that bit string would have to be huge. What if we could use shorter bit strings? Then different userids would be folded into the same bit. Let's assume we can solve that.
Meanwhile, what about the rollup? The daily bit strings can be OR'd together to get a similar bit string for the week.
We have now figured out how to do the rollup, but have created another problem -- the counts are too low.
Inflating the BIT\_COUNT
------------------------
A sufficiently random hash (eg MD5) will fold userids into the same bits with a predictable frequency. We need to figure this out, and work backwards. That is, given that X percent of the bits are set, we need a formula that says approximately how many userids were used to get those bits.
I simulated the problem by generating random hashes and calculated the number of bits that would be set. Then, with the help of Eureqa software, I derived the formula:
Y = 0.5456\*X + 0.6543\*tan(1.39\*X\*X\*X)
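The whole scheme (hashing userids into a bit string, OR-ing the daily strings for the rollup, and inverting the bit density with the fitted formula) can be simulated outside SQL. The following Python sketch assumes a 4096-bit string and treats both X and Y as fractions of the string size; those interpretations are assumptions, and only the formula itself is taken verbatim:

```python
import hashlib
import math

NBITS = 4096  # bits per daily bit string -- an assumed size for this sketch

def day_bits(userids):
    """OR one hash-selected bit per userid into an NBITS-wide bit string."""
    bits = 0
    for uid in userids:
        h = int(hashlib.md5(uid.encode()).hexdigest(), 16)
        bits |= 1 << (h % NBITS)
    return bits

def estimate_uniques(bits):
    """Invert the observed bit density using the fitted formula."""
    x = bin(bits).count("1") / NBITS  # fraction of bits set (assumed meaning of X)
    y = 0.5456 * x + 0.6543 * math.tan(1.39 * x ** 3)
    return round(y * NBITS)           # assumed meaning of Y: uniques per bit

# Rollup: daily bit strings simply OR together into a weekly string.
monday = day_bits(f"user{i}" for i in range(300))
tuesday = day_bits(f"user{i}" for i in range(200, 500))
week = monday | tuesday  # users seen on both days occupy the same bits
print(estimate_uniques(week))
```

The OR-based rollup never double-counts a user, because a user seen on both days sets the same bit in both strings.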
How good is it?
---------------
The formula is reasonably precise. It is usually within 1% of the correct value; rarely off by 2%.
Of course, if virtually all the bits are set, the formula can't be very precise. Hence, you need to plan to have the bit strings big enough to handle the expected number of Uniques. In practice, you can use less than 1 bit per Unique. This would be a huge space savings over trying to save all the userids.
Another suggestion... If you are rolling up over a big span of time (eg hourly -> monthly), the bit strings must all be the same length, and the monthly string must be big enough to handle the expected count. This is likely to lead to very sparse hourly bit strings. Hence, it may be prudent to compress the hourly strings.
Postlog
-------
Invented Nov, 2013; published Apr, 2014
Future: Rick is working on actual code (Sep, 2016). It is complicated by bit-wise operations being limited to BIGINT. However, with MySQL 8.0 (freshly released), the desired bit-wise operations can be applied to BLOB, greatly simplifying my code. I hope to publish the pre-8.0 code soon; 8.0 code later.
See also
--------
Rick James graciously allowed us to use this article in the Knowledge Base.
[Rick James' site](http://mysql.rjweb.org/) has other useful tips, how-tos, optimizations, and debugging tips.
Original source: <http://mysql.rjweb.org/doc.php/uniques>
mariadb Multi Range Read Optimization Multi Range Read Optimization
=============================
Multi Range Read is an optimization aimed at improving performance for IO-bound queries that need to scan lots of rows.
Multi Range Read can be used with
* `range` access
* `ref` and `eq_ref` access, when they are using [Batched Key Access](../block-based-join-algorithms/index#batch-key-access-join)
as shown in this diagram:
The Idea
--------
### Case 1: Rowid Sorting for Range Access
Consider a range query:
```
explain select * from tbl where tbl.key1 between 1000 and 2000;
+----+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
| 1 | SIMPLE | tbl | range | key1 | key1 | 5 | NULL | 960 | Using index condition |
+----+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
```
When this query is executed, disk IO access pattern will follow the red line in this figure:
Execution will hit the table rows in random places, as marked with the blue line/numbers in the figure.
When the table is sufficiently big, each table record read will need to actually go to disk (rather than being served from the buffer pool or OS cache), and query execution will be too slow to be practical. For example, a 10,000 RPM disk drive can make about 167 seeks per second, so in the worst case, query execution will be capped at reading about 167 records per second.
SSD drives do not need to do disk seeks, so they will not be hurt as badly, however the performance will still be poor in many cases.
Multi-Range-Read optimization aims to make disk access faster by sorting record read requests and then doing one ordered disk sweep. If one enables Multi Range Read, `EXPLAIN` will show that a "`Rowid-ordered scan`" is used:
```
set optimizer_switch='mrr=on';
Query OK, 0 rows affected (0.06 sec)
explain select * from tbl where tbl.key1 between 1000 and 2000;
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------------------------------------+
| 1 | SIMPLE | tbl | range | key1 | key1 | 5 | NULL | 960 | Using index condition; Rowid-ordered scan |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------------------------------------+
1 row in set (0.03 sec)
```
and the execution will proceed as follows:
Reading disk data sequentially is generally faster, because
* Rotating drives do not have to move the head back and forth
* One can take advantage of IO-prefetching done at various levels
* Each disk page will be read exactly once, which means we won't rely on disk cache (or buffer pool) to save us from reading the same page multiple times.
The above can make a huge difference on performance. There is also a catch, though:
* If you're scanning small data ranges in a table that is sufficiently small so that it completely fits into the OS disk cache, then you may observe that the only effect of MRR is that extra buffering/sorting adds some CPU overhead.
* `LIMIT n` and `ORDER BY ... LIMIT n` queries with small values of `n` may become slower. The reason is that MRR reads data *in disk order*, while `ORDER BY ... LIMIT n` wants first `n` records *in index order*.
### Case 2: Rowid Sorting for Batched Key Access
Batched Key Access can benefit from rowid sorting in the same way as range access does. If one has a join that uses index lookups:
```
explain select * from t1,t2 where t2.key1=t1.col1;
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------------+
| 1 | SIMPLE | t1 | ALL | NULL | NULL | NULL | NULL | 1000 | Using where |
| 1 | SIMPLE | t2 | ref | key1 | key1 | 5 | test.t1.col1 | 1 | |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------------+
2 rows in set (0.00 sec)
```
Execution of this query will cause table `t2` to be hit in random locations by lookups made through `t2.key1=t1.col1`. If you enable Multi Range Read and Batched Key Access, you will get table `t2` to be accessed using a `Rowid-ordered scan`:
```
set optimizer_switch='mrr=on';
Query OK, 0 rows affected (0.06 sec)
set join_cache_level=6;
Query OK, 0 rows affected (0.00 sec)
explain select * from t1,t2 where t2.key1=t1.col1;
+----+-------------+-------+------+---------------+------+---------+--------------+------+--------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+--------------+------+--------------------------------------------------------+
| 1 | SIMPLE | t1 | ALL | NULL | NULL | NULL | NULL | 1000 | Using where |
| 1 | SIMPLE | t2 | ref | key1 | key1 | 5 | test.t1.col1 | 1 | Using join buffer (flat, BKA join); Rowid-ordered scan |
+----+-------------+-------+------+---------------+------+---------+--------------+------+--------------------------------------------------------+
2 rows in set (0.00 sec)
```
The benefits will be similar to those listed for `range` access.
An additional source of speedup is this property: if there are multiple records in `t1` that have the same value of `t1.col1`, then regular Nested-Loops join will make multiple index lookups for the same value of `t2.key1=t1.col1`. The lookups may or may not hit the cache, depending on how big the join is. With Batched Key Access and Multi-Range Read, no duplicate index lookups will be made.
### Case 3: Key Sorting for Batched Key Access
Let us consider again the nested loop join example, with `ref` access on the second table:
```
explain select * from t1,t2 where t2.key1=t1.col1;
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------------+
| 1 | SIMPLE | t1 | ALL | NULL | NULL | NULL | NULL | 1000 | Using where |
| 1 | SIMPLE | t2 | ref | key1 | key1 | 5 | test.t1.col1 | 1 | |
+----+-------------+-------+------+---------------+------+---------+--------------+------+-------------+
```
Execution of this query plan will cause random hits to be made into the index `t2.key1`, as shown in this picture:
In particular, on step #5 we'll read the same index page that we've read on step #2, and the page we've read on step #4 will be re-read on step #6. If all pages you're accessing are in the cache (in the buffer pool, if you're using InnoDB, or in the key cache, if you're using MyISAM), this is not a problem. However, if your cache hit ratio is poor and you're going to hit the disk, it makes sense to sort the lookup keys, as shown in this figure:
This is roughly what `Key-ordered scan` optimization does. In EXPLAIN, it looks as follows:
```
set optimizer_switch='mrr=on,mrr_sort_keys=on';
Query OK, 0 rows affected (0.00 sec)
set join_cache_level=6;
Query OK, 0 rows affected (0.02 sec)
explain select * from t1,t2 where t2.key1=t1.col1\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: t1
type: ALL
possible_keys: a
key: NULL
key_len: NULL
ref: NULL
rows: 1000
Extra: Using where
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: t2
type: ref
possible_keys: key1
key: key1
key_len: 5
ref: test.t1.col1
rows: 1
Extra: Using join buffer (flat, BKA join); Key-ordered Rowid-ordered scan
2 rows in set (0.00 sec)
```
((TODO: a note about why sweep-read over InnoDB's clustered primary index scan (which is, actually the whole InnoDB table itself) will use `Key-ordered scan` algorithm, but not `Rowid-ordered scan` algorithm, even though conceptually they are the same thing in this case))
Buffer Space Management
-----------------------
As was shown above, Multi Range Read requires sort buffers to operate. The size of the buffers is limited by system variables. If MRR has to process more data than fits into its buffer, it will break the scan into multiple passes. The more passes are made, however, the smaller the speedup, so one needs to balance between buffers that are too big (which consume lots of memory) and too small (which limit the possible speedup).
### Range Access
When MRR is used for `range` access, the size of its buffer is controlled by the [mrr\_buffer\_size](../server-system-variables/index#mrr_buffer_size) system variable. Its value specifies how much space can be used for each table. For example, if there is a query which is a 10-way join and MRR is used for each table, `10*@@mrr_buffer_size` bytes may be used.
### Batched Key Access
When Multi Range Read is used by Batched Key Access, then buffer space is managed by BKA code, which will automatically provide a part of its buffer space to MRR. You can control the amount of space used by BKA by setting
* [join\_buffer\_size](../server-system-variables/index#join_buffer_size) to limit how much memory BKA uses for each table, and
* [join\_buffer\_space\_limit](../server-system-variables/index#join_buffer_space_limit) to limit the total amount of memory used by BKA in the join.
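For example, to give MRR and BKA more room before running a large join (the sizes below are illustrative only, not recommendations):

```
SET SESSION mrr_buffer_size = 16*1024*1024;
SET SESSION join_buffer_size = 8*1024*1024;
SET SESSION join_buffer_space_limit = 64*1024*1024;
```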
Status Variables
----------------
There are three status variables related to Multi Range Read:
| Variable name | Meaning |
| --- | --- |
| [Handler\_mrr\_init](../server-status-variables/index#handler_mrr_init) | Counts how many Multi Range Read scans were performed |
| [Handler\_mrr\_key\_refills](../server-status-variables/index#handler_mrr_key_refills) | Number of times key buffer was refilled (not counting the initial fill) |
| [Handler\_mrr\_rowid\_refills](../server-status-variables/index#handler_mrr_rowid_refills) | Number of times rowid buffer was refilled (not counting the initial fill) |
Non-zero values of `Handler_mrr_key_refills` and/or `Handler_mrr_rowid_refills` mean that the Multi Range Read scan did not have enough memory and had to do multiple key/rowid sort-and-sweep passes. The greatest speedup is achieved when Multi Range Read runs everything in one pass; if you see lots of refills, it may be beneficial to increase the sizes of the relevant buffers: [mrr\_buffer\_size](../server-system-variables/index#mrr_buffer_size), [join\_buffer\_size](../server-system-variables/index#join_buffer_size) and [join\_buffer\_space\_limit](../server-system-variables/index#join_buffer_space_limit).
### Effect on Other Status Variables
When a Multi Range Read scan makes an index lookup (or some other "basic" operation), the counter of the "basic" operation, e.g. [Handler\_read\_key](../server-status-variables/index#handler_read_key), will also be incremented. This way, you can still see total number of index accesses, including those made by MRR. [Per-user/table/index statistics](../user-statistics/index) counters also include the row reads made by Multi Range Read scans.
### Why Using Multi Range Read Can Cause Higher Values in Status Variables
Multi Range Read is used for scans that do full record reads (i.e., they are not "Index only" scans). A regular non-index-only scan will read
1. an index record, to get a rowid of the table record
2. a table record

Both actions will be done by making one call to the storage engine, so the effect of the call will be that the relevant `Handler_read_XXX` counter will be incremented BY ONE, and [Innodb\_rows\_read](../xtradbinnodb-server-status-variables/index#innodb_rows_read) will be incremented BY ONE.
Multi Range Read will make separate calls for steps #1 and #2, causing TWO increments to `Handler_read_XXX` counters and TWO increments to `Innodb_rows_read` counter. To the uninformed, this looks as if Multi Range Read was making things worse. Actually, it doesn't - the query will still read the same index/table records, and actually Multi Range Read may give speedups because it reads data in disk order.
Multi Range Read Factsheet
--------------------------
* Multi Range Read is used by
+ `range` access method for range scans.
+ [Batched Key Access](../block-based-join-algorithms/index#batch-key-access-join) for joins
* Multi Range Read can cause slowdowns for small queries over small tables, so it is disabled by default.
* There are two strategies:
+ Rowid-ordered scan
+ Key-ordered scan
You can tell if either of them is used by checking the `Extra` column in `EXPLAIN` output.
* There are three [optimizer\_switch](../server-system-variables/index#optimizer_switch) flags you can switch ON:
+ `mrr=on` - enable MRR and rowid ordered scans
+ `mrr_sort_keys=on` - enable Key-ordered scans (you must also set `mrr=on` for this to have any effect)
+ `mrr_cost_based=on` - enable cost-based choice whether to use MRR. Currently not recommended, because cost model is not sufficiently tuned yet.
Differences from MySQL
----------------------
* MySQL supports only `Rowid ordered scan` strategy, which it shows in `EXPLAIN` as `Using MRR`.
* EXPLAIN in MySQL shows `Using MRR`, while in MariaDB it may show
+ `Rowid-ordered scan`
+ `Key-ordered scan`
+ `Key-ordered Rowid-ordered scan`
* MariaDB uses [mrr\_buffer\_size](../server-system-variables/index#mrr_buffer_size) as a limit of MRR buffer size for `range` access, while MySQL uses [read\_rnd\_buffer\_size](../server-system-variables/index#read_rnd_buffer_size).
* MariaDB has three MRR counters: [Handler\_mrr\_init](../server-status-variables/index#handler_mrr_init), `Handler_mrr_key_refills` and `Handler_mrr_rowid_refills`, while MySQL has only `Handler_mrr_init`, and it will only count MRR scans that were used by BKA. MRR scans used by range access are not counted.
See Also
--------
* [What is MariaDB 5.3](../what-is-mariadb-53/index)
* [Multi-Range Read Optimization](http://dev.mysql.com/doc/refman/5.6/en/mrr-optimization.html) page in MySQL manual
mariadb mysql_install_db.exe mysql\_install\_db.exe
======================
The `mysql_install_db.exe` utility is the Windows equivalent of [mysql\_install\_db](../mysql_install_db/index).
Functionality
-------------
The functionality of `mysql_install_db.exe` is comparable with the shell script `mysql_install_db` used on Unix, however it has been extended with both Windows specific functionality (creating a Windows service) and to generally useful functionality. For example, it can set the 'root' user password during database creation. It also creates the `my.ini` configuration file in the data directory and adds most important parameters to it (e.g port).
`mysql_install_db.exe` is used by the MariaDB installer for Windows if the "Database instance" feature is selected. It obsoletes similar utilities and scripts that were used in the past, such as `mysqld.exe --install`, `mysql_install_db.pl`, and `mysql_secure_installation.pl`.
| Parameter | Description |
| --- | --- |
| `-?`, `--help` | Display help message and exit |
| `-d`, `--datadir=name` | Data directory of the new database |
| `-S`, `--service=name` | Name of the Windows service |
| `-p`, `--password=name` | Password of the root user |
| `-P`, `--port=#` | `mysqld` port |
| `-W`, `--socket=name` | Named pipe name |
| `-D`, `--default-user` | Create default user |
| `-R`, `--allow-remote-root-access` | Allow remote access from network for user root |
| `-N`, `--skip-networking` | Do not use TCP connections, use pipe instead |
| `-i`, `--innodb-page-size` | InnoDB page size, since [MariaDB 10.2.5](https://mariadb.com/kb/en/mariadb-1025-release-notes/) |
| `-s`, `--silent` | Print less information |
| `-o`, `--verbose-bootstrap` | Include mysqld bootstrap output |
| `-l`, `--large-pages` | Use large pages, since [MariaDB 10.6.1](https://mariadb.com/kb/en/mariadb-1061-release-notes/) |
| `-c`, `--config` | my.ini config template file, since [MariaDB 10.6.1](https://mariadb.com/kb/en/mariadb-1061-release-notes/) |
**Note** : to create a Windows service, `mysql_install_db.exe` should be run by a user with full administrator privileges (which means elevated command prompt on systems with UAC). For example, if you are running it on Windows 7, make sure that your command prompt was launched via 'Run as Administrator' option.
Example
-------
```
mysql_install_db.exe --datadir=C:\db --service=MyDB --password=secret
```
will create the database in the directory C:\db, register the auto-start Windows service "MyDB", and set the root password to 'secret'.
To start the service from the command line, execute
```
sc start MyDB
```
Removing Database Instances
---------------------------
If you run your database instance as service, to remove it completely from the command line, use
```
sc stop <servicename>
sc delete <servicename>
rmdir /s /q <path-to-datadir>
```
mariadb Information Schema INNODB_BUFFER_POOL_PAGES_BLOB Table Information Schema INNODB\_BUFFER\_POOL\_PAGES\_BLOB Table
==========================================================
The [Information Schema](../information_schema/index) `INNODB_BUFFER_POOL_PAGES_BLOB` table is a Percona enhancement, and is only available for XtraDB, not InnoDB (see [XtraDB and InnoDB](../xtradb-and-innodb/index)). It contains information about [buffer pool](../xtradbinnodb-memory-buffer/index) blob pages.
It has the following columns:
| Column | Description |
| --- | --- |
| `SPACE_ID` | Tablespace ID. |
| `PAGE_NO` | Page offset within tablespace. |
| `COMPRESSED` | `1` if the blob contains compressed data, `0` if not. |
| `PART_LEN` | Page data length. |
| `NEXT_PAGE_NO` | Next page number. |
| `LRU_POSITION` | Page position in the LRU (least-recently-used) list. |
| `FIX_COUNT` | Page reference count, incremented each time the page is accessed. `0` if the page is not currently being accessed. |
| `FLUSH_TYPE` | Flush type of the most recent flush: `0` (LRU), `2` (flush\_list) |
mariadb Configuring Linux for MariaDB Configuring Linux for MariaDB
=============================
Linux kernel settings
---------------------
### IO scheduler
For optimal IO performance when running a database, use the *none* (previously called *noop*) scheduler. Recommended schedulers are *none* and *mq-deadline* (previously called *deadline*). You can check your scheduler setting with:
```
cat /sys/block/${DEVICE}/queue/scheduler
```
For instance, the output should look like this:
```
cat /sys/block/vdb/queue/scheduler
[none] mq-deadline kyber bfq
```
Older kernels may look like:
```
cat /sys/block/sda/queue/scheduler
[noop] deadline cfq
```
Writing the new scheduler name to the same /sys node will change the scheduler:
```
echo mq-deadline > /sys/block/vdb/queue/scheduler
```
The impact of schedulers depends significantly on workload and hardware. You can measure the IO latency using the [biolatency](https://github.com/iovisor/bcc/blob/master/tools/biolatency_example.txt) bcc-tools script, with the aim of keeping the mean as low as possible.
Resource Limits
---------------
### Configuring the Open Files Limit
By default, the system limits how many open file descriptors a process can have open at one time. It has both a soft and hard limit. On many systems, both the soft and hard limit default to 1024. On an active database server, it is very easy to exceed 1024 open file descriptors. Therefore, you may need to increase the soft and hard limits. There are a few ways to do so.
If you are using `[mysqld\_safe](../mysqld_safe/index)` to start `mysqld`, then see the instructions at [mysqld\_safe: Configuring the Open Files Limit](../mysqld_safe/index#configuring-the-open-files-limit).
If you are using `[systemd](../systemd/index)` to start `mysqld`, then see the instructions at [systemd: Configuring the Open Files Limit](../systemd/index#configuring-the-open-files-limit).
Otherwise, you can set the soft and hard limits for the `mysql` user account by adding the following lines to `[/etc/security/limits.conf](https://linux.die.net/man/5/limits.conf)`:
```
mysql soft nofile 65535
mysql hard nofile 65535
```
After the system is rebooted, the `mysql` user should use the new limits, and the user's `ulimit` output should look like the following:
```
$ ulimit -Sn
65535
$ ulimit -Hn
65535
```
### Configuring the Core File Size
By default, the system limits the size of core files that could be created. It has both a soft and hard limit. On many systems, the soft limit defaults to 0. If you want to [enable core dumps](../enabling-core-dumps/index), then you may need to increase this. Therefore, you may need to increase the soft and hard limits. There are a few ways to do so.
If you are using `[mysqld\_safe](../mysqld_safe/index)` to start `mysqld`, then see the instructions at [mysqld\_safe: Configuring the Core File Size](../mysqld_safe/index#configuring-the-core-file-size).
If you are using `[systemd](../systemd/index)` to start `mysqld`, then see the instructions at [systemd: Configuring the Core File Size](../systemd/index#configuring-the-core-file-size).
Otherwise, you can set the soft and hard limits for the `mysql` user account by adding the following lines to `[/etc/security/limits.conf](https://linux.die.net/man/5/limits.conf)`:
```
mysql soft core unlimited
mysql hard core unlimited
```
After the system is rebooted, the `mysql` user should use the new limits, and the user's `ulimit` output should look like the following:
```
$ ulimit -Sc
unlimited
$ ulimit -Hc
unlimited
```
Swappiness
----------
See [configuring swappiness](../configuring-swappiness/index).
mariadb Building MariaDB on CentOS Building MariaDB on CentOS
==========================
In the event that you are using the Linux-based operating system CentOS or any of its derivatives, you can optionally compile MariaDB from source code. This is useful in cases where you want to use a more recent release than the one available in the official repositories, or when you want to enable certain features that are not otherwise accessible.
Installing Build Dependencies
-----------------------------
Before you start building MariaDB, you first need to install the build dependencies required to run the compile. CentOS provides a tool for installing build dependencies. The `yum-builddep` utility reads a package and generates a list of the packages required to build from source, then calls YUM to install them for you. In the event that this utility is not available on your system, you can install it through the `yum-utils` package. Once you have it, install the MariaDB build dependencies.
```
# yum-builddep mariadb-server
```
Running this command installs many of the build dependencies, but it doesn't install all of them. Not all the required packages are noted and it's run against the official CentOS package of MariaDB, not necessarily the version that you want to install. Use YUM to install the remaining packages.
```
# yum install git \
gcc \
gcc-c++ \
bison \
libxml2-devel \
libevent-devel \
rpm-build
```
In addition to these, you also need to install `gnutls` or `openssl`, depending on the TLS implementation you want to use.
For more information on dependencies, see [Linux Build Environment](../build_environment_setup_for_linux/index).
Building MariaDB
----------------
Once you have the base dependencies installed, you can retrieve the source code and start building MariaDB. The source code is available on GitHub. Use the `--branch` option to specify the particular version of MariaDB you want to build.
```
$ git clone --branch 10.3 https://github.com/MariaDB/server.git
```
With the source repository cloned onto your system, you can start building MariaDB. Run CMake to ready MariaDB for the build:
```
$ cmake -DRPM=centos7 server/
```
Once CMake readies the relevant Makefile for your system, use Make to build MariaDB.
```
$ make package
```
This generates an RPM file, which you can then install on your system or copy over to install on other CentOS hosts.
Creating MariaDB-compat package
-------------------------------
The MariaDB-compat package contains libraries from older MariaDB releases. They cannot be built from the current source tree, so cpack creates them by repackaging old MariaDB-shared packages. If you want the -compat package created, you need to download the MariaDB-shared-5.3 and MariaDB-shared-10.1 rpm packages for your architecture (any minor version will do) and put them *one level above* the source tree you're building. CMake will pick them up and create a MariaDB-compat package. CMake reports this as:
```
$ ls ../*.rpm
../MariaDB-shared-10.1.17-centos7-x86_64.rpm
../MariaDB-shared-5.3.12-122.el5.x86_64.rpm
$ cmake -DRPM=centos7 .
...
Using ../MariaDB-shared-5.3.12-122.el5.x86_64.rpm to build MariaDB-compat
Using ../MariaDB-shared-10.1.17-centos7-x86_64.rpm to build MariaDB-compat
```
Additional Dependencies
-----------------------
In the event that you missed a package while installing build dependencies, CMake may continue to fail even after you install the necessary packages. If this happens to you, delete the CMake cache, then run the above command again:
```
$ rm CMakeCache.txt
```
When CMake runs through the tests again, it should now find the packages it needs, instead of the cache telling it they're unavailable.
More about CMake and CPackRPM
-----------------------------
See also [building RPM packages from source](../building-rpm-packages-from-source/index)
mariadb Parentheses Parentheses
===========
Parentheses are sometimes called precedence operators - this means that they can be used to change the other [operator's precedence](../operator-precedence/index) in an expression. The expressions that are written between parentheses are computed before the expressions that are written outside. Parentheses must always contain an expression (that is, they cannot be empty), and can be nested.
For example, the following expressions could return different results:
* `NOT a OR b`
* `NOT (a OR b)`
In the first case, `NOT` applies to `a`, so if `a` is `FALSE` or `b` is `TRUE`, the expression returns `TRUE`. In the second case, `NOT` applies to the result of `a OR b`, so if at least one of `a` or `b` is `TRUE`, the expression is `TRUE`.
When the precedence of operators is not intuitive, you can use parentheses to make it immediately clear for whoever reads the statement.
The precedence of the `NOT` operator can also be affected by the `HIGH_NOT_PRECEDENCE` [SQL\_MODE](../sql-mode/index) flag.
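A minimal sketch of the difference, using `1` for TRUE and `0` for FALSE (with the default SQL\_MODE, where `NOT` binds more tightly than `OR`):

```sql
SELECT NOT 0 OR 1;   -- evaluated as (NOT 0) OR 1, returns 1
SELECT NOT (0 OR 1); -- NOT applies to the whole disjunction, returns 0
```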
Other uses
----------
Parentheses must always be used to enclose [subqueries](../subqueries/index).
Parentheses can also be used in a `[JOIN](../join/index)` statement between multiple tables to determine which tables must be joined first.
Also, parentheses are used to enclose the list of parameters to be passed to built-in functions, user-defined functions and stored routines. However, when no parameter is passed to a stored procedure, parentheses are optional. For builtin functions and user-defined functions, spaces are not allowed between the function name and the open parenthesis, unless the `IGNORE_SPACE` [SQL\_MODE](../sql-mode/index) is set. For stored routines (and for functions if `IGNORE_SPACE` is set) spaces are allowed before the open parenthesis, including tab characters and new line characters.
Syntax errors
-------------
If there are more open parentheses than closed parentheses, the error usually looks like this:
```
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use near '' a
t line 1
```
Note the empty string.
If there are more closed parentheses than open parentheses, the error usually looks like this:
```
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use near ')'
at line 1
```
Note the quoted closed parenthesis.
mariadb mariadbd mariadbd
========
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadbd` is a symlink to [mysqld](../mysqld-options/index), the MariaDB server.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mariadbd` is the name of the server, with `mysqld` a symlink.
See [mysqld](../mysqld-options/index) for details.
mariadb InnoDB Background Encryption Threads InnoDB Background Encryption Threads
====================================
InnoDB performs some encryption and decryption operations with background encryption threads. The [innodb\_encryption\_threads](../innodb-system-variables/index#innodb_encryption_threads) system variable controls the number of threads that the storage engine uses for encryption-related background operations, including encrypting and decrypting pages after key rotations or configuration changes, and [scrubbing](../innodb-data-scrubbing/index) data to permanently delete it.
Background Operations
---------------------
InnoDB performs the following encryption and decryption operations using background encryption threads:
* When [rotating encryption keys](../encryption-key-management/index#key-rotation), InnoDB's background encryption threads re-encrypt pages that use key versions older than [innodb\_encryption\_rotate\_key\_age](../innodb-system-variables/index#innodb_encryption_rotate_key_age) to the new key version.
* When changing the [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) system variable to `FORCE`, InnoDB's background encryption threads encrypt the [system](../innodb-system-tablespaces/index) tablespace and any [file-per-table](../innodb-file-per-table-tablespaces/index) tablespaces that have the `[ENCRYPTED](../create-table/index#encrypted)` table option set to `DEFAULT`.
* When changing the [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) system variable to `OFF`, InnoDB's background encryption threads decrypt the [system](../innodb-system-tablespaces/index) tablespace and any [file-per-table](../innodb-file-per-table-tablespaces/index) tablespaces that have the `[ENCRYPTED](../create-table/index#encrypted)` table option set to `DEFAULT`.
The [innodb\_encryption\_rotation\_iops](../innodb-system-variables/index#innodb_encryption_rotation_iops) system variable can be used to configure how many I/O operations you want to allow for the operations performed by InnoDB's background encryption threads.
Whenever you change the value on the [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) system variable, InnoDB's background encryption threads perform the necessary encryption or decryption operations. Because of this, you must have a non-zero value set for the [innodb\_encryption\_threads](../innodb-system-variables/index#innodb_encryption_threads) system variable. InnoDB also considers these operations to be key rotations internally. Because of this, you must have a non-zero value set for the [innodb\_encryption\_rotate\_key\_age](../innodb-system-variables/index#innodb_encryption_rotate_key_age) system variable. For more information, see [disabling key rotations](#disabling-background-key-rotation-operations).
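For example, to have the background threads decrypt all tablespaces whose `ENCRYPTED` option is `DEFAULT`, the variables could be set along these lines (the thread count of 4 is illustrative, not a recommendation):

```sql
SET GLOBAL innodb_encryption_threads = 4;
SET GLOBAL innodb_encryption_rotate_key_age = 1;
SET GLOBAL innodb_encrypt_tables = OFF;
```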
Non-background Operations
-------------------------
InnoDB performs the following encryption and decryption operations **without** using background encryption threads:
* When using [file-per-table](../innodb-file-per-table-tablespaces/index) tablespaces and using [ALTER TABLE](../alter-table/index) to manually set the [ENCRYPTED](../create-table/index#encrypted) table option to `YES`, InnoDB does **not** use background threads to encrypt the tablespaces.
* Similarly, when using [file-per-table](../innodb-file-per-table-tablespaces/index) tablespaces and using [ALTER TABLE](../alter-table/index) to manually set the [ENCRYPTED](../create-table/index#encrypted) table option to `NO`, InnoDB does **not** use background threads to decrypt the tablespaces.
In these cases, InnoDB performs the encryption or decryption operation using the server thread for the client connection that executes the statement. This means that you can update encryption on [file-per-table](../innodb-file-per-table-tablespaces/index) tablespaces with an [ALTER TABLE](../alter-table/index) statement, even when the [innodb\_encryption\_threads](../innodb-system-variables/index#innodb_encryption_threads) and/or the [innodb\_encryption\_rotate\_key\_age](../innodb-system-variables/index#innodb_encryption_rotate_key_age) system variables are set to `0`.
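As a sketch, encryption on a single file-per-table tablespace can be toggled directly (the table name `tab1` is hypothetical):

```sql
-- Encrypt one table's tablespace; this runs in the client's
-- server thread, not in a background encryption thread.
ALTER TABLE tab1 ENCRYPTED=YES;

-- Decrypt it again.
ALTER TABLE tab1 ENCRYPTED=NO;
```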
InnoDB does not permit manual encryption changes to tables in the [system](../innodb-system-tablespaces/index) tablespace using [ALTER TABLE](../alter-table/index). Encryption of the [system](../innodb-system-tablespaces/index) tablespace can only be configured by setting the value of the [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) system variable. This means that when you want to encrypt or decrypt the [system](../innodb-system-tablespaces/index) tablespace, you must also set a non-zero value for the [innodb\_encryption\_threads](../innodb-system-variables/index#innodb_encryption_threads) system variable, and you must also set the [innodb\_encryption\_rotate\_key\_age](../innodb-system-variables/index#innodb_encryption_rotate_key_age) system variable to `1` to ensure that the system tablespace is properly encrypted or decrypted by the background threads. See [MDEV-14398](https://jira.mariadb.org/browse/MDEV-14398) for more information.
Checking the Status of Background Operations
--------------------------------------------
InnoDB records the status of background encryption operations in the [INNODB\_TABLESPACES\_ENCRYPTION](../information-schema-innodb_tablespaces_encryption-table/index) table in the [information\_schema](../information-schema/index) database.
For example, to see which InnoDB tablespaces are currently being decrypted or encrypted by background encryption threads, you can check which InnoDB tablespaces have the `ROTATING_OR_FLUSHING` column set to `1`:
```
SELECT SPACE, NAME
FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE ROTATING_OR_FLUSHING = 1;
```
And to see how many InnoDB tablespaces are currently being decrypted or encrypted by background encryption threads, you can call the [COUNT()](../count/index) aggregate function.
```
SELECT COUNT(*) AS 'encrypting'
FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE ROTATING_OR_FLUSHING = 1;
```
And to see how many InnoDB tablespaces are currently being decrypted or encrypted by background encryption threads, while comparing that to the total number of InnoDB tablespaces and the total number of encrypted InnoDB tablespaces, you can join the table with the [INNODB\_SYS\_TABLESPACES](../information-schema-innodb_sys_tablespaces-table/index) table in the [information\_schema](../information-schema/index) database:
```
/* information_schema.INNODB_TABLESPACES_ENCRYPTION does not always have rows for all tablespaces,
so let's join it with information_schema.INNODB_SYS_TABLESPACES */
WITH tablespace_ids AS (
SELECT SPACE
FROM information_schema.INNODB_SYS_TABLESPACES ist
UNION
/* information_schema.INNODB_SYS_TABLESPACES doesn't have a row for the system tablespace (MDEV-20802) */
SELECT 0 AS SPACE
)
SELECT NOW() as 'time',
'tablespaces', COUNT(*) AS 'tablespaces',
'encrypted', SUM(IF(ite.ENCRYPTION_SCHEME IS NOT NULL, ite.ENCRYPTION_SCHEME, 0)) AS 'encrypted',
'encrypting', SUM(IF(ite.ROTATING_OR_FLUSHING IS NOT NULL, ite.ROTATING_OR_FLUSHING, 0)) AS 'encrypting'
FROM tablespace_ids
LEFT JOIN information_schema.INNODB_TABLESPACES_ENCRYPTION ite
ON tablespace_ids.SPACE = ite.SPACE
```
mariadb POLYGON POLYGON
=======
Syntax
------
```
Polygon(ls1,ls2,...)
```
Description
-----------
Constructs a WKB Polygon value from a number of [WKB](../wkb/index) [LineString](../linestring/index) arguments. If any argument does not represent the WKB of a LinearRing (that is, not a closed and simple LineString) the return value is `NULL`.
Note that according to the OpenGIS standard, a POLYGON should have exactly one ExteriorRing and all other rings should lie within that ExteriorRing and thus be the InteriorRings. Practically, however, some systems, including MariaDB's, permit polygons to have several 'ExteriorRings'. In the case of there being multiple, non-overlapping exterior rings [ST\_NUMINTERIORRINGS()](../st_numinteriorrings/index) will return 1.
Examples
--------
```
SET @g = ST_GEOMFROMTEXT('POLYGON((1 1,1 5,4 9,6 9,9 3,7 2,1 1))');
CREATE TABLE gis_polygon (g POLYGON);
INSERT INTO gis_polygon VALUES
(PolygonFromText('POLYGON((10 10,20 10,20 20,10 20,10 10))')),
(PolyFromText('POLYGON((0 0,50 0,50 50,0 50,0 0), (10 10,20 10,20 20,10 20,10 10))')),
(PolyFromWKB(AsWKB(Polygon(LineString(Point(0, 0), Point(30, 0), Point(30, 30), Point(0, 0))))));
```
Non-overlapping 'polygon':
```
SELECT ST_NumInteriorRings(ST_PolyFromText('POLYGON((0 0,10 0,10 10,0 10,0 0),
(-1 -1,-5 -1,-5 -5,-1 -5,-1 -1))')) AS NumInteriorRings;
+------------------+
| NumInteriorRings |
+------------------+
| 1 |
+------------------+
```
mariadb EXPLAIN FORMAT=JSON EXPLAIN FORMAT=JSON
===================
Synopsis
--------
`EXPLAIN FORMAT=JSON` is a variant of [EXPLAIN](../explain/index) command that produces output in JSON form. The output always has one row which has only one column titled "`JSON`". The contents are a JSON representation of the query plan, formatted for readability:
```
EXPLAIN FORMAT=JSON SELECT * FROM t1 WHERE col1=1\G
```
```
*************************** 1. row ***************************
EXPLAIN: {
"query_block": {
"select_id": 1,
"table": {
"table_name": "t1",
"access_type": "ALL",
"rows": 1000,
"filtered": 100,
"attached_condition": "(t1.col1 = 1)"
}
}
}
```
Output is different from MySQL
------------------------------
The output of MariaDB's `EXPLAIN FORMAT=JSON` is different from `EXPLAIN FORMAT=JSON` in MySQL. The reasons for this are:
* MySQL's output has deficiencies. Some are listed here: [EXPLAIN FORMAT=JSON in MySQL](../explain-formatjson-in-mysql/index)
* The output of MySQL's `EXPLAIN FORMAT=JSON` is not defined. Even MySQL Workbench has trouble parsing it (see this [blog post](http://s.petrunia.net/blog/?p=93)).
* MariaDB has query optimizations that MySQL does not have. Ergo, MariaDB generates query plans that MySQL does not generate.
A (as yet incomplete) list of how MariaDB's output is different from MySQL can be found here: [EXPLAIN FORMAT=JSON differences from MySQL](../explain-formatjson-differences-from-mysql/index).
Output Format
-------------
TODO: MariaDB's output format description.
See Also
--------
* [ANALYZE FORMAT=JSON](../analyze-formatjson/index) produces output like `EXPLAIN FORMAT=JSON`, but amended with the data from query execution.
mariadb Window Functions Overview Window Functions Overview
=========================
**MariaDB starting with [10.2](../what-is-mariadb-102/index)**Window functions were introduced in [MariaDB 10.2](../what-is-mariadb-102/index).
Introduction
------------
Window functions allow calculations to be performed across a set of rows related to the current row.
### Syntax
```
function (expression) OVER (
[ PARTITION BY expression_list ]
[ ORDER BY order_list [ frame_clause ] ] )
function:
A valid window function
expression_list:
expression | column_name [, expr_list ]
order_list:
expression | column_name [ ASC | DESC ]
[, ... ]
frame_clause:
{ROWS | RANGE} {frame_border | BETWEEN frame_border AND frame_border}
frame_border:
| UNBOUNDED PRECEDING
| UNBOUNDED FOLLOWING
| CURRENT ROW
| expr PRECEDING
| expr FOLLOWING
```
### Description
In some ways, window functions are similar to [aggregate functions](../aggregate-functions/index) in that they perform calculations across a set of rows. However, unlike aggregate functions, the output is not grouped into a single row.
Non-aggregate window functions include
* [CUME\_DIST](../cume_dist/index)
* [DENSE\_RANK](../dense_rank/index)
* [FIRST\_VALUE](../first_value/index)
* [LAG](../lag/index)
* [LAST\_VALUE](../last_value/index)
* [LEAD](../lead/index)
* [MEDIAN](../median/index)
* [NTH\_VALUE](../nth_value/index)
* [NTILE](../ntile/index)
* [PERCENT\_RANK](../percent_rank/index)
* [PERCENTILE\_CONT](../percentile_cont/index)
* [PERCENTILE\_DISC](../percentile_disc/index)
* [RANK](../rank/index)
* [ROW\_NUMBER](../row_number/index)
[Aggregate functions](../aggregate-functions/index) that can also be used as window functions include
* [AVG](../avg/index)
* [BIT\_AND](../bit_and/index)
* [BIT\_OR](../bit_or/index)
* [BIT\_XOR](../bit_xor/index)
* [COUNT](../count/index)
* [MAX](../max/index)
* [MIN](../min/index)
* [STD](../std/index)
* [STDDEV](../stddev/index)
* [STDDEV\_POP](../stddev_pop/index)
* [STDDEV\_SAMP](../stddev_samp/index)
* [SUM](../sum/index)
* [VAR\_POP](../var_pop/index)
* [VAR\_SAMP](../var_samp/index)
* [VARIANCE](../variance/index)
Window function queries are characterised by the OVER keyword, following which the set of rows used for the calculation is specified. By default, the set of rows used for the calculation (the "window") is the entire dataset, which can be ordered with the ORDER BY clause. The PARTITION BY clause is used to reduce the window to a particular group within the dataset.
For example, given the following data:
```
CREATE TABLE student (name CHAR(10), test CHAR(10), score TINYINT);
INSERT INTO student VALUES
('Chun', 'SQL', 75), ('Chun', 'Tuning', 73),
('Esben', 'SQL', 43), ('Esben', 'Tuning', 31),
('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88),
('Tatiana', 'SQL', 87), ('Tatiana', 'Tuning', 83);
```
the following two queries return the average partitioned by test and by name respectively:
```
SELECT name, test, score, AVG(score) OVER (PARTITION BY test)
AS average_by_test FROM student;
+---------+--------+-------+-----------------+
| name | test | score | average_by_test |
+---------+--------+-------+-----------------+
| Chun | SQL | 75 | 65.2500 |
| Chun | Tuning | 73 | 68.7500 |
| Esben | SQL | 43 | 65.2500 |
| Esben | Tuning | 31 | 68.7500 |
| Kaolin | SQL | 56 | 65.2500 |
| Kaolin | Tuning | 88 | 68.7500 |
| Tatiana | SQL | 87 | 65.2500 |
| Tatiana | Tuning | 83 | 68.7500 |
+---------+--------+-------+-----------------+
SELECT name, test, score, AVG(score) OVER (PARTITION BY name)
AS average_by_name FROM student;
+---------+--------+-------+-----------------+
| name | test | score | average_by_name |
+---------+--------+-------+-----------------+
| Chun | SQL | 75 | 74.0000 |
| Chun | Tuning | 73 | 74.0000 |
| Esben | SQL | 43 | 37.0000 |
| Esben | Tuning | 31 | 37.0000 |
| Kaolin | SQL | 56 | 72.0000 |
| Kaolin | Tuning | 88 | 72.0000 |
| Tatiana | SQL | 87 | 85.0000 |
| Tatiana | Tuning | 83 | 85.0000 |
+---------+--------+-------+-----------------+
```
It is also possible to specify which rows to include for the window function (for example, the current row and all preceding rows). See [Window Frames](../window-frames/index) for more details.
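For instance, a frame clause can turn `SUM` into a running total over the `student` data above, where each row's frame covers the start of its partition up to the current row (a sketch; see the Window Frames page for the full frame semantics):

```sql
SELECT name, test, score,
       SUM(score) OVER (PARTITION BY name ORDER BY test
                        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
         AS running_total
FROM student;
```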
Scope
-----
Window functions were introduced in SQL:2003, and their definition was expanded in subsequent versions of the standard. The last expansion was in the latest version of the standard, SQL:2011.
Most database products support a subset of the standard: they implement some functions defined as late as SQL:2011, while at the same time leaving some parts of SQL:2008 unimplemented.
MariaDB:
* Supports ROWS and RANGE-type frames
+ All kinds of frame bounds are supported, including `RANGE PRECEDING|FOLLOWING n` frame bounds (unlike PostgreSQL or MS SQL Server)
+ Does not yet support DATE[TIME] datatype and arithmetic for RANGE-type frames ([MDEV-9727](https://jira.mariadb.org/browse/MDEV-9727))
* Does not support GROUPS-type frames (it seems that no popular database supports it, either)
* Does not support frame exclusion (no other database seems to support it, either) ([MDEV-9724](https://jira.mariadb.org/browse/MDEV-9724))
* Does not support explicit `NULLS FIRST` or `NULLS LAST`.
* Does not support nested navigation in window functions (this is the `VALUE_OF(expr AT row_marker [, default_value])` syntax)
* The following window functions are supported:
+ "Streamable" window functions: [ROW\_NUMBER](../row_number/index), [RANK](../rank/index), [DENSE\_RANK](../dense_rank/index)
+ Window functions that can be streamed once the number of rows in partition is known: [PERCENT\_RANK](../percent_rank/index), [CUME\_DIST](../cume_dist/index), [NTILE](../ntile/index)
* Aggregate functions that are currently supported as window functions are: [COUNT](../count/index), [SUM](../sum/index), [AVG](../avg/index), [BIT\_OR](../bit_or/index), [BIT\_AND](../bit_and/index), [BIT\_XOR](../bit_xor/index).
* Aggregate functions with the `DISTINCT` specifier (e.g. `COUNT( DISTINCT x)`) are not supported as window functions.
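As a sketch against the `student` table above, any of the supported aggregates can be combined with an OVER clause, while the DISTINCT form is rejected:

```
-- supported: SUM as a window function
SELECT name, test, score,
       SUM(score) OVER (PARTITION BY test) AS total_by_test
FROM student;

-- not supported: aggregate with DISTINCT as a window function
-- SELECT COUNT(DISTINCT name) OVER (PARTITION BY test) FROM student;
```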
Links
-----
* [MDEV-6115](https://jira.mariadb.org/browse/MDEV-6115) is the main jira task for window functions development. Other tasks are attached as sub-tasks
* [bb-10.2-mdev9543](https://github.com/MariaDB/server/commits/bb-10.2-mdev9543) is the feature tree for window functions. Development is ongoing, and this tree has the newest changes.
* Testcases are in `mysql-test/t/win*.test`
Examples
--------
Given the following sample data:
```
CREATE TABLE users (
email VARCHAR(30),
first_name VARCHAR(30),
last_name VARCHAR(30),
account_type VARCHAR(30)
);
INSERT INTO users VALUES
('[email protected]', 'Admin', 'Boss', 'admin'),
('[email protected]', 'Bob', 'Carlsen', 'regular'),
('[email protected]', 'Eddie', 'Stevens', 'regular'),
('[email protected]', 'John', 'Smith', 'regular'),
('[email protected]', 'Root', 'Chief', 'admin');
```
First, let's order the records by email alphabetically, giving each an ascending *rnum* value starting with 1. This will make use of the [ROW\_NUMBER](../row_number/index) window function:
```
SELECT row_number() OVER (ORDER BY email) AS rnum,
email, first_name, last_name, account_type
FROM users ORDER BY email;
+------+------------------------+------------+-----------+--------------+
| rnum | email | first_name | last_name | account_type |
+------+------------------------+------------+-----------+--------------+
| 1 | [email protected] | Admin | Boss | admin |
| 2 | [email protected] | Bob | Carlsen | regular |
| 3 | [email protected] | Eddie | Stevens | regular |
| 4 | [email protected] | John | Smith | regular |
| 5 | [email protected] | Root | Chief | admin |
+------+------------------------+------------+-----------+--------------+
```
We can generate separate sequences based on account type, using the PARTITION BY clause:
```
SELECT row_number() OVER (PARTITION BY account_type ORDER BY email) AS rnum,
email, first_name, last_name, account_type
FROM users ORDER BY account_type,email;
+------+------------------------+------------+-----------+--------------+
| rnum | email | first_name | last_name | account_type |
+------+------------------------+------------+-----------+--------------+
| 1 | [email protected] | Admin | Boss | admin |
| 2 | [email protected] | Root | Chief | admin |
| 1 | [email protected] | Bob | Carlsen | regular |
| 2 | [email protected] | Eddie | Stevens | regular |
| 3 | [email protected] | John | Smith | regular |
+------+------------------------+------------+-----------+--------------+
```
Given the following structure and data, we want to find the top 5 salaries from each department.
```
CREATE TABLE employee_salaries (dept VARCHAR(20), name VARCHAR(20), salary INT(11));
INSERT INTO employee_salaries VALUES
('Engineering', 'Dharma', 3500),
('Engineering', 'Binh', 3000),
('Engineering', 'Adalynn', 2800),
('Engineering', 'Samuel', 2500),
('Engineering', 'Cveta', 2200),
('Engineering', 'Ebele', 1800),
('Sales', 'Carbry', 500),
('Sales', 'Clytemnestra', 400),
('Sales', 'Juraj', 300),
('Sales', 'Kalpana', 300),
('Sales', 'Svantepolk', 250),
('Sales', 'Angelo', 200);
```
We could do this without using window functions, as follows:
```
select dept, name, salary
from employee_salaries as t1
where (select count(t2.salary)
from employee_salaries as t2
where t1.name != t2.name and
t1.dept = t2.dept and
t2.salary > t1.salary) < 5
order by dept, salary desc;
+-------------+--------------+--------+
| dept | name | salary |
+-------------+--------------+--------+
| Engineering | Dharma | 3500 |
| Engineering | Binh | 3000 |
| Engineering | Adalynn | 2800 |
| Engineering | Samuel | 2500 |
| Engineering | Cveta | 2200 |
| Sales | Carbry | 500 |
| Sales | Clytemnestra | 400 |
| Sales | Juraj | 300 |
| Sales | Kalpana | 300 |
| Sales | Svantepolk | 250 |
+-------------+--------------+--------+
```
This has a number of disadvantages:
* If there is no index, the query could take a long time if the employee\_salaries table is large.
* Adding and maintaining indexes adds overhead, and even with indexes on *dept* and *salary*, each subquery execution adds overhead by performing a lookup through the index.
Let's try to achieve the same with window functions. First, generate a rank for all employees, using the [RANK](../rank/index) function.
```
select rank() over (partition by dept order by salary desc) as ranking,
dept, name, salary
from employee_salaries
order by dept, ranking;
+---------+-------------+--------------+--------+
| ranking | dept | name | salary |
+---------+-------------+--------------+--------+
| 1 | Engineering | Dharma | 3500 |
| 2 | Engineering | Binh | 3000 |
| 3 | Engineering | Adalynn | 2800 |
| 4 | Engineering | Samuel | 2500 |
| 5 | Engineering | Cveta | 2200 |
| 6 | Engineering | Ebele | 1800 |
| 1 | Sales | Carbry | 500 |
| 2 | Sales | Clytemnestra | 400 |
| 3 | Sales | Juraj | 300 |
| 3 | Sales | Kalpana | 300 |
| 5 | Sales | Svantepolk | 250 |
| 6 | Sales | Angelo | 200 |
+---------+-------------+--------------+--------+
```
Each department has a separate sequence of ranks due to the *PARTITION BY* clause. This particular sequence of values for *rank()* is given by the *ORDER BY* clause inside the window function’s *OVER* clause. Finally, to get our results in a readable format we order the data by *dept* and the newly generated *ranking* column.
Now, we need to reduce the results to find only the top 5 per department. Here is a common mistake:
```
select
rank() over (partition by dept order by salary desc) as ranking,
dept, name, salary
from employee_salaries
where ranking <= 5
order by dept, ranking;
ERROR 1054 (42S22): Unknown column 'ranking' in 'where clause'
```
Trying to filter only the first 5 values per department by putting a where clause in the statement does not work, due to the way window functions are computed. The computation of window functions happens after all WHERE, GROUP BY and HAVING clauses have been completed, right before ORDER BY, so the WHERE clause has no idea that the ranking column exists. It is only present after we have filtered and grouped all the rows.
To counteract this problem, we need to wrap our query in a derived table, to which we can then attach a WHERE clause:
```
select * from (select rank() over (partition by dept order by salary desc) as ranking,
dept, name, salary
from employee_salaries) as salary_ranks
where (salary_ranks.ranking <= 5)
order by dept, ranking;
+---------+-------------+--------------+--------+
| ranking | dept | name | salary |
+---------+-------------+--------------+--------+
| 1 | Engineering | Dharma | 3500 |
| 2 | Engineering | Binh | 3000 |
| 3 | Engineering | Adalynn | 2800 |
| 4 | Engineering | Samuel | 2500 |
| 5 | Engineering | Cveta | 2200 |
| 1 | Sales | Carbry | 500 |
| 2 | Sales | Clytemnestra | 400 |
| 3 | Sales | Juraj | 300 |
| 3 | Sales | Kalpana | 300 |
| 5 | Sales | Svantepolk | 250 |
+---------+-------------+--------------+--------+
```
See Also
--------
* [Window Frames](../window-frames/index)
* [Introduction to Window Functions in MariaDB Server 10.2](https://mariadb.com/resources/blog/introduction-window-functions-mariadb-server-102)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Histogram-Based Statistics Histogram-Based Statistics
==========================
**MariaDB starting with [10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/)**Histograms are collected by default from [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/).
Histogram-based statistics are a mechanism to improve the query plan chosen by the optimizer in certain situations. Before their introduction, all conditions on non-indexed columns were ignored when searching for the best execution plan. Histograms can be collected for both indexed and non-indexed columns, and are made available to the optimizer.
Histogram statistics are stored in the [mysql.column\_stats](../mysqlcolumn_stats-table/index) table, which stores data for [engine-independent table statistics](../engine-independent-table-statistics/index), and so are essentially a subset of engine-independent table statistics.
Consider this example, using the following query:
```
SELECT * FROM t1,t2 WHERE t1.a=t2.a and t2.b BETWEEN 1 AND 3;
```
Let's assume that
* table t1 contains 100 records
* table t2 contains 1000 records
* there is a primary index on t1(a)
* there is a secondary index on t2(a)
* there is no index defined on column t2.b
* the selectivity of the condition t2.b BETWEEN 1 AND 3 is high (~ 1%)
Before histograms were introduced, the optimizer would choose the plan that:
* accesses t1 using a table scan
* accesses t2 using index t2(a)
* checks the condition t2.b BETWEEN 1 AND 3
This plan examines all rows of both tables and performs 100 index look-ups.
With histograms available, the optimizer can choose the following, more efficient plan:
* accesses table t2 in a table scan
* checks the condition t2.b BETWEEN 1 AND 3
* accesses t1 using index t1(a)
This plan also examines all rows from t2, but it performs only 10 look-ups to access 10 rows of table t1.
System Variables
----------------
There are a number of system variables that affect histograms.
### histogram\_size
The [histogram\_size](../server-system-variables/index#histogram_size) variable determines the size, in bytes, from 0 to 255, used for a histogram. This is effectively the number of bins for `histogram_type=SINGLE_PREC_HB` or number of bins/2 for `histogram_type=DOUBLE_PREC_HB`. If it is set to 0 (the default for [MariaDB 10.4.2](https://mariadb.com/kb/en/mariadb-1042-release-notes/) and below), no histograms are created when running an [ANALYZE TABLE](../analyze-table/index).
### histogram\_type
The [histogram\_type](../server-system-variables/index#histogram_type) variable determines whether single precision (`SINGLE_PREC_HB`) or double precision (`DOUBLE_PREC_HB`) height-balanced histograms are created. From [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/), double precision is the default. For [MariaDB 10.4.2](https://mariadb.com/kb/en/mariadb-1042-release-notes/) and below, single precision is the default.
From [MariaDB 10.8](../what-is-mariadb-108/index), `JSON_HB` (JSON-format histograms) is also accepted.
### optimizer\_use\_condition\_selectivity
The [optimizer\_use\_condition\_selectivity](../server-system-variables/index#optimizer_use_condition_selectivity) system variable controls which statistics can be used by the optimizer when looking for the best query execution plan.
* `1` Use selectivity of predicates as in [MariaDB 5.5](../what-is-mariadb-55/index).
* `2` Use selectivity of all range predicates supported by indexes.
* `3` Use selectivity of all range predicates estimated without histogram.
* `4` Use selectivity of all range predicates estimated with histogram.
* `5` Additionally use selectivity of certain non-range predicates calculated on record sample.
From [MariaDB 10.4.1](https://mariadb.com/kb/en/mariadb-1041-release-notes/), the default is `4`. Until [MariaDB 10.4.0](https://mariadb.com/kb/en/mariadb-1040-release-notes/), the default is `1`.
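Putting these variables together, here is a sketch of collecting and inspecting a histogram; the `test.t2` table is illustrative, and the statistics land in the [mysql.column\_stats](../mysqlcolumn_stats-table/index) table mentioned above:

```
SET histogram_size = 254;
SET histogram_type = 'DOUBLE_PREC_HB';
SET optimizer_use_condition_selectivity = 4;

-- collect engine-independent statistics, including histograms
ANALYZE TABLE t2 PERSISTENT FOR ALL;

-- inspect the collected histograms in human-readable form
SELECT column_name, hist_type,
       DECODE_HISTOGRAM(hist_type, histogram) AS histogram
FROM mysql.column_stats
WHERE db_name = 'test' AND table_name = 't2';
```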
Example
-------
Here is an example of the dramatic impact histogram-based statistics can make. The query is based on [DBT3 Benchmark Q20](../dbt3-benchmark-queries/index#q20) with 60 million records in the `lineitem` table.
```
select sql_calc_found_rows s_name, s_address from
supplier, nation where
s_suppkey in
(select ps_suppkey from partsupp where
ps_partkey in (select p_partkey from part where
p_name like 'forest%') and
ps_availqty >
(select 0.5 * sum(l_quantity) from lineitem where
l_partkey = ps_partkey and l_suppkey = ps_suppkey and
l_shipdate >= date('1994-01-01') and
l_shipdate < date('1994-01-01') + interval '1' year ))
and s_nationkey = n_nationkey
and n_name = 'CANADA'
order by s_name
limit 10;
```
First, with the materialization and semi-join optimizations switched off:
```
set optimizer_switch='materialization=off,semijoin=off';
```
```
+---+-------- +----------+-------+...+------+----------+------------
| id| sel_type| table | type |...| rows | filt | Extra
+---+-------- +----------+-------+...+------+----------+------------
| 1 | PRIMARY | nation | ALL |...| 25 |100.00 | Using where;...
| 1 | PRIMARY | supplier | ref |...| 1447 |100.00 | Using where; Subq
| 2 | DEP SUBQ| partsupp | idxsq |...| 38 |100.00 | Using where
| 4 | DEP SUBQ| lineitem | ref |...| 3 |100.00 | Using where
| 3 | DEP SUBQ| part | unqsb |...| 1 |100.00 | Using where
+---+-------- +----------+-------+...+------+----------+------------
10 rows in set
(51.78 sec)
```
Next, a really bad plan, yet one sometimes chosen:
```
+---+-------- +----------+-------+...+------+----------+------------
| id| sel_type| table | type |...| rows | filt | Extra
+---+-------- +----------+-------+...+------+----------+------------
| 1 | PRIMARY | supplier | ALL |...|100381|100.00 | Using where; Subq
| 1 | PRIMARY | nation | ref |...| 1 |100.00 | Using where
| 2 | DEP SUBQ| partsupp | idxsq |...| 38 |100.00 | Using where
| 4 | DEP SUBQ| lineitem | ref |...| 3 |100.00 | Using where
| 3 | DEP SUBQ| part | unqsb |...| 1 |100.00 | Using where
+---+-------- +----------+-------+...+------+----------+------------
10 rows in set
(7 min 33.42 sec)
```
[Persistent statistics](../engine-independent-table-statistics/index) don't improve matters:
```
set use_stat_tables='preferably';
+---+-------- +----------+-------+...+------+----------+------------
| id| sel_type| table | type |...| rows | filt | Extra
+---+-------- +----------+-------+...+------+----------+------------
| 1 | PRIMARY | supplier | ALL |...|10000 |100.00 | Using where;
| 1 | PRIMARY | nation | ref |...| 1 |100.00 | Using where
| 2 | DEP SUBQ| partsupp | idxsq |...| 80 |100.00 | Using where
| 4 | DEP SUBQ| lineitem | ref |...| 7 |100.00 | Using where
| 3 | DEP SUBQ| part | unqsb |...| 1 |100.00 | Using where
+---+-------- +----------+-------+...+------+----------+------------
10 rows in set
(7 min 40.44 sec)
```
The default flags for [optimizer\_switch](../server-system-variables/index#optimizer_switch) do not help much:
```
set optimizer_switch='materialization=default,semijoin=default';
+---+-------- +----------+-------+...+------+----------+------------
| id| sel_type| table | type |...| rows | filt | Extra
+---+-------- +----------+-------+...+------+----------+------------
| 1 | PRIMARY | supplier | ALL |...|10000 |100.00 | Using where;
| 1 | PRIMARY | nation | ref |...| 1 |100.00 | Using where
| 1 | PRIMARY | <subq2> | eq_ref|...| 1 |100.00 |
| 2 | MATER | part | ALL |.. |2000000|100.00 | Using where
| 2 | MATER | partsupp | ref |...| 4 |100.00 | Using where; Subq
| 4 | DEP SUBQ| lineitem | ref |...| 7 |100.00 | Using where
+---+-------- +----------+-------+...+------+----------+------------
10 rows in set
(5 min 21.44 sec)
```
Using statistics doesn't help either:
```
set optimizer_switch='materialization=default,semijoin=default';
set optimizer_use_condition_selectivity=4;
+---+-------- +----------+-------+...+------+----------+------------
| id| sel_type| table | type |...| rows | filt | Extra
+---+-------- +----------+-------+...+------+----------+------------
| 1 | PRIMARY | nation | ALL |...| 25 |4.00 | Using where
| 1 | PRIMARY | supplier | ref |...| 4000 |100.00 | Using where;
| 1 | PRIMARY | <subq2> | eq_ref|...| 1 |100.00 |
| 2 | MATER | part | ALL |.. |2000000|1.56 | Using where
| 2 | MATER | partsupp | ref |...| 4 |100.00 | Using where; Subq
| 4 | DEP SUBQ| lineitem | ref |...| 7 | 30.72 | Using where
+---+-------- +----------+-------+...+------+----------+------------
10 rows in set
(5 min 22.41 sec)
```
Now, taking into account the cost of the dependent subquery:
```
set optimizer_switch='materialization=default,semijoin=default';
set optimizer_use_condition_selectivity=4;
set optimizer_switch='expensive_pred_static_pushdown=on';
+---+-------- +----------+-------+...+------+----------+------------
| id| sel_type| table | type |...| rows | filt | Extra
+---+-------- +----------+-------+...+------+----------+------------
| 1 | PRIMARY | nation | ALL |...| 25 | 4.00 | Using where
| 1 | PRIMARY | supplier | ref |...| 4000 |100.00 | Using where;
| 2 | PRIMARY | partsupp | ref |...| 80 |100.00 |
| 2 | PRIMARY | part | eq_ref|...| 1 | 1.56 | where; Subq; FM
| 4 | DEP SUBQ| lineitem | ref |...| 7 | 30.72 | Using where
+---+-------- +----------+-------+...+------+----------+------------
10 rows in set
(49.89 sec)
```
Finally, using [join\_buffer](../server-system-variables/index#join_buffer_size) as well:
```
set optimizer_switch= 'materialization=default,semijoin=default';
set optimizer_use_condition_selectivity=4;
set optimizer_switch='expensive_pred_static_pushdown=on';
set join_cache_level=6;
set optimizer_switch='mrr=on';
set optimizer_switch='mrr_sort_keys=on';
set join_buffer_size=1024*1024*16;
set join_buffer_space_limit=1024*1024*32;
+---+-------- +----------+-------+...+------+----------+------------
| id| sel_type| table | type |...| rows | filt | Extra
+---+-------- +----------+-------+...+------+----------+------------
| 1 | PRIMARY | nation | ALL  |...| 25 | 4.00 | Using where
| 1 | PRIMARY | supplier | ref |...| 4000 |100.00 | where; BKA
| 2 | PRIMARY | partsupp | ref |...| 80 |100.00 | BKA
| 2 | PRIMARY | part | eq_ref|...| 1 | 1.56 | where Sq; FM; BKA
| 4 | DEP SUBQ| lineitem | ref |...| 7 | 30.72 | Using where
+---+-------- +----------+-------+...+------+----------+------------
10 rows in set
(35.71 sec)
```
See Also
--------
* [DECODE\_HISTOGRAM()](../decode_histogram/index)
* [Index Statistics](../index-statistics/index)
* [InnoDB Persistent Statistics](../innodb-persistent-statistics/index)
* [Engine-independent Statistics](../engine-independent-table-statistics/index)
* [JSON Histograms](https://mariadb.org/10-7-preview-feature-json-histograms/) (mariadb.org blog)
* [Improved histograms in MariaDB 10.8](https://youtu.be/uz3rr3WnQOs) (video)
mariadb SOUNDEX SOUNDEX
=======
Syntax
------
```
SOUNDEX(str)
```
Description
-----------
Returns a soundex string from *`str`*. Two strings that sound almost the same should have identical soundex strings. A standard soundex string is four characters long, but the `SOUNDEX()` function returns an arbitrarily long string. You can use `SUBSTRING()` on the result to get a standard soundex string. All non-alphabetic characters in *`str`* are ignored. All international alphabetic characters outside the A-Z range are treated as vowels.
**Important:** When using SOUNDEX(), you should be aware of the following details:
* This function, as currently implemented, is intended to work well with strings that are in the English language only. Strings in other languages may not produce reasonable results.
* This function implements the original Soundex algorithm, not the more popular enhanced version (also described by D. Knuth). The difference is that the original version discards vowels first and duplicates second, whereas the enhanced version discards duplicates first and vowels second.
Examples
--------
```
SOUNDEX('Hello');
+------------------+
| SOUNDEX('Hello') |
+------------------+
| H400 |
+------------------+
```
```
SELECT SOUNDEX('MariaDB');
+--------------------+
| SOUNDEX('MariaDB') |
+--------------------+
| M631 |
+--------------------+
```
```
SELECT SOUNDEX('Knowledgebase');
+--------------------------+
| SOUNDEX('Knowledgebase') |
+--------------------------+
| K543212 |
+--------------------------+
```
```
SELECT givenname, surname FROM users WHERE SOUNDEX(givenname) = SOUNDEX("robert");
+-----------+---------+
| givenname | surname |
+-----------+---------+
| Roberto | Castro |
+-----------+---------+
```
See Also
--------
* [SOUNDS LIKE](../sounds-like/index)()
mariadb COLUMN_CHECK COLUMN\_CHECK
=============
Syntax
------
```
COLUMN_CHECK(dyncol_blob);
```
Description
-----------
Checks whether `dyncol_blob` is a valid packed dynamic columns blob. A return value of 1 means the blob is valid; a return value of 0 means it is not.
**Rationale:** Normally, one works with valid dynamic column blobs. Functions like [COLUMN\_CREATE](../column_create/index), [COLUMN\_ADD](../column_add/index), [COLUMN\_DELETE](../column_delete/index) always return valid dynamic column blobs. However, if a dynamic column blob is accidentally truncated, or transcoded from one character set to another, it will be corrupted. This function can be used to check if a value in a blob field is a valid dynamic column blob.
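A minimal sketch: a blob built by [COLUMN\_CREATE](../column_create/index) validates, while an arbitrary string does not:

```
SELECT COLUMN_CHECK(COLUMN_CREATE('color', 'blue'));
-- returns 1: the value is a valid dynamic columns blob

SELECT COLUMN_CHECK('not a dynamic columns blob');
-- returns 0: the value is not a valid blob
```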
mariadb Parallel Replication Parallel Replication
====================
The terms *master* and *slave* have historically been used in replication, but the terms *primary* and *replica* are now preferred. The old terms are still used in parts of the documentation, and in MariaDB commands, although [MariaDB 10.5](../what-is-mariadb-105/index) has begun the process of renaming. The documentation process is ongoing. See [MDEV-18777](https://jira.mariadb.org/browse/MDEV-18777) to follow progress on this effort.
Some writes [replicated](../standard-replication/index) from the primary can be executed in parallel (simultaneously) on the replica. Note that for parallel replication to work, both the primary and replica need to be [MariaDB 10.0.5](https://mariadb.com/kb/en/mariadb-1005-release-notes/) or later.
Parallel Replication Overview
-----------------------------
MariaDB replication in general takes place in three parts:
* Replication events are read from the primary by the IO thread and queued in the [relay log](../relay-log/index).
* Replication events are fetched one at a time by the SQL thread from the relay log
* Each event is applied on the replica to replicate all changes done on the primary.
Before MariaDB 10, the third step was also performed by the SQL thread; this meant that only one event could execute at a time, and replication was essentially single-threaded. Since MariaDB 10, the third step can optionally be performed by a pool of separate replication worker threads, and thereby potentially increase replication performance by applying multiple events in parallel.
How to Enable Parallel Replication
----------------------------------
To enable, specify [slave-parallel-threads=#](../replication-and-binary-log-server-system-variables/index#slave_parallel_threads) in your [my.cnf](../mysqld-startup-options/index) file as an argument to mysqld. Parallel replication can additionally be disabled for an individual multi-source connection by setting [@@connection\_name.slave-parallel-mode](../replication-and-binary-log-server-system-variables/index#slave_parallel_mode) to "none".
The value (#) of slave\_parallel\_threads specifies how many threads will be created in a pool of worker threads used to apply events in parallel for *all* your replica connections (this includes [multi-source replication](../multi-source-replication/index)). If the value is zero, no worker threads are created, and old-style replication is used, where events are applied inside the SQL thread. If non-zero, the value should usually be at least two times the number of multi-source primary connections used. It makes little sense to use only a single worker thread per connection; this incurs some overhead in inter-thread communication between the SQL thread and the worker thread, while with just a single worker thread events cannot be applied in parallel anyway.
`slave-parallel-threads=#` is a dynamic variable that can be changed without restarting mysqld. All replica connections must, however, be stopped while changing the value.
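As a sketch, the thread count can be changed at runtime, provided all replica connections are stopped first (the value 8 is purely illustrative):

```
-- all replica connections must be stopped while changing the value
STOP ALL SLAVES;
SET GLOBAL slave_parallel_threads = 8;  -- 8 is an illustrative value
START ALL SLAVES;
```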
Configuring the Replica Parallel Mode
-------------------------------------
Parallel replication can be in-order or out-of-order:
* In-order executes transactions in parallel, but orders the commit step of the transactions to happen in the exact same order as on the primary. Transactions are only executed in parallel to the extent that this can be automatically verified to be possible without any conflicts. This means that the use of parallelism is completely transparent to the application.
* Out-of-order can execute and commit transactions in different order on the replica than originally on the primary. This means that the application must be tolerant to seeing updates occur in different order. The application is also responsible for ensuring that there are no conflicts between transactions that are replicated out-of-order. Out-of-order is only used in GTID mode and only when explicitly enabled by the application, using the replication domain that is part of the GTID.
### In-Order Parallel Replication
#### Optimistic Mode of In-Order Parallel Replication
Optimistic mode of in-order parallel replication provides a lot of opportunities for parallel apply on the replica while still preserving exact transaction semantics from the point of view of applications. It is the default mode from [MariaDB 10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/).
Optimistic mode of in-order parallel replication can be configured by setting the [slave\_parallel\_mode](../replication-and-binary-log-server-system-variables/index#slave_parallel_mode) system variable to `optimistic` on the replica.
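A minimal sketch of switching a replica to optimistic mode at runtime (the replica must be stopped while the mode is changed):

```
STOP SLAVE;
SET GLOBAL slave_parallel_mode = 'optimistic';
START SLAVE;
```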
Any transactional DML (INSERT/UPDATE/DELETE) is allowed to run in parallel, up to the limit of [@@slave\_domain\_parallel\_threads](../replication-and-binary-log-server-system-variables/index#slave_domain_parallel_threads). This may cause conflicts on the replica, e.g. if two transactions try to modify the same row. Any such conflict is detected, and the latter of the two transactions is rolled back, allowing the former to proceed. The latter transaction is then re-tried once the former has completed.
The term "optimistic" is used for this mode, because the server optimistically assumes that few conflicts will occur, and that the extra work spent rolling back and retrying conflicting transactions is justified from the gain from running most transactions in parallel.
There are a few heuristics to try to avoid needless conflicts. If a transaction executed a row lock wait on the primary, it will not be run in parallel on the replica. Transactions can also be marked explicitly as potentially conflicting on the primary, by setting the variable [@@skip\_parallel\_replication](../replication-and-binary-log-server-system-variables/index#skip_parallel_replication). More such heuristics may be added in later MariaDB versions. There is a further [--slave-parallel-mode](../replication-and-binary-log-server-system-variables/index#slave_parallel_mode) called "aggressive", where these heuristics are disabled, allowing even more transactions to be applied in parallel.
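For illustration, an application on the primary could mark a transaction it knows is likely to conflict (the `hot_counters` table is hypothetical):

```
-- hint to the replica not to apply this transaction in parallel
SET SESSION skip_parallel_replication = 1;
BEGIN;
UPDATE hot_counters SET hits = hits + 1 WHERE id = 1;  -- hypothetical hot-spot row
COMMIT;
SET SESSION skip_parallel_replication = 0;
```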
Non-transactional DML and DDL is not safe to optimistically apply in parallel, as it cannot be rolled back in case of conflicts. Thus, in optimistic mode, non-transactional (such as MyISAM) updates are not applied in parallel with earlier events (it is however possible to apply a MyISAM update in parallel with a later InnoDB update). DDL statements are not applied in parallel with any other transactions, earlier or later.
The different kind of transactions can be identified in the output of [mariadb-binlog/mysqlbinlog](../mysqlbinlog/index). For example:
```
#150324 13:06:26 server id 1 end_log_pos 6881 GTID 0-1-42 ddl
...
#150324 13:06:26 server id 1 end_log_pos 7816 GTID 0-1-47
...
#150324 13:06:26 server id 1 end_log_pos 8177 GTID 0-1-49 trans
/*!100101 SET @@session.skip_parallel_replication=1*//*!*/;
...
#150324 13:06:26 server id 1 end_log_pos 9836 GTID 0-1-59 trans waited
```
GTID 0-1-42 is marked as being DDL. GTID 0-1-47 is marked as being non-transactional DML, while GTID 0-1-49 is transactional DML (seen on the "trans" keyword). GTID 0-1-49 was additionally run with [@@skip\_parallel\_replication](../replication-and-binary-log-server-system-variables/index#skip_parallel_replication) set on the primary. GTID 0-1-59 is transactional DML that had a row lock wait when run on the primary (the "waited" keyword).
#### Aggressive Mode of In-Order Parallel Replication
Aggressive mode of in-order parallel replication is very similar to optimistic mode. The main difference is that the replica does not consider whether transactions conflicted on the primary when deciding whether to apply the transactions in parallel.
Aggressive mode of in-order parallel replication can be configured by setting the [slave\_parallel\_mode](../replication-and-binary-log-server-system-variables/index#slave_parallel_mode) system variable to `aggressive` on the replica.
#### Conservative Mode of In-Order Parallel Replication
Conservative mode of in-order parallel replication uses the [group commit](../group-commit-for-the-binary-log/index) on the primary to discover potential for parallel apply of events on the replica. If two transactions commit together in a [group commit](../group-commit-for-the-binary-log/index) on the primary, they are written into the binlog with the same commit id. Such events are certain to not conflict with each other, and they can be scheduled by the parallel replication to run in different worker threads.
Conservative mode of in-order parallel replication is the default mode until [MariaDB 10.5.0](https://mariadb.com/kb/en/mariadb-1050-release-notes/), but it can also be configured by setting the [slave\_parallel\_mode](../replication-and-binary-log-server-system-variables/index#slave_parallel_mode) system variable to `conservative` on the replica.
Two transactions that were committed separately on the primary can potentially conflict (e.g. modify the same row of a table). Thus, the worker that applies the second transaction will not start immediately, but wait until the first transaction begins the commit step; at this point it is safe to start the second transaction, as it can no longer disrupt the execution of the first one.
Here is example output from [mariadb-binlog/mysqlbinlog](../mysqlbinlog/index) that shows how GTID events are marked with commit id. The GTID 0-1-47 has no commit id, and can not run in parallel. The GTIDs 0-1-48 and 0-1-49 have the same commit id 630, and can thus replicate in parallel with one another on a replica:
```
#150324 12:54:24 server id 1 end_log_pos 20052 GTID 0-1-47 trans
...
#150324 12:54:24 server id 1 end_log_pos 20212 GTID 0-1-48 cid=630 trans
...
#150324 12:54:24 server id 1 end_log_pos 20372 GTID 0-1-49 cid=630 trans
```
In either case, when the two transactions reach the point where the low-level commit happens and commit order is determined, the two commits are sequenced to happen in the same order as on the primary, so that operation is transparent to applications.
The opportunities for parallel replication on replicas can be highly increased if more transactions are committed in a [group commit](../group-commit-for-the-binary-log/index) on the primary. This can be tuned using the [binlog\_commit\_wait\_count](../replication-and-binary-log-server-system-variables/index#binlog_commit_wait_count) and [binlog\_commit\_wait\_usec](../replication-and-binary-log-server-system-variables/index#binlog_commit_wait_usec) variables. If for example the application can tolerate up to 50 milliseconds extra delay for transactions on the primary, one can set `binlog_commit_wait_usec=50000` and `binlog_commit_wait_count=20` to get up to 20 transactions at a time available for replication in parallel. Care must however be taken to not set `binlog_commit_wait_usec` too high, as this could cause significant slowdown for applications that run a lot of small transactions serially one after the other.
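The tuning described above can be sketched as follows (the values are the ones from the example, not recommendations):

```
-- Allow the primary to delay commits by up to 50 milliseconds while
-- waiting for up to 20 transactions to join the same group commit:
SET GLOBAL binlog_commit_wait_usec = 50000;
SET GLOBAL binlog_commit_wait_count = 20;
```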
Note that even if there is no parallelism available from the primary [group commit](../group-commit-for-the-binary-log/index), there is still an opportunity for speedup from in-order parallel replication, since the actual commit steps of different transactions can run in parallel. This can be particularly effective on a replica with binlog enabled ([log\_slave\_updates=1](../replication-and-binary-log-server-system-variables/index#log_slave_updates)), and more so if replica is configured to be crash-safe ([sync\_binlog=1](../replication-and-binary-log-server-system-variables/index#sync_binlog) and [innodb\_flush\_log\_at\_trx\_commit=1](../xtradbinnodb-server-system-variables/index#innodb_flush_log_at_trx_commit)), as this makes [group commit](../group-commit-for-the-binary-log/index) possible on the replica.
#### Minimal Mode of In-Order Parallel Replication
Minimal mode of in-order parallel replication *only* allows the commit step of transactions to be applied in parallel; all other steps are applied serially.
Minimal mode of in-order parallel replication can be configured by setting the [slave\_parallel\_mode](../replication-and-binary-log-server-system-variables/index#slave_parallel_mode) system variable to `minimal` on the replica.
### Out-of-Order Parallel Replication
Out-of-order parallel replication happens (only) when using GTID mode, when GTIDs with different replication domains are used. The replication domain is set by the DBA/application using the variable `gtid_domain_id`.
Two transactions having GTIDs with different domain\_id are scheduled to different worker threads by parallel replication, and are allowed to execute completely independently from each other. It is the responsibility of the application to only set different domain\_ids for transactions that are truly independent, and are guaranteed to not conflict with each other. The application must also be able to work correctly even though the transactions with different domain\_id are seen as committing in different order between the replica and the primary, and between different replicas.
Out-of-order parallel replication can potentially give more performance gain than in-order parallel replication, since the application can explicitly give more opportunities for running transactions in parallel than what the server can determine on its own automatically.
One simple but effective usage is to run long-running statements, such as ALTER TABLE, in a separate replication domain. This allows replication of other transactions to proceed uninterrupted:
```
SET SESSION gtid_domain_id=1;
ALTER TABLE t ADD INDEX myidx(b);
SET SESSION gtid_domain_id=0;
```
Normally, a long-running ALTER TABLE or other query will stall all following transactions, causing the replica to fall behind the primary for at least as long as it takes to run the long-running query. This can be avoided by using out-of-order parallel replication, i.e. by setting a separate replication domain id. The DBA/application must ensure that no conflicting transactions will be replicated while the ALTER TABLE runs.
Another common opportunity for out-of-order parallel replication comes in connection with multi-source replication. Suppose we have two different primaries M1 and M2, and we are using multi-source replication to have S1 as a replica of both M1 and M2. S1 will apply events received from M1 in parallel with events received from M2. If we now have a third-level replica S2 that replicates from S1 as primary, we want S2 to also be able to apply events that originated on M1 in parallel with events that originated on M2. This can be achieved with out-of-order parallel replication, by setting `gtid_domain_id` different on M1 and M2.
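A minimal sketch of the multi-source setup above: give each primary its own replication domain, so that a downstream replica can apply their transactions in parallel (the domain id values are arbitrary, as long as they differ):

```
-- On primary M1:
SET GLOBAL gtid_domain_id = 1;
-- On primary M2:
SET GLOBAL gtid_domain_id = 2;
```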
Note that there are no special restrictions on what operations can be replicated in parallel using out-of-order; such operations can be on the same database/schema or even on the same table. The only restriction is that the operations must not conflict, that is they must be able to be applied in any order and still end up with the same result.
When using out-of-order parallel replication, the current replica position in the primary's binlog becomes multi-dimensional - each replication domain can have reached a different point in the primary binlog at any one time. The current position can be seen from the variable `gtid_slave_pos`. When the replica is stopped, restarted, or switched to replicate from a different primary using CHANGE MASTER, MariaDB automatically handles restarting each replication domain at the appropriate point in the binlog.
Out-of-order parallel replication is disabled when [--slave-parallel-mode=minimal](../replication-and-binary-log-server-system-variables/index#slave_parallel_mode) (or none).
Checking Worker Thread Status in SHOW PROCESSLIST
-------------------------------------------------
The worker threads will be listed as "system user" in [SHOW PROCESSLIST](../show-processlist/index). Their state will show the query they are currently working on, or it can show one of these:
* "Waiting for work from main SQL threads". This means that the worker thread is idle, no work is available for it at the moment.
* "Waiting for prior transaction to start commit before starting next transaction". This means that the previous batch of transactions that committed together on the primary has to complete first. This worker thread is waiting for that to happen before it can start working on the following batch.
* "Waiting for prior transaction to commit". This means that the transaction has been executed by the worker thread. In order to ensure in-order commit, the worker thread is waiting to commit until the previous transaction is ready to commit before it.
Expected Performance Gain
-------------------------
Here is an article showing up to ten times improvement when using parallel replication: <http://kristiannielsen.livejournal.com/18435.html>.
Configuring the Maximum Size of the Parallel Replica Queue
----------------------------------------------------------
The [slave\_parallel\_max\_queued](../replication-and-binary-log-server-system-variables/index#slave_parallel_max_queued) system variable can be used to configure the maximum size of the parallel replica queue. This system variable is only meaningful when parallel replication is configured (i.e. when [slave\_parallel\_threads](../replication-and-binary-log-server-system-variables/index#slave_parallel_threads) > `0`).
When parallel replication is used, the [SQL thread](../replication-threads/index#slave-sql-thread) will read ahead in the relay logs, queueing events in memory while looking for opportunities for executing events in parallel. The [slave\_parallel\_max\_queued](../replication-and-binary-log-server-system-variables/index#slave_parallel_max_queued) system variable sets a limit for how much memory it will use for this.
The configured value of the [slave\_parallel\_max\_queued](../replication-and-binary-log-server-system-variables/index#slave_parallel_max_queued) system variable is allocated for each [worker thread](../replication-threads/index#worker-threads), so the total allocation is equivalent to the following:
[slave\_parallel\_max\_queued](../replication-and-binary-log-server-system-variables/index#slave_parallel_max_queued) \* [slave\_parallel\_threads](../replication-and-binary-log-server-system-variables/index#slave_parallel_threads)
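For example, under the formula above (the values are illustrative, not recommendations):

```
-- With slave_parallel_threads = 8, this permits up to
-- 8 * 256 KiB = 2 MiB of queued relay log events in total:
SET GLOBAL slave_parallel_max_queued = 262144;
```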
If this value is set too high, and the replica is far (eg. gigabytes of binlog) behind the primary, then the [SQL thread](../replication-threads/index#slave-sql-thread) can quickly read all of that and fill up memory with huge amounts of binlog events faster than the [worker threads](../replication-threads/index#worker-threads) can consume them.
On the other hand, if set too low, the [SQL thread](../replication-threads/index#slave-sql-thread) might not have sufficient space for queuing enough events to keep the worker threads busy, which could reduce performance. In this case, the [SQL thread](../replication-threads/index#slave-sql-thread) will have the [thread state](../thread-states/index) that states `Waiting for room in worker thread event queue`. For example:
```
+----+-------------+-----------+------+---------+--------+-----------------------------------------------+------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+-------------+-----------+------+---------+--------+-----------------------------------------------+------------------+----------+
| 3 | system user | | NULL | Connect | 139 | closing tables | NULL | 0.000 |
| 4 | system user | | NULL | Connect | 139 | Waiting for work from SQL thread | NULL | 0.000 |
| 6 | system user | | NULL | Connect | 264274 | Waiting for master to send event | NULL | 0.000 |
| 10 | root | localhost | NULL | Sleep | 43 | | NULL | 0.000 |
| 21 | system user | | NULL | Connect | 45 | Waiting for room in worker thread event queue | NULL | 0.000 |
| 54 | root | localhost | NULL | Query | 0 | init | SHOW PROCESSLIST | 0.000 |
+----+-------------+-----------+------+---------+--------+-----------------------------------------------+------------------+----------+
```
The [slave\_parallel\_max\_queued](../replication-and-binary-log-server-system-variables/index#slave_parallel_max_queued) system variable does not define a hard limit, since the [binary log](../binary-log/index) events that are currently executing always need to be held in-memory. This means that at least two events per [worker thread](../replication-threads/index#worker-threads) can always be queued in-memory, regardless of the value of [slave\_parallel\_threads](../replication-and-binary-log-server-system-variables/index#slave_parallel_threads).
Usually, the [slave\_parallel\_max\_queued](../replication-and-binary-log-server-system-variables/index#slave_parallel_max_queued) system variable should be set large enough that the [SQL thread](../replication-threads/index#slave-sql-thread) is able to read far enough ahead in the [binary log](../binary-log/index) to exploit all possible parallelism. In normal operation, the replica will hopefully not be too far behind, so there will not be a need to queue much data in-memory. The [slave\_parallel\_max\_queued](../replication-and-binary-log-server-system-variables/index#slave_parallel_max_queued) system variable could be set fairly high (eg. a few hundred kilobytes) to not limit throughput. It should just be set low enough that the total allocation of the parallel replica queue will not cause the server to run out of memory.
Configuration Variable slave\_domain\_parallel\_threads
-------------------------------------------------------
The pool of replication worker threads is shared among all multi-source primary connections, and among all replication domains that can replicate in parallel using out-of-order.
If one primary connection or replication domain is currently processing a long-running query, it is possible that it will allocate all the worker threads in the pool, only to have them wait for the long-running query to complete, stalling any other primary connection or replication domain, which will have to wait for a worker thread to become free.
This can be avoided by setting [slave\_domain\_parallel\_threads](../replication-and-binary-log-server-system-variables/index#slave_domain_parallel_threads) to a number that is lower than `slave_parallel_threads`. When set different from zero, each replication domain in one primary connection can reserve at most that many worker threads at any one time, leaving the rest (up to the value of [slave\_parallel\_threads](../replication-and-binary-log-server-system-variables/index#slave_parallel_threads)) free for other primary connections or replication domains to use in parallel.
The `slave_domain_parallel_threads` variable is dynamic and can be changed without restarting the server; all replicas must be stopped while changing it, though.
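A configuration sketch for the scenario above (the thread counts are illustrative; as noted, all replicas must be stopped while changing the variable):

```
-- Illustrative sketch: 16 worker threads in total, but any single
-- replication domain may reserve at most 4 of them at a time.
STOP ALL SLAVES;
SET GLOBAL slave_parallel_threads = 16;
SET GLOBAL slave_domain_parallel_threads = 4;
START ALL SLAVES;
```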
Implementation Details
----------------------
The implementation is described in [MDEV-4506](https://jira.mariadb.org/browse/MDEV-4506).
See Also
--------
* [Better Parallel Replication for MariaDB and MySQL](https://mariadb.com/blog/better-parallel-replication-mariadb-and-mysql) (MariaDB.com blog)
* [Evaluating MariaDB & MySQL Parallel Replication Part 2: Slave Group Commit](https://mariadb.com/blog/evaluating-mariadb-mysql-parallel-replication-part-2-slave-group-commit) (MariaDB.com blog)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Plugins & Storage Engines Summit for MySQL/MariaDB/Drizzle 2011
===============================================================
**Note:** This page is obsolete. The information is old, outdated, or otherwise currently incorrect. We are keeping the page for historical reasons only. **Do not** rely on the information in this article.
Continuing the tradition of having a storage engine summit (see notes from [2010](../storage-engine-summit-2010/index)) after the [O'Reilly MySQL Conference](http://en.oreilly.com/mysql2011), it was decided that this year the focus of the summit would be expanded.
Location
========
The Facebook Campus, 1601 S. California Ave. Palo Alto, CA 94304.
For directions, use Google Maps. However, if you're at the O'Reilly MySQL Conference, you can take the light rail from the conference center to Mountain View and then Caltrain from Mountain View to the California Ave exit and then walk 1 mile to the Facebook campus.
Date and Time
=============
April 15 2011, from 10am - 4pm. Lunch will be served, courtesy of Facebook.
Target Audience
===============
Developers who are writing storage engines and plugins, and who know the plugin architecture of MySQL, the extensions in MariaDB, as well as the differences in Drizzle. User defined function (UDF) writers are welcome too.
This year the summit has expanded beyond just an invited audience of MySQL storage engine vendors.
Seats are limited to **twenty-four (24)** attendees.
Who's Attending
===============
If you're attending, **this is the signup form!** Please fill in your name, email address, company, and any other contact information you may have (like Facebook, Twitter). You will need to login to the Knowledgebase (OpenID login is available).
1. Michael "Monty" Widenius, monty at askmonty dot org, Monty Program Ab
2. Sergei Golubchik, serg at askmonty dot org, Monty Program Ab
3. Colin Charles, colin at montyprogram dot com, Monty Program Ab, [@bytebot](http://twitter.com/bytebot), [fb:bytebot](http://www.facebook.com/bytebot)
4. Zardosht Kasheff, zardosht at gmail dot com, Tokutek
5. Mark Callaghan, mdcallag at gmail dot com, Facebook
6. Paul McCullagh, paul dot mccullagh at primebase dot org, PrimeBase
7. Konstantin Osipov, kostja dot osipov at gmail dot com, Mail.Ru
8. Oleksandr "Sanja" Byelkin, sanja at askmonty dot org, Monty Program Ab
9. Bradley C. Kuszmaul, [email protected], Tokutek
10. Felix Schupp, [email protected], BlackRay Data Engine, [fb:fschupp](http://facebook.com/fschupp)
11. Hartmut Holzgraefe, hartmut@php.net
12. Timour Katchaounov, timour at askmonty dot org, Monty Program Ab
13. praburam upendran, [email protected], scaledb
14. Rich Kelm, richard at sphinxsearch dot com, Sphinx Search
15. Antony T Curtis, atcurtis at gmail dot com, [Blizzard Entertainment](http://blizzard.com/) and [Open Query](http://openquery.com).
16. Daijiro MORI, morita at razil dot jp, Brazil Inc.
17. Tasuku SUENAGA, a at razil dot jp, Brazil Inc.
18. Kentoku SHIBA, kentokushiba at gmail dot com, WildGrowth
19. Peter Zaitsev, peter at percona dot com, Percona
20. Vadim Tkachenko, vadim at percona dot com, Percona
21. Louis Fahrberger, louis.fahrberger at infobright dot com, Infobright
22. Sergey Petrunya, psergey at askmonty.org, Monty Program Ab
23. Serge Frezefond
24. Igor Babaev, igor at askmonty.org Monty Program Ab
25. Volker Oboda, PrimeBase
26. Jeff Rothschild, Facebook
27. Rasmus Johansson, Monty Program Ab
28. Moshe Shadmon, ScaleDB
29. Chip Turner, Facebook
30. Domas Mituzas, Facebook/Wikipedia
Notes
=====
Linking with the Storage Engine Advisory Board at Oracle
========================================================
Volker Oboda - member of the storage engine advisory board at Oracle
* an explanation from Oracle as to why/if/when Oracle will take patches to extend the storage engine layer from the community
* it's been mentioned that you might have to pay to be on the advisory board (i.e. you must be a customer)
* next meeting is at Oracle OpenWorld in October
* Schooner, Primebase, InnoDB, Infobright, and Kickfire were there (Kickfire no longer, since they do not exist)
Topics
======
* HBase storage engine layer to work with dynamic columns
* Discovery (Shared metadata)
* Materialised views
* Online ALTER TABLE
* Parallel replication (not SE, but a related topic)
* Group commit
* Parallel execution
* Fulltext search for every storage engine
* Cross-engine Online Backup
* Finish off server definition interface for FederatedX. Make it generic enough so that Spider can also use it.
* Pluggable data types
* Indexing/Dynamic Functions
* Fast load
HBase by Chip Turner
--------------------
Distributed key value store. Single index on a given table. Distributed servers each serving a row, multiple rows. Lexicographically sorted. No C API, there's a proxy API and Chip is fixing that now. Current protocol is serialized Java. Eventually each HBase server will speak Thrift. See <http://wiki.apache.org/hadoop/Hbase/ThriftApi> and <http://incubator.apache.org/thrift/>
Index Condition Pushdown and MRR are useful here.
Messages is a HBase application.
Index Condition Pushdown
------------------------
* idx\_cond\_push()
* opt\_index\_condition\_pushdown.cc
* EXPLAIN will say Using index condition
* Don't copy HA\_NDBCLUSTER::COND\_PUSH() since it is not done in a sensible way. Much smarter implementations are possible with easy code - psergey
* This will be continued in discussion via email (sphinx, infobright, tokutek, spider, oqgraph)
+ TODO: Timour will create a Worklog entry and involve appropriate attendees
Discovery
---------
Cluster with many MySQL servers connected to it. It was originally designed for MySQL Cluster.
* TODO: Serg to write an explanation here. Session led by him.
Loadable Indexes
----------------
* TODO: Paul, Kentoku, Richard, and Serg to discuss further
* Fulltext search
* A dynamic indexing method for existing storage engines is required. Do you want to store the data and index in the engine? It is not necessary (Kentoku). Want to implement just an index, not a storage engine. The storage format might be completely different.
* Practical use of having indexes for all engines? GIS.
Parallel Execution
------------------
* Read in two threads, will that be a problem for the storage engine? Two handlers within the same transaction ID calling in. Could just be a switch for the storage engine. Same THD in two different threads.
Online Backup
-------------
* Status hasn't changed since last year after Oracle cancelled it. No work has been done on this.
* Needs about 6 months sponsorship of one person's salary to get such a feature enabled
* Mark doesn't like running hot backup in the server and thinks it's not generally a good idea. He prefers using xtrabackup, a separate process from the server. xtrabackup is a transactional log reader (Domas has joined us and is now enlightening everyone on xtrabackup features).
+ Replication team and this meeting are the only folk that ask for this
* MyISAM has implemented backups twice now, and it has never made it outside
* Focus on what will gain you traction. Look on Percona Server. It just focuses on what customers want.
Pluggable/abstract data types
-----------------------------
* UUID, IPv4/IPv6 address, complex numbers, etc.
Online ALTER
------------
* Zardosht has an example from MySQL Cluster. He likes the implementation in MySQL Cluster 7.1. They have a different alter table API, and they've ported this to TokuDB with support for MySQL 5.1 and [MariaDB 5.1](../what-is-mariadb-51/index)/5.2.
* Wait and see the work that Oracle is doing?
* Old 6.0.3 - online alter and metadata alter - serg checked
* Zardosht's patch will work with [MariaDB 5.3](../what-is-mariadb-53/index), but once there's [MariaDB 5.5](../what-is-mariadb-55/index) based on MySQL 5.5 he's terribly worried about porting
Group commit
------------
* See slides for Monty+Kristian's talk
Debugging a Running Server (on Linux)
=====================================
Even if you don't have a server that is [compiled for debugging](../compiling-mariadb-for-debugging/index), there are still ways to get more information out from it if things go wrong.
When things go wrong, it's always better to have a version of the mysqld daemon that is not stripped.
```
shell> file /usr/sbin/mysqld
```
If the output doesn't say 'stripped', you are fine. If it does, you should either [download a binary with debugging information](https://downloads.mariadb.org) or [compile it, without stripping the binary](../compiling-mariadb-for-debugging/index#building-with-debug-symbols).
### Debugging Memory Consumption With tcmalloc
If you have a problem with a mysqld process that keeps on growing, you can use tcmalloc to find out what is allocating memory:
Depending on the system you have to install the `tcmalloc` (OpenSuse) or the `google-perftools-lib` (RedHat, Centos) package.
The following set of commands starts mysqld with memory profiling and if you kill it with SIGABRT, you will get a core dump that you can examine:
```
HEAPPROFILE=/tmp/mysqld.prof /usr/sbin/mysqld_safe --malloc-lib=tcmalloc --core-file-size=unlimited --core-file
```
or if you prefer to invoke mysqld directly:
```
ulimit -c unlimited
LD_PRELOAD=/usr/lib64/libtcmalloc.so.4 HEAPPROFILE=/tmp/mysqld.prof /usr/sbin/mysqld --core-file
```
You can of course add other [mysqld options](../mysqld-options/index) to the end of the above line.
Now start your client/application that uses MariaDB. You can find where memory is allocated in the `/tmp/mysqld.prof` file. If you find any memory issues, please report this in the [MariaDB bug tracker](https://jira.mariadb.org/secure/Dashboard.jspa)!
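One way to inspect the resulting profile is with the pprof tool that ships with gperftools. The command below is a sketch: the binary may be named `pprof` or `google-pprof` depending on the distribution, and the heap profiler appends a sequence number to the `HEAPPROFILE` prefix, so the exact file name will vary:

```
# Show the top allocation sites, in text form, for the profiled mysqld
# (the .heap file name here is an example):
google-pprof --text /usr/sbin/mysqld /tmp/mysqld.prof.0001.heap
```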
### ptrace Protection and Attaching GDB to a mysqld Instance
New Ubuntu releases do not allow one process to examine the memory of an arbitrary user's process. As a result, when trying to attach GDB to a running MariaDB (or any other process) instance, one gets the following error in GDB:
```
ptrace: Operation not permitted
```
More details are available in the [Ubuntu Wiki](https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace%20Protection).
To allow GDB to attach, one needs to edit the value of the `/proc/sys/kernel/yama/ptrace_scope` sysctl value.
* To change it temporarily, open a root shell and issue:
```
echo 0 > /proc/sys/kernel/yama/ptrace_scope
```
* To change it permanently, edit as root:
```
/etc/sysctl.d/10-ptrace.conf
```
and set the value to `0`.
### Debugging a Server That Hangs
If your mysqld server hangs, you may want to debug it to know what happened.
Preferably the server should be compiled for debugging, but it's not strictly necessary:
```
cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_VALGRIND=ON .
make -j4
```
To know what the server is doing:
* Find out the process number of mysqld
```
ps -edalf | grep mysqld
```
* Attach to the process and get a back trace:
```
gdb -p 'pid of mysqld' path-to-mysqld
set height 0
set logging file /tmp/mysqld.log
set logging on
thread apply all backtrace full
```
After the above, you have a full backtrace, including all local variables, in the `mysqld.log` file. Note that you will only get all variables if the server is not stripped.
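The same backtrace can also be captured non-interactively, which is convenient in scripts (this sketch assumes `pidof` resolves to a single mysqld process):

```
# One-shot equivalent of the interactive gdb session above:
gdb -p "$(pidof mysqld)" /usr/sbin/mysqld -batch \
  -ex 'set height 0' \
  -ex 'thread apply all backtrace full' > /tmp/mysqld.log
```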
Compiling MariaDB with Vanilla XtraDB
=====================================
Sometimes, one needs to have MariaDB compiled with Vanilla XtraDB. This page describes the process to do this. The process is rather crude, as my goal was just a once-off compile for testing (that is, not packaging or shipping) purposes.
The process is applicable to [MariaDB 5.3.4](https://mariadb.com/kb/en/mariadb-534-release-notes/) and XtraDB from Percona Server 5.1.61.
```
wget http://s.petrunia.net/scratch/make-vanilla-xtradb-work-with-mariadb.diff
bzr branch /path/to/mariadb-5.3 5.3-vanilla-xtradb-r3
cd 5.3-vanilla-xtradb-r3/storage/
tar czf innodb_plugin.tgz innodb_plugin/
rm -rf innodb_plugin
tar czf xtradb.tgz xtradb/
rm -rf xtradb
cd ../../
tar zxvf ~/Percona-Server-5.1.61.tar.gz
cp -r Percona-Server-5.1.61/storage/innodb_plugin 5.3-vanilla-xtradb-r3/storage/
patch -p1 -d 5.3-vanilla-xtradb-r3/storage/innodb_plugin/ < make-vanilla-xtradb-work-with-mariadb.diff
cd 5.3-vanilla-xtradb-r3/
BUILD/autorun.sh
./configure --with-plugin-innodb_plugin
make
```
InnoDB Online DDL Operations with the INSTANT Alter Algorithm
=============================================================
Column Operations
-----------------
### `ALTER TABLE ... ADD COLUMN`
In [MariaDB 10.3.2](https://mariadb.com/kb/en/mariadb-1032-release-notes/) and later, InnoDB supports adding columns to a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` if the new column is the last column in the table. See [MDEV-11369](https://jira.mariadb.org/browse/MDEV-11369) for more information. If a hidden `FTS_DOC_ID` column is present, then this is not supported.
In [MariaDB 10.4](../what-is-mariadb-104/index) and later, InnoDB supports adding columns to a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, regardless of where in the column list the new column is added.
When this operation is performed with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, the tablespace file will have a non-canonical storage format. See [Non-canonical Storage Format Caused by Some Operations](#non-canonical-storage-format-caused-by-some-operations) for more information.
With the exception of adding an [auto-increment](../auto_increment/index) column, this operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example, this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ADD COLUMN c varchar(50);
Query OK, 0 rows affected (0.004 sec)
```
And this succeeds in [MariaDB 10.4](../what-is-mariadb-104/index) and later:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ADD COLUMN c varchar(50) AFTER a;
Query OK, 0 rows affected (0.004 sec)
```
This applies to [ALTER TABLE ... ADD COLUMN](../alter-table/index#add-column) for [InnoDB](../innodb/index) tables.
See [Instant ADD COLUMN for InnoDB](../instant-add-column-for-innodb/index) for more information.
### `ALTER TABLE ... DROP COLUMN`
In [MariaDB 10.4](../what-is-mariadb-104/index) and later, InnoDB supports dropping columns from a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`. See [MDEV-15562](https://jira.mariadb.org/browse/MDEV-15562) for more information.
When this operation is performed with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, the tablespace file will have a non-canonical storage format. See [Non-canonical Storage Format Caused by Some Operations](#non-canonical-storage-format-caused-by-some-operations) for more information.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab DROP COLUMN c;
Query OK, 0 rows affected (0.004 sec)
```
This applies to [ALTER TABLE ... DROP COLUMN](../alter-table/index#drop-column) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... MODIFY COLUMN`
This applies to [ALTER TABLE ... MODIFY COLUMN](../alter-table/index#modify-column) for [InnoDB](../innodb/index) tables.
#### Reordering Columns
In [MariaDB 10.4](../what-is-mariadb-104/index) and later, InnoDB supports reordering columns within a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`. See [MDEV-15562](https://jira.mariadb.org/browse/MDEV-15562) for more information.
When this operation is performed with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, the tablespace file will have a non-canonical storage format. See [Non-canonical Storage Format Caused by Some Operations](#non-canonical-storage-format-caused-by-some-operations) for more information.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(50) AFTER a;
Query OK, 0 rows affected (0.004 sec)
```
#### Changing the Data Type of a Column
InnoDB does **not** support modifying a column's data type with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` in most cases. There are some exceptions:
* InnoDB supports increasing the length of `VARCHAR` columns with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, unless it would require changing the number of bytes required to represent the column's length. A `VARCHAR` column that is between 0 and 255 bytes in size requires 1 byte to represent its length, while a `VARCHAR` column that is 256 bytes or longer requires 2 bytes to represent its length. This means that the length of a column cannot be increased with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` if the original length was less than 256 bytes, and the new length is 256 bytes or more.
* In [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) and later, InnoDB supports increasing the length of `VARCHAR` columns with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` with no restrictions if the [ROW\_FORMAT](../create-table/index#row_format) table option is set to [REDUNDANT](../innodb-storage-formats/index#redundant). See [MDEV-15563](https://jira.mariadb.org/browse/MDEV-15563) for more information.
* In [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) and later, InnoDB also supports increasing the length of `VARCHAR` columns with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` in a more limited manner if the [ROW\_FORMAT](../create-table/index#row_format) table option is set to [COMPACT](../innodb-storage-formats/index#compact), [DYNAMIC](../innodb-storage-formats/index#dynamic), or [COMPRESSED](../innodb-storage-formats/index#compressed). In this scenario, the following limitations apply:
+ The length can be increased with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` if the original length of the column is 127 bytes or less, and the new length of the column is 256 bytes or more.
+ The length can be increased with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` if the original length of the column is 255 bytes or less, and the new length of the column is still 255 bytes or less.
+ The length can be increased with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` if the original length of the column is 256 bytes or more, and the new length of the column is still 256 bytes or more.
+ The length can **not** be increased with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` if the original length was between 128 bytes and 255 bytes, and the new length is 256 bytes or more.
+ See [MDEV-15563](https://jira.mariadb.org/browse/MDEV-15563) for more information.
The supported operations in this category support the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example, this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c int;
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
```
But this succeeds because the original length of the column is less than 256 bytes, and the new length is still less than 256 bytes:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
) CHARACTER SET=latin1;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(100);
Query OK, 0 rows affected (0.005 sec)
```
But this fails because the original length of the column is between 128 bytes and 255 bytes, and the new length is 256 bytes or more:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(255)
) CHARACTER SET=latin1;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(256);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
```
But this succeeds in [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) and later because the table has `ROW_FORMAT=REDUNDANT`:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(200)
) ROW_FORMAT=REDUNDANT;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(300);
Query OK, 0 rows affected (0.004 sec)
```
And this succeeds in [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) and later because the table has `ROW_FORMAT=DYNAMIC` and the column's original length is 127 bytes or less:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(127)
) ROW_FORMAT=DYNAMIC
CHARACTER SET=latin1;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(300);
Query OK, 0 rows affected (0.003 sec)
```
And this succeeds in [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) and later because the table has `ROW_FORMAT=COMPRESSED` and the column's original length is 127 bytes or less:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(127)
) ROW_FORMAT=COMPRESSED
CHARACTER SET=latin1;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(300);
Query OK, 0 rows affected (0.003 sec)
```
But this fails even in [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) and later because the table has `ROW_FORMAT=DYNAMIC` and the column's original length is between 128 bytes and 255 bytes:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(128)
) ROW_FORMAT=DYNAMIC
CHARACTER SET=latin1;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(300);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
```
#### Changing a Column to NULL
In [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) and later, InnoDB supports modifying a column to allow [NULL](../create-table/index#null-and-not-null) values with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT` if the [ROW\_FORMAT](../create-table/index#row_format) table option is set to [REDUNDANT](../innodb-storage-formats/index#redundant). See [MDEV-15563](https://jira.mariadb.org/browse/MDEV-15563) for more information.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50) NOT NULL
) ROW_FORMAT=REDUNDANT;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(50) NULL;
Query OK, 0 rows affected (0.004 sec)
```
#### Changing a Column to NOT NULL
InnoDB does **not** support modifying a column to **not** allow [NULL](../create-table/index#null-and-not-null) values with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
) ROW_FORMAT=REDUNDANT;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(50) NOT NULL;
ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
```
#### Adding a New `ENUM` Option
InnoDB supports adding a new [ENUM](../enum/index) option to a column with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`. In order to add a new [ENUM](../enum/index) option with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, the following requirements must be met:
* It must be added to the end of the list.
* The storage requirements must not change.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example, this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c ENUM('red', 'green')
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'green', 'blue');
Query OK, 0 rows affected (0.002 sec)
```
But this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c ENUM('red', 'green')
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c ENUM('red', 'blue', 'green');
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
```
#### Adding a New `SET` Option
InnoDB supports adding a new [SET](../set-data-type/index) option to a column with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`. In order to add a new [SET](../set-data-type/index) option with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, the following requirements must be met:
* It must be added to the end of the list.
* The storage requirements must not change.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example, this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c SET('red', 'green')
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c SET('red', 'green', 'blue');
Query OK, 0 rows affected (0.002 sec)
```
But this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c SET('red', 'green')
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c SET('red', 'blue', 'green');
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
```
#### Removing System Versioning from a Column
In [MariaDB 10.3.8](https://mariadb.com/kb/en/mariadb-1038-release-notes/) and later, InnoDB supports removing [system versioning](../system-versioned-tables/index) from a column with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`. In order for this to work, the [system\_versioning\_alter\_history](../system-versioned-tables/index#system_versioning_alter_history) system variable must be set to `KEEP`. See [MDEV-16330](https://jira.mariadb.org/browse/MDEV-16330) for more information.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50) WITH SYSTEM VERSIONING
);
SET SESSION system_versioning_alter_history='KEEP';
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab MODIFY COLUMN c varchar(50) WITHOUT SYSTEM VERSIONING;
Query OK, 0 rows affected (0.004 sec)
```
### `ALTER TABLE ... ALTER COLUMN`
This applies to [ALTER TABLE ... ALTER COLUMN](../alter-table/index#alter-column) for [InnoDB](../innodb/index) tables.
#### Setting a Column's Default Value
InnoDB supports modifying a column's [DEFAULT](../create-table/index#default-column-option) value with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ALTER COLUMN c SET DEFAULT 'No value explicitly provided.';
Query OK, 0 rows affected (0.003 sec)
```
#### Removing a Column's Default Value
InnoDB supports removing a column's [DEFAULT](../create-table/index#default-column-option) value with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50) DEFAULT 'No value explicitly provided.'
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ALTER COLUMN c DROP DEFAULT;
Query OK, 0 rows affected (0.002 sec)
```
### `ALTER TABLE ... CHANGE COLUMN`
InnoDB supports renaming a column with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`, unless the column's data type or attributes are changed in addition to the name.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example, this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab CHANGE COLUMN c str varchar(50);
Query OK, 0 rows affected (0.004 sec)
```
But this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab CHANGE COLUMN c num int;
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
```
This applies to [ALTER TABLE ... CHANGE COLUMN](../alter-table/index#change-column) for [InnoDB](../innodb/index) tables.
Index Operations
----------------
### `ALTER TABLE ... ADD PRIMARY KEY`
InnoDB does **not** support adding a primary key to a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int,
b varchar(50),
c varchar(50)
);
SET SESSION sql_mode='STRICT_TRANS_TABLES';
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ADD PRIMARY KEY (a);
ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
```
This applies to [ALTER TABLE ... ADD PRIMARY KEY](../alter-table/index#add-primary-key) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... DROP PRIMARY KEY`
InnoDB does **not** support dropping a primary key with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab DROP PRIMARY KEY;
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Dropping a primary key is not allowed without also adding a new primary key. Try ALGORITHM=COPY
```
This applies to [ALTER TABLE ... DROP PRIMARY KEY](../alter-table/index#drop-primary-key) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... ADD INDEX` and `CREATE INDEX`
This applies to [ALTER TABLE ... ADD INDEX](../alter-table/index#add-index) and [CREATE INDEX](../create-index/index) for [InnoDB](../innodb/index) tables.
#### Adding a Plain Index
InnoDB does **not** support adding a plain index to a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example, this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ADD INDEX b_index (b);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
```
And this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
CREATE INDEX b_index ON tab (b);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
```
#### Adding a Fulltext Index
InnoDB does **not** support adding a [FULLTEXT](../full-text-indexes/index) index to a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example, this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INPLACE';
ALTER TABLE tab ADD FULLTEXT INDEX b_index (b);
Query OK, 0 rows affected (0.042 sec)
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ADD FULLTEXT INDEX c_index (c);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
```
And this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INPLACE';
CREATE FULLTEXT INDEX b_index ON tab (b);
Query OK, 0 rows affected (0.040 sec)
SET SESSION alter_algorithm='INSTANT';
CREATE FULLTEXT INDEX c_index ON tab (c);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
```
#### Adding a Spatial Index
InnoDB does **not** support adding a [SPATIAL](../spatial-index/index) index to a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example, this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c GEOMETRY NOT NULL
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ADD SPATIAL INDEX c_index (c);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
```
And this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c GEOMETRY NOT NULL
);
SET SESSION alter_algorithm='INSTANT';
CREATE SPATIAL INDEX c_index ON tab (c);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
```
### `ALTER TABLE ... ADD FOREIGN KEY`
InnoDB does **not** support adding foreign key constraints to a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab1 (
a int PRIMARY KEY,
b varchar(50),
c varchar(50),
d int
);
CREATE OR REPLACE TABLE tab2 (
a int PRIMARY KEY,
b varchar(50)
);
SET SESSION foreign_key_checks=OFF;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab1 ADD FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a);
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: ADD INDEX. Try ALGORITHM=NOCOPY
```
This applies to [ALTER TABLE ... ADD FOREIGN KEY](../alter-table/index#add-foreign-key) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... DROP FOREIGN KEY`
InnoDB supports dropping foreign key constraints from a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab2 (
a int PRIMARY KEY,
b varchar(50)
);
CREATE OR REPLACE TABLE tab1 (
a int PRIMARY KEY,
b varchar(50),
c varchar(50),
d int,
FOREIGN KEY tab2_fk (d) REFERENCES tab2 (a)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab1 DROP FOREIGN KEY tab2_fk;
Query OK, 0 rows affected (0.004 sec)
```
This applies to [ALTER TABLE ... DROP FOREIGN KEY](../alter-table/index#drop-foreign-key) for [InnoDB](../innodb/index) tables.
Table Operations
----------------
### `ALTER TABLE ... AUTO_INCREMENT=...`
InnoDB supports changing a table's [AUTO\_INCREMENT](../auto_increment/index) value with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab AUTO_INCREMENT=100;
Query OK, 0 rows affected (0.002 sec)
```
This applies to [ALTER TABLE ... AUTO\_INCREMENT=...](../create-table/index#auto_increment) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... ROW_FORMAT=...`
InnoDB does **not** support changing a table's [row format](../innodb-storage-formats/index) with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
) ROW_FORMAT=DYNAMIC;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ROW_FORMAT=COMPRESSED;
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Changing table options requires the table to be rebuilt. Try ALGORITHM=INPLACE
```
This applies to [ALTER TABLE ... ROW\_FORMAT=...](../create-table/index#row_format) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... KEY_BLOCK_SIZE=...`
InnoDB does **not** support changing a table's [KEY\_BLOCK\_SIZE](../innodb-storage-formats/index#using-the-compressed-row-format) with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
) ROW_FORMAT=COMPRESSED
KEY_BLOCK_SIZE=4;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab KEY_BLOCK_SIZE=2;
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Changing table options requires the table to be rebuilt. Try ALGORITHM=INPLACE
```
This applies to [KEY\_BLOCK\_SIZE=...](../create-table/index#key_block_size) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... PAGE_COMPRESSED=1` and `ALTER TABLE ... PAGE_COMPRESSION_LEVEL=...`
In [MariaDB 10.3.10](https://mariadb.com/kb/en/mariadb-10310-release-notes/) and later, InnoDB supports setting a table's [PAGE\_COMPRESSED](../create-table/index#page_compressed) value to `1` with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`. InnoDB does **not** support changing a table's [PAGE\_COMPRESSED](../create-table/index#page_compressed) value from `1` to `0` with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
In these versions, InnoDB also supports changing a table's [PAGE\_COMPRESSION\_LEVEL](../create-table/index#page_compression_level) value with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
See [MDEV-16328](https://jira.mariadb.org/browse/MDEV-16328) for more information.
For example, this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab PAGE_COMPRESSED=1;
Query OK, 0 rows affected (0.004 sec)
```
And this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
) PAGE_COMPRESSED=1
PAGE_COMPRESSION_LEVEL=5;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab PAGE_COMPRESSION_LEVEL=4;
Query OK, 0 rows affected (0.004 sec)
```
But this fails:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
) PAGE_COMPRESSED=1;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab PAGE_COMPRESSED=0;
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Changing table options requires the table to be rebuilt. Try ALGORITHM=INPLACE
```
This applies to [ALTER TABLE ... PAGE\_COMPRESSED=...](../create-table/index#page_compressed) and [ALTER TABLE ... PAGE\_COMPRESSION\_LEVEL=...](../create-table/index#page_compression_level) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... DROP SYSTEM VERSIONING`
InnoDB does **not** support dropping [system versioning](../system-versioned-tables/index) from a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
) WITH SYSTEM VERSIONING;
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab DROP SYSTEM VERSIONING;
ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
```
This applies to [ALTER TABLE ... DROP SYSTEM VERSIONING](../alter-table/index#drop-system-versioning) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... DROP CONSTRAINT`
In [MariaDB 10.3.6](https://mariadb.com/kb/en/mariadb-1036-release-notes/) and later, InnoDB supports dropping a [CHECK](../constraint/index#check-constraints) constraint from a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`. See [MDEV-16331](https://jira.mariadb.org/browse/MDEV-16331) for more information.
This operation supports the non-locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `NONE`. When this strategy is used, all concurrent DML is permitted.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50),
CONSTRAINT b_not_empty CHECK (b != '')
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab DROP CONSTRAINT b_not_empty;
Query OK, 0 rows affected (0.002 sec)
```
This applies to [ALTER TABLE ... DROP CONSTRAINT](../alter-table/index#drop-constraint) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... FORCE`
InnoDB does **not** support forcing a table rebuild with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab FORCE;
ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
```
This applies to [ALTER TABLE ... FORCE](../alter-table/index#force) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... ENGINE=InnoDB`
InnoDB does **not** support forcing a table rebuild with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab ENGINE=InnoDB;
ERROR 1845 (0A000): ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE
```
This applies to [ALTER TABLE ... ENGINE=InnoDB](../create-table/index#storage-engine) for [InnoDB](../innodb/index) tables.
### `OPTIMIZE TABLE ...`
InnoDB does **not** support optimizing a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
For example:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SHOW GLOBAL VARIABLES WHERE Variable_name IN('innodb_defragment', 'innodb_optimize_fulltext_only');
+-------------------------------+-------+
| Variable_name | Value |
+-------------------------------+-------+
| innodb_defragment | OFF |
| innodb_optimize_fulltext_only | OFF |
+-------------------------------+-------+
2 rows in set (0.001 sec)
SET SESSION alter_algorithm='INSTANT';
OPTIMIZE TABLE tab;
+---------+----------+----------+------------------------------------------------------------------------------+
| Table | Op | Msg_type | Msg_text |
+---------+----------+----------+------------------------------------------------------------------------------+
| db1.tab | optimize | note | Table does not support optimize, doing recreate + analyze instead |
| db1.tab | optimize | error | ALGORITHM=INSTANT is not supported for this operation. Try ALGORITHM=INPLACE |
| db1.tab | optimize | status | Operation failed |
+---------+----------+----------+------------------------------------------------------------------------------+
3 rows in set, 1 warning (0.002 sec)
```
This applies to [OPTIMIZE TABLE](../optimize-table/index) for [InnoDB](../innodb/index) tables.
### `ALTER TABLE ... RENAME TO` and `RENAME TABLE ...`
InnoDB supports renaming a table with [ALGORITHM](../alter-table/index#algorithm) set to `INSTANT`.
This operation supports the exclusive locking strategy. This strategy can be explicitly chosen by setting the [LOCK](../alter-table/index#lock) clause to `EXCLUSIVE`. When this strategy is used, concurrent DML is **not** permitted.
For example, this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
ALTER TABLE tab RENAME TO old_tab;
Query OK, 0 rows affected (0.008 sec)
```
And this succeeds:
```
CREATE OR REPLACE TABLE tab (
a int PRIMARY KEY,
b varchar(50),
c varchar(50)
);
SET SESSION alter_algorithm='INSTANT';
RENAME TABLE tab TO old_tab;
Query OK, 0 rows affected (0.008 sec)
```
This applies to [ALTER TABLE ... RENAME TO](../alter-table/index#rename-to) and [RENAME TABLE](../rename-table/index) for [InnoDB](../innodb/index) tables.
Limitations
-----------
### Limitations Related to Generated (Virtual and Persistent/Stored) Columns
[Generated columns](../generated-columns/index) do not currently support online DDL for all of the same operations that are supported for "real" columns.
See [Generated (Virtual and Persistent/Stored) Columns: Statement Support](../generated-columns/index#statement-support) for more information on the limitations.
### Non-canonical Storage Format Caused by Some Operations
Some operations cause a table's tablespace file to use a non-canonical storage format when the `INSTANT` algorithm is used. The affected operations include:
* [Adding a column.](#alter-table-add-column)
* [Dropping a column.](#alter-table-drop-column)
* [Reordering columns.](#reordering-columns)
These operations require the following non-canonical changes to the storage format:
* A hidden metadata record at the start of the clustered index is used to store each column's [DEFAULT](../default/index) value. This makes it possible to add new columns that have default values without rebuilding the table.
* A [BLOB](../blob/index) in the hidden metadata record is used to store column mappings. This makes it possible to drop or reorder columns without rebuilding the table. This also makes it possible to add columns to any position or drop columns from any position in the table without rebuilding the table.
* If a column is dropped, old records will contain garbage in that column's former position, and new records will be written with [NULL](../null-values/index) values, empty strings, or dummy values.
This non-canonical storage format can incur performance or storage overhead for all subsequent DML operations. If you notice such overhead and want to normalize the table's storage format to avoid it, then you can force a table rebuild by executing [ALTER TABLE ... FORCE](../alter-table/index#force) with [ALGORITHM](../alter-table/index#algorithm) set to `INPLACE`. For example:
```
SET SESSION alter_algorithm='INPLACE';
ALTER TABLE tab FORCE;
Query OK, 0 rows affected (0.008 sec)
```
However, keep in mind that there are certain scenarios where you may not be able to rebuild the table with [ALGORITHM](../alter-table/index#algorithm) set to `INPLACE`. See [InnoDB Online DDL Operations with ALGORITHM=INPLACE: Limitations](../innodb-online-ddl-operations-with-algorithminplace/index#limitations) for more information on those cases. If you hit one of those scenarios, but you still want to rebuild the table, then you would have to do so with [ALGORITHM](../alter-table/index#algorithm) set to `COPY`.
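As a minimal sketch (using the same hypothetical `tab` table as the earlier examples), a copying rebuild looks like this:

```
SET SESSION alter_algorithm='COPY';
ALTER TABLE tab FORCE;
```

A `COPY` rebuild writes out a complete new copy of the table and blocks concurrent writes, so it is significantly more expensive than an `INPLACE` rebuild.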
### Known Bugs
There are some known bugs that could lead to issues when an InnoDB DDL operation is performed using the [INSTANT](../innodb-online-ddl-overview/index#instant-algorithm) algorithm. This algorithm is usually chosen by default when the operation supports it.
The effect of many of these bugs is that the table seems to *forget* that its tablespace file is in the [non-canonical storage format](#non-canonical-storage-format-caused-by-some-operations).
If you are concerned that a table may be affected by one of these bugs, then your best option would be to normalize the table structure. This can be done by rebuilding the table. For example:
```
SET SESSION alter_algorithm='INPLACE';
ALTER TABLE tab FORCE;
Query OK, 0 rows affected (0.008 sec)
```
If you are concerned about these bugs and want to perform an operation that supports the [INSTANT](../innodb-online-ddl-overview/index#instant-algorithm) algorithm without actually using that algorithm, then you can set the algorithm to [INPLACE](../innodb-online-ddl-overview/index#inplace-algorithm) and add the `FORCE` keyword to the [ALTER TABLE](../alter-table/index) statement:
```
SET SESSION alter_algorithm='INPLACE';
ALTER TABLE tab ADD COLUMN c varchar(50), FORCE;
```
#### Closed Bugs
* [MDEV-20066](https://jira.mariadb.org/browse/MDEV-20066): This bug could cause a table to become corrupt if a column was added instantly. It is fixed in [MariaDB 10.3.18](https://mariadb.com/kb/en/mariadb-10318-release-notes/) and [MariaDB 10.4.8](https://mariadb.com/kb/en/mariadb-1048-release-notes/).
* [MDEV-20117](https://jira.mariadb.org/browse/MDEV-20117): This bug could cause a table to become corrupt if a column was dropped instantly. It is fixed in [MariaDB 10.4.9](https://mariadb.com/kb/en/mariadb-1049-release-notes/).
#### Open Bugs
* [MDEV-18519](https://jira.mariadb.org/browse/MDEV-18519): This bug could cause a table to become corrupt if a column was added instantly.
* [MDEV-19743](https://jira.mariadb.org/browse/MDEV-19743): This bug could cause a table to become corrupt during page reorganization if a column was added instantly.
* [MDEV-19783](https://jira.mariadb.org/browse/MDEV-19783): This bug could cause a table to become corrupt if a column was added instantly.
* [MDEV-20090](https://jira.mariadb.org/browse/MDEV-20090): This bug could cause a table to become corrupt if columns were added, dropped, or reordered instantly.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
WSREP\_LAST\_WRITTEN\_GTID
==========================
**MariaDB starting with [10.4.2](https://mariadb.com/kb/en/mariadb-1042-release-notes/)**WSREP\_LAST\_WRITTEN\_GTID was added as part of Galera 4 in [MariaDB 10.4.2](https://mariadb.com/kb/en/mariadb-1042-release-notes/).
Syntax
------
```
WSREP_LAST_WRITTEN_GTID()
```
Description
-----------
Returns the [Global Transaction ID](../gtid/index) of the most recent write transaction performed by the client.
Upgrading from MariaDB 10.3 to MariaDB 10.4
===========================================
### How to Upgrade
For Windows, see [Upgrading MariaDB on Windows](../upgrading-mariadb-on-windows/index) instead.
For MariaDB Galera Cluster, see [Upgrading from MariaDB 10.3 to MariaDB 10.4 with Galera Cluster](../upgrading-from-mariadb-103-to-mariadb-104-with-galera-cluster/index) instead.
Before you upgrade, it would be best to take a backup of your database. This is always a good idea to do before an upgrade. We would recommend [Mariabackup](../mariabackup/index).
The suggested upgrade procedure is:
1. Modify the repository configuration, so the system's package manager installs [MariaDB 10.4](../what-is-mariadb-104/index). For example,
* On Debian, Ubuntu, and other similar Linux distributions, see [Updating the MariaDB APT repository to a New Major Release](../installing-mariadb-deb-files/index#updating-the-mariadb-apt-repository-to-a-new-major-release) for more information.
* On RHEL, CentOS, Fedora, and other similar Linux distributions, see [Updating the MariaDB YUM repository to a New Major Release](../yum/index#updating-the-mariadb-yum-repository-to-a-new-major-release) for more information.
* On SLES, OpenSUSE, and other similar Linux distributions, see [Updating the MariaDB ZYpp repository to a New Major Release](../installing-mariadb-with-zypper/index#updating-the-mariadb-zypp-repository-to-a-new-major-release) for more information.
2. [Stop MariaDB](../starting-and-stopping-mariadb-automatically/index).
3. Uninstall the old version of MariaDB.
* On Debian, Ubuntu, and other similar Linux distributions, execute the following:
`sudo apt-get remove mariadb-server`
* On RHEL, CentOS, Fedora, and other similar Linux distributions, execute the following:
`sudo yum remove MariaDB-server`
* On SLES, OpenSUSE, and other similar Linux distributions, execute the following:
`sudo zypper remove MariaDB-server`
4. Install the new version of MariaDB.
* On Debian, Ubuntu, and other similar Linux distributions, see [Installing MariaDB Packages with APT](../installing-mariadb-deb-files/index#installing-mariadb-packages-with-apt) for more information.
* On RHEL, CentOS, Fedora, and other similar Linux distributions, see [Installing MariaDB Packages with YUM](../yum/index#installing-mariadb-packages-with-yum) for more information.
* On SLES, OpenSUSE, and other similar Linux distributions, see [Installing MariaDB Packages with ZYpp](../installing-mariadb-with-zypper/index#installing-mariadb-packages-with-zypp) for more information.
5. Make any desired changes to configuration options in [option files](../configuring-mariadb-with-option-files/index), such as `my.cnf`. This includes removing any options that are no longer supported.
6. [Start MariaDB](../starting-and-stopping-mariadb-automatically/index).
7. Run `[mysql\_upgrade](../mysql_upgrade/index)`.
* `mysql_upgrade` does two things:
1. Ensures that the system tables in the `[mysql](../the-mysql-database-tables/index)` database are fully compatible with the new version.
2. Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
### Incompatible Changes Between 10.3 and 10.4
On most servers upgrading from 10.3 should be painless. However, there are some things that have changed which could affect an upgrade:
#### Options That Have Changed Default Values
| Option | Old default value | New default value |
| --- | --- | --- |
| [slave\_transaction\_retry\_errors](../replication-and-binary-log-system-variables/index#slave_transaction_retry_errors) | 1213,1205 | 1158,1159,1160,1161,1205,1213,1429,2013,12701 |
| [wsrep\_debug](../galera-cluster-system-variables/index#wsrep_debug) | OFF | NONE |
| [wsrep\_load\_data\_splitting](../galera-cluster-system-variables/index#wsrep_load_data_splitting) | ON | OFF |
#### Options That Have Been Removed or Renamed
The following options should be removed or renamed if you use them in your [option files](../configuring-mariadb-with-option-files/index):
| Option | Reason |
| --- | --- |
#### Authentication and TLS
* See [Authentication from MariaDB 10.4](../authentication-from-mariadb-104/index) for an overview of the changes.
* The [unix\_socket authentication plugin](../authentication-plugin-unix-socket/index) is now default on Unix-like systems.
* TLSv1.0 is disabled by default in [MariaDB 10.4](../what-is-mariadb-104/index). See [tls\_version](../ssltls-system-variables/index#tls_version) and [TLS Protocol Versions](../secure-connections-overview/index#tls-protocol-versions).
### Major New Features To Consider
You might consider using the following major new features in [MariaDB 10.4](../what-is-mariadb-104/index):
* [Galera](../galera-cluster/index) has been upgraded from [Galera](../galera-cluster/index) 3 to [Galera](../galera-cluster/index) 4.
* [System-versioning](../temporal-data-tables/index) has been extended with support for [application-time periods](../temporal-data-tables/index#application-time-periods).
* [User password expiry](../user-password-expiry/index)
* [Account Locking](../account-locking/index)
* See also [System Variables Added in MariaDB 10.4](../system-variables-added-in-mariadb-104/index).
### See Also
* [The features in MariaDB 10.4](../what-is-mariadb-104/index)
* [Upgrading from MariaDB 10.3 to MariaDB 10.4 with Galera Cluster](../upgrading-from-mariadb-103-to-mariadb-104-with-galera-cluster/index)
* [Upgrading from MariaDB 10.2 to MariaDB 10.3](../upgrading-from-mariadb-102-to-mariadb-103/index)
* [Upgrading from MariaDB 10.1 to MariaDB 10.2](../upgrading-from-mariadb-101-to-mariadb-102/index)
* [Upgrading from MariaDB 10.0 to MariaDB 10.1](../upgrading-from-mariadb-100-to-mariadb-101/index)
Legacy Storage Engines
=======================
The following storage engines are no longer maintained.
| Title | Description |
| --- | --- |
| [BDB](../bdb-obsolete/index) | BDB storage engine |
| [Cassandra Storage Engine](../cassandra/index) | A storage engine interface to Cassandra. |
| [EXAMPLE Storage Engine](../example-storage-engine/index) | Legacy storage engine designed to assist developers with writing new storage engines |
| [FEDERATED Storage Engine](../federated-storage-engine/index) | A legacy storage engine no longer in active development. |
| [IBMDB2I Storage Engine](../ibmdb2i-storage-engine/index) | Legacy storage engine designed to allow storage on DB2 tables |
| [NDB in MariaDB](../ndb-in-mariadb/index) | MySQL Cluster has been removed from MariaDB |
| [PBXT](../pbxt-storage-engine/index) | PBXT is a general-purpose transactional storage engine that is no longer maintained. |
| [TokuDB](../tokudb/index) | For use in high-performance and write-intensive environments. |
ColumnStore Architecture
=========================
Documentation for the latest release of Columnstore is not available on the Knowledge Base. Instead, see:
* [Release Notes](https://mariadb.com/docs/release-notes/mariadb-columnstore-1-5-2-release-notes/)
* [Deployment Instructions](https://mariadb.com/docs/deploy/community-single-columnstore/)
| Title | Description |
| --- | --- |
| [ColumnStore Architectural Overview](../columnstore-architectural-overview/index) | Overview of the components making up the MariaDB ColumnStore architecture |
| [ColumnStore User Module](../columnstore-user-module/index) | The User Module manages and controls the operation of end-user queries |
| [ColumnStore Performance Module](../columnstore-performance-module/index) | Responsible for performing I/O operations in support of query and write processing |
| [ColumnStore Storage Architecture](../columnstore-storage-architecture/index) | MariaDB ColumnStore Storage Architecture and Concepts |
| [ColumnStore Query Processing](../columnstore-query-processing/index) | How ColumnStore processes an end user query through the User and Performance Modules |
| [ColumnStore System Databases](../columnstore-system-databases/index) | When using ColumnStore, MariaDB Server creates a series of system databases... |
ColumnStore Storage Architecture
================================
When you create a table on MariaDB ColumnStore, the system creates at least one file per column in the table. So, for instance, a table created with three columns would have a minimum of three, separately addressable logical objects created on a SAN or on the local disk of a Performance Module.
ColumnStore writes the table schema locally to /usr/local/mariadb/columnstore/mysql/db with all the other non-ColumnStore tables. The data you write to the ColumnStore table is stored across the Performance Modules in DB Roots, which are located in /usr/local/mariadb/columnstore/datax.
Extents
-------
Each column in the table is stored independently in a logical measure of 8,388,608 rows called an Extent. Extents for 1 byte datatypes consume 8MB; 2 byte datatypes require 16MB; 4 byte datatypes 32MB; 8 bytes 64MB; and variable size datatypes 64MB. Once an Extent becomes full, ColumnStore creates a new Extent. String columns greater than 8 characters store indexes in the main column file and actual values in separate dictionary files.
Extents are physically stored as a collection of blocks. Each block is 8KB. Every database block is uniquely identified by its Logical Block Identifier, or LBID.
The physical file ColumnStore writes to disk is called a segment file. Once a segment file reaches the maximum number of extents, ColumnStore automatically creates a new segment file. You can set the maximum number of extents in a segment file using `ExtentsPerSegmentFile` in the `ColumnStore.xml` file. It should be set to a multiple of the number of DB Roots. The default value is 2.
Collectively, all of a column's segment files for one or more extents form a partition. This is the horizontal partitioning in ColumnStore. Partitions are stored in hierarchical directory structures organized by segment (that is, folders). ColumnStore's metadata maps the file structure and location to the database schema. You can set the number of segment files per partition using `FilesPerColumnPartition` in the `ColumnStore.xml` file. The default value is 4. Additionally, by default, ColumnStore compresses data.
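The extent sizes quoted above follow directly from the fixed logical extent size: 8,388,608 rows multiplied by the column width in bytes, stored as 8KB blocks. A quick check of that arithmetic (an illustration, not ColumnStore code):

```python
ROWS_PER_EXTENT = 8_388_608  # fixed number of rows per extent
BLOCK_SIZE = 8 * 1024        # each database block is 8KB

def extent_bytes(column_width_bytes: int) -> int:
    """Size of one uncompressed extent for a fixed-width column."""
    return ROWS_PER_EXTENT * column_width_bytes

for width in (1, 2, 4, 8):
    size_mb = extent_bytes(width) // (1024 * 1024)
    blocks = extent_bytes(width) // BLOCK_SIZE
    print(f"{width}-byte column: {size_mb} MB per extent ({blocks} blocks)")
```

This reproduces the figures in the text: a 1-byte datatype yields an 8MB extent, a 2-byte datatype 16MB, and so on up to 64MB for 8-byte datatypes.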
### Extent Maps
ColumnStore uses a smart structure called an Extent Map to provide a logical range for partitioning and remove the need for indexing, manual table partitioning, materialized views, summary tables and other structures and objects that row-based databases must implement for query performance.
Extents are logical blocks of space that exist within a physical segment file, and are anywhere between 8 and 64 MB in size. Each Extent supports the same number of rows, with smaller data types using less disk space. The Extent Map catalogs Extents to their corresponding blocks (LBIDs), along with minimum and maximum values for the column's data within the Extent.
The primary Performance Module has a master copy of the Extent Map. On system startup, the file is read into memory, then physically copied to all other participating User and Performance modules for disaster recovery and failover. Nodes keep the Extent Map in memory for quick access. As Extents are modified, updates are broadcast to participating nodes.
### Extent Elimination
Using the Extent Map, ColumnStore can perform logical range partitioning and only retrieve the blocks needed to satisfy the query. This is done through Extent Elimination, the process of eliminating Extents from the results that don't meet the given join and filter conditions of the query, which reduces the overall I/O operations.
In Extent Elimination, ColumnStore scans the columns in join and filter conditions. It then extracts the logical horizontal partitioning information of each extent along with the minimum and maximum values for the column to further eliminate Extents. To eliminate an Extent when a column scan involves a filter, that filter is compared to the minimum and maximum values stored in each extent for the column. If the filter value is outside the Extents minimum and maximum value range, ColumnStore eliminates the Extent.
This behavior is automatic and well suited for series, ordered, patterned and time-based data, where the data is loaded frequently and often referenced by time. Any column with clustered values is a good candidate for Extent Elimination.
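The elimination step described above amounts to comparing a filter value against each extent's stored minimum and maximum. A minimal sketch of that idea (the names here are invented for illustration, not ColumnStore internals):

```python
# Each entry models one extent's Extent Map metadata for a single
# column: the (min, max) of the values stored in that extent.
extent_map = [
    (0, 99),     # extent 0
    (100, 199),  # extent 1
    (200, 299),  # extent 2
]

def extents_to_scan(filter_value: int) -> list[int]:
    """Return indexes of extents whose [min, max] range could hold the value.

    Extents whose range excludes the filter value are eliminated,
    so their blocks are never read from disk.
    """
    return [i for i, (lo, hi) in enumerate(extent_map)
            if lo <= filter_value <= hi]

print(extents_to_scan(150))  # [1] -- only extent 1 survives elimination
```

For time-based or otherwise clustered data, most extents have narrow, non-overlapping ranges, which is why elimination discards so much I/O in those workloads.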
Compression with Real-time Decompression
----------------------------------------
In columnar storage, similar data is stored within each column file, which allows for excellent compressibility. While the actual space savings depend on the randomness of the data and the number of distinct values that exist, many data sets see space savings of between 65% and 95%.
ColumnStore optimizes its compression strategy for read performance from disk. It is tuned to accelerate the decompression rate, maximizing the performance benefits when reading from disk. This allows systems that are I/O bound to improve performance on disk reads.
By default, compression is turned on in ColumnStore. In addition, you can enable or disable it at the table-level or column-level, or control it at the session-level by setting the `[infinidb\_compression\_type](../columnstore-system-variables/index#compression-mode)` system variable. When enabled, ColumnStore uses snappy compression.
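The claim that column files compress well is easy to demonstrate: a column file holds values of a single column, so low-cardinality columns produce highly repetitive byte streams. The sketch below uses zlib from the Python standard library purely for illustration; as noted above, ColumnStore itself uses snappy:

```python
import random
import zlib

random.seed(0)

# A column-file-like stream: many rows, few distinct values.
column_like = b"".join(random.choice([b"US", b"DE", b"FR"])
                       for _ in range(100_000))
# A row-store-like worst case: effectively random bytes.
random_like = bytes(random.getrandbits(8) for _ in range(200_000))

for name, data in [("column-like", column_like), ("random", random_like)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")
```

The repetitive column-like stream compresses to a small fraction of its original size, while the random stream barely compresses at all.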
Version Buffer
--------------
MariaDB ColumnStore uses the Version Buffer to store disk blocks that are being modified, manage transaction rollbacks, and service the MVCC (multi-version concurrency control) or "snapshot read" function of the database. This allows it to offer a query consistent view of the database.
All statements in ColumnStore run at a particular version (or snapshot) of the database, which the system refers to as the System Change Number (SCN).
**Note:** Although it is called a "buffer", the Version Buffer uses both memory and disk structures.
### How the Version Buffer Works
The Version Buffer uses in-memory hash tables to supply memory access to in-flight transaction information. It starts at 4MB with the memory region growing from that amount to handle blocks that are being modified by a transaction. Each entry in the hash table is a 40-byte reference to the 8KB block being modified.
The limiting factor of the Version Buffer is not the number of rows being updated, but rather the number of disk blocks. You can increase the size, but use caution, since increasing the number of disk blocks means that `[UPDATE](../update/index)` and `[DELETE](../delete/index)` statements that run for long periods of time can take even longer in the event that you need to roll back the changes.
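Given the figures above (40-byte hash entries referencing 8KB blocks), a rough capacity estimate for the initial 4MB region looks like this. This is a back-of-envelope sketch based only on the numbers quoted in the text, not ColumnStore internals:

```python
ENTRY_BYTES = 40                 # per-entry reference in the hash table
BLOCK_BYTES = 8 * 1024           # each tracked disk block is 8KB
REGION_BYTES = 4 * 1024 * 1024   # initial in-memory region

entries = REGION_BYTES // ENTRY_BYTES  # references that fit in the region
tracked = entries * BLOCK_BYTES        # modified data those entries cover

print(f"{entries} entries, covering roughly "
      f"{tracked // (1024 * 1024)} MB of modified blocks")
```

This illustrates why the limiting factor is the number of modified blocks rather than the number of modified rows: each entry covers a whole 8KB block regardless of how many rows within it changed.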
Transaction Log
---------------
MariaDB ColumnStore supports logging committed transactions to the server's [Binary Log](../binlog/index).
LENGTH
======
Syntax
------
```
LENGTH(str)
```
Description
-----------
Returns the length of the string `str`.
In the default mode, when [Oracle mode from MariaDB 10.3](../sql_modeoracle/index#functions) is not set, the length is measured in bytes. In this case, a multi-byte character counts as multiple bytes. This means that for a string containing five two-byte characters, `LENGTH()` returns 10, whereas [CHAR\_LENGTH()](../char_length/index) returns 5.
When running [Oracle mode from MariaDB 10.3](../sql_modeoracle/index#functions), the length is measured in characters, and `LENGTH` is a synonym for [CHAR\_LENGTH()](../char_length/index).
If `str` is not a string value, it is converted into a string. If `str` is `NULL`, the function returns `NULL`.
Examples
--------
```
SELECT LENGTH('MariaDB');
+-------------------+
| LENGTH('MariaDB') |
+-------------------+
| 7 |
+-------------------+
```
When [Oracle mode](../sql_modeoracle/index) from [MariaDB 10.3](../what-is-mariadb-103/index) is not set:
```
SELECT CHAR_LENGTH('π'), LENGTH('π'), LENGTHB('π'), OCTET_LENGTH('π');
+-------------------+--------------+---------------+--------------------+
| CHAR_LENGTH('π') | LENGTH('π') | LENGTHB('π') | OCTET_LENGTH('π') |
+-------------------+--------------+---------------+--------------------+
| 1 | 2 | 2 | 2 |
+-------------------+--------------+---------------+--------------------+
```
In [Oracle mode from MariaDB 10.3](../sql_modeoracle/index#functions):
```
SELECT CHAR_LENGTH('π'), LENGTH('π'), LENGTHB('π'), OCTET_LENGTH('π');
+-------------------+--------------+---------------+--------------------+
| CHAR_LENGTH('π') | LENGTH('π') | LENGTHB('π') | OCTET_LENGTH('π') |
+-------------------+--------------+---------------+--------------------+
| 1 | 1 | 2 | 2 |
+-------------------+--------------+---------------+--------------------+
```
See Also
--------
* [CHAR\_LENGTH()](../char_length/index)
* [LENGTHB()](../lengthb/index)
* [OCTET\_LENGTH()](../octet_length/index)
* [Oracle mode from MariaDB 10.3](../sql_modeoracle/index#simple-syntax-compatibility)
MariaDB ColumnStore software upgrade 1.1.7 GA to 1.2.4 GA
=========================================================
This upgrade also applies to 1.2.0 Alpha to 1.2.4 GA upgrades.
### Changes in 1.2.1 and later
#### libjemalloc dependency
ColumnStore 1.2.3 onwards requires libjemalloc to be installed. For Ubuntu & Debian based distributions this is installed using the package "libjemalloc1" in the standard repositories.
For CentOS the package is in RedHat's EPEL repository:
```
sudo yum -y install epel-release
sudo yum install jemalloc
```
#### Non-distributed is the default distribution mode in postConfigure
The default distribution mode has changed from 'distributed' to 'non-distributed'. During an upgrade, however, the default is to use the distribution mode used in the original installation. The options '-d' and '-n' can always be used to override the default.
#### Non-root user sudo setup
Root-level permissions are no longer required to install or upgrade ColumnStore for some types of installations. Installations requiring some level of sudo access, and the instructions, are listed here: [https://mariadb.com/kb/en/library/preparing-for-columnstore-installation-121/#update-sudo-configuration-if-needed-by-root-user](../library/preparing-for-columnstore-installation-121/index#update-sudo-configuration-if-needed-by-root-user)
#### Running the mysql\_upgrade script
As part of the upgrade process to 1.2.4, the user is required to run the mysql\_upgrade script on all of the following nodes.
* All User Modules on a system configured with separate User and Performance Modules
* All Performance Modules on a system configured with separate User and Performance Modules and Local Query Feature is enabled
* All Performance Modules on a system configured with combined User and Performance Modules
mysql\_upgrade should be run once the upgrade has been completed.
This is an example of how it is run on a root user install:
```
/usr/local/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=/usr/local/mariadb/columnstore/mysql/my.cnf --force
```
This is an example of how it is run on a non-root user install, assuming ColumnStore is installed under the user's home directory:
```
$HOME/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=$HOME/mariadb/columnstore/mysql/my.cnf --force
```
In addition you should run the upgrade stored procedure below for a major version upgrade.
#### Executing the upgrade stored procedure
If you are upgrading from 1.1.7 or have upgraded in the past, you should run the MariaDB ColumnStore stored procedure. This updates the MariaDB FRM files by altering every ColumnStore table with a blank table comment. This will not affect options set using table comments, but it will erase any table comment the user has manually set.
You only need to execute this as part of a major version upgrade. It is executed using the following query, which should be run by a user that has privileges to alter every ColumnStore table:
```
call columnstore_info.columnstore_upgrade();
```
### Setup
In this section, we will refer to the directory ColumnStore is installed in as <CSROOT>. If you installed the RPM or DEB package, then your <CSROOT> will be /usr/local. If you installed it from the tarball, <CSROOT> will be where you unpacked it.
#### Columnstore.xml / my.cnf
Configuration changes made manually are not automatically carried forward during the upgrade. These modifications will need to be made again manually after the upgrade is complete.
After the upgrade process the configuration files will be saved at:
* <CSROOT>/mariadb/columnstore/etc/Columnstore.xml.rpmsave
* <CSROOT>/mariadb/columnstore/mysql/my.cnf.rpmsave
#### MariaDB root user database password
If you have specified a root user database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user home directory with 600 file permissions with the following content (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
### Choosing the type of upgrade
Note: softlinks may cause a problem during the upgrade if you use the RPM or DEB packages. If you have linked a directory above /usr/local/mariadb/columnstore, the softlinks will be deleted and the upgrade will fail. In that case you will need to upgrade using the binary tarball instead. If you have only linked the data directories (i.e. /usr/local/mariadb/columnstore/data\*), the RPM/DEB package upgrade will work.
#### Root User Installs
##### Upgrading MariaDB ColumnStore using the tarball of RPMs (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Download the package mariadb-columnstore-1.2.4-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.**
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
```
# tar -zxf mariadb-columnstore-1.2.4-1-centos#.x86_64.rpm.tar.gz
```
* Uninstall the old packages, then install the new packages. The MariaDB ColumnStore software will be installed in /usr/local/.
```
# rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
# rpm -ivh mariadb-columnstore-*1.2.4*rpm
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using RPM Package Repositories (non-distributed mode)
The system can be upgraded when it was previously installed from the Package Repositories. This will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'yum' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# yum remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# yum --enablerepo=mariadb-columnstore clean metadata
# yum install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /usr/local directory mariadb-columnstore-1.2.4-1.x86\_64.bin.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# /usr/local/mariadb/columnstore/bin/pre-uninstall
```
* Unpack the tarball in the /usr/local/ directory.
```
# tar -zxvf mariadb-columnstore-1.2.4-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# /usr/local/mariadb/columnstore/bin/post-install
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the DEB tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /root directory mariadb-columnstore-1.2.4-1.amd64.deb.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which contains DEBs.
```
# tar -zxf mariadb-columnstore-1.2.4-1.amd64.deb.tar.gz
```
* Remove and install all MariaDB ColumnStore debs
```
# cd /root/
# dpkg -r $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg --install mariadb-columnstore-*1.2.4-1*deb
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using DEB Package Repositories (non-distributed mode)
The system can be upgraded when it was previously installed from the Package Repositories. This will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'apt-get' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# apt-get remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# apt-get update
# sudo apt-get install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/index#running-the-mysql_upgrade-script)
#### Non-Root User Installs
##### Upgrade MariaDB ColumnStore from the binary tarball without sudo access (non-distributed mode)
This upgrade method applies when root/sudo access is not an option.
The uninstall script for 1.1.7 requires root access to perform some operations. These operations are the following:
* removing /etc/profile.d/columnstore{Alias,Env}.sh to remove aliases and environment variables from all users.
* running '<CSROOT>/mysql/columnstore/bin/syslogSetup.sh uninstall' to remove ColumnStore from the logging system
* removing the columnstore startup script
* removing /etc/ld.so.conf.d/columnstore.conf to remove the ColumnStore directories from the ld library search path
Because you are upgrading ColumnStore rather than uninstalling it, these operations are not necessary. If at some point you wish to uninstall it, you (or your sysadmin) will have to perform those operations by hand.
The upgrade instructions:
* Download the binary tarball to the current installation location on all nodes. See <https://downloads.mariadb.com/ColumnStore/>
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Copy Columnstore.xml to Columnstore.xml.rpmsave, and my.cnf to my.cnf.rpmsave
```
$ cp <CSROOT>/mariadb/columnstore/etc/Columnstore{.xml,.xml.rpmsave}
$ cp <CSROOT>/mariadb/columnstore/mysql/my{.cnf,.cnf.rpmsave}
```
* On all nodes, untar the new files in the same location as the old ones
```
$ tar zxf columnstore-1.2.4-1.x86_64.bin.tar.gz
```
* On all nodes, run post-install, specifying where ColumnStore is installed
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* On all nodes except for PM1, start the columnstore service
```
$ <CSROOT>/mariadb/columnstore/bin/columnstore start
```
* On PM1 only, run postConfigure, specifying the upgrade, non-distributed installation mode, and the location of the installation
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -n -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/index#running-the-mysql_upgrade-script)
##### Upgrade MariaDB ColumnStore from the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user USER on the server designated as PM1:
* Download the package into the user's home directory mariadb-columnstore-1.2.4-1.x86\_64.bin.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Run the pre-uninstall script; this will require sudo access as you are running a script from 1.1.7.
```
$ <CSROOT>/mariadb/columnstore/bin/pre-uninstall --installdir=<CSROOT>/mariadb/columnstore
```
* Make the sudo changes as noted at the beginning of this document
* Unpack the tarball in the same place as the original installation
```
$ tar -zxvf mariadb-columnstore-1.2.4-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* Run postConfigure using the upgrade option
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-117-ga-to-124-ga/index#running-the-mysql_upgrade-script)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Memory and Disk Use With myisamchk
==================================
myisamchk's performance can be dramatically enhanced for larger tables by making sure that its memory-related variables are set to an optimum level.
By default, myisamchk will use very little memory (about 3MB is allocated), but can temporarily use a lot of disk space. If disk space is a limitation when repairing, the *--safe-recover* option should be used instead of *--recover*. However, if TMPDIR points to a memory file system, an out-of-memory error can easily occur, as myisamchk places temporary files in TMPDIR. In this case, the *--tmpdir=path* option should be used to specify a directory on disk.
myisamchk has the following requirements for disk space:
* When repairing, space for twice the size of the data file, available in the same directory as the original file. This is for the original file as well as a copy. This space is not required if the *--quick* option is used, in which case only the index file is re-created.
* Disk space in the temporary directory (TMPDIR or the *--tmpdir=path* option) is needed for sorting if the *--recover* or *--sort-recover* options are used (but not when using *--safe-recover*). The space required will be approximately *(largest\_key + row\_pointer\_length) \* number\_of\_rows \* 2*. To get information about the length of the keys as well as the row pointer length, use *myisamchk -dv table\_name*.
* Space for a new index file to replace the existing one. The old index file is truncated first, so unless it is missing or unusually small for some reason, no significant extra space will be needed.
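The sort-space formula above can be turned into a quick estimate. A minimal sketch; the key length, row pointer length, and row count below are made-up example values, so read the real ones for your table from *myisamchk -dv table\_name*:

```python
# Approximate temporary disk space myisamchk needs in TMPDIR when
# sorting with --recover or --sort-recover:
#   (largest_key + row_pointer_length) * number_of_rows * 2

def myisamchk_sort_space(largest_key, row_pointer_length, number_of_rows):
    """Estimated bytes of temporary space for the sort phase."""
    return (largest_key + row_pointer_length) * number_of_rows * 2

# Hypothetical example: 30-byte largest key, 6-byte row pointers,
# 10 million rows:
needed = myisamchk_sort_space(30, 6, 10_000_000)
print(needed)                      # 720000000 bytes
print(round(needed / 1024**3, 2))  # ~0.67 GiB
```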
There are a number of [system variables](../server-system-variables/index) that are useful to adjust when running myisamchk. They will increase memory usage, and since some are per-session variables, you don't want to increase the general server value; instead, you can either pass an increased value to myisamchk as a command-line option, or set it in a [myisamchk] section in your [my.cnf](../configuring-mariadb-with-mycnf/index) file.
* [sort\_buffer\_size](../server-system-variables/index#sort_buffer_size). By default this is 4M, but it's very useful to increase it to make myisamchk sorting much faster. Since the server won't be running when you run myisamchk, you can increase it substantially. 16M is usually a minimum, but values such as 256M are not uncommon if memory is available.
* [key\_buffer\_size](../myisam-system-variables/index#key_buffer_size) (particularly helps with the *--extend-check* and *--safe-recover* options).
* [read\_buffer\_size](../server-system-variables/index#read_buffer_size)
* [write\_buffer\_size](../server-system-variables/index#write_buffer_size)
For example, if you have more than 512MB available to allocate to the process, the following settings could be used:
```
myisamchk \
  --myisam_sort_buffer_size=256M \
  --key_buffer_size=512M \
  --read_buffer_size=64M \
  --write_buffer_size=64M \
  ...
```
Installing MariaDB Server on macOS Using Homebrew
=================================================
MariaDB Server is available for installation on macOS (formerly Mac OS X) via the [Homebrew](http://brew.sh/) package manager.
MariaDB Server is available as a Homebrew "bottle", a pre-compiled package. This means you can install it without having to build from source yourself, which saves time.
After installing Homebrew, MariaDB Server can be installed with this command:
```
brew install mariadb
```
After installation, start MariaDB Server:
```
mysql.server start
```
To auto-start MariaDB Server, use Homebrew's services functionality, which configures auto-start with the launchctl utility from [launchd](../launchd/index):
```
brew services start mariadb
```
After MariaDB Server is started, you can log in as your user:
```
mysql
```
Or log in as root:
```
sudo mysql -u root
```
Upgrading MariaDB
-----------------
First you may need to update your brew installation:
```
brew update
```
Then, to upgrade MariaDB Server:
```
brew upgrade mariadb
```
Building MariaDB Server from source
-----------------------------------
In addition to the "bottled" MariaDB Server package available from Homebrew, you can use Homebrew to build MariaDB from source. This is useful if you want to use a different version of the server or enable some different capabilities that are not included in the bottle package.
Two components not included in the bottle package (as of MariaDB Server 10.1.19) are the CONNECT and OQGRAPH engines, because they have non-standard dependencies. To build MariaDB Server with these engines, you must first install `boost` and `judy`. As of December 2016, judy is in the Homebrew "boneyard", but the old formula still works on macOS Sierra. Follow these steps to install the dependencies and build the server:
```
brew install boost homebrew/boneyard/judy
brew install mariadb --build-from-source
```
You can also use Homebrew to build and install a pre-release version of MariaDB Server (for example MariaDB Server 10.2, when the highest GA version is MariaDB Server 10.1). Use this command to build and install a "development" version of MariaDB Server:
```
brew install mariadb --devel
```
Other resources
---------------
* [mariadb.rb on github](https://github.com/mxcl/homebrew/commit/debd033ad7bcd73df68d7370d7f2386e60fd24a0#Library/Formula/mariadb.rb)
* [Terin Stock (terinjokes)](https://github.com/terinjokes) who is the packager for Homebrew
* [MariaDB Server on macOS: Does it even make sense to try?](https://www.youtube.com/watch?v=VoAPP6GDyYw) (video)
STD
===
Syntax
------
```
STD(expr)
```
Description
-----------
Returns the population standard deviation of *`expr`*. This is an extension to standard SQL. The standard SQL function `[STDDEV\_POP()](../stddev_pop/index)` can be used instead.
It is an [aggregate function](../aggregate-functions/index), and so can be used with the [GROUP BY](../group-by/index) clause.
From [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/), STD() can be used as a [window function](../window-functions/index).
This function returns `NULL` if there were no matching rows.
Examples
--------
As an [aggregate function](../aggregate-functions/index):
```
CREATE OR REPLACE TABLE stats (category VARCHAR(2), x INT);
INSERT INTO stats VALUES
('a',1),('a',2),('a',3),
('b',11),('b',12),('b',20),('b',30),('b',60);
SELECT category, STDDEV_POP(x), STDDEV_SAMP(x), VAR_POP(x)
FROM stats GROUP BY category;
+----------+---------------+----------------+------------+
| category | STDDEV_POP(x) | STDDEV_SAMP(x) | VAR_POP(x) |
+----------+---------------+----------------+------------+
| a | 0.8165 | 1.0000 | 0.6667 |
| b | 18.0400 | 20.1693 | 325.4400 |
+----------+---------------+----------------+------------+
```
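The figures for category 'a' in the result set above can be cross-checked outside SQL. A minimal sketch using Python's statistics module, rounded to the 4 decimals the SQL output displays:

```python
import statistics

# Values for category 'a' from the example table
a = [1, 2, 3]

pop_stddev = statistics.pstdev(a)    # STDDEV_POP / STD
samp_stddev = statistics.stdev(a)    # STDDEV_SAMP
pop_var = statistics.pvariance(a)    # VAR_POP

print(round(pop_stddev, 4))   # 0.8165
print(round(samp_stddev, 4))  # 1.0
print(round(pop_var, 4))      # 0.6667
```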
As a [window function](../window-functions/index):
```
CREATE OR REPLACE TABLE student_test (name CHAR(10), test CHAR(10), score TINYINT);
INSERT INTO student_test VALUES
('Chun', 'SQL', 75), ('Chun', 'Tuning', 73),
('Esben', 'SQL', 43), ('Esben', 'Tuning', 31),
('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88),
('Tatiana', 'SQL', 87);
SELECT name, test, score, STDDEV_POP(score)
OVER (PARTITION BY test) AS stddev_results FROM student_test;
+---------+--------+-------+----------------+
| name | test | score | stddev_results |
+---------+--------+-------+----------------+
| Chun | SQL | 75 | 16.9466 |
| Chun | Tuning | 73 | 24.1247 |
| Esben | SQL | 43 | 16.9466 |
| Esben | Tuning | 31 | 24.1247 |
| Kaolin | SQL | 56 | 16.9466 |
| Kaolin | Tuning | 88 | 24.1247 |
| Tatiana | SQL | 87 | 16.9466 |
+---------+--------+-------+----------------+
```
See Also
--------
* [STDDEV\_POP](../stddev_pop/index) (equivalent, standard SQL)
* [STDDEV](../stddev/index) (equivalent, Oracle-compatible non-standard SQL)
* [VAR\_POP](../var_pop/index) (variance)
* [STDDEV\_SAMP](../stddev_samp/index) (sample standard deviation)
UNIX\_TIMESTAMP
===============
Syntax
------
```
UNIX_TIMESTAMP()
UNIX_TIMESTAMP(date)
```
Description
-----------
If called with no argument, returns a Unix timestamp (seconds since '1970-01-01 00:00:00' [UTC](../coordinated-universal-time/index)) as an unsigned integer. If `UNIX_TIMESTAMP()` is called with a date argument, it returns the value of the argument as seconds since '1970-01-01 00:00:00' UTC. date may be a `[DATE](../date/index)` string, a `[DATETIME](../datetime/index)` string, a `[TIMESTAMP](../timestamp/index)`, or a number in the format YYMMDD or YYYYMMDD. The server interprets date as a value in the current [time zone](../time-zones/index) and converts it to an internal value in [UTC](../coordinated-universal-time/index). Clients can set their time zone as described in [time zones](../time-zones/index).
The inverse function of `UNIX_TIMESTAMP()` is `[FROM\_UNIXTIME()](../from_unixtime/index)`
`UNIX_TIMESTAMP()` supports [microseconds](../microseconds-in-mariadb/index).
Timestamps in MariaDB have a maximum value of 2147483647, equivalent to 2038-01-19 05:14:07. This is due to the underlying 32-bit limitation. Using the function on a date beyond this will result in NULL being returned. Use [DATETIME](../datetime/index) as a storage type if you require dates beyond this.
### Error Handling
`UNIX_TIMESTAMP()` returns NULL for wrong arguments. In MySQL and in MariaDB before 5.3, wrong arguments to `UNIX_TIMESTAMP()` returned 0.
### Compatibility
As you can see in the examples below, UNIX\_TIMESTAMP(constant-date-string) returns a timestamp with 6 decimals while [MariaDB 5.2](../what-is-mariadb-52/index) and before returns it without decimals. This can cause a problem if you are using UNIX\_TIMESTAMP() as a partitioning function. You can fix this by using [FLOOR](../floor/index)(UNIX\_TIMESTAMP(..)) or by changing the date string to a date number, like 20080101000000.
Examples
--------
```
SELECT UNIX_TIMESTAMP();
+------------------+
| UNIX_TIMESTAMP() |
+------------------+
| 1269711082 |
+------------------+
SELECT UNIX_TIMESTAMP('2007-11-30 10:30:19');
+---------------------------------------+
| UNIX_TIMESTAMP('2007-11-30 10:30:19') |
+---------------------------------------+
| 1196436619.000000 |
+---------------------------------------+
SELECT UNIX_TIMESTAMP("2007-11-30 10:30:19.123456");
+----------------------------------------------+
| unix_timestamp("2007-11-30 10:30:19.123456") |
+----------------------------------------------+
| 1196411419.123456 |
+----------------------------------------------+
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2007-11-30 10:30:19'));
+------------------------------------------------------+
| FROM_UNIXTIME(UNIX_TIMESTAMP('2007-11-30 10:30:19')) |
+------------------------------------------------------+
| 2007-11-30 10:30:19.000000 |
+------------------------------------------------------+
SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP('2007-11-30 10:30:19')));
+-------------------------------------------------------------+
| FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP('2007-11-30 10:30:19'))) |
+-------------------------------------------------------------+
| 2007-11-30 10:30:19 |
+-------------------------------------------------------------+
```
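`UNIX_TIMESTAMP(date)` interprets its argument in the session's time zone, which is why the epoch values above (1196436619 and 1196411419 for the same datetime) differ: they were produced under different session time zones. As a minimal cross-check in Python, interpreting the same datetime as UTC gives a third, verifiable value:

```python
from datetime import datetime, timezone
import calendar

# '2007-11-30 10:30:19' interpreted as UTC:
dt = datetime(2007, 11, 30, 10, 30, 19, tzinfo=timezone.utc)
print(int(dt.timestamp()))              # 1196418619

# The same conversion via a UTC struct_time:
print(calendar.timegm(dt.timetuple()))  # 1196418619

# The 32-bit TIMESTAMP limit mentioned above, shown in UTC
# (05:14:07 in the text is the same instant in a UTC+2 session):
print(datetime.fromtimestamp(2147483647, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```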
See Also
--------
* [FROM\_UNIXTIME()](../from_unixtime/index)
Benchmarks and Long Running Tests
==================================
Here you will find details about our automated benchmark runs and long running tests. Feel free to suggest other benchmarks and tests. You can also send us your findings about benchmarks and tests, which you have run.
### [Benchmarks](../benchmarks/index)
| Title | Description |
| --- | --- |
| [mariadb-tools](../mariadb-tools/index) | Helper and wrapper scripts. |
| [Recommended Settings for Benchmarks](../recommended-settings-for-benchmarks/index) | Best known settings and recommendations for benchmarks. |
| [Benchmarking Aria](../benchmarking-aria/index) | Aria benchmarks |
| [DBT-3 Dataset](../dbt-3-dataset/index) | This page describes our setup for DBT-3 tests. A very cogent resource on th... |
| [DBT3 Automation Scripts](../dbt3-automation-scripts/index) | DBT-3 (OSDL Database Test 3) is a workload tool for the Linux kernel that ... |
| [Segmented Key Cache Performance](../segmented-key-cache-performance/index) | Testing method for segmented key cache performance We used SysBench v0.5 f... |
| [run-sql-bench.pl](../run-sql-benchpl/index) | run-sql-bench.pl is a perl script for automating runs of sql-bench (You can... |
| [sysbench Benchmark Setup](../sysbench-benchmark-setup/index) | Basic parameters and configuration for sysbench |
| [MariaDB 5.3 - Asynchronous I/O on Windows with InnoDB](../mariadb-53-asynchronous-io-on-windows-with-innodb/index) | MariaDB 5.3 - Asynchronous I/O on Windows with InnoDB |
| [MariaDB 5.3/MySQL 5.5 Windows performance patches](../mariadb-53mysql-55-windows-performance-patches/index) | I just backported Windows performance patches I've done for 5.5 back to Mar... |
| [Aria LIMIT clause performance](../aria-limit-clause-performance/index) | Does MariaDB / Aria storage engine improve upon the poor performance of MyS... |
| [SHARE PERFORMANCE ACROSS 2 SERVERS](../share-performance-across-2-servers/index) | Good morning to the whole team of Monty, sorry for my English, I am French... |
| [Benchmark Builds](../benchmark-builds/index) | how to build with cmake and consistent settings (CFLAGS etc) |
| [DBT3 Benchmark Results InnoDB](../dbt3-benchmark-results-innodb/index) | Results for DBT3 benchmarking with MariaDB 5.3 InnoDB |
| [DBT3 Benchmark Results MyISAM](../dbt3-benchmark-results-myisam/index) | Results for DBT3 benchmarking with MariaDB 5.3/5.5 MyISAM |
| [DBT3 Example Preparation Time](../dbt3-example-preparation-time/index) | Database preparation and creation times discovered while working on the DBT3 Automation Scripts |
| [Performance of MEMORY Tables](../performance-of-memory-tables/index) | Up to 60% better performance with MariaDB 5.5.22 for inserting rows into MEMORY tables |
| [RQG Performance Comparisons](../rqg-performance-comparisons/index) | Performance testing The performance/perfrun.pl executes each query against ... |
### [Benchmark Results](../benchmark-results/index)
| Title | Description |
| --- | --- |
| [Threadpool Benchmarks](../threadpool-benchmarks/index) | Here are some benchmarks of some development threadpool code (the 5.5 thre... |
| [Sysbench Results](../sysbench-results/index) | Results from various Sysbench runs. The data is in OpenDocument Spreadsheet... |
| [sysbench v0.5 - Single Five Minute Runs on T500 Laptop](../1643/index) | MariaDB/MySQL sysbench benchmark comparison in % Number of threads |
| [sysbench v0.5 - Single Five Minute Runs on perro](../sysbench-v05-single-five-minute-runs-on-perro/index) | MariaDB/MySQL sysbench benchmark comparison in % Each test was run for 5 minutes. |
| [sysbench v0.5 - Single Five Minute Runs on work](../sysbench-v05-single-five-minute-runs-on-work/index) | MariaDB/MySQL sysbench benchmark comparison in % Each test was run for 5 minutes. |
| [sysbench v0.5 - Three Times Five Minutes Runs on work with 5.1.42](../1646/index) | MariaDB/MySQL sysbench benchmark comparison in % Each test was run for 5 minutes 3 times |
| [sysbench v0.5 - 3x Five Minute Runs on work with 5.2-wl86](../1647/index) | 3x Five Minute Runs on work with 5.2-wl86 key cache partitions on and off M... |
| [sysbench v0.5 - 3x Five Minute Runs on work with 5.1 vs. 5.2-wl86](../1648/index) | 3x Five Minute Runs on work with 5.1 vs. 5.2-wl86 key cache partitions off ... |
| [sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 a](../1649/index) | sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 key cache partitio... |
| [sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 b](../1650/index) | 3x 15 Minute Runs on perro with 5.2-wl86 key cache partitions off, 8, and 3... |
| [Select Random Ranges and Select Random Point](../select-random-ranges-and-select-random-point/index) | select\_random\_ranges (select 10 ranges with a delta as parameter) select\_ra... |
Performance Schema events\_statements\_summary\_by\_thread\_by\_event\_name Table
=================================================================================
The [Performance Schema](../performance-schema/index) `events_statements_summary_by_thread_by_event_name` table contains statement events summarized by thread and event name. It contains the following columns:
| Column | Description |
| --- | --- |
| `THREAD_ID` | Thread associated with the event. Together with `EVENT_NAME` uniquely identifies the row. |
| `EVENT_NAME` | Event name. Used together with `THREAD_ID` for grouping events. |
| `COUNT_STAR` | Number of summarized events |
| `SUM_TIMER_WAIT` | Total wait time of the summarized events that are timed. |
| `MIN_TIMER_WAIT` | Minimum wait time of the summarized events that are timed. |
| `AVG_TIMER_WAIT` | Average wait time of the summarized events that are timed. |
| `MAX_TIMER_WAIT` | Maximum wait time of the summarized events that are timed. |
| `SUM_LOCK_TIME` | Sum of the `LOCK_TIME` column in the `events_statements_current` table. |
| `SUM_ERRORS` | Sum of the `ERRORS` column in the `events_statements_current` table. |
| `SUM_WARNINGS` | Sum of the `WARNINGS` column in the `events_statements_current` table. |
| `SUM_ROWS_AFFECTED` | Sum of the `ROWS_AFFECTED` column in the `events_statements_current` table. |
| `SUM_ROWS_SENT` | Sum of the `ROWS_SENT` column in the `events_statements_current` table. |
| `SUM_ROWS_EXAMINED` | Sum of the `ROWS_EXAMINED` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_DISK_TABLES` | Sum of the `CREATED_TMP_DISK_TABLES` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_TABLES` | Sum of the `CREATED_TMP_TABLES` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_JOIN` | Sum of the `SELECT_FULL_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_RANGE_JOIN` | Sum of the `SELECT_FULL_RANGE_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE` | Sum of the `SELECT_RANGE` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE_CHECK` | Sum of the `SELECT_RANGE_CHECK` column in the `events_statements_current` table. |
| `SUM_SELECT_SCAN` | Sum of the `SELECT_SCAN` column in the `events_statements_current` table. |
| `SUM_SORT_MERGE_PASSES` | Sum of the `SORT_MERGE_PASSES` column in the `events_statements_current` table. |
| `SUM_SORT_RANGE` | Sum of the `SORT_RANGE` column in the `events_statements_current` table. |
| `SUM_SORT_ROWS` | Sum of the `SORT_ROWS` column in the `events_statements_current` table. |
| `SUM_SORT_SCAN` | Sum of the `SORT_SCAN` column in the `events_statements_current` table. |
| `SUM_NO_INDEX_USED` | Sum of the `NO_INDEX_USED` column in the `events_statements_current` table. |
| `SUM_NO_GOOD_INDEX_USED` | Sum of the `NO_GOOD_INDEX_USED` column in the `events_statements_current` table. |
The `*_TIMER_WAIT` columns only calculate results for timed events, as non-timed events have a `NULL` wait time.
Example
-------
```
SELECT * FROM events_statements_summary_by_thread_by_event_name\G
...
*************************** 3653. row ***************************
THREAD_ID: 64
EVENT_NAME: statement/com/Error
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
SUM_LOCK_TIME: 0
SUM_ERRORS: 0
SUM_WARNINGS: 0
SUM_ROWS_AFFECTED: 0
SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
SUM_CREATED_TMP_TABLES: 0
SUM_SELECT_FULL_JOIN: 0
SUM_SELECT_FULL_RANGE_JOIN: 0
SUM_SELECT_RANGE: 0
SUM_SELECT_RANGE_CHECK: 0
SUM_SELECT_SCAN: 0
SUM_SORT_MERGE_PASSES: 0
SUM_SORT_RANGE: 0
SUM_SORT_ROWS: 0
SUM_SORT_SCAN: 0
SUM_NO_INDEX_USED: 0
SUM_NO_GOOD_INDEX_USED: 0
*************************** 3654. row ***************************
THREAD_ID: 64
EVENT_NAME: statement/com/
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
SUM_LOCK_TIME: 0
SUM_ERRORS: 0
SUM_WARNINGS: 0
SUM_ROWS_AFFECTED: 0
SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
SUM_CREATED_TMP_TABLES: 0
SUM_SELECT_FULL_JOIN: 0
SUM_SELECT_FULL_RANGE_JOIN: 0
SUM_SELECT_RANGE: 0
SUM_SELECT_RANGE_CHECK: 0
SUM_SELECT_SCAN: 0
SUM_SORT_MERGE_PASSES: 0
SUM_SORT_RANGE: 0
SUM_SORT_ROWS: 0
SUM_SORT_SCAN: 0
SUM_NO_INDEX_USED: 0
SUM_NO_GOOD_INDEX_USED: 0
```
Obsolete mysql Database Tables
===============================
Tables that used to be part of the `mysql` system database but are no longer present in a supported version.
| Title | Description |
| --- | --- |
| [mysql.host Table](../mysqlhost-table/index) | Obsolete/optional table that contained host-level access and privileges. |
| [mysql.ndb\_binlog\_index Table](../mysqlndb_binlog_index-table/index) | Unused by MariaDB, kept until MariaDB 10.0 for MySQL compatibility reasons. |
Aria Storage Engine
===================
The [Aria](../aria/index) storage engine is compiled in by default from [MariaDB 5.1](../what-is-mariadb-51/index), and it is required to be in use when mysqld is started.
From [MariaDB 10.4](../what-is-mariadb-104/index), all [system tables](../system-tables/index) are Aria.
Additionally, internal on-disk tables are in the Aria table format instead of the [MyISAM](../myisam/index) table format. This should speed up some [GROUP BY](../group-by/index) and [DISTINCT](../count-distinct/index) queries because Aria has better caching than MyISAM.
Note: The ***Aria*** storage engine was previously called ***Maria*** (see [The Aria Name](../the-aria-name/index) for details on the rename) and in previous versions of [MariaDB](../mariadb/index) the engine was still called Maria.
The following table options can be used with Aria tables in `[CREATE TABLE](../create-table/index)` and `[ALTER TABLE](../alter-table/index)`:
* **`TRANSACTIONAL= 0 `|` 1` :** If the `TRANSACTIONAL` table option is set for an Aria table, then the table will be crash-safe. This is implemented by logging any changes to the table to Aria's transaction log, and syncing those writes at the end of the statement. This will marginally slow down writes and updates. However, the benefit is that if the server dies before the statement ends, all non-durable changes will roll back to the state at the beginning of the statement. This also needs up to 6 bytes more for each row and key to store the transaction id (to allow concurrent inserts and selects).
+ `TRANSACTIONAL=1` is not supported for partitioned tables.
+ An Aria table's default value for the `TRANSACTIONAL` table option depends on the table's value for the `ROW_FORMAT` table option. See below for more details.
+ If the `TRANSACTIONAL` table option is set for an Aria table, the table does not actually support transactions. See [MDEV-21364](https://jira.mariadb.org/browse/MDEV-21364) for more information. In this context, *transactional* just means *crash-safe*.
* **`PAGE_CHECKSUM= 0 `|` 1` :** If index and data should use page checksums for extra safety.
* **`TABLE_CHECKSUM= 0 `|` 1` :** Same as `CHECKSUM` in MySQL 5.1.
* **`ROW_FORMAT=PAGE `|` FIXED `|` DYNAMIC` :** The table's [row format](../aria-storage-formats/index).
+ The default value is `PAGE`.
+ To emulate MyISAM, set `ROW_FORMAT=FIXED` or `ROW_FORMAT=DYNAMIC`
The `TRANSACTIONAL` and `ROW_FORMAT` table options interact as follows:
* If `TRANSACTIONAL=1` is set, then the only supported row format is `PAGE`. If `ROW_FORMAT` is set to some other value, then Aria issues a warning, but still forces the row format to be `PAGE`.
* If `TRANSACTIONAL=0` is set, then the table will not be crash-safe, and any row format is supported.
* If `TRANSACTIONAL` is not set to any value, then any row format is supported. If `ROW_FORMAT` is set, then the table will use that row format. Otherwise, the table will use the default `PAGE` row format. In this case, if the table uses the `PAGE` row format, then it will be crash-safe. If it uses some other row format, then it will not be crash-safe.
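The interaction rules above can be summarized as a small decision function. A minimal sketch in Python that models the rules as stated here (an illustration, not MariaDB's actual implementation):

```python
# Model of the TRANSACTIONAL / ROW_FORMAT interaction described above.

def aria_effective_options(transactional=None, row_format=None):
    """Return (effective_row_format, crash_safe) for an Aria table.

    transactional: None (unset), 0, or 1
    row_format: None (unset), 'PAGE', 'FIXED', or 'DYNAMIC'
    """
    if transactional == 1:
        # Only PAGE is supported; Aria warns and forces PAGE otherwise.
        return 'PAGE', True
    fmt = row_format if row_format is not None else 'PAGE'
    if transactional == 0:
        # Never crash-safe; any row format is allowed.
        return fmt, False
    # TRANSACTIONAL unset: crash-safe only with the PAGE row format.
    return fmt, fmt == 'PAGE'

print(aria_effective_options(transactional=1, row_format='FIXED'))  # ('PAGE', True)
print(aria_effective_options(row_format='DYNAMIC'))                 # ('DYNAMIC', False)
print(aria_effective_options())                                     # ('PAGE', True)
```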
Some other improvements are:
* `[CHECKSUM TABLE](../checksum-table/index)` now ignores values in NULL fields. This makes `CHECKSUM TABLE` faster and fixes some cases where the same table definition could give different checksum values depending on the [row format](../aria-storage-formats/index). The disadvantage is that the value is now different compared to other MySQL installations. The new checksum calculation applies to all table engines that use the default way to calculate checksums, and to MyISAM, which does the calculation internally. Note: Old MyISAM tables with internal checksums will return the same checksum as before. To make them calculate according to the new rules, you have to run an `[ALTER TABLE](../alter-table/index)`. You can use the old way of calculating checksums by passing the option `--old` to mysqld, or by setting the system variable '`@@old`' to `1` when you do `CHECKSUM TABLE ... EXTENDED;`
* At startup Aria will check the Aria logs and automatically recover the tables from the last checkpoint if mysqld was not taken down correctly.
mysqld Startup Options for Aria
-------------------------------
For a full list, see [Aria System Variables](../aria-server-system-variables/index).
In normal operations, the only variables you have to consider are:
* [aria-pagecache-buffer-size](../aria-server-system-variables/index#aria_pagecache_buffer_size)
+ This is where all index and data pages are cached. The bigger this is, the faster Aria will work.
* [aria-block-size](../aria-server-system-variables/index#aria_block_size)
+ The default value, 8192, should be OK for most cases. The only problem with a higher value is that it takes longer to find a packed key in the block, as one has to search roughly 8192/2 bytes to find each key. We plan to fix this by adding a dictionary at the end of the page to allow a binary search within the block before starting a scan. Until this is done, if key lookups take too long even when you are not hitting disk, you should consider making this smaller.
+ Possible values to try are `2048`, `4096` or `8192`
+ Note that you can't change this without dumping your Aria tables, deleting the old tables and all log files, and then restoring the tables. (This is the only option that requires a dump and reload.)
* [aria-log-purge-type](../aria-server-system-variables/index#aria_log_purge_type)
+ Set this to "`at_flush`" if you want to keep a copy of the transaction logs (good as an extra backup). The logs will stay around until you execute [FLUSH ENGINE LOGS](../flush/index).
See Also
--------
* [Aria FAQ](../aria-faq/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Triggers & Events Triggers & Events
==================
| Title | Description |
| --- | --- |
| [Triggers](../triggers/index) | Set of statements that run when an event occurs on a table. |
| [Event Scheduler](../event-scheduler/index) | Named database objects containing SQL statements to be executed at a later stage. |
mariadb IF Function IF Function
===========
Syntax
------
```
IF(expr1,expr2,expr3)
```
Description
-----------
If `expr1` is `TRUE` (`expr1 <> 0` and `expr1 <> NULL`) then `IF()` returns `expr2`; otherwise it returns `expr3`. `IF()` returns a numeric or string value, depending on the context in which it is used.
**Note:** There is also an [IF statement](../if-statement/index) which differs from the `IF()` function described here.
Examples
--------
```
SELECT IF(1>2,2,3);
+-------------+
| IF(1>2,2,3) |
+-------------+
| 3 |
+-------------+
```
```
SELECT IF(1<2,'yes','no');
+--------------------+
| IF(1<2,'yes','no') |
+--------------------+
| yes |
+--------------------+
```
```
SELECT IF(STRCMP('test','test1'),'no','yes');
+---------------------------------------+
| IF(STRCMP('test','test1'),'no','yes') |
+---------------------------------------+
| no |
+---------------------------------------+
```
See Also
--------
* [IF statement](../if-statement/index), which differs from the `IF()` function described above.
mariadb MariaDB ColumnStore Performance Concepts MariaDB ColumnStore Performance Concepts
========================================
Introduction
============
The high level components of the ColumnStore architecture are:
* User Module (UM): The UM is responsible for parsing SQL requests into an optimized set of primitive job steps executed by one or more PM servers. The UM is thus responsible for query optimization and orchestration of query execution by the PM servers. The UM is composed of the MariaDB mysqld process and the ExeMgr process. While multiple UM instances can be deployed in a multi-server deployment, a single UM is responsible for each individual query. A database load balancer such as MariaDB MaxScale can be deployed to appropriately balance external requests against individual UM servers.
* Performance Module (PM): The PM executes granular job steps received from a UM in a multi-threaded manner. ColumnStore allows distribution of the work across many Performance Modules.
* Extent Maps: ColumnStore maintains metadata about each column in a shared distributed object known as the Extent Map. The UM server references the Extent Map to help generate the correct primitive job steps. The PM server references the Extent Map to identify the correct disk blocks to read. Each column is made up of one or more files, and each file can contain multiple extents. As much as possible, the system attempts to allocate contiguous physical storage to improve read performance.
* Storage: ColumnStore can use either local storage or shared storage (e.g. SAN or EBS) to store data. Using shared storage allows for data processing to fail over to another node automatically in case of a PM server failing.
Data loading
============
The system supports full MVCC ACID transactional logic via Insert, Update, and Delete statements. The MVCC architecture allows for concurrent query and DML / batch load. Although DML is supported, the system is optimized for batch inserts, so larger data loads should be performed through a batch load. The most flexible and optimal way to load data is via the cpimport tool. This tool optimizes the load path and can be run centrally or in parallel on each PM server.
If the data contains a time column (or a time-correlated ascending value), significant performance gains can be achieved if the data is sorted by this field and typically queried with a where clause on that column. This is because the system records a minimum and maximum value for each extent, providing a system-maintained range partitioning scheme. This allows the system to completely skip scanning an extent if the query includes a where clause on that field limiting the results to a subset of extents.
Query execution
===============
MariaDB ColumnStore has its own query optimizer and execution engine, distinct from the MariaDB server implementation. This allows for scaling out query execution to multiple PM servers and for optimizing the handling of data stored as columns rather than rows. As such, the factors influencing query performance are very different:
A query is first parsed by the MariaDB server *mysqld* process and passed through to the ColumnStore storage engine. This passes the request on to the *ExeMgr* process, which is responsible for optimizing and orchestrating execution of the query. The *ExeMgr* optimizer creates a series of batch primitive steps that are executed on the PM nodes by the *PrimProc* processes. Since multiple PM servers can be deployed, this allows for scale-out execution of queries by multiple servers. As much as possible the optimizer attempts to push query execution down to the PM servers; however, certain operations inherently must be executed centrally by the *ExeMgr* process, for example final result ordering. Filtering, joins, aggregates, and group by are in general pushed down and executed at the PM level. At the PM level, batch primitive steps are performed at a granular level where individual threads operate on individual 1K-8K blocks within an extent. This enables a larger multi-core server to be fully utilized and to scale within a single server. The current batch primitive steps available in the system include:
* **Single Column Scan** - Scan one or more Extents for a given column based on a single column predicate, including: =, <>, in (list), between, isnull, etc. See first scan section of [performance configuration](../mariadb-columnstore-performance-related-configuration-settings/index) for additional details on tuning this.
* **Additional Single Column Filters** - Project additional column(s) for any rows found by a previous scan and apply additional single column predicates as needed. Access of blocks is based on row identifier, going directly to the block(s). See additional column read section of [performance configuration](../mariadb-columnstore-performance-related-configuration-settings/index) for additional details on tuning this.
* **Table Level Filters** - Project additional columns as required for any table level filters such as: column1 < column2, or more advanced functions and expressions. Access of blocks is again based on row identifier, going directly to the block(s).
* **Project Join Columns for Joins** - Project additional join column(s) as needed for any join operations. Access of blocks is again based on row identifier, going directly to the block(s). See the join tuning section of [performance configuration](../mariadb-columnstore-performance-related-configuration-settings/index) for additional details on tuning this.
* **Execute Multi-Join** - Apply one or more hash join operation against projected join column(s) and use that value to probe a previously built hash map. Build out tuples as need to satisfy inner or outer join requirements. Note: Depending on the size of the previously built hash map, the actual join behavior may be executed either on the server running PM processes, or the server running UM processes. In either case, the Batch Primitive Step is functionally identical. See multi table join section of [performance configuration](../mariadb-columnstore-performance-related-configuration-settings/index) for additional details on tuning this.
* **Cross-Table Level Filters** - Project additional columns from the range of rows for the Primitive Step as needed for any cross-table level filters such as: table1.column1 < table2.column2, or more advanced functions and expressions. Access of blocks is again based on row identifier, going directly to the block(s). When a pre-requisite join operation takes place on the UM, then this operation will also take place on the UM, otherwise it will occur on the PM.
* **Aggregation/Distinct Operation Part 1** - Apply any local group by, distinct, or aggregation operation against the set of joined rows assigned to a given Batch Primitive. Part 1 of this process is handled by the Performance Modules.
* **Aggregation/Distinct Operation Part 2** - Apply any final group by, distinct, or aggregation operation against the set of joined rows assigned to a given Batch Primitive. This processing is handled by the User Module. See the memory management section of [performance configuration](../mariadb-columnstore-performance-related-configuration-settings/index) for additional details on tuning this.
ColumnStore query execution paradigms
=====================================
The following items should be considered when thinking about query execution in ColumnStore vs a row based store such as InnoDB.
Data scanning and filtering
---------------------------
ColumnStore is optimized for large scale aggregation / OLAP queries over large data sets. As such indexes typically used to optimize query access for row based systems do not make sense since selectivity is low for such queries. Instead ColumnStore gains performance by only scanning necessary columns, utilizing system maintained partitioning, and utilizing multiple threads and servers to scale query response time.
Since ColumnStore only reads the columns necessary to resolve a query, include only the columns you actually need. For example, `select *` will be significantly slower than `select col1, col2 from table`.
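To illustrate the point above, with a hypothetical `orders` table:

```sql
-- Reads every column file of the table:
SELECT * FROM orders;
-- Reads only the two column files actually needed:
SELECT order_id, order_total FROM orders;
```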
Datatype size is important. If say you have a column that can only have values 0 through 100 then declare this as a tinyint as this will be represented with 1 byte rather than 4 bytes for int. This will reduce the I/O cost by 4 times.
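A minimal sketch of the datatype advice above (table and column names are invented for illustration):

```sql
-- percent only ever holds 0-100, so TINYINT (1 byte)
-- quarters the I/O cost compared to INT (4 bytes).
CREATE TABLE metrics (
  metric_id INT,
  percent TINYINT
) ENGINE=ColumnStore;
```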
For string types an important threshold is char(9) and varchar(8) or greater. Each column storage file uses a fixed number of bytes per value. This enables fast positional lookup of other columns to form the row. Currently the upper limit for columnar data storage is 8 bytes. So for strings longer than this the system maintains an additional 'dictionary' extent where the values are stored. The columnar extent file then stores a pointer into the dictionary. So it is more expensive to read and process a varchar(8) column than a char(8) column for example. So where possible you will get better performance if you can utilize shorter strings especially if you avoid the dictionary lookup. All TEXT/BLOB data types in 1.1 onward utilize a dictionary and do a multiple block 8KB lookup to retrieve that data if required, the longer the data the more blocks are retrieved and the greater a potential performance impact.
In a row based system adding redundant columns adds to the overall query cost, but in a columnar system a cost is only incurred if the column is referenced. Therefore additional columns should be created to support different access paths. For instance, store a leading portion of a field in one column to allow for faster lookups, but additionally store the long form value as another column. Scans on a shorter code or leading portion column will be faster.
ColumnStore will distribute function application across PM nodes for greater performance but this requires a distributed implementation of the function in addition to the MariaDB server implementation. See [Distributed Functions](../columnstore-distributed-functions/index) for the full list.
Joins
-----
Hash joins are utilized by ColumnStore to optimize for large scale joins and avoid the need for indexes and the overhead of nested loop processing. ColumnStore maintains table statistics so as to determine the optimal join order. This is implemented by first identifying the small table side (based on extent map data) and materializing the necessary rows from that table for the join. If the size of this is less than the configuration setting "PmMaxMemorySmallSide" then the join is pushed down to the PMs for distributed processing. Otherwise the larger side rows are pulled up to the UM for joining in the UM where only the where clause on that side is executed across PMs. If the join is too large for UM memory then disk based join can be enabled to allow the query to complete.
Aggregations
------------
Similarly to scalar functions ColumnStore distributes aggregate evaluation as much as possible. However some post processing is required to combine the final results in the UM. Enough memory must exist on both the PM and UM to handle queries where there are a very large number of values in the aggregate column(s).
Aggregation performance is also influenced by the number of distinct aggregate column values. Generally you'll see that for the same number of rows 100 distinct values will compute faster than 10000 distinct values. This is due to increased memory management as well as transfer overhead.
Select count(\*) is internally optimized to be select count(COL-N), where COL-N is the column that uses the least number of bytes for storage. For example, it would pick a char(1) column over an int column because char(1) uses 1 byte for storage and int uses 4 bytes. The implementation still honors ANSI semantics in that select count(\*) will include nulls in the total count, as opposed to an explicit select count(COL-N), which excludes nulls from the count.
Order by and limit
------------------
Order by and limit are currently implemented at the very end by the MariaDB server process on the temporary result set table. This means that the unsorted results must be fully retrieved before either is applied. The performance overhead of this is relatively minimal for small to medium results, but for larger results it can be significant.
Complex queries
---------------
Subqueries are executed in sequence, so the subquery's intermediate results must be materialized in the UM, and then the join logic is applied with the outer query.
Window functions are executed at the UM level due to the need for ordering of the window results. The ColumnStore window function engine uses a dedicated, faster sort process.
Partitioning
------------
Automated system partitioning of columns is provided by ColumnStore. As data is loaded into extent maps, the system will capture and maintain min/max values of column data in that extent map. New rows are appended to each extent map until full at which point a new extent map is created. For column values that are ordered or semi-ordered this allows for very effective data partitioning. By using the min and max values, entire extent maps can be eliminated and not read to filter data. This generally works particularly well for time dimension / series data or similar values that increase over time.
mariadb Stored Routine Statements Stored Routine Statements
==========================
| Title | Description |
| --- | --- |
| [CALL](../call/index) | Invokes a stored procedure. |
| [DO](../do/index) | Executes expressions without returning results. |
mariadb Adding and Changing Data in MariaDB Adding and Changing Data in MariaDB
===================================
There are several ways to add and to change data in MariaDB. There are a few SQL statements that you can use, each with a few options. Additionally, there are twists that you can do by mixing SQL statements together with various clauses. In this article, we will explore the ways in which data can be added and changed in MariaDB.
#### Adding Data
To add data to a table in MariaDB, you will need to use the [INSERT](../insert/index) statement. Its basic, minimal syntax is the command [INSERT](../insert/index) followed by the table name and then the keyword VALUES with a comma separated list of values contained in parentheses:
```
INSERT table1
VALUES('text1','text2','text3');
```
In this example, text is added to a table called table1, which contains only three columns—the same number of values that we're inserting. The number of columns must match. If you don't want to insert data into all of the columns of a table, though, you could name the columns desired:
```
INSERT INTO table1
(col3, col1)
VALUES('text3','text1');
```
Notice that the keyword `INTO` was added here. This is optional and has no effect on MariaDB. It's only a matter of grammatical preference. In this example we not only name the columns, but we list them in a different order. This is acceptable to MariaDB. Just be sure to list the values in the same order. If you're going to insert data into a table and want to specify all of the values except one (say the key column since it's an auto-incremented one), then you could just give a value of DEFAULT to keep from having to list the columns. Incidentally, you can give the column names even if you're naming all of them. It's just unnecessary unless you're going to reorder them as we did in this last example.
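For instance, assuming the first column of table1 is an auto-incremented key, the DEFAULT keyword lets you skip naming the columns:

```sql
INSERT INTO table1
VALUES (DEFAULT, 'text2', 'text3');
```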
When you have many rows of data to insert into the same table, it can be more efficient to insert all of the rows in one SQL statement. Multiple row insertions can be done like so:
```
INSERT IGNORE
INTO table2
VALUES('id1','text','text'),
('id2','text','text'),
('id2','text','text');
```
Notice that the keyword VALUES is used only once and each row is contained in its own set of parentheses and each set is separated by commas. We've added an intentional mistake to this example: We are attempting to insert three rows of data into table2 for which the first column happens to be a `UNIQUE` key field. The third row entered here has the same identification number for the key column as the second row. This would normally result in an error and none of the three rows would be inserted. However, since the statement has an `IGNORE` flag, duplicates will be ignored and not inserted, but the other rows will still be inserted. So, the first and second rows above will be inserted and the third one won't.
#### Priority
An [INSERT](../insert/index) statement takes priority over read statements (i.e., SELECT statements). An [INSERT](../insert/index) will lock the table and force other clients to wait until it's finished. On a busy MariaDB server that has many simultaneous requests for data, this could cause users to experience delays when you run a script that performs a series of [INSERT](../insert/index) statements. If you don't want user requests to be put on hold and you can wait to insert the data, you could use the `LOW_PRIORITY` flag:
```
INSERT LOW_PRIORITY
INTO table1
VALUES('text1','text2','text3');
```
The `LOW_PRIORITY` flag will put the [INSERT](../insert/index) statement in queue, waiting for all current and pending requests to be completed before it's performed. If new requests are made while a low priority statement is waiting, then they are put ahead of it in the queue. MariaDB does not begin to execute a low priority statement until there are no other requests waiting. Once the transaction begins, though, the table is locked and any other requests for data from the table that come in after it starts must wait until it's completed. Because it locks the table, low priority statements will prevent simultaneous insertions from other clients even if you're dealing with a MyISAM table. Incidentally, notice that the `LOW_PRIORITY` flag comes before the `INTO`.
One potential inconvenience with an `INSERT LOW_PRIORITY` statement is that the client will be tied up waiting for the statement to be completed successfully. So if you're inserting data into a busy server with a low priority setting using the mysql client, your client could be locked up for minutes, maybe hours depending on how busy your server is at the time. As an alternative either to making other clients with read requests wait or to having your client wait, you can use the DELAYED flag instead of the `LOW_PRIORITY` flag:
```
INSERT DELAYED
INTO table1
VALUES('text1','text2','text3');
```
MariaDB will take the request as a low priority one and put it on its list of tasks to perform when it has a break. However, it will immediately release the client so that the client can go on to enter other SQL statements or even exit. Another advantage of this method is that multiple `INSERT DELAYED` requests are batched together for block insertion when there is a gap, making the process potentially faster than `INSERT LOW_PRIORITY`. The flaw in this choice, however, is that the client is never told if a delayed insertion is successfully made or not. The client is informed of error messages when the statement is entered—the statement has to be valid before it will be queued—but it's not told of problems that occur after it's accepted. This brings up another flaw: delayed insertions are stored in the server's memory. So if the MariaDB daemon (mysqld) dies or is manually killed, then the transactions are lost and the client is not notified of the failure. So DELAYED is not always a good alternative.
#### Contingent Additions
As an added twist to [INSERT](../insert/index), you can combine it with a SELECT statement. Suppose that you have a table called employees which contains employee information for your company. Suppose further that you have a column to indicate whether an employee is on the company's softball team. However, you one day decide to create a separate database and table for the softball team's data that someone else will administer. To get the database ready for the new administrator, you have to copy some data for team members to the new table. Here's one way you can accomplish this task:
```
INSERT INTO softball_team
(last, first, telephone)
SELECT name_last, name_first, tel_home
FROM company.employees
WHERE softball='Y';
```
In this SQL statement the columns in which data is to be inserted into are listed, then the complete SELECT statement follows with the appropriate WHERE clause to determine if an employee is on the softball team. Since we're executing this statement from the new database and since the table employees is in a separate database called company, we have to specify it as you see here. By the way, [INSERT...SELECT](../insert-select/index) statements cannot be performed on the same table.
#### Replacement Data
When you're adding massive amounts of data to a table that has a key field, as mentioned earlier, you can use the `IGNORE` flag to prevent duplicates from being inserted, but still allow unique rows to be entered. However, there may be times when you actually want to replace the rows with the same key fields with the new ones. In such a situation, instead of using [INSERT](../insert/index) you can use a [REPLACE](../replace/index) statement:
```
REPLACE LOW_PRIORITY
INTO table2 (id, col1, col2)
VALUES('id1','text','text'),
('id2','text','text'),
('id3','text','text');
```
Notice that the syntax is the same as an [INSERT](../insert/index) statement. The flags all have the same effect, as well. Also, multiple rows may be inserted, but there's no need for the `IGNORE` flag since duplicates won't happen—the originals are just overwritten. Actually, when a row is replaced, it's first deleted completely and the new row is then inserted. Any columns without values in the new row will be given the default values for the columns. None of the values of the old row are kept. Incidentally, [REPLACE](../replace/index) will also allow you to combine it with a SELECT statement as we saw with the [INSERT](../insert/index) statement earlier.
#### Updating Data
If you want to change the data contained in existing records, but only for certain columns, then you would need to use an [UPDATE](../update/index) statement. The syntax for [UPDATE](../update/index) is a little bit different from the syntax shown before for [INSERT](../insert/index) and [REPLACE](../replace/index) statements:
```
UPDATE LOW_PRIORITY table3
SET col1 = 'text-a', col2='text-b'
WHERE id < 100;
```
In the SQL statement here, we are changing the value of the two columns named individually using the `SET` clause. Incidentally, the `SET` clause optionally can be used in [INSERT](../insert/index) and [REPLACE](../replace/index) statements, but it eliminates the multiple row option. In the statement above, we're also using a `WHERE` clause to determine which records are changed: only rows with an id that has a value less than 100 are updated. Notice that the `LOW_PRIORITY` flag can be used with this statement, too. The `IGNORE` flag can be used, as well.
A useful feature of the [UPDATE](../update/index) statement is that it allows the use of the current value of a column to update the same column. For instance, suppose you want to add one day to the value of a date column where the date is a Sunday. You could do the following:
```
UPDATE table5
SET col_date = DATE_ADD(col_date, INTERVAL 1 DAY)
WHERE DAYOFWEEK(col_date) = 1;
```
For rows where the day of the week is Sunday, the DATE\_ADD() function will take the value of col\_date before it's updated and add one day to it. MariaDB will then take this sum and set col\_date to it.
There are a couple more twists that you can now do with the [UPDATE](../update/index) statement: if you want to update the rows in a specific order, you can add an [ORDER BY](../select/index#order-by) clause. You can also limit the number of rows that are updated with a [LIMIT](../select/index#limit) clause. Below is an example of both of these clauses:
```
UPDATE LOW_PRIORITY table3
SET col1='text-a', col2='text-b'
WHERE id < 100
ORDER BY col3 DESC
LIMIT 10;
```
The ordering can be descending as indicated here by the `DESC` flag, or ascending with either the ASC flag or by just leaving it out, as ascending is the default. The [LIMIT](../select/index#limit) clause, of course, limits the number of rows affected to ten here.
If you want to refer to multiple tables in one [UPDATE](../update/index) statement, you can do so like this:
```
UPDATE table3, table4
SET table3.col1 = table4.col1
WHERE table3.id = table4.id;
```
Here we see a join between the two tables named. In table3, the value of col1 is set to the value of the same column in table4 where the values of id from each match. We're not updating both tables here; we're just accessing both. We must specify the table name for each column to prevent an ambiguity error. Incidentally, [ORDER BY](../select/index#order-by) and [LIMIT](../select/index#limit) clauses aren't allowed with multiple table updates.
There's another combination that you can do with the [INSERT](../insert/index) statement that we didn't mention earlier. It involves the [UPDATE](../update/index) statement. When inserting multiple rows of data, if you want to note which rows had potentially duplicate entries and which ones are new, you could add a column called status and change its value accordingly with a statement like this one:
```
INSERT IGNORE INTO table1
(id, col1, col2, status)
VALUES('1012','text','text','new'),
('1025','text','text','new'),
('1030','text','text','new')
ON DUPLICATE KEY
UPDATE status = 'old';
```
Because of the `IGNORE` flag, errors will not be generated, duplicates won't be inserted or replaced, but the rest will be added. Because of the [ON DUPLICATE KEY](../insert-on-duplicate-key-update/index), the column status of the original row will be set to old when there are duplicate entry attempts. The rest will be inserted and their status set to new.
#### Conclusion
As you can see from some of these SQL statements, MariaDB offers you quite a few ways to add and to change data. In addition to these methods, there are also some bulk methods of adding and changing data in a table. You could use the [LOAD DATA INFILE](../load-data-infile/index) statement and the [mysqldump](../mysqldump/index) command-line utility. These methods are covered in another article on Importing Data into MariaDB.
mariadb Building the Galera wsrep Package on Fedora Building the Galera wsrep Package on Fedora
===========================================
The instructions on this page were used to create the *galera* package on the Fedora Linux distribution. This package contains the wsrep provider for [MariaDB Galera Cluster](../galera/index).
The following table lists each version of the [Galera](../galera/index) 4 wsrep provider, and it lists which version of MariaDB each one was first released in. If you would like to install [Galera](../galera/index) 4 using [yum](../yum/index), [apt](../installing-mariadb-deb-files/index#installing-mariadb-with-apt), or [zypper](../installing-mariadb-with-zypper/index), then the package is called `galera-4`.
| Galera Version | Released in MariaDB Version |
| --- | --- |
| **26.4.11** | [MariaDB 10.8.1](https://mariadb.com/kb/en/mariadb-1081-release-notes/), [MariaDB 10.7.2](https://mariadb.com/kb/en/mariadb-1072-release-notes/), [MariaDB 10.6.6](https://mariadb.com/kb/en/mariadb-1066-release-notes/), [MariaDB 10.5.14](https://mariadb.com/kb/en/mariadb-10514-release-notes/), [MariaDB 10.4.22](https://mariadb.com/kb/en/mariadb-10422-release-notes/) |
| **26.4.9** | [MariaDB 10.6.4](https://mariadb.com/kb/en/mariadb-1064-release-notes/), [MariaDB 10.5.12](https://mariadb.com/kb/en/mariadb-10512-release-notes/), [MariaDB 10.4.21](https://mariadb.com/kb/en/mariadb-10421-release-notes/) |
| **26.4.8** | [MariaDB 10.6.1](https://mariadb.com/kb/en/mariadb-1061-release-notes/), [MariaDB 10.5.10](https://mariadb.com/kb/en/mariadb-10510-release-notes/), [MariaDB 10.4.19](https://mariadb.com/kb/en/mariadb-10419-release-notes/) |
| **26.4.7** | [MariaDB 10.5.9](https://mariadb.com/kb/en/mariadb-1059-release-notes/), [MariaDB 10.4.18](https://mariadb.com/kb/en/mariadb-10418-release-notes/) |
| **26.4.6** | [MariaDB 10.5.7](https://mariadb.com/kb/en/mariadb-1057-release-notes/), [MariaDB 10.4.16](https://mariadb.com/kb/en/mariadb-10416-release-notes/) |
| **26.4.5** | [MariaDB 10.5.4](https://mariadb.com/kb/en/mariadb-1054-release-notes/), [MariaDB 10.4.14](https://mariadb.com/kb/en/mariadb-10414-release-notes/) |
| **26.4.4** | [MariaDB 10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/), [MariaDB 10.4.13](https://mariadb.com/kb/en/mariadb-10413-release-notes/) |
| **26.4.3** | [MariaDB 10.5.0](https://mariadb.com/kb/en/mariadb-1050-release-notes/), [MariaDB 10.4.9](https://mariadb.com/kb/en/mariadb-1049-release-notes/) |
| **26.4.2** | [MariaDB 10.4.4](https://mariadb.com/kb/en/mariadb-1044-release-notes/) |
| **26.4.1** | [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/) |
| **26.4.0** | [MariaDB 10.4.2](https://mariadb.com/kb/en/mariadb-1042-release-notes/) |
The following table lists each version of the [Galera](../galera/index) 3 wsrep provider and the version of MariaDB in which each one was first released. If you would like to install [Galera](../galera/index) 3 using [yum](../yum/index), [apt](../installing-mariadb-deb-files/index#installing-mariadb-with-apt), or [zypper](../installing-mariadb-with-zypper/index), the package is called `galera`.
| Galera Version | Released in MariaDB Version |
| --- | --- |
| **25.3.35** | [MariaDB 10.3.33](https://mariadb.com/kb/en/mariadb-10333-release-notes/), [MariaDB 10.2.42](https://mariadb.com/kb/en/mariadb-10242-release-notes/) |
| **25.3.34** | [MariaDB 10.3.31](https://mariadb.com/kb/en/mariadb-10331-release-notes/), [MariaDB 10.2.40](https://mariadb.com/kb/en/mariadb-10240-release-notes/) |
| **25.3.33** | [MariaDB 10.3.29](https://mariadb.com/kb/en/mariadb-10329-release-notes/), [MariaDB 10.2.38](https://mariadb.com/kb/en/mariadb-10238-release-notes/) |
| **25.3.32** | [MariaDB 10.3.28](https://mariadb.com/kb/en/mariadb-10328-release-notes/), [MariaDB 10.2.37](https://mariadb.com/kb/en/mariadb-10237-release-notes/) |
| **25.3.31** | [MariaDB 10.3.26](https://mariadb.com/kb/en/mariadb-10326-release-notes/), [MariaDB 10.2.35](https://mariadb.com/kb/en/mariadb-10235-release-notes/), [MariaDB 10.1.48](https://mariadb.com/kb/en/mariadb-10148-release-notes/) |
| **25.3.30** | [MariaDB 10.3.25](https://mariadb.com/kb/en/mariadb-10325-release-notes/), [MariaDB 10.2.34](https://mariadb.com/kb/en/mariadb-10234-release-notes/), [MariaDB 10.1.47](https://mariadb.com/kb/en/mariadb-10147-release-notes/) |
| **25.3.29** | [MariaDB 10.3.23](https://mariadb.com/kb/en/mariadb-10323-release-notes/), [MariaDB 10.2.32](https://mariadb.com/kb/en/mariadb-10232-release-notes/), [MariaDB 10.1.45](https://mariadb.com/kb/en/mariadb-10145-release-notes/) |
| **25.3.28** | [MariaDB 10.3.19](https://mariadb.com/kb/en/mariadb-10319-release-notes/), [MariaDB 10.2.28](https://mariadb.com/kb/en/mariadb-10228-release-notes/), [MariaDB 10.1.42](https://mariadb.com/kb/en/mariadb-10142-release-notes/) |
| **25.3.27** | [MariaDB 10.3.18](https://mariadb.com/kb/en/mariadb-10318-release-notes/), [MariaDB 10.2.27](https://mariadb.com/kb/en/mariadb-10227-release-notes/) |
| **25.3.26** | [MariaDB 10.3.14](https://mariadb.com/kb/en/mariadb-10314-release-notes/), [MariaDB 10.2.23](https://mariadb.com/kb/en/mariadb-10223-release-notes/), [MariaDB 10.1.39](https://mariadb.com/kb/en/mariadb-10139-release-notes/) |
| **25.3.25** | [MariaDB 10.3.12](https://mariadb.com/kb/en/mariadb-10312-release-notes/), [MariaDB 10.2.20](https://mariadb.com/kb/en/mariadb-10220-release-notes/), [MariaDB 10.1.38](https://mariadb.com/kb/en/mariadb-10138-release-notes/), [MariaDB Galera Cluster 10.0.38](https://mariadb.com/kb/en/mariadb-galera-cluster-10038-release-notes/), [MariaDB Galera Cluster 5.5.63](https://mariadb.com/kb/en/mariadb-galera-cluster-5563-release-notes/) |
| **25.3.24** | [MariaDB 10.4.0](https://mariadb.com/kb/en/mariadb-1040-release-notes/), [MariaDB 10.3.10](https://mariadb.com/kb/en/mariadb-10310-release-notes/), [MariaDB 10.2.18](https://mariadb.com/kb/en/mariadb-10218-release-notes/), [MariaDB 10.1.37](https://mariadb.com/kb/en/mariadb-10137-release-notes/), [MariaDB Galera Cluster 10.0.37](https://mariadb.com/kb/en/mariadb-galera-cluster-10037-release-notes/), [MariaDB Galera Cluster 5.5.62](https://mariadb.com/kb/en/mariadb-galera-cluster-5562-release-notes/) |
| **25.3.23** | [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/), [MariaDB 10.2.13](https://mariadb.com/kb/en/mariadb-10213-release-notes/), [MariaDB 10.1.32](https://mariadb.com/kb/en/mariadb-10132-release-notes/), [MariaDB Galera Cluster 10.0.35](https://mariadb.com/kb/en/mariadb-galera-cluster-10035-release-notes/), [MariaDB Galera Cluster 5.5.60](https://mariadb.com/kb/en/mariadb-galera-cluster-5560-release-notes/) |
| **25.3.22** | [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/), [MariaDB 10.2.11](https://mariadb.com/kb/en/mariadb-10211-release-notes/), [MariaDB 10.1.29](https://mariadb.com/kb/en/mariadb-10129-release-notes/), [MariaDB Galera Cluster 10.0.33](https://mariadb.com/kb/en/mariadb-galera-cluster-10033-release-notes/), [MariaDB Galera Cluster 5.5.59](https://mariadb.com/kb/en/mariadb-galera-cluster-5559-release-notes/) |
| **25.3.21** | N/A |
| **25.3.20** | [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), [MariaDB 10.2.6](https://mariadb.com/kb/en/mariadb-1026-release-notes/), [MariaDB 10.1.23](https://mariadb.com/kb/en/mariadb-10123-release-notes/), [MariaDB Galera Cluster 10.0.31](https://mariadb.com/kb/en/mariadb-galera-cluster-10031-release-notes/), [MariaDB Galera Cluster 5.5.56](https://mariadb.com/kb/en/mariadb-galera-cluster-5556-release-notes/) |
| **25.3.19** | [MariaDB 10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/), [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/), [MariaDB 10.1.20](https://mariadb.com/kb/en/mariadb-10120-release-notes/), [MariaDB Galera Cluster 10.0.29](https://mariadb.com/kb/en/mariadb-galera-cluster-10029-release-notes/), [MariaDB Galera Cluster 5.5.54](https://mariadb.com/kb/en/mariadb-galera-cluster-5554-release-notes/) |
| **25.3.18** | [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/), [MariaDB 10.1.18](https://mariadb.com/kb/en/mariadb-10118-release-notes/), [MariaDB Galera Cluster 10.0.28](https://mariadb.com/kb/en/mariadb-galera-cluster-10028-release-notes/), [MariaDB Galera Cluster 5.5.53](https://mariadb.com/kb/en/mariadb-galera-cluster-5553-release-notes/) |
| **25.3.17** | [MariaDB 10.1.17](https://mariadb.com/kb/en/mariadb-10117-release-notes/), [MariaDB Galera Cluster 10.0.27](https://mariadb.com/kb/en/mariadb-galera-cluster-10027-release-notes/), [MariaDB Galera Cluster 5.5.51](https://mariadb.com/kb/en/mariadb-galera-cluster-5551-release-notes/) |
| **25.3.16** | N/A |
| **25.3.15** | [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/), [MariaDB 10.1.13](https://mariadb.com/kb/en/mariadb-10113-release-notes/), [MariaDB Galera Cluster 10.0.25](https://mariadb.com/kb/en/mariadb-galera-cluster-10025-release-notes/), [MariaDB Galera Cluster 5.5.49](https://mariadb.com/kb/en/mariadb-galera-cluster-5549-release-notes/) |
| **25.3.14** | [MariaDB 10.1.12](https://mariadb.com/kb/en/mariadb-10112-release-notes/), [MariaDB Galera Cluster 10.0.24](https://mariadb.com/kb/en/mariadb-galera-cluster-10024-release-notes/), [MariaDB Galera Cluster 5.5.48](https://mariadb.com/kb/en/mariadb-galera-cluster-5548-release-notes/) |
| **25.3.12** | [MariaDB 10.1.11](https://mariadb.com/kb/en/mariadb-10111-release-notes/) |
| **25.3.11** | N/A |
| **25.3.10** | N/A |
| **25.3.9** | [MariaDB 10.1.3](https://mariadb.com/kb/en/mariadb-1013-release-notes/), [MariaDB Galera Cluster 10.0.17](https://mariadb.com/kb/en/mariadb-galera-cluster-10017-release-notes/), [MariaDB Galera Cluster 5.5.42](https://mariadb.com/kb/en/mariadb-galera-cluster-5542-release-notes/) |
| **25.3.8** | N/A |
| **25.3.7** | N/A |
| **25.3.6** | N/A |
| **25.3.5** | [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/), [MariaDB Galera Cluster 10.0.10](https://mariadb.com/kb/en/mariadb-galera-cluster-10010-release-notes/), [MariaDB Galera Cluster 5.5.37](https://mariadb.com/kb/en/mariadb-galera-cluster-5537-release-notes/) |
| **25.3.4** | N/A |
| **25.3.3** | N/A |
| **25.3.2** | [MariaDB Galera Cluster 10.0.7](https://mariadb.com/kb/en/mariadb-galera-cluster-1007-release-notes/), [MariaDB Galera Cluster 5.5.35](https://mariadb.com/kb/en/mariadb-galera-cluster-5535-release-notes/) |
The following table lists each version of the [Galera](../galera/index) 2 wsrep provider and the version of MariaDB in which each one was first released.
| Galera Version | Released in MariaDB Galera Cluster Version |
| --- | --- |
| **25.2.9** | [10.0.10](https://mariadb.com/kb/en/mariadb-galera-cluster-10010-release-notes/), [5.5.37](https://mariadb.com/kb/en/mariadb-galera-cluster-5537-release-notes/) |
| **25.2.8** | [10.0.7](https://mariadb.com/kb/en/mariadb-galera-cluster-1007-release-notes/), [5.5.35](https://mariadb.com/kb/en/mariadb-galera-cluster-5535-release-notes/) |
| **23.2.7** | [5.5.34](https://mariadb.com/kb/en/mariadb-galera-cluster-5534-release-notes/) |
For convenience, a *galera* package containing the **preferred** wsrep provider is included in the MariaDB [YUM and APT repositories](https://downloads.mariadb.org/mariadb/repositories/) (the preferred versions are **bolded** in the table above).
See also [Deciphering Galera Version Numbers](https://mariadb.com/blog/deciphering-galera-version-numbers).
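To verify which wsrep provider version a running node is actually using, you can query the `wsrep_provider_version` status variable:

```
SHOW GLOBAL STATUS LIKE 'wsrep_provider_version';
```

The returned value corresponds to a Galera version from the tables above.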
1. Install the prerequisites:
```
sudo yum update
sudo yum -y install boost-devel check-devel glibc-devel openssl-devel scons
```
2. Clone [galera.git](https://github.com/mariadb/galera) from [github.com/mariadb](https://github.com/mariadb) and check out the mariadb-3.x branch:
```
git init repo
cd repo
git clone -b mariadb-3.x https://github.com/MariaDB/galera.git
```
3. Build the packages by executing `build.sh` in the `scripts/` directory with the `-p` switch:
```
cd galera
./scripts/build.sh -p
```
When finished, you will have an RPM package containing the Galera library, the arbitrator, and related files in the current directory. Note: the same set of instructions can be applied to other RPM-based platforms to generate the Galera package.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Partitioning Types Partitioning Types
===================
| Title | Description |
| --- | --- |
| [Partitioning Types Overview](../partitioning-types-overview/index) | A partition type determines how a partitioned table's rows are distributed across partitions |
| [LIST Partitioning Type](../list-partitioning-type/index) | LIST partitioning is used to assign each partition a list of values |
| [RANGE Partitioning Type](../range-partitioning-type/index) | The RANGE partitioning type is used to assign each partition a range of values. |
| [HASH Partitioning Type](../hash-partitioning-type/index) | Form of partitioning in which the server takes care of the partition in which to place the data. |
| [RANGE COLUMNS and LIST COLUMNS Partitioning Types](../range-columns-and-list-columns-partitioning-types/index) | Used to assign each partition a range or a list of values |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb DEFAULT DEFAULT
=======
Syntax
------
```
DEFAULT(col_name)
```
Description
-----------
Returns the default value for a table column. If the column has no default value (and is not nullable; nullable columns have a NULL default), an error is returned.
For integer columns using [AUTO\_INCREMENT](../auto_increment/index), `0` is returned.
When using `DEFAULT` as a value to set in an [INSERT](../insert/index) or [UPDATE](../update/index) statement, you can use the bare keyword `DEFAULT` without the parentheses and argument to refer to the column in context. You can only use `DEFAULT` as a bare keyword if you are using it alone without a surrounding expression or function.
Examples
--------
Select only non-default values for a column:
```
SELECT i FROM t WHERE i != DEFAULT(i);
```
Update values to be one greater than the default value:
```
UPDATE t SET i = DEFAULT(i)+1 WHERE i < 100;
```
When referring to the default value exactly in `UPDATE` or `INSERT`, you can omit the argument:
```
INSERT INTO t (i) VALUES (DEFAULT);
UPDATE t SET i = DEFAULT WHERE i < 100;
```
```
CREATE OR REPLACE TABLE t (
i INT NOT NULL AUTO_INCREMENT,
j INT NOT NULL,
k INT DEFAULT 3,
l INT NOT NULL DEFAULT 4,
m INT,
PRIMARY KEY (i)
);
DESC t;
+-------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+----------------+
| i | int(11) | NO | PRI | NULL | auto_increment |
| j | int(11) | NO | | NULL | |
| k | int(11) | YES | | 3 | |
| l | int(11) | NO | | 4 | |
| m | int(11) | YES | | NULL | |
+-------+---------+------+-----+---------+----------------+
INSERT INTO t (j) VALUES (1);
INSERT INTO t (j,m) VALUES (2,2);
INSERT INTO t (j,l,m) VALUES (3,3,3);
SELECT * FROM t;
+---+---+------+---+------+
| i | j | k | l | m |
+---+---+------+---+------+
| 1 | 1 | 3 | 4 | NULL |
| 2 | 2 | 3 | 4 | 2 |
| 3 | 3 | 3 | 3 | 3 |
+---+---+------+---+------+
SELECT DEFAULT(i), DEFAULT(k), DEFAULT (l), DEFAULT(m) FROM t;
+------------+------------+-------------+------------+
| DEFAULT(i) | DEFAULT(k) | DEFAULT (l) | DEFAULT(m) |
+------------+------------+-------------+------------+
| 0 | 3 | 4 | NULL |
| 0 | 3 | 4 | NULL |
| 0 | 3 | 4 | NULL |
+------------+------------+-------------+------------+
SELECT DEFAULT(i), DEFAULT(k), DEFAULT (l), DEFAULT(m), DEFAULT(j) FROM t;
ERROR 1364 (HY000): Field 'j' doesn't have a default value
SELECT * FROM t WHERE i = DEFAULT(i);
Empty set (0.001 sec)
SELECT * FROM t WHERE j = DEFAULT(j);
ERROR 1364 (HY000): Field 'j' doesn't have a default value
SELECT * FROM t WHERE k = DEFAULT(k);
+---+---+------+---+------+
| i | j | k | l | m |
+---+---+------+---+------+
| 1 | 1 | 3 | 4 | NULL |
| 2 | 2 | 3 | 4 | 2 |
| 3 | 3 | 3 | 3 | 3 |
+---+---+------+---+------+
SELECT * FROM t WHERE l = DEFAULT(l);
+---+---+------+---+------+
| i | j | k | l | m |
+---+---+------+---+------+
| 1 | 1 | 3 | 4 | NULL |
| 2 | 2 | 3 | 4 | 2 |
+---+---+------+---+------+
SELECT * FROM t WHERE m = DEFAULT(m);
Empty set (0.001 sec)
SELECT * FROM t WHERE m <=> DEFAULT(m);
+---+---+------+---+------+
| i | j | k | l | m |
+---+---+------+---+------+
| 1 | 1 | 3 | 4 | NULL |
+---+---+------+---+------+
```
See Also
--------
* [CREATE TABLE DEFAULT Clause](../create-table/index#default-column-option)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb ST_NUMPOINTS ST\_NUMPOINTS
=============
Syntax
------
```
ST_NumPoints(ls)
NumPoints(ls)
```
Description
-----------
Returns the number of [Point](../point/index) objects in the [LineString](../linestring/index) value `ls`.
`ST_NumPoints()` and `NumPoints()` are synonyms.
Examples
--------
```
SET @ls = 'LineString(1 1,2 2,3 3)';
SELECT NumPoints(GeomFromText(@ls));
+------------------------------+
| NumPoints(GeomFromText(@ls)) |
+------------------------------+
| 3 |
+------------------------------+
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Heuristic Recovery with the Transaction Coordinator Log Heuristic Recovery with the Transaction Coordinator Log
=======================================================
The transaction coordinator log (tc\_log) is used to coordinate transactions that affect multiple [XA-capable](../xa-transactions/index) [storage engines](../storage-engines/index). One of the main purposes of this log is crash recovery.
Modes of Crash Recovery
-----------------------
There are two modes of crash recovery:
* Automatic crash recovery.
* Manual heuristic recovery when `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)` is set to some value other than `OFF`.
Automatic Crash Recovery
------------------------
Automatic crash recovery occurs during startup when MariaDB needs to recover from a crash and `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)` is set to `OFF`, which is the default value.
### Automatic Crash Recovery with the Binary Log-Based Transaction Coordinator Log
If MariaDB needs to perform automatic crash recovery and if the [binary log](../binary-log/index) is enabled, then the [error log](../error-log/index) will contain messages like this:
```
[Note] Recovering after a crash using cmdb-mariadb-0-bin
[Note] InnoDB: Buffer pool(s) load completed at 190313 11:24:29
[Note] Starting crash recovery...
[Note] Crash recovery finished.
```
### Automatic Crash Recovery with the Memory-Mapped File-Based Transaction Coordinator Log
If MariaDB needs to perform automatic crash recovery and if the [binary log](../binary-log/index) is **not** enabled, then the [error log](../error-log/index) will contain messages like this:
```
[Note] Recovering after a crash using tc.log
[Note] InnoDB: Buffer pool(s) load completed at 190313 11:26:32
[Note] Starting crash recovery...
[Note] Crash recovery finished.
```
Manual Heuristic Recovery
-------------------------
Manual heuristic recovery occurs when `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)` is set to some value other than `OFF`. This might be needed if the server finds prepared transactions during crash recovery that are not in the transaction coordinator log. For example, the [error log](../error-log/index) might contain an error like this:
```
[ERROR] Found 1 prepared transactions! It means that mysqld was not shut down properly last time and critical recovery information (last binlog or tc.log file) was manually deleted after a crash. You have to start mysqld with --tc-heuristic-recover switch to commit or rollback pending transactions.
```
When manual heuristic recovery is initiated, MariaDB will ignore information about transactions in the transaction coordinator log during the recovery process. Prepared transactions that are encountered during the recovery process will either be rolled back or committed, depending on the value of `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)`.
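For example, to roll back such pending transactions, the option can be set temporarily in the server configuration (a one-off sketch; the setting should be removed again after the recovery run, since valid values other than the default `OFF` are `COMMIT` and `ROLLBACK`):

```
[mysqld]
tc_heuristic_recover=ROLLBACK
```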
When manual heuristic recovery is initiated, the [error log](../error-log/index) will contain a message like this:
```
[Note] Heuristic crash recovery mode
```
### Manual Heuristic Recovery with the Binary Log-Based Transaction Coordinator Log
If `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)` is set to some value other than `OFF` and if the [binary log](../binary-log/index) is enabled, then MariaDB will ignore information about transactions in the [binary log](../binary-log/index) during the recovery process. Prepared transactions that are encountered during the recovery process will either be rolled back or committed, depending on the value of `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)`.
After the recovery process is complete, MariaDB will create a new empty [binary log](../binary-log/index) file, so that the old corrupt ones can be ignored.
### Manual Heuristic Recovery with the Memory-Mapped File-Based Transaction Coordinator Log
If `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)` is set to some value other than `OFF` and if the [binary log](../binary-log/index) is **not** enabled, then MariaDB will ignore information about transactions in the memory-mapped file defined by the `[--log-tc](../mysqld-options/index#-log-tc)` option during the recovery process. Prepared transactions that are encountered during the recovery process will either be rolled back or committed, depending on the value of `[--tc-heuristic-recover](../mysqld-options/index#-tc-heuristic-recover)`.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Puppet Overview for MariaDB Users Puppet Overview for MariaDB Users
=================================
Puppet is a tool to automate server configuration management. It is produced by Puppet Inc., and released under the terms of the Apache License, version 2.
It is entirely possible to use Puppet to automate MariaDB deployments and configuration. This page contains generic information for MariaDB users who want to learn, or evaluate, Puppet.
Puppet modules can be searched using [Puppet Forge](https://forge.puppet.com/). Most of them are also published on GitHub with open source licenses. Puppet Forge allows filtering modules to only view the most reliable: supported by Puppet, supported by a Puppet partner, or approved.
For information about installing Puppet, see [Installing and upgrading](https://puppet.com/docs/puppet/7.3/architecture.html) in Puppet documentation.
Design Principles
-----------------
With Puppet, you write **manifests** that describe the resources you need to run on certain servers and their **attributes**.
Therefore manifests are **declarative**. You don't write the steps to achieve the desired result. Instead, you describe the desired result. When Puppet detects differences between your description and the current state of a server, it decides what to do to fix those differences.
Manifests are also **idempotent**. You don't need to worry about the effects of applying a manifest twice. This may happen (see Architecture below) but it won't have any side effects.
### Defining Resources
Here's an example of how to describe a resource in a manifest:
```
file { '/etc/motd':
content => '',
ensure => present,
}
```
This block describes a resource. The resource type is `file`, while the resource itself is `/etc/motd`. The description consists of a set of attributes. The most important is `ensure`, which in this case states that the file must exist. It is also common to use this resource to indicate that a file (probably created by a previous version of the manifest) doesn't exist.
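A minimal sketch of the latter case looks like this (the path is hypothetical):

```
file { '/etc/motd.old':
  ensure => absent,
}
```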
These classes of resource types exist:
* **Built-in resources**, or **Puppet core resources**: Resources that are part of Puppet, maintained by the Puppet team.
* **Defined resources**: Resources that are defined as a combination of other resources. They are written in the Puppet domain-specific language.
* **Custom resources**: Resources that are written by users, in the Ruby language.
To obtain information about resources:
```
# list existing resource types
puppet resource --types
# print information about the file resource type
puppet describe file
```
To group several resources in a reusable class:
```
class ssh_server {
file { '/etc/motd':
content => '',
ensure => present,
}
file { '/etc/issue.net':
content => '',
ensure => present,
}
}
```
There are several ways to include a class. For example:
```
include Class['ssh_server']
```
### Defining Nodes
Puppet has a **main manifest** that could be a `site.pp` file or a directory containing `.pp` files. For simple infrastructures, we can define the nodes here. For more complex infrastructures, we may prefer to import other files that define the nodes.
Nodes are defined in this way:
```
node 'maria-1.example.com' {
include common
include mariadb
}
```
The resource type is `node`. Then we specify a hostname that is used to match this node to an existing host. This can also be a list of hostnames, a regular expression that matches multiple nodes, or the `default` keyword that matches all hosts. To use a regular expression:
```
node /^(maria|mysql)-[1-3]\.example\.com$/ {
include common
}
```
Concepts
--------
The most important Puppet concepts are the following:
* **Target**: A host whose configuration is managed via Puppet.
* **Group**: A logical group of targets. For example there may be a `mariadb` group, and several targets may be part of this group.
* **Facts**: Information collected from the targets, like the system name or system version. They're collected by a Ruby gem called [Facter](https://puppet.com/docs/puppet/latest/facter.html). They can be [core facts](https://puppet.com/docs/puppet/latest/core_facts.html) (collected by default) or [custom facts](https://puppet.com/docs/puppet/latest/custom_facts.html) (defined by the user).
* **Manifest**: A description that can be applied to a target.
* **Catalog**: A compiled manifest.
* **Apply**: Modifying the state of a target so that it reflects its description in a manifest.
* **Module**: A set of manifests.
* **Resource**: A minimal piece of description. A manifest consists of a set of resources, each of which describes a component of a system, like a file or a service.
* **Resource type**: Determines the class of a resource. For example there is a `file` resource type, and a manifest can contain any number of resources of this type, which describe different files.
* **Attribute**: It's a characteristic of a resource, like a file owner, or its mode.
* **Class**: A group of resources that can be reused in several manifests.
Architecture
------------
Depending on how the user decides to deploy changes, Puppet can use two different architectures:
* An **Agent-master** architecture. This is the preferred way to use Puppet.
* A **standalone architecture**, that is similar to [Ansible architecture](../ansible-overview-for-mariadb-users/index#architecture).
### Agent-master Architecture
A **Puppet master** stores a catalog for each target. There may be more than one Puppet master, for redundancy.
Each target runs a **Puppet agent** in the background. Each Puppet agent periodically connects to the Puppet master, sending its facts. The Puppet master compiles the relevant manifest using the facts it receives, and sends back a catalog. Note that it is also possible to store the catalogs in PuppetDB instead.
Once the Puppet agent receives the up-to-date catalog, it checks all resources and compares them with its current state. It applies the necessary changes to make sure that its state reflects the resources present in the catalog.
### Standalone Architecture
With this architecture, the targets run **Puppet apply**. This application usually runs as a Linux cron job or a Windows scheduled task, but it can also be manually invoked by the user.
When Puppet apply runs, it compiles the latest versions of manifests using the local facts. Then it checks every resource from the resulting catalogs and compares it to the state of the local system, applying changes where needed.
Newly created or modified manifests are normally deployed to the targets, so Puppet apply can read them from the local host. However it is possible to use PuppetDB instead.
### PuppetDB
PuppetDB is a Puppet node that runs a PostgreSQL database to store information that can be used by other nodes. PuppetDB can be used with both the Agent-master and the standalone architectures, but it is always optional. However, it is required by some advanced Puppet features.
PuppetDB stores the following information:
* The latest facts from each target.
* The latest catalogs, compiled by Puppet apply or a Puppet master.
* Optionally, the recent history of each node's activities.
### External Node Classifiers
With both architectures, it is possible to have a component called an External Node Classifier (ENC). This is a script or an executable written in any language that Puppet can call to determine the list of classes that should be applied to a certain target.
An ENC receives a node name as input, and should return a list of classes, parameters, etc., as a YAML hash.
### Bolt
Bolt can be used in both architectures to run operations against a target or a set of targets. These operations can be commands passed manually to Bolt, scripts, Puppet tasks or plans. Bolt directly connects to targets via ssh and runs system commands.
See [Bolt Examples](../bolt-examples/index) to get an idea of what you can do with Bolt.
hiera
-----
hiera is a hierarchical configuration system that allows us to:
* Store configuration in separate files;
* Include the relevant configuration files for every server we automate with Puppet.
See [Puppet hiera Configuration System](../puppet-hiera-configuration-system/index) for more information.
Puppet Resources
----------------
* [Puppet documentation](https://puppet.com/docs/).
* [forge.puppet.com](https://forge.puppet.com/).
* [Puppet on GitHub](https://github.com/puppetlabs/puppet).
* [Puppet on Wikipedia](https://en.wikipedia.org/wiki/Puppet_(company)).
More information about the topics discussed in this page can be found in the Puppet documentation:
* [Puppet Glossary](https://puppet.com/docs/puppet/latest/glossary.html) in Puppet documentation.
* [Overview of Puppet's architecture](https://puppet.com/docs/puppet/latest/architecture.html) in Puppet documentation.
* [PuppetDB documentation](https://puppet.com/docs/puppetdb/latest/index.html).
* [Classifying nodes](https://puppet.com/docs/puppet/latest/nodes_external.html) in Puppet documentation.
* [Hiera](https://puppet.com/docs/puppet/latest/hiera_intro.html) in Puppet documentation.
* [Bolt documentation](https://puppet.com/docs/bolt/latest/bolt.html).
---
Content initially contributed by [Vettabase Ltd](https://vettabase.com/).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb EXISTS-to-IN Optimization EXISTS-to-IN Optimization
=========================
MySQL (including MySQL 5.6) has only one execution strategy for EXISTS subqueries. The strategy is essentially the straightforward, "naive" execution, without any rewrites.
[MariaDB 5.3](../what-is-mariadb-53/index) introduced a rich set of optimizations for IN subqueries. Since then, it makes sense to convert an EXISTS subquery into an IN subquery so that the new optimizations can be used.
`EXISTS` will be converted into `IN` in two cases:
1. Trivially correlated EXISTS subqueries
2. Semi-join EXISTS
We will now describe these two cases in detail.
Trivially-correlated EXISTS subqueries
--------------------------------------
Often, an EXISTS subquery is correlated, but the correlation is trivial. The subquery has the form
```
EXISTS (SELECT ... FROM ... WHERE outer_col= inner_col AND inner_where)
```
and `outer_col` is the only place where the subquery refers to outside fields. In this case, the subquery can be rewritten into an uncorrelated IN:
```
outer_col IN (SELECT inner_col FROM ... WHERE inner_where)
```
(`NULL` values require some special handling; see below). For uncorrelated IN subqueries, MariaDB is able to make a cost-based choice between two execution strategies:
* [IN-to-EXISTS](../non-semi-join-subquery-optimizations/index#the-in-to-exists-transformation) (basically, convert back into EXISTS)
* [Materialization](../non-semi-join-subquery-optimizations/index#materialization-for-non-correlated-in-subqueries)
That is, converting a trivially-correlated `EXISTS` into an uncorrelated `IN` gives the query optimizer the option to use the Materialization strategy for the subquery.
Currently, EXISTS-to-IN conversion works only for subqueries that are at the top level of the WHERE clause, or are under a NOT operation that is directly at the top level of the WHERE clause.
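As a concrete sketch (the table and column names here are hypothetical), the conversion turns the first query below into the second:

```
-- trivially correlated EXISTS:
SELECT * FROM customer
WHERE EXISTS (SELECT 1 FROM orders
              WHERE customer.id = orders.customer_id AND orders.total > 100);

-- equivalent uncorrelated IN form:
SELECT * FROM customer
WHERE customer.id IN (SELECT customer_id FROM orders WHERE total > 100);
```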
Semi-join EXISTS subqueries
---------------------------
If `EXISTS` subquery is an AND-part of the `WHERE` clause:
```
SELECT ... FROM outer_tables WHERE EXISTS (SELECT ...) AND ...
```
then it satisfies the main property of [semi-join subqueries](../semi-join-subquery-optimizations/index):
*with semi-join subquery, we're only interested in records of outer\_tables that have matches in the subquery*
The semi-join optimizer offers a rich set of execution strategies for both correlated and uncorrelated subqueries. The set includes the FirstMatch strategy, which is equivalent to how EXISTS subqueries are executed, so we do not lose any opportunities when converting an EXISTS subquery into a semi-join.
In theory, it makes sense to convert all kinds of EXISTS subqueries: both correlated and uncorrelated ones, irrespective of whether the subquery has an inner=outer equality.
In practice, the subquery is converted only if it has an inner=outer equality. Both correlated and uncorrelated subqueries are converted.
Handling of NULL values
-----------------------
* IN has complicated NULL-value semantics, while NOT EXISTS does not.
* To compensate, EXISTS-to-IN adds an IS NOT NULL predicate before the subquery predicate when required.
Control
-------
The optimization is controlled by the `exists_to_in` flag in [optimizer\_switch](../server-system-variables/index#optimizer_switch). Before [MariaDB 10.0.12](https://mariadb.com/kb/en/mariadb-10012-release-notes/), the optimization was OFF by default. Since [MariaDB 10.0.12](https://mariadb.com/kb/en/mariadb-10012-release-notes/), it has been ON by default.
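For example, the flag can be inspected and toggled per session, as for any `optimizer_switch` flag:

```
-- Check whether the optimization is enabled:
SELECT @@optimizer_switch LIKE '%exists_to_in=on%';

-- Turn it off or on for the current session:
SET optimizer_switch='exists_to_in=off';
SET optimizer_switch='exists_to_in=on';
```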
Limitations
-----------
EXISTS-to-IN does not handle:
* subqueries that have a GROUP BY clause, aggregate functions, or a HAVING clause
* subqueries that are UNIONs
* a number of degenerate edge cases
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
TOAD for MySQL
==============
#### Features
* Version control integration.
* Macro record and playback.
* Database browser.
* Code snippet editor.
* Security manager.
* SQL editor.
* Fast, multi-tabbed schema browser.
* DB extract, compare-and-search utility.
* Import/export utility.
Stored Routines
================
Stored procedures and stored functions.
| Title | Description |
| --- | --- |
| [Stored Procedures](../stored-procedures/index) | Routine invoked with a CALL statement. |
| [Stored Functions](../stored-functions/index) | Defined functions for use with SQL statements. |
| [Stored Routine Statements](../stored-routine-statements/index) | SQL statements related to creating and using stored routines. |
| [Binary Logging of Stored Routines](../binary-logging-of-stored-routines/index) | Stored routines require extra consideration when binary logging. |
| [Stored Routine Limitations](../stored-routine-limitations/index) | SQL statements not permitted in stored programs. |
| [Stored Routine Privileges](../stored-routine-privileges/index) | Privileges associated with stored functions and stored procedures. |
High Availability &amp; Performance Tuning
=======================================
Information on replication, clustering, and multi-master solutions for MariaDB, as well as performance tuning.
| Title | Description |
| --- | --- |
| [MariaDB Replication](../standard-replication/index) | Documentation on standard primary and replica replication. |
| [MariaDB Galera Cluster](../galera-cluster/index) | MariaDB Galera Cluster is a virtually synchronous multi-master cluster. |
| [Optimization and Tuning](../optimization-and-tuning/index) | Using indexes, writing better queries and adjusting variables for better performance. |
Starting and Stopping MariaDB Server
====================================
There are several different methods to start or stop the MariaDB Server process. There are two primary categories that most of these methods fall into: starting the process with the help of a service manager, and starting the process manually.
Service Managers
----------------
[sysVinit](../sysvinit/index) and [systemd](../systemd/index) are the most common Linux service managers. [launchd](../launchd/index) is used in MacOS X. [Upstart](https://en.wikipedia.org/wiki/Upstart_(software)) is a less common service manager.
### Systemd
RHEL/CentOS 7 and above, Debian 8 Jessie and above, and Ubuntu 15.04 and above use [systemd](../systemd/index) by default.
For information on how to start and stop MariaDB with this service manager, see [systemd: Interacting with the MariaDB Server Process](../systemd/index#interacting-with-the-mariadb-server-process).
### SysVinit
RHEL/CentOS 6 and below, and Debian 7 Wheezy and below use [sysVinit](../sysvinit/index) by default.
For information on how to start and stop MariaDB with this service manager, see [sysVinit: Interacting with the MariaDB Server Process](../sysvinit/index#interacting-with-the-mariadb-server-process).
### launchd
[launchd](../launchd/index) is used in MacOS X.
### Upstart
Ubuntu 14.10 and below use Upstart by default.
Starting the Server Process Manually
------------------------------------
### mysqld
[mysqld](../mysqld-options/index) is the actual MariaDB Server binary. It can be started manually on its own.
### mysqld\_safe
[mysqld\_safe](../mysqld_safe/index) is a wrapper that can be used to start the [mysqld](../mysqld-options/index) server process. The script has some built-in safeguards, such as automatically restarting the server process if it dies. See [mysqld\_safe](../mysqld_safe/index) for more information.
### mysqld\_multi
[mysqld\_multi](../mysqld_multi/index) is a wrapper that can be used to start the [mysqld](../mysqld-options/index) server process if you plan to run multiple server processes on the same host. See [mysqld\_multi](../mysqld_multi/index) for more information.
### mysql.server
[mysql.server](../mysqlserver/index) is a wrapper that works as a standard [sysVinit](../sysvinit/index) script. However, it can be used independently of [sysVinit](../sysvinit/index) as a regular `sh` script. The script starts the [mysqld](../mysqld-options/index) server process by first changing its current working directory to the MariaDB install directory and then starting [mysqld\_safe](../mysqld_safe/index). The script requires the standard [sysVinit](../sysvinit/index) arguments, such as `start`, `stop`, and `status`. See [mysql.server](../mysqlserver/index) for more information.
RENAME TABLE
============
Syntax
------
```
RENAME TABLE[S] [IF EXISTS] tbl_name
[WAIT n | NOWAIT]
TO new_tbl_name
[, tbl_name2 TO new_tbl_name2] ...
```
Description
-----------
This statement renames one or more tables or [views](../views/index), but not the privileges associated with them.
### IF EXISTS
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**If this directive is used, no error is returned if the table to be renamed does not exist.
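A minimal sketch, assuming `no_such_table` does not exist:

```
-- No error is raised even though the table is missing:
RENAME TABLE IF EXISTS no_such_table TO t2;
```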
The rename operation is done atomically, which means that no other session can access any of the tables while the rename is running. For example, if you have an existing table `old_table`, you can create another table `new_table` that has the same structure but is empty, and then replace the existing table with the empty one as follows (assuming that `backup_table` does not already exist):
```
CREATE TABLE new_table (...);
RENAME TABLE old_table TO backup_table, new_table TO old_table;
```
`tbl_name` can optionally be specified as `db_name`.`tbl_name`. See [Identifier Qualifiers](../identifier-qualifiers/index). This allows you to use `RENAME` to move a table from one database to another (as long as they are on the same filesystem):
```
RENAME TABLE db1.t TO db2.t;
```
Note that moving a table to another database is not possible if it has some [triggers](../triggers/index). Trying to do so produces the following error:
```
ERROR 1435 (HY000): Trigger in wrong schema
```
Also, views cannot be moved to another database:
```
ERROR 1450 (HY000): Changing schema from 'old_db' to 'new_db' is not allowed.
```
Multiple tables can be renamed in a single statement. The presence or absence of the optional `S` (`RENAME TABLE` or `RENAME TABLES`) has no impact, whether a single table or multiple tables are being renamed.
If a `RENAME TABLE` renames more than one table and one renaming fails, all renames executed by the same statement are rolled back.
Renames are always executed in the specified order. Knowing this, it is also possible to swap two tables' names:
```
RENAME TABLE t1 TO tmp_table,
t2 TO t1,
tmp_table TO t2;
```
### WAIT/NOWAIT
**MariaDB starting with [10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/)**Set the lock wait timeout. See [WAIT and NOWAIT](../wait-and-nowait/index).
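As a hedged illustration, assuming a table `t1` that may be locked by another session:

```
-- Fail immediately if a conflicting lock is held:
RENAME TABLE t1 NOWAIT TO t1_new;

-- Alternatively, wait up to 5 seconds for the lock:
RENAME TABLE t1 WAIT 5 TO t1_new;
```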
### Privileges
Executing the `RENAME TABLE` statement requires the [DROP](../grant/index#table-privileges), [CREATE](../grant/index#table-privileges) and [INSERT](../grant/index#table-privileges) privileges for the table or the database.
### Atomic RENAME TABLE
**MariaDB starting with [10.6.1](https://mariadb.com/kb/en/mariadb-1061-release-notes/)**From [MariaDB 10.6](../what-is-mariadb-106/index), `RENAME TABLE` is atomic for most engines, including InnoDB, MyRocks, MyISAM and Aria ([MDEV-23842](https://jira.mariadb.org/browse/MDEV-23842)). This means that if there is a crash (server down or power outage) during `RENAME TABLE`, all tables will revert to their original names and any changes to trigger files will be reverted.
In older MariaDB versions there was a small chance that, during a server crash in the middle of `RENAME TABLE`, some tables could have been renamed (in the worst case partly) while others would not be renamed.
See [Atomic DDL](../atomic-ddl/index) for more information.
Hexadecimal Literals
====================
Hexadecimal literals can be written using any of the following syntaxes:
* x'`value`'
* X'`value`' (SQL standard)
* 0x`value` (ODBC)
`value` is a sequence of hexadecimal digits (from `0` to `9` and from `A` to `F`). The case of the digits does not matter. With the first two syntaxes, `value` must consist of an even number of digits. With the last syntax, the number of digits can be odd; an odd number of digits is treated as if the value had an extra 0 at the beginning.
Normally, hexadecimal literals are interpreted as binary strings, where each pair of digits represents a character. When used in a numeric context, they are interpreted as integers (see the examples below). In no case can a hexadecimal literal be a decimal number.
The first two syntaxes, `X'value'` and `x'value'`, follow the SQL standard, and behave as a string in all contexts in MariaDB since [MariaDB 10.0.3](https://mariadb.com/kb/en/mariadb-1003-release-notes/) and [MariaDB 5.5.31](https://mariadb.com/kb/en/mariadb-5531-release-notes/) (fixing [MDEV-4489](https://jira.mariadb.org/browse/MDEV-4489)). The latter syntax, 0x`value`, is a MySQL/MariaDB extension for hex hybrids and behaves as a string or as a number depending on the context. MySQL treats all syntaxes the same, so there may be different results in MariaDB and MySQL (see below).
Examples
--------
Representing the `a` character with the three syntaxes explained above:
```
SELECT x'61', X'61', 0x61;
+-------+-------+------+
| x'61' | X'61' | 0x61 |
+-------+-------+------+
| a | a | a |
+-------+-------+------+
```
Hexadecimal literals in a numeric context:
```
SELECT 0 + 0xF, -0xF;
+---------+------+
| 0 + 0xF | -0xF |
+---------+------+
| 15 | -15 |
+---------+------+
```
### Fun with Types
```
CREATE TABLE t1 (a INT, b VARCHAR(10));
INSERT INTO t1 VALUES (0x31, 0x61),(COALESCE(0x31), COALESCE(0x61));
SELECT * FROM t1;
+------+------+
| a | b |
+------+------+
| 49 | a |
| 1 | a |
+------+------+
```
The reason for the differing results above is that when 0x31 is inserted directly into the column, it's treated as a number, while when 0x31 is passed to [COALESCE()](../coalesce/index), it's treated as a string, because:
* HEX values have a string data type by default.
* COALESCE() has the same data type as the argument.
### Differences Between MariaDB and MySQL
```
SELECT x'0a'+0;
+---------+
| x'0a'+0 |
+---------+
| 0 |
+---------+
1 row in set, 1 warning (0.00 sec)
Warning (Code 1292): Truncated incorrect DOUBLE value: '\x0A'
SELECT X'0a'+0;
+---------+
| X'0a'+0 |
+---------+
| 0 |
+---------+
1 row in set, 1 warning (0.00 sec)
Warning (Code 1292): Truncated incorrect DOUBLE value: '\x0A'
SELECT 0x0a+0;
+--------+
| 0x0a+0 |
+--------+
| 10 |
+--------+
```
In MySQL (up until at least MySQL 8.0.26):
```
SELECT x'0a'+0;
+---------+
| x'0a'+0 |
+---------+
| 10 |
+---------+
SELECT X'0a'+0;
+---------+
| X'0a'+0 |
+---------+
| 10 |
+---------+
SELECT 0x0a+0;
+--------+
| 0x0a+0 |
+--------+
| 10 |
+--------+
```
See Also
--------
* [HEX()](../hex/index)
* [UNHEX()](../unhex/index)
Information Schema WSREP\_STATUS Table
======================================
The WSREP\_STATUS table makes [Galera](../galera/index) node cluster status information available through the [Information Schema](../information-schema/index). The same information can be returned using the [SHOW WSREP\_STATUS](../show-wsrep_status/index) statement. Only users with the [SUPER](../grant/index#super) privilege can access information from this table.
The `WSREP_STATUS` table is part of the [WSREP\_INFO plugin](../wsrep_info-plugin/index).
Example
-------
```
SELECT * FROM information_schema.WSREP_STATUS\G
*************************** 1. row ***************************
NODE_INDEX: 0
NODE_STATUS: Synced
CLUSTER_STATUS: Primary
CLUSTER_SIZE: 3
CLUSTER_STATE_UUID: 00b0fbad-6e84-11e4-8a8b-376f19ce8ee7
CLUSTER_STATE_SEQNO: 2
CLUSTER_CONF_ID: 3
GAP: NO
PROTOCOL_VERSION: 3
```
RESIGNAL
========
Syntax
------
```
RESIGNAL [error_condition]
[SET error_property
[, error_property] ...]
error_condition:
SQLSTATE [VALUE] 'sqlstate_value'
| condition_name
error_property:
error_property_name = <error_property_value>
error_property_name:
CLASS_ORIGIN
| SUBCLASS_ORIGIN
| MESSAGE_TEXT
| MYSQL_ERRNO
| CONSTRAINT_CATALOG
| CONSTRAINT_SCHEMA
| CONSTRAINT_NAME
| CATALOG_NAME
| SCHEMA_NAME
| TABLE_NAME
| COLUMN_NAME
| CURSOR_NAME
```
Description
-----------
The syntax and semantics of `RESIGNAL` are very similar to [SIGNAL](../signal/index). This statement can only be used within an error [HANDLER](../declare-handler/index). It produces an error, like [SIGNAL](../signal/index). `RESIGNAL`'s clauses are the same as SIGNAL's, except that they are all optional, even [SQLSTATE](../sqlstate/index). All properties that are not specified in `RESIGNAL` will be identical to the properties of the error that was received by the error [HANDLER](../handler/index). For a description of the clauses, see [diagnostics area](../diagnostics-area/index).
Note that `RESIGNAL` does not empty the diagnostics area: it just appends another error condition.
`RESIGNAL`, without any clauses, produces an error which is identical to the error that was received by [HANDLER](../handler/index).
If used outside of a [HANDLER](../handler/index) construct, `RESIGNAL` produces the following error:
```
ERROR 1645 (0K000): RESIGNAL when handler not active
```
In [MariaDB 5.5](../what-is-mariadb-55/index), if a [HANDLER](../handler/index) contained a [CALL](../call/index) to another procedure, that procedure could use `RESIGNAL`. Since [MariaDB 10.0](../what-is-mariadb-100/index), trying to do this raises the above error.
For a list of `SQLSTATE` values and MariaDB error codes, see [MariaDB Error Codes](../mariadb-error-codes/index).
The following procedure tries to query two tables which don't exist, producing a 1146 error in both cases. Those errors will trigger the [HANDLER](../handler/index). The first time the error will be ignored and the client will not receive it, but the second time, the error is re-signaled, so the client will receive it.
```
CREATE PROCEDURE test_error( )
BEGIN
DECLARE CONTINUE HANDLER
FOR 1146
BEGIN
IF @hide_errors IS FALSE THEN
RESIGNAL;
END IF;
END;
SET @hide_errors = TRUE;
SELECT 'Next error will be ignored' AS msg;
SELECT `c` FROM `temptab_one`;
SELECT 'Next error won''t be ignored' AS msg;
SET @hide_errors = FALSE;
SELECT `c` FROM `temptab_two`;
END;
CALL test_error( );
+----------------------------+
| msg |
+----------------------------+
| Next error will be ignored |
+----------------------------+
+-----------------------------+
| msg |
+-----------------------------+
| Next error won't be ignored |
+-----------------------------+
ERROR 1146 (42S02): Table 'test.temptab_two' doesn't exist
```
The following procedure re-signals an error, modifying only the error message to clarify the cause of the problem.
```
CREATE PROCEDURE test_error()
BEGIN
DECLARE CONTINUE HANDLER
FOR 1146
BEGIN
RESIGNAL SET
MESSAGE_TEXT = '`temptab` does not exist';
END;
SELECT `c` FROM `temptab`;
END;
CALL test_error( );
ERROR 1146 (42S02): `temptab` does not exist
```
As explained above, this works on [MariaDB 5.5](../what-is-mariadb-55/index), but produces a 1645 error since 10.0.
```
CREATE PROCEDURE handle_error()
BEGIN
RESIGNAL;
END;
CREATE PROCEDURE p()
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION CALL p();
SIGNAL SQLSTATE '45000';
END;
```
See Also
--------
* [Diagnostics Area](../diagnostics-area/index)
* [SIGNAL](../signal/index)
* [HANDLER](../handler/index)
* [Stored Routines](../stored-programs-and-views/index)
* [MariaDB Error Codes](../mariadb-error-codes/index)
NEXTVAL
=======
`NEXTVAL` is a synonym for [NEXT VALUE for sequence\_name](../next-value-for-sequence_name/index).
mysql.slow\_log Table
=====================
The `mysql.slow_log` table stores the contents of the [Slow Query Log](../slow-query-log/index) if slow logging is active and the output is being written to table (see [Writing logs into tables](../writing-logs-into-tables/index)).
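Output to this table can be enabled with the standard server system variables, for example:

```
-- Write the slow query log to mysql.slow_log instead of a file:
SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 1;
-- Queries taking longer than this many seconds are logged:
SET GLOBAL long_query_time = 5;
```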
It contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `start_time` | `timestamp(6)` | NO | | `CURRENT_TIMESTAMP(6)` | Time the query began. |
| `user_host` | `mediumtext` | NO | | `NULL` | User and host combination. |
| `query_time` | `time(6)` | NO | | `NULL` | Total time the query took to execute. |
| `lock_time` | `time(6)` | NO | | `NULL` | Total time the query was locked. |
| `rows_sent` | `int(11)` | NO | | `NULL` | Number of rows sent. |
| `rows_examined` | `int(11)` | NO | | `NULL` | Number of rows examined. |
| `db` | `varchar(512)` | NO | | `NULL` | Default database. |
| `last_insert_id` | `int(11)` | NO | | `NULL` | [last\_insert\_id](../last_insert_id/index). |
| `insert_id` | `int(11)` | NO | | `NULL` | Insert id. |
| `server_id` | `int(10) unsigned` | NO | | `NULL` | The server's id. |
| `sql_text` | `mediumtext` | NO | | `NULL` | Full query. |
| `thread_id` | `bigint(21) unsigned` | NO | | `NULL` | Thread id. |
| `rows_affected` | `int(11)` | NO | | `NULL` | Number of rows affected by an [UPDATE](../update/index) or [DELETE](../delete/index) (from [MariaDB 10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/)) |
Example
-------
```
SELECT * FROM mysql.slow_log\G
...
*************************** 2. row ***************************
start_time: 2014-11-11 07:56:28.721519
user_host: root[root] @ localhost []
query_time: 00:00:12.000215
lock_time: 00:00:00.000000
rows_sent: 1
rows_examined: 0
db: test
last_insert_id: 0
insert_id: 0
server_id: 1
sql_text: SELECT SLEEP(12)
thread_id: 74
...
```
Information Schema InnoDB Tables
=================================
List of Information Schema tables specifically related to [InnoDB](../innodb/index). Tables that are specific to XtraDB are listed in [Information Schema XtraDB Tables](../information-schema-xtradb-tables/index).
| Title | Description |
| --- | --- |
| [Information Schema INNODB\_BUFFER\_PAGE Table](../information-schema-innodb_buffer_page-table/index) | Buffer pool page information. |
| [Information Schema INNODB\_BUFFER\_PAGE\_LRU Table](../information-schema-innodb_buffer_page_lru-table/index) | Buffer pool pages and their eviction order. |
| [Information Schema INNODB\_BUFFER\_POOL\_PAGES Table](../information-schema-innodb_buffer_pool_pages-table/index) | XtraDB buffer pool page information. |
| [Information Schema INNODB\_BUFFER\_POOL\_PAGES\_BLOB Table](../information-schema-innodb_buffer_pool_pages_blob-table/index) | XtraDB buffer pool blob pages. |
| [Information Schema INNODB\_BUFFER\_POOL\_PAGES\_INDEX Table](../information-schema-innodb_buffer_pool_pages_index-table/index) | XtraDB buffer pool index pages. |
| [Information Schema INNODB\_BUFFER\_POOL\_STATS Table](../information-schema-innodb_buffer_pool_stats-table/index) | InnoDB buffer pool information. |
| [Information Schema INNODB\_CHANGED\_PAGES Table](../information-schema-innodb_changed_pages-table/index) | Modified pages from the bitmap file data. |
| [Information Schema INNODB\_CMP and INNODB\_CMP\_RESET Tables](../information-schema-innodb_cmp-and-innodb_cmp_reset-tables/index) | XtraDB/InnoDB compression performances with different page sizes. |
| [Information Schema INNODB\_CMPMEM and INNODB\_CMPMEM\_RESET Tables](../information-schema-innodb_cmpmem-and-innodb_cmpmem_reset-tables/index) | Number of InnoDB compressed pages of different page sizes. |
| [Information Schema INNODB\_CMP\_PER\_INDEX and INNODB\_CMP\_PER\_INDEX\_RESET Tables](../information-schema-innodb-tables-information-schema-innodb_cmp_per_index-an/index) | XtraDB/InnoDB compression performances for different indexes and tables. |
| [Information Schema INNODB\_FT\_BEING\_DELETED Table](../information-schema-innodb_ft_being_deleted-table/index) | Fulltext being deleted. |
| [Information Schema INNODB\_FT\_CONFIG Table](../information-schema-innodb_ft_config-table/index) | InnoDB fulltext metadata. |
| [Information Schema INNODB\_FT\_DEFAULT\_STOPWORD Table](../information-schema-innodb_ft_default_stopword-table/index) | Default InnoDB stopwords. |
| [Information Schema INNODB\_FT\_DELETED Table](../information-schema-innodb_ft_deleted-table/index) | Deleted InnoDB fulltext rows. |
| [Information Schema INNODB\_FT\_INDEX\_CACHE Table](../information-schema-innodb_ft_index_cache-table/index) | Newly added fulltext row information. |
| [Information Schema INNODB\_FT\_INDEX\_TABLE Table](../information-schema-innodb_ft_index_table-table/index) | InnoDB fulltext information. |
| [Information Schema INNODB\_LOCK\_WAITS Table](../information-schema-innodb_lock_waits-table/index) | Blocked InnoDB transactions. |
| [Information Schema INNODB\_LOCKS Table](../information-schema-innodb_locks-table/index) | InnoDB lock information. |
| [Information Schema INNODB\_METRICS Table](../information-schema-innodb_metrics-table/index) | InnoDB performance metrics. |
| [Information Schema INNODB\_MUTEXES Table](../information-schema-innodb_mutexes-table/index) | Monitor mutex waits. |
| [Information Schema INNODB\_SYS\_COLUMNS Table](../information-schema-innodb_sys_columns-table/index) | InnoDB column information. |
| [Information Schema INNODB\_SYS\_DATAFILES Table](../information-schema-innodb_sys_datafiles-table/index) | InnoDB tablespace paths. |
| [Information Schema INNODB\_SYS\_FIELDS Table](../information-schema-innodb_sys_fields-table/index) | Fields part of an InnoDB index. |
| [Information Schema INNODB\_SYS\_FOREIGN Table](../information-schema-innodb_sys_foreign-table/index) | InnoDB foreign key information. |
| [Information Schema INNODB\_SYS\_FOREIGN\_COLS Table](../information-schema-innodb_sys_foreign_cols-table/index) | Foreign key column information. |
| [Information Schema INNODB\_SYS\_INDEXES Table](../information-schema-innodb_sys_indexes-table/index) | InnoDB index information. |
| [Information Schema INNODB\_SYS\_SEMAPHORE\_WAITS Table](../information-schema-innodb_sys_semaphore_waits-table/index) | Information about current semaphore waits. |
| [Information Schema INNODB\_SYS\_TABLES Table](../information-schema-innodb_sys_tables-table/index) | InnoDB table information. |
| [Information Schema INNODB\_SYS\_TABLESPACES Table](../information-schema-innodb_sys_tablespaces-table/index) | InnoDB tablespace information. |
| [Information Schema INNODB\_SYS\_TABLESTATS Table](../information-schema-innodb_sys_tablestats-table/index) | InnoDB status for high-level performance monitoring. |
| [Information Schema INNODB\_SYS\_VIRTUAL Table](../information-schema-innodb_sys_virtual-table/index) | Information about base columns of virtual columns. |
| [Information Schema INNODB\_TABLESPACES\_ENCRYPTION Table](../information-schema-innodb_tablespaces_encryption-table/index) | Encryption metadata for InnoDB tablespaces. |
| [Information Schema INNODB\_TABLESPACES\_SCRUBBING Table](../information-schema-innodb_tablespaces_scrubbing-table/index) | Data scrubbing information. |
| [Information Schema INNODB\_TRX Table](../information-schema-innodb_trx-table/index) | Currently-executing InnoDB transactions. |
| [Information Schema TEMP\_TABLES\_INFO Table](../information-schema-temp_tables_info-table/index) | Information about active InnoDB temporary tables. |
Buildbot Setup for Virtual Machines - Ubuntu 12.10 "quantal"
============================================================
Base install
------------
```
qemu-img create -f qcow2 /kvm/vms/vm-quantal-amd64-serial.qcow2 10G
qemu-img create -f qcow2 /kvm/vms/vm-quantal-i386-serial.qcow2 10G
```
Start each VM booting from the server install iso one at a time and perform the following install steps:
```
kvm -m 2048 -hda /kvm/vms/vm-quantal-amd64-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-12.10-server-amd64.iso -boot d -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2275-:22
kvm -m 2048 -hda /kvm/vms/vm-quantal-i386-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-12.10-server-i386.iso -boot d -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2276-:22
```
Once running you can connect to the VNC server from your local host with:
```
vncviewer -via ${remote-host} localhost
```
Replace ${remote-host} with the host the vm is running on.
**Note:** When you activate the install, vncviewer may disconnect with a complaint about the rect being too large. This is fine. Ubuntu has just resized the vnc screen. Simply reconnect.
Install, picking default options mostly, with the following notes:
* Set the hostname to ubuntu-quantal-amd64 or ubuntu-quantal-i386
* When partitioning disks, choose "Guided - use entire disk" (we do not want LVM)
* No automatic updates
* Choose software to install: OpenSSH server
Now that the VM is installed, it's time to configure it. If you have the memory you can do the following simultaneously:
```
kvm -m 2048 -hda /kvm/vms/vm-quantal-amd64-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-12.10-server-amd64.iso -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2275-:22 -nographic
kvm -m 2048 -hda /kvm/vms/vm-quantal-i386-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-12.10-server-i386.iso -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2276-:22 -nographic
ssh -p 2275 localhost
# edit /boot/grub/menu.lst and visudo, see below
ssh -p 2276 localhost
# edit /boot/grub/menu.lst and visudo, see below
ssh -t -p 2275 localhost "mkdir -v .ssh; sudo addgroup $USER sudo"
ssh -t -p 2276 localhost "mkdir -v .ssh; sudo addgroup $USER sudo"
scp -P 2275 /kvm/vms/authorized_keys localhost:.ssh/
scp -P 2276 /kvm/vms/authorized_keys localhost:.ssh/
echo $'Buildbot\n\n\n\n\ny' | ssh -p 2275 localhost 'chmod -vR go-rwx .ssh; sudo adduser --disabled-password buildbot; sudo addgroup buildbot sudo; sudo mkdir -v ~buildbot/.ssh; sudo cp -vi .ssh/authorized_keys ~buildbot/.ssh/; sudo chown -vR buildbot:buildbot ~buildbot/.ssh; sudo chmod -vR go-rwx ~buildbot/.ssh'
echo $'Buildbot\n\n\n\n\ny' | ssh -p 2276 localhost 'chmod -vR go-rwx .ssh; sudo adduser --disabled-password buildbot; sudo addgroup buildbot sudo; sudo mkdir -v ~buildbot/.ssh; sudo cp -vi .ssh/authorized_keys ~buildbot/.ssh/; sudo chown -vR buildbot:buildbot ~buildbot/.ssh; sudo chmod -vR go-rwx ~buildbot/.ssh'
scp -P 2275 /kvm/vms/ttyS0.conf buildbot@localhost:
scp -P 2276 /kvm/vms/ttyS0.conf buildbot@localhost:
ssh -p 2275 buildbot@localhost 'sudo apt-get update && sudo apt-get -y dist-upgrade;'
ssh -p 2276 buildbot@localhost 'sudo apt-get update && sudo apt-get -y dist-upgrade;'
ssh -p 2275 buildbot@localhost 'sudo cp -vi ttyS0.conf /etc/init/; rm -v ttyS0.conf; sudo shutdown -h now'
ssh -p 2276 buildbot@localhost 'sudo cp -vi ttyS0.conf /etc/init/; rm -v ttyS0.conf; sudo shutdown -h now'
```
Enabling passwordless sudo:
```
sudo VISUAL=vi visudo
# Add line at end: `%sudo ALL=NOPASSWD: ALL'
```
Editing /boot/grub/menu.lst:
```
sudo vi /etc/default/grub
# Add/edit these entries:
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
sudo update-grub
# exit back to the host server
```
VMs for building .debs
----------------------
```
for i in '/kvm/vms/vm-quantal-amd64-serial.qcow2 2275 qemu64' '/kvm/vms/vm-quantal-i386-serial.qcow2 2276 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/build/')" \
"= scp -P $2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no /kvm/boost_1_49_0.tar.gz buildbot@localhost:/dev/shm/" \
"= scp -P $2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no /kvm/thrift-0.9.0.tar.gz buildbot@localhost:/dev/shm/" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get -y build-dep mysql-server-5.5" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y devscripts hardening-wrapper fakeroot doxygen texlive-latex-base ghostscript libevent-dev libssl-dev zlib1g-dev libpam0g-dev libreadline-gplv2-dev autoconf automake automake1.9 defoma dpatch ghostscript-x libfontenc1 libjpeg62 libltdl-dev libltdl7 libmail-sendmail-perl libxfont1 lmodern psfontmgr texlive-latex-base-doc ttf-dejavu ttf-dejavu-extra libaio-dev xfonts-encodings xfonts-utils libxml2-dev unixodbc-dev" \
"cd /usr/local/src;sudo tar zxf /dev/shm/thrift-0.9.0.tar.gz;pwd;ls" \
"cd /usr/local/src/thrift-0.9.0;echo;pwd;sudo ./configure --prefix=/usr --enable-shared=no --enable-static=yes CXXFLAGS=-fPIC CFLAGS=-fPIC && echo && echo 'now making' && echo && sleep 3 && sudo make && echo && echo 'now installing' && echo && sleep 3 && sudo make install" \
"cd /usr/local/src;sudo tar zxf /dev/shm/boost_1_49_0.tar.gz;cd /usr/local/include/;sudo ln -vs ../src/boost_1_49_0/boost ." ; \
done
```
VMs for install testing
-----------------------
See [Buildbot Setup for Virtual Machines - General Principles](../buildbot-setup-for-virtual-machines-general-principles/index) for how to obtain `my.seed` and `sources.append`.
```
for i in '/kvm/vms/vm-quantal-amd64-serial.qcow2 2275 qemu64' '/kvm/vms/vm-quantal-i386-serial.qcow2 2276 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/install/')" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y patch libaio1 debconf-utils" \
"= scp -P $2 /kvm/vms/my55.seed /kvm/vms/sources.append buildbot@localhost:/tmp/" \
"sudo debconf-set-selections /tmp/my55.seed" \
"sudo sh -c 'cat /tmp/sources.append >> /etc/apt/sources.list'"; \
done
```
VMs for MySQL upgrade testing
-----------------------------
```
for i in '/kvm/vms/vm-quantal-amd64-serial.qcow2 2275 qemu64' '/kvm/vms/vm-quantal-i386-serial.qcow2 2276 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/upgrade/')" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y patch libaio1 debconf-utils" \
"= scp -P $2 /kvm/vms/my55.seed /kvm/vms/sources.append buildbot@localhost:/tmp/" \
"sudo debconf-set-selections /tmp/my55.seed" \
"sudo sh -c 'cat /tmp/sources.append >> /etc/apt/sources.list'" \
'sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server-5.5' \
'mysql -uroot -prootpass -e "create database mytest; use mytest; create table t(a int primary key); insert into t values (1); select * from t"' ;\
done
```
VMs for MariaDB upgrade testing
-------------------------------
*The steps below are based on the Natty steps on [Installing VM images for testing .deb upgrade between versions](../installing-vm-images-for-testing-deb-upgrade-between-versions/index).*
```
for i in '/kvm/vms/vm-quantal-amd64-serial.qcow2 2275 qemu64' '/kvm/vms/vm-quantal-i386-serial.qcow2 2276 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/upgrade2/')" \
"= scp -P $2 /kvm/vms/my55.seed /kvm/vms/sources.append buildbot@localhost:/tmp/" \
"= scp -P $2 /kvm/vms/mariadb-quantal.list buildbot@localhost:/tmp/tmp.list" \
"sudo debconf-set-selections /tmp/my55.seed" \
'sudo mv -vi /tmp/tmp.list /etc/apt/sources.list.d/' \
'sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db' \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
'sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mariadb-server mariadb-server-5.5 mariadb-client mariadb-client-5.5 mariadb-test libmariadbclient-dev' \
'mysql -uroot -prootpass -e "create database mytest; use mytest; create table t(a int primary key); insert into t values (1); select * from t"' \
'sudo rm -v /etc/apt/sources.list.d/tmp.list' \
'sudo DEBIAN_FRONTEND=noninteractive apt-get update' \
"sudo sh -c 'cat /tmp/sources.append >> /etc/apt/sources.list'" \
'sudo DEBIAN_FRONTEND=noninteractive apt-get install -y patch libaio1 debconf-utils' \
'sudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y'; \
done
```
Add Key to known\_hosts
-----------------------
Do the following on each KVM host server (terrier, terrier2, i7, etc.) to add the VMs to known\_hosts.
```
# quantal-amd64
cp -avi /kvm/vms/vm-quantal-amd64-install.qcow2 /kvm/vms/vm-quantal-amd64-test.qcow2
kvm -m 1024 -hda /kvm/vms/vm-quantal-amd64-test.qcow2 -redir tcp:2275::22 -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user -nographic
sudo su - buildbot
ssh -p 2275 buildbot@localhost sudo shutdown -h now
# answer "yes" when prompted
exit # the buildbot user
rm -v /kvm/vms/vm-quantal-amd64-test.qcow2
# quantal-i386
cp -avi /kvm/vms/vm-quantal-i386-install.qcow2 /kvm/vms/vm-quantal-i386-test.qcow2
kvm -m 1024 -hda /kvm/vms/vm-quantal-i386-test.qcow2 -redir tcp:2276::22 -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user -nographic
sudo su - buildbot
ssh -p 2276 buildbot@localhost sudo shutdown -h now
# answer "yes" when prompted
exit # the buildbot user
rm -v /kvm/vms/vm-quantal-i386-test.qcow2
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
InnoDB Doublewrite Buffer
=========================
The [InnoDB](../innodb/index) doublewrite buffer was implemented to recover from half-written pages. These can occur when a power failure strikes while InnoDB is writing a page to disk. On reading such a page, InnoDB can detect the corruption from the mismatch of the page checksum, but in order to recover it needs an intact copy of the page. The doublewrite buffer provides that copy.
Whenever InnoDB flushes a page to disk, it first writes the page to the doublewrite buffer. Only once the buffer has been safely flushed to disk does InnoDB write the page to its final destination. During recovery, InnoDB scans the doublewrite buffer and, for each valid page found there, checks whether the corresponding page in the data file is valid too.
Doublewrite Buffer Settings
---------------------------
To turn off the doublewrite buffer, set the [innodb\_doublewrite](../innodb-system-variables/index#innodb_doublewrite) system variable to `0`. This is safe on filesystems that write pages atomically - that is, a page write fully succeeds or fails. But with other filesystems, it is not recommended for production systems. An alternative option is atomic writes. See [atomic write support](../atomic-write-support/index) for more details.
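The variable is typically configured at server startup. A minimal option-file fragment might look like this (a sketch, assuming your filesystem really does guarantee atomic page writes):

```
[mariadb]
...
innodb_doublewrite = 0
```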
Roles
======
Roles bundle privileges together to ease account management.
| Title | Description |
| --- | --- |
| [Roles Overview](../roles_overview/index) | Bundling privileges together. |
| [CREATE ROLE](../create-role/index) | Add new roles. |
| [DROP ROLE](../drop-role/index) | Drop a role. |
| [CURRENT\_ROLE](../current_role/index) | Current role name. |
| [SET ROLE](../set-role/index) | Enable a role. |
| [SET DEFAULT ROLE](../set-default-role/index) | Sets a default role for a specified (or current) user. |
| [GRANT](../grant/index) | Create accounts and set privileges or roles. |
| [REVOKE](../revoke/index) | Remove privileges or roles. |
| [mysql.roles\_mapping Table](../mysqlroles_mapping-table/index) | MariaDB roles information. |
| [Information Schema APPLICABLE\_ROLES Table](../information-schema-applicable_roles-table/index) | Roles available to be used. |
| [Information Schema ENABLED\_ROLES Table](../information-schema-enabled_roles-table/index) | Enabled roles for the current session. |
| [SecuRich](../securich/index) | Library of security-related stored procedures. |
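As a quick sketch of how the statements above fit together (the role and user names here are hypothetical):

```
CREATE ROLE read_only;
GRANT SELECT ON mydb.* TO read_only;
GRANT read_only TO 'alice'@'%';
-- In alice's session:
SET ROLE read_only;
SELECT CURRENT_ROLE();
```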
Authentication Plugin - ed25519
===============================
MySQL has used SHA-1 based authentication since version 4.1. Since [MariaDB 5.2](../what-is-mariadb-52/index) this authentication plugin has been called [mysql\_native\_password](../authentication-plugin-mysql_native_password/index). Over the years as computers became faster, new attacks on SHA-1 were being developed. Nowadays SHA-1 is no longer considered as secure as it was in 2001. That's why the `ed25519` authentication plugin was created.
The `ed25519` authentication plugin uses [Elliptic Curve Digital Signature Algorithm (ECDSA)](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) to securely store users' passwords and to authenticate users. The [ed25519](https://en.wikipedia.org/wiki/EdDSA#Ed25519) algorithm is the same one that is [used by OpenSSH](https://www.openssh.com/txt/release-6.5). It is based on the elliptic curve and code created by [Daniel J. Bernstein](https://en.wikipedia.org/wiki/Daniel_J._Bernstein).
From a user's perspective, the `ed25519` authentication plugin still provides conventional password-based authentication.
Installing the Plugin
---------------------
Although the plugin's shared library is distributed with MariaDB by default as `auth_ed25519.so` or `auth_ed25519.dll` depending on the operating system, the plugin is not actually installed by MariaDB by default. There are two methods that can be used to install the plugin with MariaDB.
The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing [INSTALL SONAME](../install-soname/index) or [INSTALL PLUGIN](../install-plugin/index). For example:
```
INSTALL SONAME 'auth_ed25519';
```
The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the [--plugin-load](../mysqld-options/index#-plugin-load) or the [--plugin-load-add](../mysqld-options/index#-plugin-load-add) options. This can be specified as a command-line argument to [mysqld](../mysqld-options/index) or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index). For example:
```
[mariadb]
...
plugin_load_add = auth_ed25519
```
Uninstalling the Plugin
-----------------------
You can uninstall the plugin dynamically by executing [UNINSTALL SONAME](../uninstall-soname/index) or [UNINSTALL PLUGIN](../uninstall-plugin/index). For example:
```
UNINSTALL SONAME 'auth_ed25519';
```
If you installed the plugin by providing the [--plugin-load](../mysqld-options/index#-plugin-load) or the [--plugin-load-add](../mysqld-options/index#-plugin-load-add) options in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index), then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
Creating Users
--------------
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, you can create a user account by executing the [CREATE USER](../create-user/index) statement, providing the [IDENTIFIED VIA](../create-user/index#identified-viawith-authentication_plugin) clause followed by the name of the plugin, which is `ed25519`, and providing the `USING` clause followed by the [PASSWORD()](../password/index) function with the plain-text password as an argument. For example:
```
CREATE USER username@hostname IDENTIFIED VIA ed25519 USING PASSWORD('secret');
```
If [SQL\_MODE](../sql-mode/index) does not have `NO_AUTO_CREATE_USER` set, then you can also create the user account via [GRANT](../grant/index). For example:
```
GRANT SELECT ON db.* TO username@hostname IDENTIFIED VIA ed25519 USING PASSWORD('secret');
```
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, the [PASSWORD()](../password/index) function and [SET PASSWORD](../set-password/index) statement did not work with the `ed25519` authentication plugin. Instead, you would have to use the [UDF](../user-defined-functions/index) that comes with the authentication plugin to calculate the password hash. For example:
```
CREATE FUNCTION ed25519_password RETURNS STRING SONAME "auth_ed25519.so";
```
Now you can calculate a password hash by executing:
```
SELECT ed25519_password("secret");
+---------------------------------------------+
| ed25519_password("secret")                  |
+---------------------------------------------+
| ZIgUREUg5PVgQ6LskhXmO+eZLS0nC8be6HPjYWR4YJY |
+---------------------------------------------+
```
Now you can create the user account using the new password hash.
To create a user account via [CREATE USER](../create-user/index), specify the name of the plugin in the [IDENTIFIED VIA](../create-user/index#identified-viawith-authentication_plugin) clause while providing the password hash as the `USING` clause. For example:
```
CREATE USER username@hostname IDENTIFIED VIA ed25519
USING 'ZIgUREUg5PVgQ6LskhXmO+eZLS0nC8be6HPjYWR4YJY';
```
If [SQL\_MODE](../sql-mode/index) does not have `NO_AUTO_CREATE_USER` set, then you can also create the user account via [GRANT](../grant/index). For example:
```
GRANT SELECT ON db.* TO username@hostname IDENTIFIED VIA ed25519
USING 'ZIgUREUg5PVgQ6LskhXmO+eZLS0nC8be6HPjYWR4YJY';
```
Note that users require a password in order to be able to connect. It is possible to create a user without specifying a password, but they will be unable to connect.
Changing User Passwords
-----------------------
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, you can change a user account's password by executing the [SET PASSWORD](../set-password/index) statement followed by the [PASSWORD()](../password/index) function and providing the plain-text password as an argument. For example:
```
SET PASSWORD = PASSWORD('new_secret');
```
You can also change the user account's password with the [ALTER USER](../alter-user/index) statement. You would have to specify the name of the plugin in the [IDENTIFIED VIA](../alter-user/index#identified-viawith-authentication_plugin) clause while providing the plain-text password as an argument to the [PASSWORD()](../password/index) function in the `USING` clause. For example:
```
ALTER USER username@hostname IDENTIFIED VIA ed25519 USING PASSWORD('new_secret');
```
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, the [PASSWORD()](../password/index) function and [SET PASSWORD](../set-password/index) statement did not work with the `ed25519` authentication plugin. Instead, you would have to use the [UDF](../user-defined-functions/index) that comes with the authentication plugin to calculate the password hash. For example:
```
CREATE FUNCTION ed25519_password RETURNS STRING SONAME "auth_ed25519.so";
```
Now you can calculate a password hash by executing:
```
SELECT ed25519_password("secret");
+---------------------------------------------+
| ed25519_password("secret")                  |
+---------------------------------------------+
| ZIgUREUg5PVgQ6LskhXmO+eZLS0nC8be6HPjYWR4YJY |
+---------------------------------------------+
```
Now you can change the user account's password using the new password hash.
You can change the user account's password with the [ALTER USER](../alter-user/index) statement. You would have to specify the name of the plugin in the [IDENTIFIED VIA](../alter-user/index#identified-viawith-authentication_plugin) clause while providing the password hash as the `USING` clause. For example:
```
ALTER USER username@hostname IDENTIFIED VIA ed25519
USING 'ZIgUREUg5PVgQ6LskhXmO+eZLS0nC8be6HPjYWR4YJY';
```
Client Authentication Plugins
-----------------------------
For clients that use the `libmysqlclient` or [MariaDB Connector/C](../mariadb-connector-c/index) libraries, MariaDB provides one client authentication plugin that is compatible with the `ed25519` authentication plugin:
* `client_ed25519`
When connecting with a [client or utility](../clients-utilities/index) to a server as a user account that authenticates with the `ed25519` authentication plugin, you may need to tell the client where to find the relevant client authentication plugin by specifying the `--plugin-dir` option. For example:
```
mysql --plugin-dir=/usr/local/mysql/lib64/mysql/plugin --user=alice
```
### `client_ed25519`
The `client_ed25519` client authentication plugin hashes and signs the password using the [Elliptic Curve Digital Signature Algorithm (ECDSA)](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) before sending it to the server.
Support in Client Libraries
---------------------------
### Using the Plugin with MariaDB Connector/C
[MariaDB Connector/C](../mariadb-connector-c/index) supports `ed25519` authentication using the [client authentication plugins](client-authentication-plugins) mentioned in the previous section since MariaDB Connector/C 3.1.0.
### Using the Plugin with MariaDB Connector/ODBC
[MariaDB Connector/ODBC](../mariadb-connector-odbc/index) supports `ed25519` authentication using the [client authentication plugins](client-authentication-plugins) mentioned in the previous section since MariaDB Connector/ODBC 3.1.2.
### Using the Plugin with MariaDB Connector/J
[MariaDB Connector/J](../mariadb-connector-j/index) supports `ed25519` authentication since MariaDB Connector/J 2.2.1.
### Using the Plugin with MariaDB Connector/Node.js
[MariaDB Connector/Node.js](../nodejs-connector/index) supports `ed25519` authentication since MariaDB Connector/Node.js 2.1.0.
### Using the Plugin with MySqlConnector for .NET
[MySqlConnector for ADO.NET](../mysqlconnector-for-adonet/index) supports `ed25519` authentication since MySqlConnector 0.56.0.
The connector implemented support for this authentication plugin in a separate [NuGet](https://docs.microsoft.com/en-us/nuget/what-is-nuget) package called [MySqlConnector.Authentication.Ed25519](https://www.nuget.org/packages/MySqlConnector.Authentication.Ed25519/). After the package is installed, your application must call `Ed25519AuthenticationPlugin.Install` to enable it.
Versions
--------
| Version | Status | Introduced |
| --- | --- | --- |
| 1.1 | Stable | [MariaDB 10.4.0](https://mariadb.com/kb/en/mariadb-1040-release-notes/) |
| 1.0 | Stable | [MariaDB 10.3.8](https://mariadb.com/kb/en/mariadb-1038-release-notes/), [MariaDB 10.2.17](https://mariadb.com/kb/en/mariadb-10217-release-notes/), [MariaDB 10.1.35](https://mariadb.com/kb/en/mariadb-10135-release-notes/) |
| 1.0 | Beta | [MariaDB 10.2.5](https://mariadb.com/kb/en/mariadb-1025-release-notes/), [MariaDB 10.1.22](https://mariadb.com/kb/en/mariadb-10122-release-notes/) |
Options
-------
### `ed25519`
* **Description:** Controls how the server should treat the plugin when the server starts up.
+ Valid values are:
- `OFF` - Disables the plugin without removing it from the [mysql.plugins](../mysqlplugin-table/index) table.
- `ON` - Enables the plugin. If the plugin cannot be initialized, then the server will still continue starting up, but the plugin will be disabled.
- `FORCE` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error.
- `FORCE_PLUS_PERMANENT` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error. In addition, the plugin cannot be uninstalled with [UNINSTALL SONAME](../uninstall-soname/index) or [UNINSTALL PLUGIN](../uninstall-plugin/index) while the server is running.
+ See [Plugin Overview: Configuring Plugin Activation at Server Startup](../plugin-overview/index#configuring-plugin-activation-at-server-startup) for more information.
* **Commandline:** `--ed25519=value`
* **Data Type:** `enumerated`
* **Default Value:** `ON`
* **Valid Values:** `OFF`, `ON`, `FORCE`, `FORCE_PLUS_PERMANENT`
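Like other plugin options, this can be set in a server option group in an option file. For example, a sketch that loads the plugin and prevents it from being uninstalled at runtime:

```
[mariadb]
...
plugin_load_add = auth_ed25519
ed25519 = FORCE_PLUS_PERMANENT
```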
InnoDB Row Formats
===================
| Title | Description |
| --- | --- |
| [InnoDB Row Formats Overview](../innodb-row-formats-overview/index) | InnoDB's row formats are REDUNDANT, COMPACT, DYNAMIC, and COMPRESSED. |
| [InnoDB REDUNDANT Row Format](../innodb-redundant-row-format/index) | The REDUNDANT row format is the original non-compacted row format. |
| [InnoDB COMPACT Row Format](../innodb-compact-row-format/index) | Similar to the REDUNDANT row format, but stores data in a more compact manner. |
| [InnoDB DYNAMIC Row Format](../innodb-dynamic-row-format/index) | Similar to the COMPACT row format, but can store even more data on overflow pages. |
| [InnoDB COMPRESSED Row Format](../innodb-compressed-row-format/index) | Similar to the DYNAMIC row format, but with support for compressed pages. |
| [Troubleshooting Row Size Too Large Errors with InnoDB](../troubleshooting-row-size-too-large-errors-with-innodb/index) | Fixing "Row size too large (> 8126). Changing some columns to TEXT or BLOB may help." |
Information Schema PARAMETERS Table
===================================
The [Information Schema](../information_schema/index) `PARAMETERS` table stores information about [stored procedures](../stored-procedures/index) and [stored functions](../stored-functions/index) parameters.
It contains the following columns:
| Column | Description |
| --- | --- |
| `SPECIFIC_CATALOG` | Always `def`. |
| `SPECIFIC_SCHEMA` | Database name containing the stored routine parameter. |
| `SPECIFIC_NAME` | Stored routine name. |
| `ORDINAL_POSITION` | Ordinal position of the parameter, starting at `1`. `0` for a function RETURNS clause. |
| `PARAMETER_MODE` | One of `IN`, `OUT`, `INOUT` or `NULL` for RETURNS. |
| `PARAMETER_NAME` | Name of the parameter, or `NULL` for RETURNS. |
| `DATA_TYPE` | The column's [data type](../data-types/index). |
| `CHARACTER_MAXIMUM_LENGTH` | Maximum length. |
| `CHARACTER_OCTET_LENGTH` | Same as the `CHARACTER_MAXIMUM_LENGTH` except for multi-byte [character sets](../data-types-character-sets-and-collations/index). |
| `NUMERIC_PRECISION` | For numeric types, the precision (number of significant digits) for the column. NULL if not a numeric field. |
| `NUMERIC_SCALE` | For numeric types, the scale (significant digits to the right of the decimal point). NULL if not a numeric field. |
| `DATETIME_PRECISION` | Fractional-seconds precision, or `NULL` if not a [time data type](../date-and-time-data-types/index). |
| `CHARACTER_SET_NAME` | [Character set](../data-types-character-sets-and-collations/index) if a non-binary [string data type](../string-data-types/index), otherwise `NULL`. |
| `COLLATION_NAME` | [Collation](../data-types-character-sets-and-collations/index) if a non-binary [string data type](../string-data-types/index), otherwise `NULL`. |
| `DTD_IDENTIFIER` | Description of the data type. |
| `ROUTINE_TYPE` | `PROCEDURE` or `FUNCTION`. |
Information from this table is similar to that found in the `param_list` column in the [mysql.proc](../mysqlproc-table/index) table, and the output of the `[SHOW CREATE PROCEDURE](../show-create-procedure/index)` and `[SHOW CREATE FUNCTION](../show-create-function/index)` statements.
To obtain information about the routine itself, you can query the [Information Schema ROUTINES table](../information-schema-routines-table/index).
Example
-------
```
SELECT * FROM information_schema.PARAMETERS
LIMIT 1 \G
********************** 1. row **********************
SPECIFIC_CATALOG: def
SPECIFIC_SCHEMA: accounts
SPECIFIC_NAME: user_counts
ORDINAL_POSITION: 1
PARAMETER_MODE: IN
PARAMETER_NAME: user_order
DATA_TYPE: varchar
CHARACTER_MAXIMUM_LENGTH: 255
CHARACTER_OCTET_LENGTH: 765
NUMERIC_PRECISION: NULL
NUMERIC_SCALE: NULL
DATETIME_PRECISION: NULL
CHARACTER_SET_NAME: utf8
COLLATION_NAME: utf8_general_ci
DTD_IDENTIFIER: varchar(255)
ROUTINE_TYPE: PROCEDURE
```
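For example, to list the parameters of a single routine in declaration order, a query along these lines can be used (a sketch reusing the hypothetical `accounts.user_counts` procedure from the example above):

```
SELECT ORDINAL_POSITION, PARAMETER_MODE, PARAMETER_NAME, DTD_IDENTIFIER
FROM information_schema.PARAMETERS
WHERE SPECIFIC_SCHEMA = 'accounts'
  AND SPECIFIC_NAME = 'user_counts'
ORDER BY ORDINAL_POSITION;
```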
Flashback
=========
**MariaDB starting with [10.2.4](https://mariadb.com/kb/en/mariadb-1024-release-notes/)**DML-only flashback was introduced in [MariaDB 10.2.4](https://mariadb.com/kb/en/mariadb-1024-release-notes/).
Flashback is a feature that allows instances, databases, or tables to be rolled back to an old snapshot.
Flashback is currently supported only over DML statements ([INSERT](../insert/index), [DELETE](../delete/index), [UPDATE](../update/index)). An upcoming version of MariaDB will add support for flashback over DDL statements ([DROP](../drop-table/index), [TRUNCATE](../truncate-table/index), [ALTER](../alter-table/index), etc.) by copying or moving the current table to a reserved and hidden database, and then copying or moving back when using flashback. See [MDEV-10571](https://jira.mariadb.org/browse/MDEV-10571).
Flashback is achieved in MariaDB Server using existing support for full image format binary logs ([binlog\_row\_image=FULL](../replication-and-binary-log-system-variables/index#binlog_row_image)), so it supports all engines.
The real work of Flashback is done by [mariadb-binlog / mysqlbinlog](../mysqlbinlog/index) with `--flashback`. This causes events to be translated: INSERT to DELETE, DELETE to INSERT, and for UPDATEs, the before and after images are swapped.
When executing `mariadb-binlog / mysqlbinlog` with `--flashback`, the Flashback events will be stored in memory. You should make sure your server has enough memory for this feature.
Arguments
---------
* [mariadb-binlog / mysqlbinlog](../mysqlbinlog/index) has the option `--flashback` or `-B` that will let it work in flashback mode.
* [mariadbd / mysqld](../mysqld-options/index) has the option [--flashback](../mysqld-options/index#-flashback) that enables the binary log and sets `binlog_format=ROW`. It is not mandatory to use this option if you have already enabled those options directly.
Do not use `-v` `-vv` options, as this adds verbose information to the binary log which can cause problems when importing. See [MDEV-12066](https://jira.mariadb.org/browse/MDEV-12066) and [MDEV-12067](https://jira.mariadb.org/browse/MDEV-12067).
Example
-------
With a table "mytable" in database "test", you can compare the output with `--flashback` and without.
```
mysqlbinlog /var/lib/mysql/mysql-bin.000001 -vv -d test -T mytable \
--start-datetime="2013-03-27 14:54:00" > review.sql
```
```
mysqlbinlog /var/lib/mysql/mysql-bin.000001 -vv -d test -T mytable \
--start-datetime="2013-03-27 14:54:00" --flashback > flashback.sql
```
If you know the exact position, `--start-position` can be used instead of `--start-datetime`.
Then, by importing the output file (`mysql < flashback.sql`), you can flash your database/table back to the specified time or position.
Common Use Case
---------------
A common use case for Flashback is the following scenario:
* You have one primary and two replicas, one started with `--flashback` (i.e. with binary logging enabled, using [binlog\_format=ROW](../replication-and-binary-log-system-variables/index#binlog_format), and [binlog\_row\_image=FULL](../replication-and-binary-log-system-variables/index#binlog_row_image)).
* Something goes wrong on the primary (like a wrong update or delete) and you would like to revert to a state of the database (or just a table) at a certain point in time.
* Remove the flashback-enabled replica from replication.
* Invoke [mariadb-binlog / mysqlbinlog](../mysqlbinlog/index) to find the exact log position of the first offending operation after the state you want to revert to.
* Run `mysqlbinlog --flashback --start-position=xyz | mysql` to pipe the output of `mariadb-binlog / mysqlbinlog` directly to the `mariadb / mysql` client, or save the output to a file and then direct the file to the command-line client.
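Put together, the last two steps might look like the following (a sketch; the log file name and start position are hypothetical placeholders for the values found with `mariadb-binlog / mysqlbinlog`):

```
# Generate the reversing ("undo") statements from the offending position onward
mysqlbinlog /var/lib/mysql/mysql-bin.000001 \
    --start-position=12345 -d test -T mytable --flashback > undo.sql
# Review undo.sql, then apply it
mysql -uroot -p test < undo.sql
```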
| programming_docs |
Benchmark Results
==================
This section is for the posting of benchmark results.
| Title | Description |
| --- | --- |
| [Threadpool Benchmarks](../threadpool-benchmarks/index) | Here are some benchmarks of some development threadpool code (the 5.5 thre... |
| [Sysbench Results](../sysbench-results/index) | Results from various Sysbench runs. The data is in OpenDocument Spreadsheet... |
| [sysbench v0.5 - Single Five Minute Runs on T500 Laptop](../1643/index) | MariaDB/MySQL sysbench benchmark comparison in % Number of threads |
| [sysbench v0.5 - Single Five Minute Runs on perro](../sysbench-v05-single-five-minute-runs-on-perro/index) | MariaDB/MySQL sysbench benchmark comparison in % Each test was run for 5 minutes. |
| [sysbench v0.5 - Single Five Minute Runs on work](../sysbench-v05-single-five-minute-runs-on-work/index) | MariaDB/MySQL sysbench benchmark comparison in % Each test was run for 5 minutes. |
| [sysbench v0.5 - Three Times Five Minutes Runs on work with 5.1.42](../1646/index) | MariaDB/MySQL sysbench benchmark comparison in % Each test was run for 5 minutes 3 times |
| [sysbench v0.5 - 3x Five Minute Runs on work with 5.2-wl86](../1647/index) | 3x Five Minute Runs on work with 5.2-wl86 key cache partitions on and off M... |
| [sysbench v0.5 - 3x Five Minute Runs on work with 5.1 vs. 5.2-wl86](../1648/index) | 3x Five Minute Runs on work with 5.1 vs. 5.2-wl86 key cache partitions off ... |
| [sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 a](../1649/index) | sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 key cache partitio... |
| [sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 b](../1650/index) | 3x 15 Minute Runs on perro with 5.2-wl86 key cache partitions off, 8, and 3... |
| [Select Random Ranges and Select Random Point](../select-random-ranges-and-select-random-point/index) | select\_random\_ranges (select 10 ranges with a delta as parameter) select\_ra... |
Choosing the Right Storage Engine
=================================
A high-level overview of the main reasons for choosing a particular storage engine:
Topic List
----------
### General Purpose
* [InnoDB](../innodb/index) is a good general transaction storage engine, and, from [MariaDB 10.2](../what-is-mariadb-102/index), the best choice in most cases. It is the default storage engine from [MariaDB 10.2](../what-is-mariadb-102/index). For earlier releases, XtraDB, a performance-enhanced fork of InnoDB, is usually preferred.
* [XtraDB](../xtradb/index) is the best choice in [MariaDB 10.1](../what-is-mariadb-101/index) and earlier in the majority of cases. It is a performance-enhanced fork of InnoDB and is MariaDB's default engine until [MariaDB 10.1](../what-is-mariadb-101/index).
* [Aria](../aria/index), MariaDB's more modern improvement on [MyISAM](../myisam/index), has a small footprint and allows for easy copying between systems.
* [MyISAM](../myisam/index) has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.
### Scaling, Partitioning
When you want to split your database load on several servers or optimize for scaling. We also suggest looking at [Galera](../galera/index), a synchronous multi-master cluster.
* [Spider](../spider/index) uses partitioning to provide data sharding through multiple servers.
* [ColumnStore](../columnstore/index) utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.
* The [MERGE](../merge/index) storage engine is a collection of identical [MyISAM](../myisam/index) tables that can be used as one. "Identical" means that all tables have identical column and index information.
* [TokuDB](../tokudb/index) is a transactional storage engine optimized for workloads that do not fit in memory, and it provides a good compression ratio. TokuDB has been deprecated by its upstream developers; it is disabled in [MariaDB 10.5](../what-is-mariadb-105/index) and removed in [MariaDB 10.6](../what-is-mariadb-106/index).
### Compression / Archive
* [MyRocks](../myrocks/index) enables greater compression than InnoDB, as well as less write amplification giving better endurance of flash storage and improving overall throughput.
* The [Archive](../archive/index) storage engine is, unsurprisingly, best used for archiving.
* [TokuDB](../tokudb/index) is a transactional storage engine optimized for workloads that do not fit in memory, and it provides a good compression ratio. TokuDB has been deprecated by its upstream developers; it is disabled in [MariaDB 10.5](../what-is-mariadb-105/index) and removed in [MariaDB 10.6](../what-is-mariadb-106/index).
### Connecting to Other Data Sources
When you want to use data not stored in a MariaDB database.
* [CONNECT](../connect/index) allows access to different kinds of text files and remote resources as if they were regular MariaDB tables.
* The [CSV](../csv/index) storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since [MariaDB 10.0](../what-is-mariadb-100/index), CONNECT is a better choice and is more flexibly able to read and write such files.
* [FederatedX](../federatedx/index) uses libmysql to talk to the data source, the data source being a remote RDBMS. Currently, since FederatedX only uses libmysql, it can only talk to another MySQL RDBMS.
* [CassandraSE](../cassandrase/index) is a storage engine allowing access to an older version of Apache Cassandra NoSQL DBMS. It was relatively experimental, is no longer being actively developed and has been removed in [MariaDB 10.6](../what-is-mariadb-106/index).
### Search Optimized
Search engines optimized for search.
* [SphinxSE](../sphinxse/index) is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).
* [Mroonga](../mroonga/index) provides fast CJK-ready full text searching using column store.
### Cache, Read-only
* [MEMORY](../memory-storage-engine/index) does not write data on-disk (all rows are lost on crash) and is best-used for read-only caches of data from other tables, or for temporary work areas. With the default [InnoDB](../innodb/index) and other storage engines having good caching, there is less need for this engine than in the past.
### Other Specialized Storage Engines
* [S3 Storage Engine](../s3-storage-engine/index) is a read-only storage engine that stores its data in Amazon S3.
* [Sequence](../sequence/index) allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.
* The [BLACKHOLE](../blackhole/index) storage engine accepts data but does not store it and always returns an empty result. This can be useful in [replication](../replication/index) environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.
* [OQGRAPH](../oqgraph/index) allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).
Alphabetical List
-----------------
* The [Archive](../archive/index) storage engine is, unsurprisingly, best used for archiving.
* [Aria](../aria/index), MariaDB's more modern improvement on MyISAM, has a small footprint and allows for easy copying between systems.
* The [BLACKHOLE](../blackhole/index) storage engine accepts data but does not store it and always returns an empty result. This can be useful in [replication](../replication/index) environments, for example, if you want to run complex filtering rules on a slave without incurring any overhead on a master.
* [CassandraSE](../cassandrase/index) is a storage engine allowing access to an older version of Apache Cassandra NoSQL DBMS. It was relatively experimental, is no longer being actively developed and has been removed in [MariaDB 10.6](../what-is-mariadb-106/index).
* [ColumnStore](../columnstore/index) utilizes a massively parallel distributed data architecture and is designed for big data scaling to process petabytes of data.
* [CONNECT](../connect/index) allows access to different kinds of text files and remote resources as if they were regular MariaDB tables.
* The [CSV](../csv-overview/index) storage engine can read and append to files stored in CSV (comma-separated-values) format. However, since [MariaDB 10.0](../what-is-mariadb-100/index), CONNECT is a better choice and is more flexibly able to read and write such files.
* [FederatedX](../federatedx/index) uses libmysql to talk to the data source, the data source being a remote RDBMS. Currently, since FederatedX only uses libmysql, it can only talk to another MySQL RDBMS.
* [InnoDB](../innodb/index) is a good general transaction storage engine, and, from [MariaDB 10.2](../what-is-mariadb-102/index), the best choice in most cases. It is the default storage engine from [MariaDB 10.2](../what-is-mariadb-102/index). For earlier releases, XtraDB, a performance-enhanced fork of InnoDB, is usually preferred.
* The [MERGE](../merge/index) storage engine is a collection of identical MyISAM tables that can be used as one. "Identical" means that all tables have identical column and index information.
* [MEMORY](../memory-storage-engine/index) does not write data on-disk (all rows are lost on crash) and is best-used for read-only caches of data from other tables, or for temporary work areas. With the default [InnoDB](../innodb/index) and other storage engines having good caching, there is less need for this engine than in the past.
* [Mroonga](../mroonga/index) provides fast CJK-ready full text searching using column store.
* [MyISAM](../myisam/index) has a small footprint and allows for easy copying between systems. MyISAM is MySQL's oldest storage engine. There is usually little reason to use it except for legacy purposes. Aria is MariaDB's more modern improvement.
* [MyRocks](../myrocks/index) enables greater compression than InnoDB, as well as less write amplification giving better endurance of flash storage and improving overall throughput.
* [OQGRAPH](../oqgraph/index) allows you to handle hierarchies (tree structures) and complex graphs (nodes having many connections in several directions).
* [S3 Storage Engine](../s3-storage-engine/index) is a read-only storage engine that stores its data in Amazon S3.
* [Sequence](../sequence/index) allows the creation of ascending or descending sequences of numbers (positive integers) with a given starting value, ending value and increment, creating virtual, ephemeral tables automatically when you need them.
* [SphinxSE](../sphinx-storage-engine/index) is used as a proxy to run statements on a remote Sphinx database server (mainly useful for advanced fulltext searches).
* [Spider](../spider/index) uses partitioning to provide data sharding through multiple servers.
* [TokuDB](../tokudb/index) is a transactional storage engine optimized for workloads that do not fit in memory, and it provides a good compression ratio. TokuDB has been deprecated by its upstream developers; it is disabled in [MariaDB 10.5](../what-is-mariadb-105/index) and removed in [MariaDB 10.6](../what-is-mariadb-106/index).
* [XtraDB](../xtradb/index) is the best choice in [MariaDB 10.1](../what-is-mariadb-101/index) and earlier in the majority of cases. It is a performance-enhanced fork of InnoDB and is MariaDB's default engine until [MariaDB 10.1](../what-is-mariadb-101/index).
mysql\_fix\_extensions
======================
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-fix-extensions` is a symlink to `mysql_fix_extensions`.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mysql_fix_extensions` is the symlink, and `mariadb-fix-extensions` the binary name.
`mysql_fix_extensions` converts the extensions for [MyISAM](../myisam/index) (or ISAM) table files to their canonical forms.
It looks for files with extensions matching any lettercase variant of `.frm`, `.myd`, `.myi`, `.isd`, and `.ism` and renames them to have extensions of `.frm`, `.MYD`, `.MYI`, `.ISD`, and `.ISM`, respectively. This can be useful after transferring the files from a system with case-insensitive file names (such as Windows) to a system with case-sensitive file names.
Invoke mysql\_fix\_extensions as follows, where data\_dir is the path name to the MariaDB data directory.
```
mysql_fix_extensions data_dir
```
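The renaming it performs can be sketched as follows. This is an illustrative Python sketch, not the actual utility; in particular, the recursive traversal of subdirectories is an assumption made here for the sake of the example.

```python
import os

# Canonical extensions for MyISAM (and old ISAM) table files.
# Any lettercase variant of a key is renamed to its value.
CANONICAL = {
    ".frm": ".frm",
    ".myd": ".MYD",
    ".myi": ".MYI",
    ".isd": ".ISD",
    ".ism": ".ISM",
}

def fix_extensions(data_dir):
    """Rename files whose extension is a lettercase variant of a
    known table-file extension to the canonical form, in the spirit
    of mysql_fix_extensions."""
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            base, ext = os.path.splitext(name)
            canonical = CANONICAL.get(ext.lower())
            if canonical is not None and ext != canonical:
                os.rename(os.path.join(root, name),
                          os.path.join(root, base + canonical))
```

For example, after copying a database directory from Windows, a file named `t1.Myd` would be renamed to `t1.MYD`, while a correctly named `t1.frm` would be left alone.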
Thread Groups in the Unix Implementation of the Thread Pool
===========================================================
This article does not apply to the thread pool implementation on Windows. On Windows, MariaDB uses a native thread pool created with the `[CreateThreadpool](https://docs.microsoft.com/en-us/windows/desktop/api/threadpoolapiset/nf-threadpoolapiset-createthreadpool)` API, which has its own methods to distribute threads between CPUs.
On Unix, the thread pool implementation uses objects called thread groups to divide up client connections into many independent sets of threads. The `[thread\_pool\_size](../thread-pool-system-status-variables/index#thread_pool_size)` system variable defines the number of thread groups on a system. Generally speaking, the goal of the thread group implementation is to have one running thread on each CPU on the system at a time. Therefore, the default value of the `[thread\_pool\_size](../thread-pool-system-status-variables/index#thread_pool_size)` system variable is auto-sized to the number of CPUs on the system.
When setting the `[thread\_pool\_size](../thread-pool-system-status-variables/index#thread_pool_size)` system variable's value at system startup, the maximum value is `100000`. However, it is not a good idea to set it that high. When setting its value dynamically, the maximum value is either `128` or the value that was set at system startup, whichever is higher. It can be changed dynamically with `[SET GLOBAL](../set/index#global-session)`. For example:
```
SET GLOBAL thread_pool_size=32;
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
..
thread_handling=pool-of-threads
thread_pool_size=32
```
If you do not want MariaDB to use all CPUs on the system for some reason, then you can set it to a lower value than the number of CPUs. For example, this would make sense if the MariaDB Server process is limited to certain CPUs with the `[taskset](https://linux.die.net/man/1/taskset)` utility on Linux.
If you set the value to the number of CPUs and if you find that the CPUs are still underutilized, then try increasing the value.
The `[thread\_pool\_size](../thread-pool-system-status-variables/index#thread_pool_size)` system variable tends to have the most visible performance effect. It is roughly equivalent to the number of threads that can run at the same time. In this case, run means use CPU, rather than sleep or wait. If a client connection needs to sleep or wait for some reason, then it wakes up another client connection in the thread group before it does so.
One reason that CPU underutilization may occur in rare cases is that the thread pool is not always informed when a thread is going to wait. For example, some waits, such as a page fault or a miss in the OS buffer cache, cannot be detected by MariaDB. Prior to [MariaDB 10.0](../what-is-mariadb-100/index), network I/O related waits could also be missed.
Distributing Client Connections Between Thread Groups
-----------------------------------------------------
When a new client connection is created, its thread group is determined using the following calculation:
```
thread_group_id = connection_id % thread_pool_size
```
The `connection_id` value in the above calculation is the same monotonically increasing number that you can use to identify connections in `[SHOW PROCESSLIST](../show-processlist/index)` output or the `[information\_schema.PROCESSLIST](../information-schema-processlist-table/index)` table.
This calculation should assign client connections to each thread group in a round-robin manner. In general, this should result in an even distribution of client connections among thread groups.
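As a concrete illustration of the calculation above, the mapping can be sketched in a few lines of Python (the function name here is illustrative, not server code):

```python
def thread_group_id(connection_id, thread_pool_size):
    """Map a connection to a thread group by simple modulo
    arithmetic, as described in the formula above."""
    return connection_id % thread_pool_size

# With 4 thread groups, connection IDs 1..8 cycle through the groups.
assignments = [thread_group_id(cid, 4) for cid in range(1, 9)]
# assignments is [1, 2, 3, 0, 1, 2, 3, 0]
```

Because connection IDs increase monotonically, successive connections land in successive groups, which is what produces the round-robin distribution.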
Types of Threads
----------------
### Thread Group Threads
Thread groups have two different kinds of threads: a **listener thread** and **worker threads**.
* A thread group's **worker threads** actually perform work on behalf of client connections. A thread group can have many **worker threads**, but usually, only one will be actively running at a time. This is not always the case. For example, the thread group can become *oversubscribed* if the thread pool's **timer thread** detects that the thread group is *stalled*. This is explained more in the sections below.
* A thread group's **listener thread** listens for I/O events and distributes work to the **worker threads**. If it detects that there is a request that needs to be worked on, then it can wake up a sleeping **worker thread** in the thread group, if any exist. If the **listener thread** is the only thread in the thread group, then it can also create a new **worker thread**. If there is only one request to handle, and if the `[thread\_pool\_dedicated\_listener](../thread-pool-system-status-variables/index#thread_pool_dedicated_listener)` system variable is not enabled, then the **listener thread** can also become a **worker thread** and handle the request itself. This helps decrease the overhead that may be introduced by excessively waking up sleeping **worker threads** and excessively creating new **worker threads**.
### Global Threads
The thread pool has one global thread: a **timer thread**. The **timer thread** performs tasks, such as:
* Checks each thread group for stalls.
* Ensures that each thread group has a **listener thread**.
Thread Creation
---------------
A new thread is created in a thread group in the scenarios listed below.
In all of the scenarios below, the thread pool implementation prefers to wake up a sleeping **worker thread** that already exists in the thread group, rather than to create a new thread.
### Worker Thread Creation by Listener Thread
A thread group's **listener thread** can create a new **worker thread** when it has more client connection requests to distribute, but no pre-existing **worker threads** are available to work on the requests. This can help to ensure that the thread group always has enough threads to keep one **worker thread** active at a time.
A thread group's **listener thread** creates a new **worker thread** if all of the following conditions are met:
* The **listener thread** receives a client connection request that needs to be worked on.
* There are more client connection requests in the thread group's work queue that the **listener thread** still needs to distribute to **worker threads**, so the **listener thread** should not become a **worker thread**.
* There are no active **worker threads** in the thread group.
* There are no sleeping **worker threads** in the thread group that the **listener thread** can wake up.
* And one of the following conditions is also met:
+ The entire thread pool has fewer than `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)`.
+ There are fewer than two threads in the thread group. This is to guarantee that each thread group can have at least two threads, even if `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)` has already been reached or exceeded.
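The conditions above can be summarized as a single predicate. This is a hedged sketch only; all parameter names are illustrative, not the server's internal identifiers.

```python
def listener_should_create_worker(queue_nonempty, active_workers,
                                  sleeping_workers, pool_threads,
                                  group_threads, max_threads):
    """Sketch of the listener thread's decision described above.
    Parameter names are illustrative, not MariaDB internals."""
    return (queue_nonempty             # more queued requests to distribute
            and active_workers == 0    # no worker currently running
            and sleeping_workers == 0  # none available to wake instead
            # under the pool-wide cap, or the two-thread-per-group minimum
            and (pool_threads < max_threads or group_threads < 2))
```

Note how the final clause encodes the guarantee that a thread group may always grow to two threads, even when `thread_pool_max_threads` has been reached.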
### Thread Creation by Worker Threads during Waits
A thread group's **worker thread** can create a new **worker thread** when the thread has to wait on something, and the thread group has more client connection requests queued, but no pre-existing **worker threads** are available to work on them. This can help to ensure that the thread group always has enough threads to keep one **worker thread** active at a time. For most workloads, this tends to be the primary mechanism that creates new **worker threads**.
A thread group's **worker thread** creates a new thread if all of the following conditions are met:
* The **worker thread** has to wait on some request. For example, it might be waiting on disk I/O, or it might be waiting on a lock, or it might just be waiting for a query that called the `[SLEEP()](../sleep/index)` function to finish.
* There are no active **worker threads** in the thread group.
* There are no sleeping **worker threads** in the thread group that the **worker thread** can wake up.
* And one of the following conditions is also met:
+ The entire thread pool has fewer than `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)`.
+ There are fewer than two threads in the thread group. This is to guarantee that each thread group can have at least two threads, even if `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)` has already been reached or exceeded.
* And one of the following conditions is also met:
+ There are more client connection requests in the thread group's work queue that the **listener thread** still needs to distribute to **worker threads**. In this case, the new thread is intended to be a **worker thread**.
+ There is currently no **listener thread** in the thread group. For example, if the `[thread\_pool\_dedicated\_listener](../thread-pool-system-status-variables/index#thread_pool_dedicated_listener)` system variable is not enabled, then the thread group's **listener thread** can become a **worker thread** so that it can handle a client connection request. In this case, the new thread can become the thread group's **listener thread**.
### Listener Thread Creation by Timer Thread
The thread pool's **timer thread** can create a new **listener thread** for a thread group when the thread group has more client connection requests that need to be distributed, but the thread group does not currently have a **listener thread** to distribute them. This can help to ensure that the thread group does not miss client connection requests because it has no **listener thread**.
The thread pool's **timer thread** creates a new **listener thread** for a thread group if all of the following conditions are met:
* The thread group has not handled any I/O events since the last check by the timer thread.
* There is currently no **listener thread** in the thread group. For example, if the `[thread\_pool\_dedicated\_listener](../thread-pool-system-status-variables/index#thread_pool_dedicated_listener)` system variable is not enabled, then the thread group's **listener thread** can become a **worker thread** so that it can handle a client connection request. In this case, the new thread can become the thread group's **listener thread**.
* There are no sleeping **worker threads** in the thread group that the **timer thread** can wake up.
* And one of the following conditions is also met:
+ The entire thread pool has fewer than `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)`.
+ There are fewer than two threads in the thread group. This is to guarantee that each thread group can have at least two threads, even if `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)` has already been reached or exceeded.
* If the thread group already has active **worker threads**, then the following condition also needs to be met:
+ A **worker thread** has not been created for the thread group within the *throttling interval*.
### Worker Thread Creation by Timer Thread during Stalls
The thread pool's **timer thread** can create a new **worker thread** for a thread group when the thread group is stalled. This helps ensure that a long-running query cannot monopolize its thread group.
The thread pool's **timer thread** creates a new **worker thread** for a thread group if all of the following conditions are met:
* The **timer thread** thinks that the thread group is stalled. This means that the following conditions have been met:
+ There are more client connection requests in the thread group's work queue that the **listener thread** still needs to distribute to **worker threads**.
+ No client connection requests have been allowed to be dequeued to run since the last stall check by the **timer thread**.
* There are no sleeping **worker threads** in the thread group that the **timer thread** can wake up.
* And one of the following conditions is also met:
+ The entire thread pool has fewer than `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)`.
+ There are fewer than two threads in the thread group. This is to guarantee that each thread group can have at least two threads, even if `[thread\_pool\_max\_threads](../thread-pool-system-status-variables/index#thread_pool_max_threads)` has already been reached or exceeded.
* A **worker thread** has not been created for the thread group within the *throttling interval*.
### Thread Creation Throttling
In some of the scenarios listed above, a thread is only created within a thread group if no new threads have been created for the thread group within the *throttling interval*. The throttling interval depends on the number of threads that are already in the thread group.
**MariaDB starting with [10.5](../what-is-mariadb-105/index)**In [MariaDB 10.5](../what-is-mariadb-105/index) and later, thread creation is not throttled until a thread group has more than 1 + `[thread\_pool\_oversubscribe](../thread-pool-system-status-variables/index#thread_pool_oversubscribe)` threads:
| Number of Threads in Thread Group | Throttling Interval (milliseconds) |
| --- | --- |
| 0-(1 + `[thread\_pool\_oversubscribe](../thread-pool-system-status-variables/index#thread_pool_oversubscribe)`) | 0 |
| 4-7 | 50 \* `THROTTLING_FACTOR` |
| 8-15 | 100 \* `THROTTLING_FACTOR` |
| 16-65536 | 200 \* `THROTTLING_FACTOR` |
`THROTTLING_FACTOR = [thread\_pool\_stall\_limit](../thread-pool-system-status-variables/index#thread_pool_stall_limit) / MAX(500, [thread\_pool\_stall\_limit](../thread-pool-system-status-variables/index#thread_pool_stall_limit))`
**MariaDB until [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and before, thread creation is throttled when a thread group has more than 3 threads:
| Number of Threads in Thread Group | Throttling Interval (milliseconds) |
| --- | --- |
| 0-3 | 0 |
| 4-7 | 50 |
| 8-15 | 100 |
| 16-65536 | 200 |
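The [MariaDB 10.5](../what-is-mariadb-105/index) schedule can be sketched as follows. This is an illustrative sketch, not the server's source: it assumes the defaults `thread_pool_stall_limit=500` and `thread_pool_oversubscribe=3`, that the largest bucket uses 200 ms (the same value as the pre-10.5 schedule), and it uses plain floating-point division for the factor.

```python
def throttling_interval_ms(group_threads, stall_limit=500, oversubscribe=3):
    """Sketch of the 10.5 throttling schedule shown above.
    Bucket boundaries and the 200 ms top bucket are assumptions
    based on the tables in this article."""
    # THROTTLING_FACTOR = stall_limit / MAX(500, stall_limit)
    factor = stall_limit / max(500, stall_limit)
    if group_threads <= 1 + oversubscribe:
        return 0                   # no throttling for small groups
    if group_threads <= 7:
        return 50 * factor
    if group_threads <= 15:
        return 100 * factor
    return 200 * factor
```

With the defaults, the factor is 1, so the intervals are simply 0, 50, 100, and 200 ms; lowering `thread_pool_stall_limit` below 500 shrinks the factor and with it every non-zero interval.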
Thread Group Stalls
-------------------
The thread pool has a feature that allows it to detect if a client connection is executing a long-running query that may be monopolizing its thread group. If a client connection were to monopolize its thread group, then that could prevent other client connections in the thread group from running their queries. In other words, the thread group would appear to be *stalled*.
This stall detection feature is implemented by creating a **timer thread** that periodically checks if any of the thread groups are stalled. There is only a single **timer thread** for the entire thread pool. The `[thread\_pool\_stall\_limit](../thread-pool-system-status-variables/index#thread_pool_stall_limit)` system variable defines the number of milliseconds between each stall check performed by the timer thread. The default value is `500`. It can be changed dynamically with `[SET GLOBAL](../set/index#global-session)`. For example:
```
SET GLOBAL thread_pool_stall_limit=300;
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
..
thread_handling=pool-of-threads
thread_pool_size=32
thread_pool_stall_limit=300
```
The **timer thread** considers a thread group to be stalled if the following is true:
* There are more client connection requests in the thread group's work queue that the **listener thread** still needs to distribute to **worker threads**.
* No client connection requests have been allowed to be dequeued to run since the last stall check by the **timer thread**.
This indicates that one or more client connections currently using the active **worker threads** may be monopolizing the thread group, preventing the queued client connections from performing work. When the **timer thread** detects that a thread group is stalled, it wakes up a sleeping **worker thread** in the thread group, if one is available. If there isn't one, then it creates a new **worker thread** in the thread group. This temporarily allows several client connections in the thread group to run in parallel.
The `[thread\_pool\_stall\_limit](../thread-pool-system-status-variables/index#thread_pool_stall_limit)` system variable essentially defines the limit for what a "fast query" is. If a query takes longer than `[thread\_pool\_stall\_limit](../thread-pool-system-status-variables/index#thread_pool_stall_limit)`, then the thread pool is likely to think that it is too slow, and it will either wake up a sleeping worker thread or create a new worker thread to let another client connection in the thread group run a query in parallel.
In general, changing the value of the `[thread\_pool\_stall\_limit](../thread-pool-system-status-variables/index#thread_pool_stall_limit)` system variable has the following effect:
* Setting it to **higher** values can help avoid starting too many parallel threads if you expect a lot of client connections to execute long-running queries.
* Setting it to **lower** values can help prevent deadlocks.
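The stall test itself reduces to a simple predicate (an illustrative sketch; the parameter names are not the server's internals):

```python
def group_is_stalled(queued_requests, dequeued_since_last_check):
    """The timer thread's stall test described above: work is
    queued, but nothing has been dequeued since the previous
    check by the timer thread."""
    return queued_requests > 0 and dequeued_since_last_check == 0
```

An empty queue never counts as a stall, no matter how long the running query takes, because no other connection in the group is being held back.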
### Thread Group Oversubscription
If the **timer thread** were to detect a stall in a thread group, then it would either wake up a sleeping **worker thread** or create a new **worker thread** in that thread group. At that point, the thread group would have multiple active **worker threads**. In other words, the thread group would be *oversubscribed*.
You might expect that the thread pool would shutdown one of the **worker threads** when the stalled client connection finished what it was doing, so that the thread group would only have one active **worker thread** again. However, this does not always happen. Once a thread group is oversubscribed, the `[thread\_pool\_oversubscribe](../thread-pool-system-status-variables/index#thread_pool_oversubscribe)` system variable defines the upper limit for when **worker threads** start shutting down after they finish work for client connections. The default value is `3`. It can be changed dynamically with `[SET GLOBAL](../set/index#global-session)`. For example:
```
SET GLOBAL thread_pool_oversubscribe=10;
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
..
thread_handling=pool-of-threads
thread_pool_size=32
thread_pool_stall_limit=300
thread_pool_oversubscribe=10
```
To clarify, the `[thread\_pool\_oversubscribe](../thread-pool-system-status-variables/index#thread_pool_oversubscribe)` system variable does not play any part in the creation of new **worker threads**. The `[thread\_pool\_oversubscribe](../thread-pool-system-status-variables/index#thread_pool_oversubscribe)` system variable is only used to determine how many **worker threads** should remain active in a thread group, once a thread group is already oversubscribed due to stalls.
In general, the default value of `3` should be adequate for most users. Most users should not need to change the value of the `[thread\_pool\_oversubscribe](../thread-pool-system-status-variables/index#thread_pool_oversubscribe)` system variable.
Buildbot Setup for Virtual Machines - Debian 4 i386
===================================================
Create the VM:
```
cd /kvm/vms
qemu-img create -f qcow2 vm-debian4-i386-serial.qcow2 8G
kvm -m 2047 -hda /kvm/vms/vm-debian4-i386-serial.qcow2 -cdrom /kvm/debian-40r8-i386-netinst.iso -redir 'tcp:2241::22' -boot d -smp 2 -cpu qemu32,-nx -net nic,model=e1000 -net user
```
Serial console and account setup
--------------------------------
From base install, setup for serial port, and setup accounts for passwordless ssh login and sudo:
```
kvm -m 2047 -hda /kvm/vms/vm-debian4-i386-serial.qcow2 -cdrom /kvm/debian-40r8-i386-netinst.iso -redir 'tcp:2241::22' -boot c -smp 2 -cpu qemu32,-nx -net nic,model=e1000 -net user
su
apt-get install sudo openssh-server
VISUAL=vi visudo
# Add at the end: %sudo ALL=NOPASSWD: ALL
# Add account <USER> to group sudo
# Copy in public ssh key.
# Add in /etc/inittab:
S0:2345:respawn:/sbin/getty -L ttyS0 19200 vt100
```
Add to /boot/grub/menu.lst:
```
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=3 serial console
```
Also add to the kernel line in menu.lst:
```
console=tty0 console=ttyS0,115200n8
```
Do these steps:
```
# Add user buildbot, with disabled password. Add as sudo, and add ssh key.
sudo /usr/sbin/adduser --disabled-password buildbot
sudo /usr/sbin/adduser buildbot sudo
sudo su - buildbot
mkdir .ssh
# Add all necessary keys.
cat >.ssh/authorized_keys
chmod -R go-rwx .ssh
```
VM for build
------------
```
qemu-img create -b vm-debian4-i386-serial.qcow2 -f qcow2 vm-debian4-i386-build.qcow2
kvm -m 2047 -hda /kvm/vms/vm-debian4-i386-build.qcow2 -cdrom /kvm/debian-40r8-i386-netinst.iso -redir 'tcp:2241::22' -boot c -smp 2 -cpu qemu32,-nx -net nic,model=e1000 -net user -nographic
sudo apt-get build-dep mysql-server-5.0
# Some latex packages fail to install because they complain that the
# source is more than 5 years old! I solved by setting back the clock a
# couple of years temporarily ...
sudo apt-get install devscripts doxygen texlive-latex-base gs lsb-release fakeroot libevent-dev libssl-dev zlib1g-dev libreadline5-dev
```
VM for install testing
----------------------
```
qemu-img create -b vm-debian4-i386-serial.qcow2 -f qcow2 vm-debian4-i386-install.qcow2
kvm -m 2047 -hda /kvm/vms/vm-debian4-i386-install.qcow2 -cdrom /kvm/debian-40r8-i386-netinst.iso -redir 'tcp:2241::22' -boot c -smp 2 -cpu qemu32,-nx -net nic,model=e1000 -net user -nographic
```
See the [General Principles](../buildbot-setup-for-virtual-machines-general-principles/index) article for how to make the '`my.seed`' file.
```
# No packages mostly!
sudo apt-get install debconf-utils
cat >>/etc/apt/sources.list <<END
deb file:///home/buildbot/buildbot/debs binary/
deb-src file:///home/buildbot/buildbot/debs source/
END
sudo debconf-set-selections /tmp/my.seed
```
VM for upgrade testing
----------------------
```
qemu-img create -b vm-debian4-i386-install.qcow2 -f qcow2 vm-debian4-i386-upgrade.qcow2
kvm -m 2047 -hda /kvm/vms/vm-debian4-i386-upgrade.qcow2 -cdrom /kvm/debian-40r8-i386-netinst.iso -redir 'tcp:2241::22' -boot c -smp 2 -cpu qemu64 -net nic,model=e1000 -net user -nographic
sudo apt-get install mysql-server-5.0
mysql -uroot -prootpass -e "create database mytest; use mytest; create table t(a int primary key); insert into t values (1); select * from t"
```
mysql-stress-test
=================
*mysql-stress-test.pl* is a Perl script that performs stress-testing of the MariaDB server. It requires a version of Perl that has been built with threads support.
Syntax
------
```
mysql-stress-test.pl [options]
```
Options
-------
| Option | Description |
| --- | --- |
| `--help` | Display a help message and exit. |
| `--abort-on-error=N` | Causes the program to abort if an error with severity less than or equal to N was encountered. Set to 1 to abort on any error. |
| `--check-tests-file` | Periodically check the file that lists the tests to be run. If it has been modified, reread the file. This can be useful if you update the list of tests to be run during a stress test. |
| `--cleanup` | Force cleanup of the working directory. |
| `--log-error-details` | Log error details in the global [error log](../error-log/index) file. |
| `--loop-count=N` | In sequential test mode, the number of loops to execute before exiting. |
| `--mysqltest=path` | The path name to the [mysqltest](../mysqltest/index) program. |
| `--server-database=db_name` | The database to use for the tests. The default is test. |
| `--server-host=host_name` | The host name of the local host to use for making a TCP/IP connection to the local server. By default, the connection is made to localhost using a Unix socket file. |
| `--server-logs-dir=path` | This option is required. path is the directory where all client session logs will be stored. Usually this is the shared directory that is associated with the server used for testing. |
| `--server-password=password` | The password to use when connecting to the server. |
| `--server-port=port_num` | The TCP/IP port number to use for connecting to the server. The default is 3306. |
| `--server-socket=file_name` | For connections to localhost, the Unix socket file to use, or, on Windows, the name of the named pipe to use. The default is `/tmp/mysql.sock`. |
| `--server-user=user_name` | The MariaDB user name to use when connecting to the server. The default is root. |
| `--sleep-time=N` | The delay in seconds between test executions. |
| `--stress-basedir=path` | This option is required and specifies the working directory for the test run. It is used as the temporary location for result tracking during testing. |
| `--stress-datadir=path` | The directory of data files to be used during testing. The default location is the data directory under the location given by the `--stress-suite-basedir` option. |
| `--stress-init-file[=file_name]` | *file\_name* is the location of the file that contains the list of tests to be run once to initialize the database for the testing. If missing, the default file is `stress_init.txt` in the test suite directory. |
| `--stress-mode=mode` | This option indicates the test order in stress-test mode. The mode value is either `random` to select tests in random order or `seq` to run tests in each thread in the order specified in the test list file. The default mode is `random`. |
| `--stress-suite-basedir=path` | This option is required and specifies the directory that has the *t* and *r* subdirectories containing the test case and result files. This directory is also the default location of the `stress-test.txt` file that contains the list of tests. (A different location can be specified with the `--stress-tests-file` option.) |
| `--stress-tests-file[=file_name]` | Use this option to run the stress tests. *file\_name* is the location of the file that contains the list of tests. If omitted, the default file is `stress-test.txt` in the stress suite directory. (See `--stress-suite-basedir`.) |
| `--suite=suite_name` | Run the named test suite. The default name is `main` (the regular test suite located in the `mysql-test` directory). |
| `--test-count=N` | The number of tests to execute before exiting. |
| `--test-duration=N` | The duration of stress testing in seconds. |
| `--threads=N` | The number of threads. The default is 1. |
| `--verbose` | Verbose mode. Print more information about what the program does. |
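As an illustration, a run combining the three required options with a few of the others might look like the following (all paths here are hypothetical placeholders):

```
perl mysql-stress-test.pl \
  --stress-suite-basedir=/path/to/mysql-test \
  --stress-basedir=/tmp/stress-run \
  --server-logs-dir=/tmp/stress-logs \
  --threads=4 \
  --test-duration=300 \
  --abort-on-error=1
```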
Error: symbol mysql\_get\_server\_name, version libmysqlclient\_16 not defined
==============================================================================
If you see the error message:
```
symbol mysql_get_server_name, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
```
...then you are probably trying to use the mysql command-line client from MariaDB with libmysqlclient.so from MySQL.
The symbol `mysql_get_server_name()` is present in the MariaDB source tree but not in the MySQL tree.
If you have both the MariaDB client package and the MySQL client packages installed, this error will happen if your system finds the MySQL version of `libmysqlclient.so` first.
To figure out which library is being linked in dynamically (i.e., potentially the wrong one), use the `ldd` tool:
```
ldd $(which mysql) | grep mysql
```
or
```
ldd /path/to/the/binary | grep mysql
```
For example:
```
me@mybox:~$ ldd $(which mysql)|grep mysql
libmysqlclient.so.16 => /usr/lib/libmysqlclient.so.16 (0xb74df000)
```
You can then use your package manager's tools to find out which package the library belongs to.
On CentOS the command to find out which package installed a specific file is:
```
rpm -qf /path/to/file
```
On Debian-based systems, the command is:
```
dpkg -S /path/to/file
```
Here's an example of locating the library and finding out which package it belongs to on an Ubuntu system:
```
me@mybox:~$ ldd $(which mysql)|grep mysql
libmysqlclient.so.16 => /usr/lib/libmysqlclient.so.16 (0xb75f8000)
me@mybox:~$ dpkg -S /usr/lib/libmysqlclient.so.16
libmariadbclient16: /usr/lib/libmysqlclient.so.16
```
The above shows that the mysql command-line client is using the library `/usr/lib/libmysqlclient.so.16` and that that library is part of the `libmariadbclient16` Ubuntu package. Unsurprisingly, the mysql command-line client works perfectly on this system.
If the answer that came back had been something other than a MariaDB package, then it is likely there would have been issues with running the MariaDB mysql client application.
If the library that the system tries to use is not from a MariaDB package, the remedy is to remove the offending package (and possibly install or re-install the correct package) so that the correct library can be used.
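On a Debian-based system, the remedy described above might look like the following sketch (the package names are examples; use whatever your package manager actually reported):

```
# Confirm which package owns the offending library
dpkg -S /usr/lib/libmysqlclient.so.16
# If it belongs to a MySQL package, replace it with the MariaDB client library
sudo apt-get remove libmysqlclient16
sudo apt-get install libmariadbclient16
```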
ENCODE
======
Syntax
------
```
ENCODE(str,pass_str)
```
Description
-----------
ENCODE is not considered cryptographically secure, and should not be used for password encryption.
Encrypt `str` using `pass_str` as the password. To decrypt the result, use `[DECODE()](../decode/index)`.
The result is a binary string of the same length as `str`.
The strength of the encryption is based on how good the random generator is.
It is not recommended to rely on the encryption performed by the ENCODE function. Using a salt value (changed when a password is updated) will improve matters somewhat, but for storing passwords, consider a more cryptographically secure function, such as [SHA2()](../sha2/index).
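For password storage, a salted hash computed with a function such as [SHA2()](../sha2/index) is a better choice. A minimal sketch, with placeholder salt and password values:

```
SELECT SHA2(CONCAT('random_salt', 'password'), 512);
```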
Examples
--------
```
ENCODE('not so secret text', CONCAT('random_salt','password'))
```
Information Schema THREAD\_POOL\_GROUPS Table
=============================================
**MariaDB starting with [10.5](../what-is-mariadb-105/index)**The [Information Schema](../information_schema/index) `THREAD_POOL_GROUPS` table was introduced in [MariaDB 10.5.0](https://mariadb.com/kb/en/mariadb-1050-release-notes/).
The table provides information about [thread pool](../thread-pool-in-mariadb/index) groups, and contains the following columns:
| Column | Description |
| --- | --- |
| `GROUP_ID` | The thread group id. |
| `CONNECTIONS` | Number of client connections assigned to the group. |
| `THREADS` | Total number of threads in the group. |
| `ACTIVE_THREADS` | Number of threads in the group that are currently executing. |
| `STANDBY_THREADS` | Number of threads in the group that are waiting on standby. |
| `QUEUE_LENGTH` | Number of items in the group's work queue. |
| `HAS_LISTENER` | Whether the group currently has a listener thread. |
| `IS_STALLED` | Whether the group is currently considered stalled. |
Setting [thread\_pool\_dedicated\_listener](../thread-pool-system-status-variables/index#thread_pool_dedicated_listener) will give each group its own dedicated listener, and the listener thread will not pick up work items. As a result, the actual queue size in the table will be more exact, since IO requests are immediately dequeued from poll, without delay.
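For example, to inspect the current state of all thread groups:

```
SELECT * FROM information_schema.THREAD_POOL_GROUPS;
```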
DDL statements that differ for ColumnStore
==========================================
In most cases, a ColumnStore table works just as any other MariaDB table. There are however a few differences.
The following table lists the data definition statements (DDL) that differ from normal MariaDB [DDL](../data-definition/index) when used on ColumnStore tables.
| DDL | Difference |
| --- | --- |
| [DROP TABLE](../drop-table/index) | ColumnStore supports [DROP TABLE ... RESTRICT](../columnstore-drop-table/index), which only drops the table in the front end. |
| [RENAME TABLE](../rename-table/index) | ColumnStore doesn't allow one to rename a table between databases. |
| [CREATE TABLE](../create-table/index) | ColumnStore doesn't use indexes, partitions, or many other table and column options. See [ColumnStore Specific Syntax](../mariadb/columnstore-create-table/index). |
| [CREATE INDEX](../create-index/index) | ColumnStore doesn't use indexes. Hence an index may not be created on a table defined with `ENGINE=ColumnStore`. |
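For example, a minimal ColumnStore table is created without any indexes or partitioning options (the table and column names are illustrative):

```
CREATE TABLE orders (
  id INT,
  amount DECIMAL(10,2)
) ENGINE=ColumnStore;
```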
SQL Diagnostic Manager & SQLyog
===============================
SQL Diagnostic Manager (<https://www.idera.com/productssolutions/sql-diagnostic-manager-for-mysql>) is a monitoring tool that gives DBAs real-time insights for optimizing the performance of MariaDB servers.
Key features:
1. Agentless monitoring
2. Fully customizable
3. Affordable

Webyog's SQL DM is fully compatible with Amazon Aurora databases, and Webyog is an Amazon launch partner. Features specifically for RDS and Aurora instances are also available. SQL Diagnostic Manager is browser-based and is a proud MariaDB Monitor and Advisor.
SQLyog Ultimate (<http://www.webyog.com/en>) is a MySQL administration tool for DBAs, developers, and database architects. It enables database developers, administrators, and architects to visually compare, optimize, and document schemas.
Key features:
1. Automatically synchronize data
2. Visually compare data
3. Import external data

SQLyog runs on Windows and is a graphical MariaDB manager and admin tool, combining the features of MySQL Administrator, phpMyAdmin and other MariaDB front ends and GUI tools.
CASE Statement
==============
Syntax
------
```
CASE case_value
WHEN when_value THEN statement_list
[WHEN when_value THEN statement_list] ...
[ELSE statement_list]
END CASE
```
Or:
```
CASE
WHEN search_condition THEN statement_list
[WHEN search_condition THEN statement_list] ...
[ELSE statement_list]
END CASE
```
Description
-----------
The text on this page describes the `CASE` statement for [stored programs](../stored-programs-and-views/index). See the [CASE OPERATOR](../case-operator/index) for details on the CASE operator outside of [stored programs](../stored-programs-and-views/index).
The `CASE` statement for [stored programs](../stored-programs-and-views/index) implements a complex conditional construct. If a `search_condition` evaluates to true, the corresponding SQL statement list is executed. If no search condition matches, the statement list in the `ELSE` clause is executed. Each `statement_list` consists of one or more statements.
If no `when_value` or `search_condition` matches the value tested and the `CASE` statement contains no `ELSE` clause, a *Case not found for CASE statement* error results.
An empty `statement_list` is not allowed. To handle situations where no value is matched by any `WHEN` clause, use an `ELSE` clause containing an empty [BEGIN ... END](../begin-end/index) block, as shown in this example:
```
DELIMITER |
CREATE PROCEDURE p()
BEGIN
DECLARE v INT DEFAULT 1;
CASE v
WHEN 2 THEN SELECT v;
WHEN 3 THEN SELECT 0;
ELSE BEGIN END;
END CASE;
END;
|
```
The indentation used here in the `ELSE` clause is for purposes of clarity only, and is not otherwise significant. See [Delimiters in the mysql client](../delimiters-in-the-mysql-client/index) for more on the use of the delimiter command.
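The second, searched form of the statement can be sketched in the same style (the procedure name and conditions are illustrative):

```
DELIMITER |
CREATE PROCEDURE p2()
BEGIN
  DECLARE v INT DEFAULT 1;
  CASE
    WHEN v = 2 THEN SELECT v;
    WHEN v > 2 THEN SELECT 0;
    ELSE SELECT 1;
  END CASE;
END;
|
```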
**Note:** The syntax of the `CASE` statement used inside stored programs differs slightly from that of the SQL CASE expression described in [CASE OPERATOR](../case-operator/index). The `CASE` statement cannot have an `ELSE NULL` clause, and it is terminated with `END CASE` instead of `END`.
MariaDB Audit Plugin - Configuration
====================================
After the audit plugin has been installed and loaded, there will be some new global variables within MariaDB. These can be used to configure many aspects of auditing the server. You may set variables related to the logs, such as their location, size limits, rotation parameters, and method of logging information. You may also set which events are logged, such as connects, disconnects, and failed attempts to connect. You can also have the audit plugin log queries and read and write access to tables. So as not to overload your logs, the audit plugin can be configured based on lists of users: you can include or exclude the activities of specific users in the logs.
To see a list of [audit plugin-related variables](../mariadb-audit-plugin-system-variables/index) on the server and their values, execute the following while connected to the server:
```
SHOW GLOBAL VARIABLES LIKE 'server_audit%';
+-------------------------------+-----------------------+
| Variable_name | Value |
+-------------------------------+-----------------------+
| server_audit_events | CONNECT,QUERY,TABLE |
| server_audit_excl_users | |
| server_audit_file_path | server_audit.log |
| server_audit_file_rotate_now | OFF |
| server_audit_file_rotate_size | 1000000 |
| server_audit_file_rotations | 9 |
| server_audit_incl_users | |
| server_audit_logging | ON |
| server_audit_mode | 0 |
| server_audit_output_type | file |
| server_audit_query_log_limit | 1024 |
| server_audit_syslog_facility | LOG_USER |
| server_audit_syslog_ident | mysql-server_auditing |
| server_audit_syslog_info | |
| server_audit_syslog_priority | LOG_INFO |
+-------------------------------+-----------------------+
```
The values of these variables can be changed by an administrator with the `SUPER` privilege, using the [`SET`](../set/index) statement. Below is an example of how to disable audit logging:
```
SET GLOBAL server_audit_logging=OFF;
```
Although it is possible to change all of the variables shown above, some of them may be reset when the server restarts. Therefore, you may want to set them in the configuration file (e.g., `/etc/my.cnf.d/server.cnf`) to ensure the values are the same after a restart:
```
[server]
...
server_audit_logging=OFF
...
```
For the reason given in the paragraph above, you would not generally set variables related to the auditing plugin using the [`SET`](../set/index) statement. However, you might do so to test settings before making them more permanent. Since one cannot always restart the server, you would use the [`SET`](../set/index) statement to change the variables immediately and then include the same settings in the configuration file so that the variables are set again as you prefer when the server is restarted.
#### Configuring Logs and Setting Other Variables
Of all of the server variables you can set, you may want to set initially the [server\_audit\_events](../server_audit-system-variables/index#server_audit_events) variable to tell the Audit Plugin which events to log. The [Log Settings documentation page](../mariadb-audit-plugin-log-settings/index) describes in detail the choices you have and provides examples of log entries related to them.
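For example, to log only connection and query events (see the linked page for the full list of event types):

```
SET GLOBAL server_audit_events = 'CONNECT,QUERY';
```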
You can see a detailed list of system variables related to the MariaDB Audit Plugin on the [System Variables documentation page](../mariadb-audit-plugin-system-variables/index). Status variables related to the Audit Plugin are listed and explained on the [Status Variables documentation page](../mariadb-audit-plugin-status-variables/index).
Semisynchronous Replication
===========================
**MariaDB starting with [10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)**Semisynchronous replication is no longer a plugin in [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) and later. This removes some overhead and improves performance. See [MDEV-13073](https://jira.mariadb.org/browse/MDEV-13073) for more information.
Description
-----------
[Standard MariaDB replication](../standard-replication/index) is asynchronous, but MariaDB also provides a semisynchronous replication option.
With regular asynchronous replication, replicas request events from the primary's binary log whenever the replicas are ready. The primary does not wait for a replica to confirm that an event has been received.
With fully synchronous replication, all replicas are required to respond that they have received the events. See [Galera Cluster](../galera-cluster/index).
Semisynchronous replication waits for just one replica to acknowledge that it has received and logged the events.
Semisynchronous replication therefore comes with some negative performance impact, but increased data integrity. Since the delay is based on the roundtrip time to the replica and back, this delay is minimized for servers in close proximity over fast networks.
In [MariaDB 10.3](../what-is-mariadb-103/index) and later, semisynchronous replication is built into the server, so it can be enabled immediately in those versions.
In [MariaDB 10.2](../what-is-mariadb-102/index) and before, semisynchronous replication requires the user to install a plugin on both the primary and the replica before it can be enabled.
Installing the Plugin
---------------------
**MariaDB starting with [10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)**In [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) and later, the Semisynchronous Replication feature is built into MariaDB server and is no longer provided by a plugin. **This means that installing the plugin is not supported on those versions.** In [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) and later, you can skip right to [Enabling Semisynchronous Replication](#enabling-semisynchronous-replication).
The semisynchronous replication plugin is actually two different plugins: one for the primary and one for the replica. Shared libraries for both plugins are included with MariaDB. Although the plugins' shared libraries are distributed with MariaDB by default, the plugins are not actually installed by default prior to [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/). There are two methods that can be used to install the plugins with MariaDB.
The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing [INSTALL SONAME](../install-soname/index) or [INSTALL PLUGIN](../install-plugin/index).
For example, if it's a primary:
```
INSTALL SONAME 'semisync_master';
```
Or if it's a replica:
```
INSTALL SONAME 'semisync_slave';
```
The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the [--plugin-load](../mysqld-options/index#-plugin-load) or the [--plugin-load-add](../mysqld-options/index#-plugin-load-add) options. This can be specified as a command-line argument to [mysqld](../mysqld-options/index) or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index).
For example, if it's a primary:
```
[mariadb]
...
plugin_load_add = semisync_master
```
Or if it's a replica:
```
[mariadb]
...
plugin_load_add = semisync_slave
```
Uninstalling the Plugin
-----------------------
**MariaDB starting with [10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)**In [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) and later, the Semisynchronous Replication feature is built into MariaDB server and is no longer provided by a plugin. **This means that uninstalling the plugin is not supported on those versions.**
You can uninstall the plugin dynamically by executing [UNINSTALL SONAME](../uninstall-soname/index) or [UNINSTALL PLUGIN](../uninstall-plugin/index).
For example, if it's a primary:
```
UNINSTALL SONAME 'semisync_master';
```
Or if it's a replica:
```
UNINSTALL SONAME 'semisync_slave';
```
If you installed the plugin by providing the [--plugin-load](../mysqld-options/index#-plugin-load) or the [--plugin-load-add](../mysqld-options/index#-plugin-load-add) options in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index), then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
Enabling Semisynchronous Replication
------------------------------------
Semisynchronous replication can be enabled by setting the relevant system variables on the primary and the replica.
If a server needs to be able to switch between acting as a primary and a replica, then you can enable both the primary and replica system variables on the server. For example, you might need to do this if [MariaDB MaxScale](../maxscale/index) is being used to enable [auto-failover or switchover](../mariadb-maxscale-23-mariadb-monitor/index#cluster-manipulation-operations) with [MariaDB Monitor](../mariadb-maxscale-23-mariadb-monitor/index).
### Enabling Semisynchronous Replication on the Primary
Semisynchronous replication can be enabled on the primary by setting the [rpl\_semi\_sync\_master\_enabled](#rpl_semi_sync_master_enabled) system variable to `ON`. It can be set dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL rpl_semi_sync_master_enabled=ON;
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
rpl_semi_sync_master_enabled=ON
```
### Enabling Semisynchronous Replication on the Replica
Semisynchronous replication can be enabled on the replica by setting the [rpl\_semi\_sync\_slave\_enabled](#rpl_semi_sync_slave_enabled) system variable to `ON`. It can be set dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL rpl_semi_sync_slave_enabled=ON;
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
rpl_semi_sync_slave_enabled=ON
```
If semisynchronous replication is enabled on a server when [replica threads](../replication-threads/index#threads-on-the-slave) were already running, the replica I/O thread will need to be restarted to enable the replica to register as a semisynchronous replica when it connects to the primary. For example:
```
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```
If this is not done, and the replica thread is already running, then it will continue to use asynchronous replication.
Configuring the Primary Timeout
-------------------------------
In semisynchronous replication, only after the events have been written to the relay log and flushed does the replica acknowledge receipt of a transaction's events. If the replica does not acknowledge the transaction before a certain amount of time has passed, then a timeout occurs and the primary switches to asynchronous replication. This will be reflected in the primary's [error log](../error-log/index) with messages like the following:
```
[Warning] Timeout waiting for reply of binlog (file: mariadb-1-bin.000002, pos: 538), semi-sync up to file , position 0.
[Note] Semi-sync replication switched OFF.
```
When this occurs, the [Rpl\_semi\_sync\_master\_status](../semisynchronous-replication-plugin-status-variables/index#rpl_semi_sync_master_status) status variable will be switched to `OFF`.
When at least one semisynchronous replica catches up, semisynchronous replication is resumed. This will be reflected in the primary's [error log](../error-log/index) with messages like the following:
```
[Note] Semi-sync replication switched ON with replica (server_id: 184137206) at (mariadb-1-bin.000002, 215076)
```
When this occurs, the [Rpl\_semi\_sync\_master\_status](../semisynchronous-replication-plugin-status-variables/index#rpl_semi_sync_master_status) status variable will be switched to `ON`.
The number of times that semisynchronous replication has been switched off can be checked by looking at the value of the [Rpl\_semi\_sync\_master\_no\_times](../semisynchronous-replication-plugin-status-variables/index#rpl_semi_sync_master_no_times) status variable.
If you see a lot of timeouts like this in your environment, then you may want to change the timeout period. The timeout period can be changed by setting the [rpl\_semi\_sync\_master\_timeout](#rpl_semi_sync_master_timeout) system variable. It can be set dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL rpl_semi_sync_master_timeout=20000;
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
rpl_semi_sync_master_timeout=20000
```
To determine a good value for the [rpl\_semi\_sync\_master\_timeout](#rpl_semi_sync_master_timeout) system variable, you may want to look at the values of the [Rpl\_semi\_sync\_master\_net\_avg\_wait\_time](../semisynchronous-replication-plugin-status-variables/index#rpl_semi_sync_master_net_avg_wait_time) and [Rpl\_semi\_sync\_master\_tx\_avg\_wait\_time](../semisynchronous-replication-plugin-status-variables/index#rpl_semi_sync_master_tx_avg_wait_time) status variables.
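These wait-time statistics can be inspected with:

```
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master%avg_wait_time';
```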
Configuring the Primary Wait Point
----------------------------------
In semisynchronous replication, there are two potential points at which the primary can wait for the replica to acknowledge the receipt of a transaction's events. These two wait points have different advantages and disadvantages.
The wait point is configured by the [rpl\_semi\_sync\_master\_wait\_point](#rpl_semi_sync_master_wait_point) system variable. The supported values are:
* `AFTER_SYNC`
* `AFTER_COMMIT`
It can be set dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL rpl_semi_sync_master_wait_point='AFTER_SYNC';
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
rpl_semi_sync_master_wait_point=AFTER_SYNC
```
When this variable is set to `AFTER_SYNC`, the primary performs the following steps:
1. Prepares the transaction in the storage engine.
2. Syncs the transaction to the [binary log](../binary-log/index).
3. Waits for acknowledgement from the replica.
4. Commits the transaction to the storage engine.
5. Returns an acknowledgement to the client.
The effects of the `AFTER_SYNC` wait point are:
* All clients see the same data on the primary at the same time; after acknowledgement by the replica and after being committed to the storage engine on the primary.
* If the primary crashes, then failover should be lossless, because all transactions committed on the primary would have been replicated to the replica.
* However, if the primary crashes, then its [binary log](../binary-log/index) may also contain events for transactions that were prepared by the storage engine and written to the binary log, but that were never actually committed by the storage engine. As part of the server's [automatic crash recovery](../heuristic-recovery-with-the-transaction-coordinator-log/index) process, the server may recover these prepared transactions when it is restarted. This could cause the "old" crashed primary to become inconsistent with its former replicas once they have been reconfigured to replace it with a new primary. In such a scenario, the old primary can be re-introduced only as a [semisync slave](index#rpl_semi_sync_slave_enabled). Post-crash recovery of a server configured with `rpl_semi_sync_slave_enabled = ON` ensures, through [MDEV-21117](https://jira.mariadb.org/browse/MDEV-21117), that the server will not contain extra transactions: the binary log of the server reconfigured as a semisync replica is truncated to discard transactions proven not to be committed in any of their branches (if they are multi-engine). Truncation does not occur, however, when a non-transactional group of events exists beyond the truncation position; in that case recovery reports an error. When semisync replica recovery cannot be carried out, the crashed primary may need to be rebuilt.
When this variable is set to `AFTER_COMMIT`, the primary performs the following steps:
1. Prepares the transaction in the storage engine.
2. Syncs the transaction to the [binary log](../binary-log/index).
3. Commits the transaction to the storage engine.
4. Waits for acknowledgement from the replica.
5. Returns an acknowledgement to the client.
The effects of the `AFTER_COMMIT` wait point are:
* Other clients may see the committed transaction before the committing client.
* If the primary crashes, then failover may involve some data loss, because the primary may have committed transactions that had not yet been acknowledged by the replicas.
Versions
--------
| Version | Status | Introduced |
| --- | --- | --- |
| N/A | N/A | [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) |
| 1.0 | Stable | [MariaDB 10.1.13](https://mariadb.com/kb/en/mariadb-10113-release-notes/) |
| 1.0 | Gamma | [MariaDB 10.0.13](https://mariadb.com/kb/en/mariadb-10013-release-notes/) |
| 1.0 | Unknown | [MariaDB 10.0.11](https://mariadb.com/kb/en/mariadb-10011-release-notes/) |
| 1.0 | N/A | [MariaDB 5.5](../what-is-mariadb-55/index) |
System Variables
----------------
#### `rpl_semi_sync_master_enabled`
* **Description:** Set to `ON` to enable semi-synchronous replication on the primary. Disabled by default.
* **Commandline:** `--rpl-semi-sync-master-enabled[={0|1}]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default Value:** `OFF`
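As a minimal usage sketch: before [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/), when the semisynchronous functionality was still a separate plugin, the plugin must be installed before this variable exists; on later versions the `INSTALL SONAME` step can be skipped:

```
-- Only needed before MariaDB 10.3.3, when semisync was a separate plugin:
INSTALL SONAME 'semisync_master';
-- Enable semi-synchronous replication on the primary:
SET GLOBAL rpl_semi_sync_master_enabled=ON;
```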
---
#### `rpl_semi_sync_master_timeout`
* **Description:** The timeout value, in milliseconds, for semi-synchronous replication in the primary. If this timeout is exceeded in waiting on a commit for acknowledgement from a replica, the primary will revert to asynchronous replication.
+ When a timeout occurs, the [Rpl\_semi\_sync\_master\_status](../semisynchronous-replication-plugin-status-variables/index#rpl_semi_sync_master_status) status variable will also be switched to `OFF`.
+ See [Configuring the Primary Timeout](#configuring-the-master-timeout) for more information.
* **Commandline:** `--rpl-semi-sync-master-timeout[=#]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `10000` (10 seconds)
* **Range:** `0` to `18446744073709551615`
---
#### `rpl_semi_sync_master_trace_level`
* **Description:** The tracing level for semi-sync replication. Four levels are defined:
+ `1`: General level, including for example time function failures.
+ `16`: More detailed level, with more verbose information.
+ `32`: Net wait level, including more information about network waits.
+ `64`: Function level, including information about function entries and exits.
* **Commandline:** `--rpl-semi-sync-master-trace-level[=#]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `32`
* **Range:** `0` to `18446744073709551615`
---
#### `rpl_semi_sync_master_wait_no_slave`
* **Description:** If set to `ON`, the default, the replica count (recorded by [Rpl\_semi\_sync\_master\_clients](../semisynchronous-replication-plugin-status-variables/index#rpl_semi_sync_master_clients)) may drop to zero, and the primary will still wait for the timeout period. If set to `OFF`, the primary will revert to asynchronous replication as soon as the replica count drops to zero.
* **Commandline:** `--rpl-semi-sync-master-wait-no-slave[={0|1}]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default Value:** `ON`
---
#### `rpl_semi_sync_master_wait_point`
* **Description:** Whether the transaction should wait for semi-sync acknowledgement after having synced the binlog (`AFTER_SYNC`), or after having committed in the storage engine (`AFTER_COMMIT`, the default).
+ When this variable is set to `AFTER_SYNC`, the primary performs the following steps:
1. Prepares the transaction in the storage engine.
2. Syncs the transaction to the [binary log](../binary-log/index).
3. Waits for acknowledgement from the replica.
4. Commits the transaction to the storage engine.
5. Returns an acknowledgement to the client.
+ When this variable is set to `AFTER_COMMIT`, the primary performs the following steps:
1. Prepares the transaction in the storage engine.
2. Syncs the transaction to the [binary log](../binary-log/index).
3. Commits the transaction to the storage engine.
4. Waits for acknowledgement from the replica.
5. Returns an acknowledgement to the client.
+ In [MariaDB 10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/) and before, this system variable does not exist. However, in those versions, the primary waits for the acknowledgement from replicas at a point that is equivalent to `AFTER_COMMIT`.
+ See [Configuring the Primary Wait Point](#configuring-the-master-wait-point) for more information.
* **Commandline:** `--rpl-semi-sync-master-wait-point=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `enum`
* **Default Value:** `AFTER_COMMIT`
* **Valid Values:** `AFTER_SYNC`, `AFTER_COMMIT`
* **Introduced:** [MariaDB 10.1.3](https://mariadb.com/kb/en/mariadb-1013-release-notes/)
---
#### `rpl_semi_sync_slave_delay_master`
* **Description:** Only write primary info file when ack is needed.
* **Commandline:** `--rpl-semi-sync-slave-delay-master[={0|1}]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default Value:** `OFF`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `rpl_semi_sync_slave_enabled`
* **Description:** Set to `ON` to enable semi-synchronous replication on the replica. Disabled by default.
* **Commandline:** `--rpl-semi-sync-slave-enabled[={0|1}]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default Value:** `OFF`
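A corresponding sketch for the replica side; restarting the IO thread so that an already-connected replica re-registers with the primary as semisynchronous is an assumption based on common semisync setup practice, not something stated on this page:

```
-- Enable semi-synchronous replication on the replica:
SET GLOBAL rpl_semi_sync_slave_enabled=ON;
-- Restart the IO thread so the replica re-registers with the primary:
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```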
---
#### `rpl_semi_sync_slave_kill_conn_timeout`
* **Description:** Timeout for the mysql connection used to kill the replica io\_thread's connection on the primary. This timeout comes into play when `STOP SLAVE` is executed.
* **Commandline:** `--rpl-semi-sync-slave-kill-conn-timeout[=#]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `5`
* **Range:** `0` to `4294967295`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `rpl_semi_sync_slave_trace_level`
* **Description:** The tracing level for semi-sync replication. The levels are the same as for [rpl\_semi\_sync\_master\_trace\_level](#rpl_semi_sync_master_trace_level).
* **Commandline:** `--rpl-semi-sync-slave-trace-level[=#]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `32`
* **Range:** `0` to `18446744073709551615`
---
Options
-------
### `rpl_semi_sync_master`
* **Description:** Controls how the server should treat the plugin when the server starts up.
+ Valid values are:
- `OFF` - Disables the plugin without removing it from the [mysql.plugins](../mysqlplugin-table/index) table.
- `ON` - Enables the plugin. If the plugin cannot be initialized, then the server will still continue starting up, but the plugin will be disabled.
- `FORCE` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error.
- `FORCE_PLUS_PERMANENT` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error. In addition, the plugin cannot be uninstalled with [UNINSTALL SONAME](../uninstall-soname/index) or [UNINSTALL PLUGIN](../uninstall-plugin/index) while the server is running.
+ See [Plugin Overview: Configuring Plugin Activation at Server Startup](../plugin-overview/index#configuring-plugin-activation-at-server-startup) for more information.
* **Commandline:** `--rpl-semi-sync-master=value`
* **Data Type:** `enumerated`
* **Default Value:** `ON`
* **Valid Values:** `OFF`, `ON`, `FORCE`, `FORCE_PLUS_PERMANENT`
* **Removed:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
### `rpl_semi_sync_slave`
* **Description:** Controls how the server should treat the plugin when the server starts up.
+ Valid values are:
- `OFF` - Disables the plugin without removing it from the [mysql.plugins](../mysqlplugin-table/index) table.
- `ON` - Enables the plugin. If the plugin cannot be initialized, then the server will still continue starting up, but the plugin will be disabled.
- `FORCE` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error.
- `FORCE_PLUS_PERMANENT` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error. In addition, the plugin cannot be uninstalled with [UNINSTALL SONAME](../uninstall-soname/index) or [UNINSTALL PLUGIN](../uninstall-plugin/index) while the server is running.
+ See [Plugin Overview: Configuring Plugin Activation at Server Startup](../plugin-overview/index#configuring-plugin-activation-at-server-startup) for more information.
* **Commandline:** `--rpl-semi-sync-slave=value`
* **Data Type:** `enumerated`
* **Default Value:** `ON`
* **Valid Values:** `OFF`, `ON`, `FORCE`, `FORCE_PLUS_PERMANENT`
* **Removed:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
Status Variables
----------------
For a list of status variables added when the plugin is installed, see [Semisynchronous Replication Plugin Status Variables](../semisynchronous-replication-plugin-status-variables/index).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
CHAR BYTE
=========
Description
-----------
The `CHAR BYTE` data type is an alias for the `[BINARY](../binary/index)` data type. This is a compatibility feature.
Spider Storage Engine Overview
==============================
About
-----
The Spider storage engine is a [storage engine](../storage-engines/index) with built-in sharding features. It supports partitioning and [xa transactions](../xa-transactions/index), and allows tables of different MariaDB instances to be handled as if they were on the same instance. It refers to one possible implementation of ISO/IEC 9075-9:2008 SQL/MED.
When a table is created with the Spider storage engine, the table links to a table on a remote server. The remote table can be of any storage engine. The link is concretely achieved by establishing a connection from the local MariaDB server to a remote MariaDB server. The connection is shared by all tables that are part of the same transaction.
The Spider documentation on the MariaDB Knowledge Base is currently incomplete. See the Spider website for more: <http://spiderformysql.com/>, as well as the [spider-1.0-doc](http://bazaar.launchpad.net/~kentokushiba/spiderformysql/spider-1.0-doc/files) and [spider-2.0-doc](http://bazaar.launchpad.net/~kentokushiba/spiderformysql/spider-2.0-doc/files) repositories.
Spider Versions in MariaDB
--------------------------
| Spider Version | Introduced | Maturity |
| --- | --- | --- |
| Spider 3.3.15 | [MariaDB 10.5.7](https://mariadb.com/kb/en/mariadb-1057-release-notes/), [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/) | Stable |
| Spider 3.3.15 | [MariaDB 10.5.4](https://mariadb.com/kb/en/mariadb-1054-release-notes/) | Gamma |
| Spider 3.3.14 | [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/), [MariaDB 10.3.13](https://mariadb.com/kb/en/mariadb-10313-release-notes/) | Stable |
| Spider 3.3.13 | [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/) | Stable |
| Spider 3.3.13 | [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) | Gamma |
| Spider 3.2.37 | [MariaDB 10.1.10](https://mariadb.com/kb/en/mariadb-10110-release-notes/), [MariaDB 10.0.23](https://mariadb.com/kb/en/mariadb-10023-release-notes/) | Gamma |
| Spider 3.2.21 | [MariaDB 10.1.5](https://mariadb.com/kb/en/mariadb-1015-release-notes/), [MariaDB 10.0.18](https://mariadb.com/kb/en/mariadb-10018-release-notes/) | Gamma |
| Spider 3.2.18 | [MariaDB 10.0.17](https://mariadb.com/kb/en/mariadb-10017-release-notes/) | Gamma |
| Spider 3.2.11 | [MariaDB 10.0.14](https://mariadb.com/kb/en/mariadb-10014-release-notes/) | Gamma |
| Spider 3.2.4 | [MariaDB 10.0.12](https://mariadb.com/kb/en/mariadb-10012-release-notes/) | Gamma |
| Spider 3.2 | [MariaDB 10.0.11](https://mariadb.com/kb/en/mariadb-10011-release-notes/) | Gamma |
| Spider 3.0 | [MariaDB 10.0.4](https://mariadb.com/kb/en/mariadb-1004-release-notes/) | Beta |
### Some Server Variables to Set When Using Spider
**MariaDB starting with [10.3.4](https://mariadb.com/kb/en/mariadb-1034-release-notes/)**If you are using Spider with [replication](../replication/index), you can expand the list of transaction errors to be retried by setting [slave\_transaction\_retry\_errors](../replication-and-binary-log-server-system-variables/index#slave_transaction_retry_errors) to the following to avoid network problems:
* 1158: Got an error reading communication packets
* 1159: Got timeout reading communication packets
* 1160: Got an error writing communication packets
* 1161: Got timeout writing communication packets
* 1429: Unable to connect to foreign data source
* 2013: Lost connection to MySQL server during query
* 12701: Remote MySQL server has gone away
Do this as follows in your my.cnf file:
```
slave_transaction_retry_errors="1158,1159,1160,1161,1429,2013,12701"
```
From [MariaDB 10.4.5](https://mariadb.com/kb/en/mariadb-1045-release-notes/), the above is included in the default.
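The effective value can then be verified at runtime; for example:

```
SHOW GLOBAL VARIABLES LIKE 'slave_transaction_retry_errors';
```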
Usage
-----
### Basic Usage
To create a table in the Spider storage engine format, the COMMENT and/or CONNECTION clauses of the [CREATE TABLE](../create-table/index) statement are used to pass connection information about the remote server.
For example, the following table exists on a remote server (in this example, the remote node was created with the [MySQL Sandbox](../mysql-sandbox/index) tool, an easy way to test with multiple installations):
```
node1 >CREATE TABLE s(
id INT NOT NULL AUTO_INCREMENT,
code VARCHAR(10),
PRIMARY KEY(id));
```
On the local server, a Spider table can be created as follows:
```
CREATE TABLE s(
id INT NOT NULL AUTO_INCREMENT,
code VARCHAR(10),
PRIMARY KEY(id)
)
ENGINE=SPIDER
COMMENT 'host "127.0.0.1", user "msandbox", password "msandbox", port "8607"';
```
Records can now be inserted on the local server, and they will be stored on the remote server:
```
INSERT INTO s(code) VALUES ('a');
node1 > SELECT * FROM s;
+----+------+
| id | code |
+----+------+
| 1 | a |
+----+------+
```
### Further Examples
Prepare a 10M record table using the [sysbench](http://sysbench.sourceforge.net/) utility:
```
/usr/local/skysql/sysbench/bin/sysbench --test=oltp --db-driver=mysql --mysql-table-engine=innodb --mysql-user=skysql --mysql-password=skyvodka --mysql-host=192.168.0.202 --mysql-port=5054 --oltp-table-size=10000000 --mysql-db=test prepare
```
Run a first read-only benchmark to check the initial single-node performance:
```
/usr/local/skysql/sysbench/bin/sysbench --test=oltp --db-driver=mysql --mysql-table-engine=innodb --mysql-user=skysql --mysql-password=skyvodka --mysql-host=192.168.0.202 --mysql-port=5054 --mysql-db=test --oltp-table-size=10000000 --num-threads=4 --max-requests=100000 --oltp-read-only=on run
```
```
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 4
Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Maximum number of requests for OLTP test is limited to 100000
Threads started!
Done.
OLTP test statistics:
queries performed:
read: 1400196
write: 0
other: 200028
total: 1600224
transactions: 100014 (1095.83 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1400196 (15341.58 per sec.)
other operations: 200028 (2191.65 per sec.)
Test execution summary:
total time: 91.2681s
total number of events: 100014
total time taken by event execution: 364.3693
per-request statistics:
min: 1.85ms
avg: 3.64ms
max: 30.70ms
approx. 95 percentile: 4.66ms
Threads fairness:
events (avg/stddev): 25003.5000/84.78
execution time (avg/stddev): 91.0923/0.00
```
Define an easy way to access the nodes from the MariaDB or MySQL client.
```
alias backend1='/usr/local/skysql/mysql-client/bin/mysql --user=skysql --password=skyvodka --host=192.168.0.202 --port=5054'
alias backend2='/usr/local/skysql/mysql-client/bin/mysql --user=skysql --password=skyvodka --host=192.168.0.203 --port=5054'
alias spider1='/usr/local/skysql/mysql-client/bin/mysql --user=skysql --password=skyvodka --host=192.168.0.201 --port=5054'
```
Create the empty tables to hold the data and repeat for all available backend nodes.
```
backend1 << EOF
CREATE DATABASE backend;
CREATE TABLE backend.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF
backend2 << EOF
CREATE DATABASE backend;
CREATE TABLE backend.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF
```
#### Federation Setup
```
spider1 << EOF
CREATE SERVER backend
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.202',
DATABASE 'test',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE TABLE test.sbtest
(
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql",srv "backend"';
SELECT * FROM test.sbtest LIMIT 10;
EOF
```
Without a connection pool or the MariaDB thread pool, HaProxy and Spider have protected against TCP socket overflow without specific TCP tuning. In reality, with a well-tuned TCP stack or thread pool, the curve should not decrease so abruptly to 0. Refer to the [MariaDB Thread Pool](../threadpool-in-55/index) to explore this feature.
#### Sharding Setup
Create the spider table on the Spider Node
```
#spider1 << EOF
CREATE SERVER backend1
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.202',
DATABASE 'backend',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE SERVER backend2
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.203',
DATABASE 'backend',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE DATABASE IF NOT EXISTS backend;
CREATE TABLE backend.sbtest
(
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", table "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1"',
PARTITION pt2 COMMENT = 'srv "backend2"'
) ;
EOF
```
Copy the data from the original sysbench table to the spider table
```
#/usr/local/skysql/mariadb/bin/mysqldump --user=skysql --password=skyvodka --host=192.168.0.202 --port=5054 --no-create-info test sbtest | spider1 backend
#backend2 -e"select count(*) from backend.sbtest;"
+----------+
| count(*) |
+----------+
| 3793316 |
+----------+
#backend1 -e"select count(*) from backend.sbtest;"
+----------+
| count(*) |
+----------+
| 6206684 |
+----------+
```
We observe a common issue with partitioning: a non-uniform distribution of data between the backends, caused by the partition key hashing algorithm.
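The skew can also be estimated from the Spider node itself via the partition metadata; note that `TABLE_ROWS` is only an estimate, and the schema/table names below assume the sharding example above:

```
SELECT PARTITION_NAME, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA='backend' AND TABLE_NAME='sbtest';
```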
Rerun the benchmark with fewer queries:
```
#/usr/local/skysql/sysbench/bin/sysbench --test=oltp --db-driver=mysql --mysql-table-engine=innodb --mysql-user=skysql --mysql-password=skyvodka --mysql-host=192.168.0.201 --mysql-port=5054 --mysql-db=backend --mysql-engine-trx=yes --oltp-table-size=10000000 --num-threads=4 --max-requests=100 --oltp-read-only=on run
```
```
OLTP test statistics:
queries performed:
read: 1414
write: 0
other: 202
total: 1616
transactions: 101 (22.95 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1414 (321.30 per sec.)
other operations: 202 (45.90 per sec.)
Test execution summary:
total time: 4.4009s
total number of events: 101
total time taken by event execution: 17.2960
per-request statistics:
min: 114.48ms
avg: 171.25ms
max: 200.98ms
approx. 95 percentile: 195.12ms
Threads fairness:
events (avg/stddev): 25.2500/0.43
execution time (avg/stddev): 4.3240/0.04
```
The throughput drops sharply, from about 1096 to 23 transactions per second. This is expected, because query latency is increased by multiple network round trips, and condition push down is not implemented yet. Sysbench performs a lot of range queries; for now, just consider such a range query as a badly optimized query.
We need to increase the concurrency to get better throughput.
#### Background Setup
Background search is not available in MariaDB, and it won't be available before [MariaDB 10.2](../what-is-mariadb-102/index), but the next table definition mainly enables improving the performance of a single complex query plan with background search, which can be found via the upstream spiral binaries MariaDB branch.
We have 4 cores per backend and 2 backends.
On `backend1`
```
#backend1 << EOF
CREATE DATABASE bsbackend1;
CREATE DATABASE bsbackend2;
CREATE DATABASE bsbackend3;
CREATE DATABASE bsbackend4;
CREATE TABLE bsbackend1.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend2.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend3.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend4.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF
```
On `backend2`
```
#backend2 << EOF
CREATE DATABASE bsbackend5;
CREATE DATABASE bsbackend6;
CREATE DATABASE bsbackend7;
CREATE DATABASE bsbackend8;
CREATE TABLE bsbackend5.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend6.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend7.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend8.sbtest (
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF
```
On `Spider Node`
```
#spider2 << EOF
CREATE SERVER bsbackend1 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend1',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend2 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend2',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend3 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend3',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend4 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend4',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend5 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend5',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend6 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend6',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend7 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend7',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend8 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend8',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE DATABASE IF NOT EXISTS bsbackend;
CREATE TABLE bsbackend.sbtest
(
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", table "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "bsbackend1"',
PARTITION pt2 COMMENT = 'srv "bsbackend2"',
PARTITION pt3 COMMENT = 'srv "bsbackend3"',
PARTITION pt4 COMMENT = 'srv "bsbackend4"',
PARTITION pt5 COMMENT = 'srv "bsbackend5"',
PARTITION pt6 COMMENT = 'srv "bsbackend6"',
PARTITION pt7 COMMENT = 'srv "bsbackend7"',
PARTITION pt8 COMMENT = 'srv "bsbackend8"'
) ;
EOF
INSERT INTO bsbackend.sbtest SELECT * FROM backend.sbtest;
```
Now test the following queries:
```
select count(*) from sbtest;
+----------+
| count(*) |
+----------+
| 10000001 |
+----------+
1 row in set (8,38 sec)
set spider_casual_read=1;
set spider_bgs_mode=2;
select count(*) from sbtest;
+----------+
| count(*) |
+----------+
| 10000001 |
+----------+
1 row in set (4,25 sec)
mysql> select sum(k) from sbtest;
+--------+
| sum(k) |
+--------+
| 0 |
+--------+
1 row in set (5,67 sec)
mysql> set spider_casual_read=0;
mysql> select sum(k) from sbtest;
+--------+
| sum(k) |
+--------+
| 0 |
+--------+
1 row in set (12,56 sec)
```
#### High Availability Setup
```
#backend1 -e "CREATE DATABASE backend_rpl"
#backend2 -e "CREATE DATABASE backend_rpl"
#/usr/local/skysql/mariadb/bin/mysqldump --user=skysql --password=skyvodka --host=192.168.0.202 --port=5054 backend sbtest | backend1 backend_rpl
#/usr/local/skysql/mariadb/bin/mysqldump --user=skysql --password=skyvodka --host=192.168.0.203 --port=5054 backend sbtest | backend2 backend_rpl
#spider1 << EOF
DROP TABLE backend.sbtest;
CREATE SERVER backend1_rpl
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.202',
DATABASE 'backend_rpl',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE SERVER backend2_rpl
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.203',
DATABASE 'backend_rpl',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE TABLE backend.sbtest
(
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", table "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl"'
) ;
INSERT INTO backend.sbtest select 10000001, 0, '' ,'replicas test';
EOF
#backend1 -e "SELECT * FROM backend.sbtest WHERE id=10000001";
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000001 | 0 | | replicas test |
+----------+---+---+---------------+
# backend2 -e "SELECT * FROM backend.sbtest where id=10000001";
# backend2 -e "SELECT * FROM backend_rpl.sbtest where id=10000001";
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000001 | 0 | | replicas test |
+----------+---+---+---------------+
```
What happens if we stop one backend?
```
#spider1 -e "SELECT * FROM backend.sbtest where id=10000001";
ERROR 1429 (HY000) at line 1: Unable to connect to foreign data source: backend1
```
Let's fix this with Spider monitoring. Note that `msi` is the list of the Spider nodes' `@@server_id` values participating in the quorum.
```
#spider1 << EOF
DROP TABLE backend.sbtest;
CREATE TABLE backend.sbtest
(
id int(10) unsigned NOT NULL AUTO_INCREMENT,
k int(10) unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", table "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl", mbk "2", mkd "2", msi "5054", link_status "0 0"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl", mbk "2", mkd "2", msi "5054", link_status "0 0" '
) ;
CREATE SERVER mon
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.201',
DATABASE 'backend',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
INSERT INTO `mysql`.`spider_link_mon_servers` VALUES
('%','%','%',5054,'mon',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,NULL,NULL);
SELECT spider_flush_table_mon_cache();
EOF
```
Monitoring should be set up between the Spider nodes participating in the cluster. We only have one `Spider Node`, and spider\_link\_mon\_servers represents the inter-connection of all Spider nodes in our setup.
This simple setup does not bring HA in case the `Spider Node` is not available. In a production setup the number of `Spider Nodes` in the spider\_link\_mon\_servers table should be at least 3 to get a majority consensus.
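In such a production setup, spider\_link\_mon\_servers would hold one row per Spider node. A hedged sketch, mirroring the single-node row above (the `mon1`..`mon3` server names and ports are illustrative and assume matching `CREATE SERVER` entries exist):

```
INSERT INTO `mysql`.`spider_link_mon_servers` VALUES
('%','%','%',5054,'mon1',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,NULL,NULL),
('%','%','%',5055,'mon2',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,NULL,NULL),
('%','%','%',5056,'mon3',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,NULL,NULL);
SELECT spider_flush_table_mon_cache();
```

With three rows, the loss of a single Spider node still leaves a majority to agree on backend link status.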
```
#spider1 -e "SELECT * FROM backend.sbtest WHERE id=10000001"
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000001 | 0 | | replicas test |
+----------+---+---+---------------+
```
Checking the state of the nodes:
```
#spider1 -e "SELECT db_name, table_name,server FROM mysql.spider_tables WHERE link_status=3"
+---------+--------------+----------+
| db_name | table_name | server |
+---------+--------------+----------+
| backend | sbtest#P#pt1 | backend1 |
+---------+--------------+----------+
```
No change has been made to the cluster, so let's create a divergence:
```
# spider1 -e "INSERT INTO backend.sbtest select 10000003, 0, '' ,'replicas test';"
# backend1 -e "SELECT * FROM backend.sbtest WHERE id=10000003"
# backend2 -e "SELECT * FROM backend_rpl.sbtest WHERE id=10000003"
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000003 | 0 | | replicas test |
+----------+---+---+---------------+
```
Reintroducing the failed backend1 in the cluster:
```
#spider1 << EOF
ALTER TABLE backend.sbtest
ENGINE=spider COMMENT='wrapper "mysql", table "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl" mbk "2", mkd "2", msi "5054", link_status "2 0"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl" mbk "2", mkd "2", msi "5054", link_status "0 2" '
) ;
select spider_copy_tables('backend.sbtest#P#pt1','0','1');
select spider_copy_tables('backend.sbtest#P#pt2','1','0');
ALTER TABLE backend.sbtest
ENGINE=spider COMMENT='wrapper "mysql", table "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl" mbk "2", mkd "2", msi "5054", link_status "1 0"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl" mbk "2", mkd "2", msi "5054", link_status "0 1" '
) ;
EOF
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb NVL NVL
===
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**From [MariaDB 10.3](../what-is-mariadb-103/index), NVL is a synonym for [IFNULL](../ifnull/index).
mariadb Build Environment Setup for Linux Build Environment Setup for Linux
=================================
Required Tools
--------------
The following is a list of tools that are required for building MariaDB on Linux and Mac OS X. Most, if not all, of these will exist as packages in your distribution's package repositories, so check there first. See [Building MariaDB on Ubuntu](../building-mariadb-on-ubuntu/index), [Building MariaDB on CentOS](../building-mariadb-on-centos/index), and [Building MariaDB on Gentoo](../building-mariadb-on-gentoo/index) pages for specific requirements for those platforms.
* [git](https://git-scm.com/)
* [gunzip](http://www.gzip.org/)
* [GNU tar](http://www.gnu.org/software/tar/)
* [gcc/g++ 4.8.5 or later, recommend above 9](http://gcc.gnu.org/) or [clang/clang++](https://clang.llvm.org/)
* [GNU make 3.75 or later](http://www.gnu.org/software/make/) or [Ninja](https://ninja-build.org/)
* [bison (3.0)](http://www.gnu.org/software/bison/)
* [libncurses](http://www.gnu.org/software/ncurses/)
* [zlib-dev](http://www.zlib.net/)
* [libevent-dev](http://libevent.org)
* [cmake above 2.8.7 though preferably above 3.3](http://www.cmake.org)
* [gnutls](http://www.gnutls.org) or [openssl](http://www.openssl.org)
* [jemalloc](http://www.canonware.com/jemalloc) (optional)
* [valgrind](http://www.valgrind.org/) (only needed if running [mysql-test-run --valgrind](../mysql-test-runpl-options/index#options-for-valgrind))
* [libcurl](https://curl.se/libcurl//libcurl) (only needed if you want to use the [S3 storage engine](../s3-storage-engine/index))
You can install these programs individually through your package manager.
In addition, some package managers support the use of a build-dependency command. When using this command, the package manager retrieves a list of build dependencies and installs them for you, making it much easier to get started on the compile. The actual option varies, depending on the distribution you use.
On Ubuntu and Debian you can use the `build-dep` command.
```
# apt build-dep mariadb-server
```
Fedora uses the `builddep` command with DNF.
```
# dnf builddep mariadb-server
```
CentOS has a separate utility `yum-builddep`, which is part of the `yum-utils` package. This works like the DNF `builddep` command.
```
# yum install yum-utils
# yum-builddep mariadb-server
```
With openSUSE and SUSE, you can use the source-install command.
```
# zypper source-install -d mariadb
```
Each of these commands works off of the release of MariaDB provided in the official software repositories of the given distribution. In some instances and especially in older versions of Linux, MariaDB may not be available in the official repositories. In these cases you can use the MariaDB repositories as an alternative.
Bear in mind, the release of MariaDB provided by your distribution may not be the same as the version you are trying to install. Additionally, the package managers don't always retrieve all of the packages you need to compile MariaDB; some may be missed or unlisted in the process. When this is the case, CMake fails during checks with an error message telling you what's missing.
Note: On Debian-based distributions, you may receive a *"You must put some 'source' URIs in your sources.list"* error. To avoid this, ensure that /etc/apt/sources.list contains the source repositories.
For example, for Debian buster:
```
deb http://ftp.debian.org/debian buster main contrib
deb http://security.debian.org buster/updates main contrib
deb-src http://ftp.debian.org/debian buster main contrib
deb-src http://security.debian.org buster/updates main contrib
```
Refer to the documentation for your Linux distribution for how to do this on your system.
After editing the sources.list, do:
```
sudo apt update
```
...and then the above mentioned `build-dep` command.
Note: On openSUSE the source package repository may be disabled. The following command will enable it:
```
sudo zypper mr -er repo-source
```
After enabling it, you will be able to run the zypper command to install the build dependencies.
You should now have your build environment set up and can proceed to [Getting the MariaDB Source Code](../getting_the_mariadb_source_code/index) and then using the [Generic Build Instructions](../generic-build-instructions/index) to build MariaDB (or following the steps for your Linux distribution or [Creating a MariaDB Binary Tarball](../creating-the-mariadb-binary-tarball/index)).
See Also
--------
* [Installing Galera from source](../installating-galera-from-source/index)
mariadb Upgrading from MariaDB 10.4 to MariaDB 10.5 Upgrading from MariaDB 10.4 to MariaDB 10.5
===========================================
### How to Upgrade
For Windows, see [Upgrading MariaDB on Windows](../upgrading-mariadb-on-windows/index) instead.
For MariaDB Galera Cluster, see [Upgrading from MariaDB 10.4 to MariaDB 10.5 with Galera Cluster](upgrading-from-mariadb-104-to-mariadb-105-with-galera-cluster) instead.
Before you upgrade, it is best to take a backup of your database; this is always a good idea before an upgrade. We recommend [Mariabackup](../mariabackup/index).
The suggested upgrade procedure is:
1. Modify the repository configuration, so the system's package manager installs [MariaDB 10.5](../what-is-mariadb-105/index). For example,
* On Debian, Ubuntu, and other similar Linux distributions, see [Updating the MariaDB APT repository to a New Major Release](../installing-mariadb-deb-files/index#updating-the-mariadb-apt-repository-to-a-new-major-release) for more information.
* On RHEL, CentOS, Fedora, and other similar Linux distributions, see [Updating the MariaDB YUM repository to a New Major Release](../yum/index#updating-the-mariadb-yum-repository-to-a-new-major-release) for more information.
* On SLES, OpenSUSE, and other similar Linux distributions, see [Updating the MariaDB ZYpp repository to a New Major Release](../installing-mariadb-with-zypper/index#updating-the-mariadb-zypp-repository-to-a-new-major-release) for more information.
2. [Stop MariaDB](../starting-and-stopping-mariadb-automatically/index).
3. Uninstall the old version of MariaDB.
* On Debian, Ubuntu, and other similar Linux distributions, execute the following:
`sudo apt-get remove mariadb-server`
* On RHEL, CentOS, Fedora, and other similar Linux distributions, execute the following:
`sudo yum remove MariaDB-server`
* On SLES, OpenSUSE, and other similar Linux distributions, execute the following:
`sudo zypper remove MariaDB-server`
4. Install the new version of MariaDB.
* On Debian, Ubuntu, and other similar Linux distributions, see [Installing MariaDB Packages with APT](../installing-mariadb-deb-files/index#installing-mariadb-packages-with-apt) for more information.
* On RHEL, CentOS, Fedora, and other similar Linux distributions, see [Installing MariaDB Packages with YUM](../yum/index#installing-mariadb-packages-with-yum) for more information.
* On SLES, OpenSUSE, and other similar Linux distributions, see [Installing MariaDB Packages with ZYpp](../installing-mariadb-with-zypper/index#installing-mariadb-packages-with-zypp) for more information.
5. Make any desired changes to configuration options in [option files](../configuring-mariadb-with-option-files/index), such as `my.cnf`. This includes removing any options that are no longer supported.
6. [Start MariaDB](../starting-and-stopping-mariadb-automatically/index).
7. Run [mysql\_upgrade](../mysql_upgrade/index).
* `mysql_upgrade` does two things:
1. Ensures that the system tables in the [mysql](../the-mysql-database-tables/index) database are fully compatible with the new version.
 2. Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
### Incompatible Changes Between 10.4 and 10.5
On most servers upgrading from 10.4 should be painless. However, there are some things that have changed which could affect an upgrade:
#### Binary name changes
All binaries previously beginning with mysql now begin with mariadb, with symlinks for the corresponding mysql command.
Usually that shouldn't cause any changed behavior, but when starting the MariaDB server via [systemd](../systemd/index), or via the [mysqld\_safe](../mysqld_safe/index) script symlink, the server process will now always be started as `mariadbd`, not `mysqld`.
So anything looking for the `mysqld` name in the system process list, e.g. monitoring solutions, now needs to look for `mariadbd` instead when the server / service is not started directly, but via `mysqld_safe` or as a system service.
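A hedged sketch of what such a monitoring check could do (the function name is illustrative; a live check would feed it process names from `ps` or `pgrep` rather than the fixed list used here for demonstration):

```shell
# Accept either the old or the new server process name, so monitoring
# keeps working whether the service runs as mysqld or mariadbd.
is_mariadb_server() {
  case "$1" in
    mariadbd|mysqld) return 0 ;;
    *) return 1 ;;
  esac
}

# Demonstration against a fixed list of names:
for name in mariadbd mysqld nginx; do
  if is_mariadb_server "$name"; then
    echo "$name: MariaDB server process"
  fi
done
```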
#### GRANT PRIVILEGE changes
A number of statements changed the privileges that they require. The old privilege requirements were historically inappropriately chosen upstream; 10.5.2 fixes this problem. Note that these changes are incompatible with previous versions, and a number of GRANT commands might be needed after upgrade.
* `SHOW BINLOG EVENTS` now requires the `BINLOG MONITOR` privilege (required `REPLICATION SLAVE` prior to 10.5.2).
* `SHOW SLAVE HOSTS` now requires the `REPLICATION MASTER ADMIN` privilege (required `REPLICATION SLAVE` prior to 10.5.2).
* `SHOW SLAVE STATUS` now requires the `REPLICATION SLAVE ADMIN` or the `SUPER` privilege (required `REPLICATION CLIENT` or `SUPER` prior to 10.5.2).
* `SHOW RELAYLOG EVENTS` now requires the `REPLICATION SLAVE ADMIN` privilege (required `REPLICATION SLAVE` prior to 10.5.2).
#### Options That Have Changed Default Values
| Option | Old default value | New default value |
| --- | --- | --- |
| [innodb\_adaptive\_hash\_index](../innodb-system-variables/index#innodb_adaptive_hash_index) | ON | OFF |
| [innodb\_checksum\_algorithm](../innodb-system-variables/index#innodb_checksum_algorithm) | crc32 | full\_crc32 |
| [innodb\_log\_optimize\_ddl](../innodb-system-variables/index#innodb_log_optimize_ddl) | ON | OFF |
| [slave\_parallel\_mode](../replication-and-binary-log-system-variables/index#slave_parallel_mode) | conservative | optimistic |
| [performance\_schema\_max\_cond\_classes](../performance-schema-system-variables/index#performance_schema_max_cond_classes) | 80 | 90 |
| [performance\_schema\_max\_file\_classes](../performance-schema-system-variables/index#performance_schema_max_file_classes) | 50 | 80 |
| [performance\_schema\_max\_mutex\_classes](../performance-schema-system-variables/index#performance_schema_max_mutex_classes) | 200 | 210 |
| [performance\_schema\_max\_rwlock\_classes](../performance-schema-system-variables/index#performance_schema_max_rwlock_classes) | 40 | 50 |
| [performance\_schema\_setup\_actors\_size](../performance-schema-system-variables/index#performance_schema_setup_actors_size) | 100 | -1 |
| [performance\_schema\_setup\_objects\_size](../performance-schema-system-variables/index#performance_schema_setup_objects_size) | 100 | -1 |
#### Options That Have Been Removed or Renamed
The following options should be removed or renamed if you use them in your [option files](../configuring-mariadb-with-option-files/index):
| Option | Reason |
| --- | --- |
| [innodb\_checksums](../innodb-system-variables/index#innodb_checksums) | Deprecated and functionality replaced by [innodb\_checksum\_algorithms](../innodb-system-variables/index#innodb_checksum_algorithm) in [MariaDB 10.0](../what-is-mariadb-100/index). |
| [idle\_flush\_pct](../innodb-system-variables/index#innodb_idle_flush_pct) | Has had no effect since merging InnoDB 5.7 from mysql-5.7.9 ([MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/)). |
| [innodb\_locks\_unsafe\_for\_binlog](../innodb-system-variables/index#innodb_locks_unsafe_for_binlog) | Deprecated in [MariaDB 10.0](../what-is-mariadb-100/index). Use [READ COMMITTED transaction isolation level](../set-transaction/index#read-committed) instead. |
| [innodb\_rollback\_segments](../innodb-system-variables/index#innodb_rollback_segments) | Deprecated and replaced by [innodb\_undo\_logs](../innodb-system-variables/index#innodb_undo_logs) in [MariaDB 10.0](../what-is-mariadb-100/index). |
| [innodb\_stats\_sample\_pages](../innodb-system-variables/index#innodb_stats_sample_pages) | Deprecated in [MariaDB 10.0](../what-is-mariadb-100/index). Use [innodb\_stats\_transient\_sample\_pages](../innodb-system-variables/index#innodb_stats_transient_sample_pages) instead. |
| [max\_long\_data\_size](../server-system-variables/index#max_long_data_size) | Deprecated and replaced by [max\_allowed\_packet](../server-system-variables/index#max_allowed_packet) in [MariaDB 5.5](../what-is-mariadb-55/index). |
| [multi\_range\_count](../server-system-variables/index#multi_range_count) | Deprecated and has had no effect since [MariaDB 5.3](../what-is-mariadb-53/index). |
| [thread\_concurrency](../server-system-variables/index#thread_concurrency) | Deprecated and has had no effect since [MariaDB 5.5](../what-is-mariadb-55/index). |
| [timed\_mutexes](../server-system-variables/index#timed_mutexes) | Deprecated and has had no effect since [MariaDB 5.5](../what-is-mariadb-55/index). |
#### Deprecated Options
The following options have been deprecated. They have not yet been removed, but will be in a future version, and should ideally no longer be used.
| Option | Reason |
| --- | --- |
| [innodb\_adaptive\_max\_sleep\_delay](../innodb-system-variables/index#innodb_adaptive_max_sleep_delay) | No need for thread throttling any more. |
| [innodb\_background\_scrub\_data\_check\_interval](../innodb-system-variables/index#innodb_background_scrub_data_check_interval) | Problematic ‘background scrubbing’ code removed. |
| [innodb\_background\_scrub\_data\_interval](../innodb-system-variables/index#innodb_background_scrub_data_interval) | Problematic ‘background scrubbing’ code removed. |
| [innodb\_background\_scrub\_data\_compressed](../innodb-system-variables/index#innodb_background_scrub_data_compressed) | Problematic ‘background scrubbing’ code removed. |
| [innodb\_background\_scrub\_data\_uncompressed](../innodb-system-variables/index#innodb_background_scrub_data_uncompressed) | Problematic ‘background scrubbing’ code removed. |
| [innodb\_buffer\_pool\_instances](../innodb-system-variables/index#innodb_buffer_pool_instances) | Having more than one buffer pool is no longer necessary. |
| [innodb\_commit\_concurrency](../innodb-system-variables/index#innodb_commit_concurrency) | No need for thread throttling any more. |
| [innodb\_concurrency\_tickets](../innodb-system-variables/index#innodb_concurrency_tickets) | No need for thread throttling any more. |
| [innodb\_log\_files\_in\_group](../innodb-system-variables/index#innodb_log_files_in_group) | Redo log was unnecessarily split into multiple files. Limited to 1 from [MariaDB 10.5](../what-is-mariadb-105/index). |
| [innodb\_log\_optimize\_ddl](../innodb-system-variables/index#innodb_log_optimize_ddl) | Prohibited optimizations. |
| [innodb\_page\_cleaners](../innodb-system-variables/index#innodb_page_cleaners) | Having more than one page cleaner task no longer necessary. |
| [innodb\_replication\_delay](../innodb-system-variables/index#innodb_replication_delay) | No need for thread throttling any more. |
| [innodb\_scrub\_log](../innodb-system-variables/index#innodb_scrub_log) | Never really worked as intended, redo log format is being redone. |
| [innodb\_scrub\_log\_speed](../innodb-system-variables/index#innodb_scrub_log_speed) | Never really worked as intended, redo log format is being redone. |
| [innodb\_thread\_concurrency](../innodb-system-variables/index#innodb_thread_concurrency) | No need for thread throttling any more. |
| [innodb\_thread\_sleep\_delay](../innodb-system-variables/index#innodb_thread_sleep_delay) | No need for thread throttling any more. |
| [innodb\_undo\_logs](../innodb-system-variables/index#innodb_undo_logs) | It always makes sense to use the maximum number of rollback segments. |
| [large\_page\_size](../server-system-variables/index#large_page_size) | Unused since multiple page size support was added. |
### Major New Features To Consider
You might consider using the following major new features in [MariaDB 10.5](../what-is-mariadb-105/index):
* The [S3 storage engine](../s3-storage-engine/index) allows one to archive MariaDB tables in Amazon S3, or any third-party public or private cloud that implements S3 API.
* [ColumnStore](../mariadb-columnstore/index) columnar storage engine.
* See also [System Variables Added in MariaDB 10.5](../system-variables-added-in-mariadb-105/index).
### See Also
* [The features in MariaDB 10.5](../what-is-mariadb-105/index)
* [Upgrading from MariaDB 10.4 to MariaDB 10.5 with Galera Cluster](upgrading-from-mariadb-104-to-mariadb-105-with-galera-cluster)
* [Upgrading from MariaDB 10.3 to MariaDB 10.4](../upgrading-from-mariadb-103-to-mariadb-104/index)
* [Upgrading from MariaDB 10.2 to MariaDB 10.3](../upgrading-from-mariadb-102-to-mariadb-103/index)
* [Upgrading from MariaDB 10.1 to MariaDB 10.2](../upgrading-from-mariadb-101-to-mariadb-102/index)
mariadb MyRocks and Bloom Filters MyRocks and Bloom Filters
=========================
Bloom filters are used to reduce read amplification. Bloom filters can be set on a per-column family basis (see [myrocks-column-families](../myrocks-column-families/index)).
Bloom Filter Parameters
-----------------------
* How many bits to use
* whole\_key\_filtering=true/false
* Whether the bloom filter is for the entire key or for the prefix. In case of a prefix, you need to look at the index definition and compute the desired prefix length.
### Computing Prefix Length
* It's 4 bytes for `index_nr`
* Then, for fixed-size columns (integer, date[time], decimal) it is key\_length as shown by `EXPLAIN`. For VARCHAR columns, determining the length is tricky (it depends on the values stored in the table; note that MyRocks encodes VARCHARs with "Variable-Length Space-Padded Encoding" format).
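As a worked example (table and column names are illustrative): for a secondary index on a single INT column, the prefix is the 4-byte index\_nr plus the 4-byte INT key part, i.e. 8 bytes:

```
CREATE TABLE t1 (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  k INT UNSIGNED NOT NULL,
  KEY k (k)
) ENGINE=rocksdb;
-- Prefix for index k: 4 (index_nr) + 4 (INT key part) = 8 bytes,
-- which corresponds to a prefix_extractor of capped:8.
```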
Configuring Bloom Filter
------------------------
To enable a 10-bit bloom filter with an 8-byte prefix length for column family "cf1", put this into my.cnf:
```
rocksdb_override_cf_options='cf1={block_based_table_factory={filter_policy=bloomfilter:10:false;whole_key_filtering=0;};prefix_extractor=capped:8};'
```
and restart the server.
Check if the column family actually uses the bloom filter:
```
select *
from information_schema.rocksdb_cf_options
where
cf_name='cf1' and
option_type IN ('TABLE_FACTORY::FILTER_POLICY','PREFIX_EXTRACTOR');
```
```
+---------+------------------------------+----------------------------+
| CF_NAME | OPTION_TYPE | VALUE |
+---------+------------------------------+----------------------------+
| cf1 | PREFIX_EXTRACTOR | rocksdb.CappedPrefix.8 |
| cf1 | TABLE_FACTORY::FILTER_POLICY | rocksdb.BuiltinBloomFilter |
+---------+------------------------------+----------------------------+
```
Checking if Bloom Filter is Useful
----------------------------------
Watch these status variables:
```
show status like '%bloom%';
+-------------------------------------+-------+
| Variable_name | Value |
+-------------------------------------+-------+
| Rocksdb_bloom_filter_prefix_checked | 1 |
| Rocksdb_bloom_filter_prefix_useful | 0 |
| Rocksdb_bloom_filter_useful | 0 |
+-------------------------------------+-------+
```
Other useful variables are:
* `rocksdb_force_flush_memtable_now` - bloom filter is only used when reading data from disk. If you are doing testing, flush the data to disk first.
* `rocksdb_skip_bloom_filter_on_read` - skip using the bloom filter (default is FALSE).
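A minimal test sequence, assuming the "cf1" configuration above and an illustrative table `t1` stored in that column family:

```
-- Flush memtables so subsequent reads go to SST files,
-- where the bloom filter is actually consulted
SET GLOBAL rocksdb_force_flush_memtable_now = 1;

-- Run a lookup that can use the prefix bloom filter
SELECT * FROM t1 WHERE k = 42;

-- The *_checked / *_useful counters should have advanced
SHOW STATUS LIKE '%bloom%';
```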
mariadb CREATE PROCEDURE CREATE PROCEDURE
================
Syntax
------
```
CREATE
[OR REPLACE]
[DEFINER = { user | CURRENT_USER | role | CURRENT_ROLE }]
PROCEDURE [IF NOT EXISTS] sp_name ([proc_parameter[,...]])
[characteristic ...] routine_body
proc_parameter:
[ IN | OUT | INOUT ] param_name type
type:
Any valid MariaDB data type
characteristic:
LANGUAGE SQL
| [NOT] DETERMINISTIC
| { CONTAINS SQL | NO SQL | READS SQL DATA | MODIFIES SQL DATA }
| SQL SECURITY { DEFINER | INVOKER }
| COMMENT 'string'
routine_body:
Valid SQL procedure statement
```
Description
-----------
Creates a [stored procedure](../stored-procedures/index). By default, a routine is associated with the default database. To associate the routine explicitly with a given database, specify the name as db\_name.sp\_name when you create it.
When the routine is invoked, an implicit USE db\_name is performed (and undone when the routine terminates). This causes the routine to have the given default database while it executes. USE statements within stored routines are disallowed.
When a stored procedure has been created, you invoke it by using the `CALL` statement (see [CALL](../call/index)).
To execute the `CREATE PROCEDURE` statement, it is necessary to have the `CREATE ROUTINE` privilege. By default, MariaDB automatically grants the `ALTER ROUTINE` and `EXECUTE` privileges to the routine creator. See also [Stored Routine Privileges](../stored-routine-privileges/index).
The `DEFINER` and SQL SECURITY clauses specify the security context to be used when checking access privileges at routine execution time, as described [here](../stored-routine-privileges/index). Requires the [SUPER](../grant/index#super) privilege, or, from [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), the [SET USER](../grant/index#set-user) privilege.
If the routine name is the same as the name of a built-in SQL function, you must use a space between the name and the following parenthesis when defining the routine, or a syntax error occurs. This is also true when you invoke the routine later. For this reason, we suggest that it is better to avoid re-using the names of existing SQL functions for your own stored routines.
The IGNORE\_SPACE SQL mode applies to built-in functions, not to stored routines. It is always allowable to have spaces after a routine name, regardless of whether IGNORE\_SPACE is enabled.
The parameter list enclosed within parentheses must always be present. If there are no parameters, an empty parameter list of () should be used. Parameter names are not case sensitive.
Each parameter can be declared to use any valid data type, except that the COLLATE attribute cannot be used.
For valid identifiers to use as procedure names, see [Identifier Names](../identifier-names/index).
### Things to be Aware of With CREATE OR REPLACE
* One can't use `OR REPLACE` together with `IF NOT EXISTS`.
CREATE PROCEDURE IF NOT EXISTS
------------------------------
If the `IF NOT EXISTS` clause is used, then the procedure will only be created if a procedure with the same name does not already exist. If the procedure already exists, then a warning will be triggered by default.
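For example (the procedure name is illustrative; the second statement raises a note instead of failing):

```
DELIMITER //
CREATE PROCEDURE IF NOT EXISTS simpleproc3 ()
BEGIN
  SELECT 1;
END;
//
CREATE PROCEDURE IF NOT EXISTS simpleproc3 ()
BEGIN
  SELECT 1;
END;
//
-- Query OK, 0 rows affected, 1 warning
DELIMITER ;
```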
### IN/OUT/INOUT
Each parameter is an `IN` parameter by default. To specify otherwise for a parameter, use the keyword `OUT` or `INOUT` before the parameter name.
An `IN` parameter passes a value into a procedure. The procedure might modify the value, but the modification is not visible to the caller when the procedure returns. An `OUT` parameter passes a value from the procedure back to the caller. Its initial value is NULL within the procedure, and its value is visible to the caller when the procedure returns. An `INOUT` parameter is initialized by the caller, can be modified by the procedure, and any change made by the procedure is visible to the caller when the procedure returns.
For each `OUT` or `INOUT` parameter, pass a user-defined variable in the `CALL` statement that invokes the procedure so that you can obtain its value when the procedure returns. If you are calling the procedure from within another stored procedure or function, you can also pass a routine parameter or local routine variable as an `IN` or `INOUT` parameter.
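A minimal `INOUT` sketch (names are illustrative): the caller's variable is both read and updated by the procedure.

```
DELIMITER //
CREATE PROCEDURE increment (INOUT counter INT)
BEGIN
  SET counter = counter + 1;
END;
//
DELIMITER ;

SET @n = 10;
CALL increment(@n);
SELECT @n; -- 11
```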
### DETERMINISTIC/NOT DETERMINISTIC
`DETERMINISTIC` and `NOT DETERMINISTIC` apply only to [functions](../stored-functions/index). Specifying `DETERMINISTIC` or `NOT DETERMINISTIC` in procedures has no effect. The default value is `NOT DETERMINISTIC`. Functions are `DETERMINISTIC` when they always return the same value for the same input. For example, a truncate or substring function. Any function involving data, therefore, is always `NOT DETERMINISTIC`.
### CONTAINS SQL/NO SQL/READS SQL DATA/MODIFIES SQL DATA
`CONTAINS SQL`, `NO SQL`, `READS SQL DATA`, and `MODIFIES SQL DATA` are informative clauses that tell the server what the function does. MariaDB does not check in any way whether the specified clause is correct. If none of these clauses are specified, `CONTAINS SQL` is used by default.
`MODIFIES SQL DATA` means that the function contains statements that may modify data stored in databases. This happens if the function contains statements like [DELETE](../delete/index), [UPDATE](../update/index), [INSERT](../insert/index), [REPLACE](../replace/index) or DDL.
`READS SQL DATA` means that the function reads data stored in databases, but does not modify any data. This happens if [SELECT](../select/index) statements are used, but no write operations are executed.
`CONTAINS SQL` means that the function contains at least one SQL statement, but it does not read or write any data stored in a database. Examples include [SET](../set/index) or [DO](../do/index).
`NO SQL` means nothing, because MariaDB does not currently support any language other than SQL.
The routine\_body consists of a valid SQL procedure statement. This can be a simple statement such as [SELECT](../select/index) or [INSERT](../insert/index), or it can be a compound statement written using [BEGIN and END](../begin-end/index). Compound statements can contain declarations, loops, and other control structure statements. See [Programmatic and Compound Statements](../programmatic-and-compound-statements/index) for syntax details.
MariaDB allows routines to contain DDL statements, such as `CREATE` and `DROP`. MariaDB also allows [stored procedures](../stored-procedures/index) (but not [stored functions](../stored-functions/index)) to contain SQL transaction statements such as `COMMIT`.
For additional information about statements that are not allowed in stored routines, see [Stored Routine Limitations](../stored-routine-limitations/index).
### Invoking stored procedure from within programs
For information about invoking [stored procedures](../stored-procedures/index) from within programs written in a language that has a MariaDB/MySQL interface, see [CALL](../call/index).
### OR REPLACE
If the optional `OR REPLACE` clause is used, it acts as a shortcut for:
```
DROP PROCEDURE IF EXISTS name;
CREATE PROCEDURE name ...;
```
with the exception that any existing [privileges](../stored-routine-privileges/index) for the procedure are not dropped.
### sql\_mode
MariaDB stores the [sql\_mode](../server-system-variables/index#sql_mode) system variable setting that is in effect at the time a routine is created, and always executes the routine with this setting in force, regardless of the server [SQL mode](../sql_mode/index) in effect when the routine is invoked.
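For example (procedure and table names are illustrative), a routine created under a strict mode keeps that mode even after the session mode changes:

```
SET sql_mode = 'STRICT_ALL_TABLES';
DELIMITER //
CREATE PROCEDURE strict_insert ()
BEGIN
  INSERT INTO t (i) VALUES (1);
END;
//
DELIMITER ;

SET sql_mode = '';
-- strict_insert still runs with STRICT_ALL_TABLES in force
CALL strict_insert();
```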
### Character Sets and Collations
Procedure parameters can be declared with any character set/collation. If the character set and collation are not specifically set, the database defaults at the time of creation will be used. If the database defaults change at a later stage, the stored procedure character set/collation will not be changed at the same time; the stored procedure needs to be dropped and recreated to ensure the same character set/collation as the database is used.
### Oracle Mode
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**From [MariaDB 10.3](../what-is-mariadb-103/index), a subset of Oracle's PL/SQL language has been supported in addition to the traditional SQL/PSM-based MariaDB syntax. See [Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#stored-procedures-and-stored-functions) for details on changes when running Oracle mode.
Examples
--------
The following example shows a simple stored procedure that uses an `OUT` parameter. It uses the DELIMITER command to set a new delimiter for the duration of the process — see [Delimiters in the mysql client](../delimiters-in-the-mysql-client/index).
```
DELIMITER //
CREATE PROCEDURE simpleproc (OUT param1 INT)
BEGIN
SELECT COUNT(*) INTO param1 FROM t;
END;
//
DELIMITER ;
CALL simpleproc(@a);
SELECT @a;
+------+
| @a |
+------+
| 1 |
+------+
```
Character set and collation:
```
DELIMITER //
CREATE PROCEDURE simpleproc2 (
OUT param1 CHAR(10) CHARACTER SET 'utf8' COLLATE 'utf8_bin'
)
BEGIN
SELECT CONCAT('a'),f1 INTO param1 FROM t;
END;
//
DELIMITER ;
```
CREATE OR REPLACE:
```
DELIMITER //
CREATE PROCEDURE simpleproc2 (
OUT param1 CHAR(10) CHARACTER SET 'utf8' COLLATE 'utf8_bin'
)
BEGIN
SELECT CONCAT('a'),f1 INTO param1 FROM t;
END;
//
ERROR 1304 (42000): PROCEDURE simpleproc2 already exists
DELIMITER ;
DELIMITER //
CREATE OR REPLACE PROCEDURE simpleproc2 (
OUT param1 CHAR(10) CHARACTER SET 'utf8' COLLATE 'utf8_bin'
)
BEGIN
SELECT CONCAT('a'),f1 INTO param1 FROM t;
END;
//
Query OK, 0 rows affected (0.03 sec)
DELIMITER ;
```
See Also
--------
* [Identifier Names](../identifier-names/index)
* [Stored Procedure Overview](../stored-procedure-overview/index)
* [ALTER PROCEDURE](../alter-procedure/index)
* [DROP PROCEDURE](../drop-procedure/index)
* [SHOW CREATE PROCEDURE](../show-create-procedure/index)
* [SHOW PROCEDURE STATUS](../show-procedure-status/index)
* [Stored Routine Privileges](../stored-routine-privileges/index)
* [Information Schema ROUTINES Table](../information-schema-routines-table/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Compression Plugins Compression Plugins
===================
**MariaDB starting with [10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/)**Compression plugins were added in a [MariaDB 10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/) preview release.
The various MariaDB storage engines, such as [InnoDB](../innodb/index), [RocksDB](../myrocks/index), [Mroonga](../mroonga/index), can use different compression libraries.
Before [MariaDB 10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/), each separate library had to be compiled in to be available for use, resulting in numerous runtime/rpm/deb dependencies, most of which would never be used.
From [MariaDB 10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/), five additional MariaDB compression libraries (besides the default zlib) are available as plugins (note that these affect InnoDB and Mroonga only; RocksDB still uses the compression algorithms from its own library):
* bzip2
* lzma
* lz4
* lzo
* snappy
Installing
----------
To use one of these algorithms, simply [install the corresponding plugin](../install-soname/index):
```
INSTALL SONAME 'provider_lz4';
```
The compression algorithm can then be used, for example, in [InnoDB compression](../innodb-page-compression/index):
```
SET GLOBAL innodb_compression_algorithm = lz4;
```
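The setting applies to [InnoDB page compression](../innodb-page-compression/index); as a sketch (the table name is hypothetical), newly created page-compressed tables will then use lz4. The `SHOW PLUGINS SONAME` statement can be used to confirm the provider loaded:

```sql
-- hypothetical table; requires the provider_lz4 plugin to be installed
SHOW PLUGINS SONAME 'provider_lz4';
SET GLOBAL innodb_compression_algorithm = lz4;
CREATE TABLE t1 (id INT PRIMARY KEY, doc BLOB) ENGINE=InnoDB PAGE_COMPRESSED=1;
```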
Upgrading
---------
When upgrading from a release without compression plugins, if a non-zlib compression algorithm was used, those tables will be unreadable until the appropriate compression library is installed. [mariadb-upgrade](../mysql_upgrade/index) should be run. The `--force` option (to run [mariadb-check](../mysqlcheck/index)) or `mariadb-check` itself will indicate any problems with compression, for example:
```
Warning : MariaDB tried to use the LZMA compression, but its provider plugin is not loaded
Error : Table 'test.t' doesn't exist in engine
status : Operation failed
```
or
```
Error : Table test/t is compressed with lzma, which is not currently loaded.
Please load the lzma provider plugin to open the table
error : Corrupt
```
In this case, the appropriate compression plugin should be installed, and the server restarted.
See Also
--------
* [10.7 preview feature: Compression Provider Plugins](https://mariadb.org/10-7-preview-feature-provider-plugins/) (mariadb.org blog)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Installing and Configuring a Multi Server ColumnStore System - 1.1.X Installing and Configuring a Multi Server ColumnStore System - 1.1.X
====================================================================
Preparing to Install
--------------------
After the MariaDB ColumnStore servers have been set up based on the [Preparing for Installations](../../preparing-for-columnstore-installation/index) document and the required MariaDB ColumnStore packages have been installed, use one of the following options to configure and install MariaDB ColumnStore:
* MariaDB ColumnStore One Step Quick Installer Script - Release 1.1.6 and later
* MariaDB ColumnStore Post Install Script
**NOTE**: The install and setup instructions had you install the packages on the node designated as Performance Module #1, 'pm1'. This is the node the install script is run from.
If multiple servers are planned, either initially or in the future, you should perform a multi server install instead of a single server install.
Going from one configuration to another will require a re-installation of the MariaDB ColumnStore software.
**NOTE**: You can install MariaDB ColumnStore as a root user or non-root user. This is based on how you set up the servers per the "Preparing for Installation" document. If installing as root, you need to be logged in as the root user in a root login shell. If installing as non-root, you need to be logged in as the non-root user that was set up in the "Preparing for Installation" document.
[https://mariadb.com/kb/en/library/preparing-for-columnstore-installation-11x/](../preparing-for-columnstore-installation-11x/index)
ColumnStore Cluster Test Tool
-----------------------------
This tool can be run before performing a single-server or multi-node installation. It will verify the setup of all servers that are going to be used in the ColumnStore system.
[https://mariadb.com/kb/en/mariadb/mariadb-columnstore-cluster-test-tool](../../mariadb/mariadb-columnstore-cluster-test-tool)
MariaDB ColumnStore One Step Quick Installer Script, quick\_installer\_multi\_server.sh
---------------------------------------------------------------------------------------
The MariaDB ColumnStore One Step Quick Installer Script, quick\_installer\_multi\_server.sh, is used to perform the configuration and startup of the MariaDB ColumnStore package on a multi server setup with a single command. It works with both root and non-root installs. Available in release 1.1.6 and later.
The script has 4 parameters.
* --pm-ip-addresses=xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx
* --um-ip-addresses=xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx, optional
* --dist-install Use Distributed Install, optional
* --system-name=nnnn System Name, optional
It will perform an install with these defaults:
* System-Name = columnstore-1, when not specified
* Multi-Server Install
+ with X Number of PMs when only PM IP Addresses are provided
+ with X Number of UMs when UM IP Addresses are provided and X Number of PMs when PM IP Addresses are provided
* Non-Distributed Install when --dist-install is not specified
* Storage - Internal
* DBRoot - 1 DBroot per 1 Performance Module
* Local Query is disabled on um/pm install
* MariaDB Replication is enabled
* ssh-keys setup required
### Running quick\_installer\_multi\_server.sh help as root user
```
# /usr/local/mariadb/columnstore/bin/quick_installer_multi_server.sh --help
Usage ./quick_installer_multi_server.sh [OPTION]
Quick Installer for a Multi Server MariaDB ColumnStore Install
Defaults to non-distrubuted install, meaning MariaDB Columnstore
needs to be preinstalled on all nodes in the system
Performace Module (pm) IP addresses are required
User Module (um) IP addresses are option
When only pm IP addresses provided, system is combined setup
When both pm/um IP addresses provided, system is seperate setup
--pm-ip-addresses=xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx
--um-ip-addresses=xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx, optional
--dist-install Use Distributed Install, optional
--system-name=nnnn System Name, optional
```
### Running quick\_installer\_multi\_server.sh for a 1um/1pm non-distributed system as root user
```
# /usr/local/mariadb/columnstore/bin/quick_installer_multi_server.sh --um-ip-addresses=10.128.0.4 --pm-ip-addresses=10.128.0.3
NOTE: Performing a Multi-Server Seperate install with um and pm running on seperate servers
Run post-install script
The next step is:
If installing on a pm1 node:
/usr/local/mariadb/columnstore/bin/postConfigure
If installing on a non-pm1 using the non-distributed option:
/usr/local/mariadb/columnstore/bin/columnstore start
Run postConfigure script
This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.
IMPORTANT: This tool requires to run on the Performance Module #1
With the no-Prompting Option being specified, you will be required to have the following:
1. Root user ssh keys setup between all nodes in the system or
use the password command line option.
2. A Configure File to use to retrieve configure data, default to Columnstore.xml.rpmsave
or use the '-c' option to point to a configuration file.
===== Quick Install Multi-Server Configuration =====
Setup System Module Type Configuration
There are 2 options when configuring the System Module Type: separate and combined
'separate' - User and Performance functionality on separate servers.
'combined' - User and Performance functionality on the same server
Select the type of System Module Install [1=separate, 2=combined] (1) >
Seperate Server Installation will be performed.
NOTE: Local Query Feature allows the ability to query data from a single Performance
Module. Check MariaDB ColumnStore Admin Guide for additional information.
Enable Local Query feature? [y,n] (n) >
NOTE: The MariaDB ColumnStore Schema Sync feature will replicate all of the
schemas and InnoDB tables across the User Module nodes. This feature can be enabled
or disabled, for example, if you wish to configure your own replication post installation.
MariaDB ColumnStore Schema Sync feature is Enabled, do you want to leave enabled? [y,n] (y) >
NOTE: MariaDB ColumnStore Replication Feature is enabled
NOTE: MariaDB ColumnStore Non-Distributed Install Feature is enabled
Enter System Name (columnstore-1) >
Setup Storage Configuration
----- Setup Performance Module DBRoot Data Storage Mount Configuration -----
There are 2 options when configuring the storage: internal or external
'internal' - This is specified when a local disk is used for the DBRoot storage.
High Availability Server Failover is not Supported in this mode
'external' - This is specified when the DBRoot directories are mounted.
High Availability Server Failover is Supported in this mode.
Select the type of Data Storage [1=internal, 2=external] (1) >
===== Setup Memory Configuration =====
NOTE: Setting 'NumBlocksPct' to 70%
Setting 'TotalUmMemory' to 50%
Setup the Module Configuration
----- User Module Configuration -----
Enter number of User Modules [1,1024] (1) >
*** User Module #1 Configuration ***
Enter Nic Interface #1 Host Name (10.128.0.4) >
Enter Nic Interface #1 IP Address of 10.128.0.4 (10.128.0.4) >
----- Performance Module Configuration -----
Enter number of Performance Modules [1,1024] (1) >
*** Parent OAM Module Performance Module #1 Configuration ***
Enter Nic Interface #1 Host Name (10.128.0.3) >
Enter Nic Interface #1 IP Address of 10.128.0.3 (10.128.0.3) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) >
Next step is to enter the password to access the other Servers.
This is either your password or you can default to using a ssh key
If using a password, the password needs to be the same on all Servers.
Enter password, hit 'enter' to default to using a ssh key, or 'exit' >
===== Checking MariaDB ColumnStore System Logging Functionality =====
The MariaDB ColumnStore system logging is setup and working on local server
MariaDB ColumnStore System Configuration and Installation is Completed
===== MariaDB ColumnStore System Startup =====
System Configuration is complete.
Performing System Installation.
----- Starting MariaDB ColumnStore on local server -----
MariaDB ColumnStore successfully started
MariaDB ColumnStore Database Platform Starting, please wait ........ DONE
System Catalog Successfully Created
Run MariaDB ColumnStore Replication Setup.. DONE
MariaDB ColumnStore Install Successfully Completed, System is Active
Enter the following command to define MariaDB ColumnStore Alias Commands
. /etc/profile.d/columnstoreAlias.sh
Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console
Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console
NOTE: The MariaDB ColumnStore Alias Commands are in /etc/profile.d/columnstoreAlias.sh
```
### Running quick\_installer\_multi\_server.sh for a 2 pm combo distributed install system as non-root user
```
# /home/guest/mariadb/columnstore/bin/quick_installer_multi_server.sh --pm-ip-addresses=10.128.0.3,10.128.0.4 --dist-install
NOTE: Performing a Multi-Server Combined install with um/pm running on some server
Run post-install script
The next step is:
If installing on a pm1 node:
/home/guest/mariadb/columnstore/bin/postConfigure
If installing on a non-pm1 using the non-distributed option:
/home/guest/mariadb/columnstore/bin/columnstore start
Run postConfigure script
This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.
IMPORTANT: This tool requires to run on the Performance Module #1
With the no-Prompting Option being specified, you will be required to have the following:
1. Root user ssh keys setup between all nodes in the system or
use the password command line option.
2. A Configure File to use to retrieve configure data, default to Columnstore.xml.rpmsave
or use the '-c' option to point to a configuration file.
===== Quick Install Multi-Server Configuration =====
Setup System Module Type Configuration
There are 2 options when configuring the System Module Type: separate and combined
'separate' - User and Performance functionality on separate servers.
'combined' - User and Performance functionality on the same server
Select the type of System Module Install [1=separate, 2=combined] (2) >
Combined Server Installation will be performed.
The Server will be configured as a Performance Module.
All MariaDB ColumnStore Processes will run on the Performance Modules.
NOTE: The MariaDB ColumnStore Schema Sync feature will replicate all of the
schemas and InnoDB tables across the User Module nodes. This feature can be enabled
or disabled, for example, if you wish to configure your own replication post installation.
MariaDB ColumnStore Schema Sync feature is Enabled, do you want to leave enabled? [y,n] (y) >
NOTE: MariaDB ColumnStore Replication Feature is enabled
Enter System Name (columnstore-1) >
Setup Storage Configuration
----- Setup Performance Module DBRoot Data Storage Mount Configuration -----
There are 2 options when configuring the storage: internal or external
'internal' - This is specified when a local disk is used for the DBRoot storage.
High Availability Server Failover is not Supported in this mode
'external' - This is specified when the DBRoot directories are mounted.
High Availability Server Failover is Supported in this mode.
Select the type of Data Storage [1=internal, 2=external] (1) >
===== Setup Memory Configuration =====
NOTE: Setting 'NumBlocksPct' to 50%
Setting 'TotalUmMemory' to 25%
Setup the Module Configuration
----- Performance Module Configuration -----
Enter number of Performance Modules [1,1024] (2) >
*** Parent OAM Module Performance Module #1 Configuration ***
Enter Nic Interface #1 Host Name (10.128.0.3) >
Enter Nic Interface #1 IP Address of 10.128.0.3 (10.128.0.3) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) >
*** Performance Module #2 Configuration ***
Enter Nic Interface #1 Host Name (10.128.0.4) >
Enter Nic Interface #1 IP Address of 10.128.0.4 (10.128.0.4) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm2' (2) >
===== Running the MariaDB ColumnStore MariaDB Server setup scripts =====
post-mysqld-install Successfully Completed
post-mysql-install Successfully Completed
Next step is to enter the password to access the other Servers.
This is either your password or you can default to using a ssh key
If using a password, the password needs to be the same on all Servers.
Enter password, hit 'enter' to default to using a ssh key, or 'exit' >
===== System Installation =====
System Configuration is complete.
Performing System Installation.
Performing a MariaDB ColumnStore System install using Binary packages
located in the /home/guest directory.
----- Performing Install on 'pm2 / 10.128.0.4' -----
Install log file is located here: /tmp/pm2_binary_install.log
MariaDB ColumnStore Package being installed, please wait ... DONE
===== Checking MariaDB ColumnStore System Logging Functionality =====
The MariaDB ColumnStore system logging is setup and working on local server
===== MariaDB ColumnStore System Startup =====
System Configuration is complete.
Performing System Installation.
----- Starting MariaDB ColumnStore on local server -----
MariaDB ColumnStore successfully started
MariaDB ColumnStore Database Platform Starting, please wait .............. DONE
System Catalog Successfully Created
Run MariaDB ColumnStore Replication Setup.. DONE
MariaDB ColumnStore Install Successfully Completed, System is Active
Enter the following command to define MariaDB ColumnStore Alias Commands
. /etc/profile.d/columnstoreAlias.sh
Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console
Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console
NOTE: The MariaDB ColumnStore Alias Commands are in /etc/profile.d/columnstoreAlias.sh
```
MariaDB ColumnStore Post Install Script, postConfigure
------------------------------------------------------
The following is a transcript of a typical run of the MariaDB ColumnStore configuration script. Plain-text formatting indicates output from the script and bold text indicates responses to questions. After each question there is a short discussion of what the question is asking and what some typical answers might be. You will not see these discussions when running the actual configuration script.
### Common Installation Examples
During postConfigure, there are 2 questions that are asked where the answer given determines the path that postConfigure takes in configuring the system. Those 2 questions are as follows:
```
Select the type of server install [1=single, 2=multi] (2) >
```
and
```
Select the Type of Module Install being performed:
1. Separate - User and Performance functionalities on separate servers
2. Combined - User and Performance functionalities on the same server
Enter Server Type ID [1-2] (1) >
```
The following examples illustrate some common configurations and help to provide answers to the above questions:
* Single Node - User and Performance running on 1 server - single / combined
* Multi-Node #1 - User and Performance running on the same server - multi / combined
* Multi-Node #2 - User and Performance running on separate servers - multi / separate
Post Configuration tool
-----------------------
The Post Configuration tool, postConfigure, performs the system configuration and setup. The servers and storage are configured during this process. It is executed from the designated 'pm1' node.
### postConfigure script
To get additional information, run the following:
```
# /usr/local/mariadb/columnstore/bin/postConfigure -h
This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System based on Operator inputs and
will perform a Package Installation of all of the Modules within the
System that is being configured.
IMPORTANT: This tool should only be run on a Performance Module Server,
preferably Module #1
Instructions:
Press 'enter' to accept a value in (), if available or
Enter one of the options within [], if available, or
Enter a new value
Usage: postConfigure [-h][-c][-u][-p][-s][-port][-i][-n]
-h Help
-c Config File to use to extract configuration data, default is Columnstore.xml.rpmsave
-u Upgrade, Install using the Config File from -c, default to Columnstore.xml.rpmsave
If ssh-keys aren't setup, you should provide passwords as command line arguments
-p Unix Password, used with no-prompting option
-s Single Threaded Remote Install
-port MariaDB ColumnStore Port Address
-i Non-root Install directory, Only use for non-root installs
-n Non-distributed install, meaning it will not install the remote nodes
```
### postConfigure Install options
The postConfigure script supports 2 different types of installs:
* Distributed Install
* Non-Distributed Install
#### Distributed Install
A Distributed Install by postConfigure will interact with all the nodes in the system during the configuration and setup process. It will push copies of the MariaDB ColumnStore packages from the 'pm1' node where postConfigure is running. During upgrades, it will also make sure ColumnStore is stopped and will replace any packages that were previously installed with the new package. Since it pushes the new packages to the other nodes, the packages must be placed in the current home directory of the user doing the install, i.e. /root/ for a root user install.
Distributed Install is the default setting, so no additional command line arguments need to be provided.
#### Non-Distributed Install
A Non-Distributed Install by postConfigure will not interact with the other nodes, apart from a ping test during the configuration section to validate the IP addresses provided. With this option, it is up to the user to install the MariaDB ColumnStore packages on the non-pm1 nodes and start the ColumnStore service before running postConfigure.
A Non-Distributed Install does require an additional command line argument. The "-n" argument is required, as shown in this example:
```
/usr/local/mariadb/columnstore/bin/postConfigure -n
```
### Running postConfigure
Running postConfigure is a bit different when launching as a root user or a non-root user. As a non-root user, you will be required to set up 2 environment variables and provide the base directory where MariaDB ColumnStore resides.
#### Running postConfigure as root user
```
/usr/local/mariadb/columnstore/bin/postConfigure
```
#### Running postConfigure as non-root user
```
export COLUMNSTORE_INSTALL_DIR=/home/guest/mariadb/columnstore
export LD_LIBRARY_PATH=/home/guest/mariadb/columnstore/lib:/home/guest/mariadb/columnstore/mysql/lib/mysql
/home/guest/mariadb/columnstore/bin/postConfigure -i /home/guest/mariadb/columnstore
```
### postConfigure examples
#### Distributed Install example
This is an example of a Distributed Install of a 1UM/2PM system with internal storage. MariaDB ColumnStore RPM packages were installed for this example. No password is provided on the command line, so the root user password is provided when prompted.
```
# /usr/local/mariadb/columnstore/bin/postConfigure
This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.
IMPORTANT: This tool should only be run on the Parent OAM Module
which is a Performance Module, preferred Module #1
Prompting instructions:
Press 'enter' to accept a value in (), if available or
Enter one of the options within [], if available, or
Enter a new value
===== Setup System Server Type Configuration =====
There are 2 options when configuring the System Server Type: single and multi
'single' - Single-Server install is used when there will only be 1 server configured
on the system. It can also be used for production systems, if the plan is
to stay single-server.
'multi' - Multi-Server install is used when you want to configure multiple servers now or
in the future. With Multi-Server install, you can still configure just 1 server
now and add on addition servers/modules in the future.
Select the type of System Server install [1=single, 2=multi] (2) >
===== Setup System Module Type Configuration =====
There are 2 options when configuring the System Module Type: separate and combined
'separate' - User and Performance functionality on separate servers.
'combined' - User and Performance functionality on the same server
Select the type of System Module Install [1=separate, 2=combined] (2) > 1
Separate Server Installation will be performed.
NOTE: Local Query Feature allows the ability to query data from a single Performance
Module. Check MariaDB ColumnStore Admin Guide for additional information.
Enable Local Query feature? [y,n] (n) >
NOTE: The MariaDB ColumnStore Schema Sync feature will replicate all of the
schemas and InnoDB tables across the User Module nodes. This feature can be enabled
or disabled, for example, if you wish to configure your own replication post installation.
MariaDB ColumnStore Schema Sync feature, do you want to enable? [y,n] (y) >
NOTE: MariaDB ColumnStore Replication Feature is enabled
Enter System Name (columnstore-1) > mymcs-1
===== Setup Storage Configuration =====
----- Setup Performance Module DBRoot Data Storage Mount Configuration -----
There are 3 options when configuring the storage: internal or external
'internal' - This is specified when a local disk is used for the DBRoot storage.
High Availability Server Failover is not Supported in this mode
'external' - This is specified when the DBRoot directories are mounted.
High Availability Server Failover is Supported in this mode.
Select the type of Data Storage [1=internal, 2=external] (1) >
===== Setup Memory Configuration =====
NOTE: Setting 'NumBlocksPct' to 70%
Setting 'TotalUmMemory' to 50%
===== Setup the Module Configuration =====
----- User Module Configuration -----
Enter number of User Modules [1,1024] (1) >
*** User Module #1 Configuration ***
Enter Nic Interface #1 Host Name (unassigned) > um1-hostname
Enter Nic Interface #1 IP Address of um1-hostname (0.0.0.0) > 172.30.0.59
Enter Nic Interface #2 Host Name (unassigned) >
----- Performance Module Configuration -----
Enter number of Performance Modules [1,1024] (1) > 2
*** Parent OAM Module Performance Module #1 Configuration ***
Enter Nic Interface #1 Host Name (ip-172-30-0-161.us-west-2.compute.internal) >
Enter Nic Interface #1 IP Address of ip-172-30-0-161.us-west-2.compute.internal (172.30.0.161) >
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) >
*** Performance Module #2 Configuration ***
Enter Nic Interface #1 Host Name (unassigned) > pm2-hostname
Enter Nic Interface #1 IP Address of pm2-hostname (0.0.0.0) > 172.30.0.152
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm2' () > 2
===== System Installation =====
System Configuration is complete.
Performing System Installation.
Performing a MariaDB ColumnStore System install using RPM packages
located in the /root directory.
Next step is to enter the password to access the other Servers.
This is either your password or you can default to using a ssh key
If using a password, the password needs to be the same on all Servers.
Enter password, hit 'enter' to default to using a ssh key, or 'exit' >
Confirm password >
----- Performing Install on 'um1 / um1-hostname' -----
Install log file is located here: /tmp/um1_rpm_install.log
----- Performing Install on 'pm2 / pm2-hostname' -----
Install log file is located here: /tmp/pm2_rpm_install.log
MariaDB ColumnStore Package being installed, please wait ... DONE
===== Checking MariaDB ColumnStore System Logging Functionality =====
The MariaDB ColumnStore system logging is setup and working on local server
===== MariaDB ColumnStore System Startup =====
System Installation is complete. If any part of the install failed,
the problem should be investigated and resolved before continuing.
package installed and the associated service started.
Would you like to startup the MariaDB ColumnStore System? [y,n] (y) >
----- Starting MariaDB ColumnStore on local server -----
MariaDB ColumnStore successfully started
MariaDB ColumnStore Database Platform Starting, please wait ......... DONE
System Catalog Successfully Created
MariaDB ColumnStore Install Successfully Completed, System is Active
Enter the following command to define MariaDB ColumnStore Alias Commands
. /usr/local/mariadb/columnstore/bin/columnstoreAlias
Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console
Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console
#
```
IMPORTANT: If postConfigure fails at any point, you can use the following guides to help troubleshoot any issues. Once an issue has been fixed, you must re-run postConfigure until it completes successfully.
[https://mariadb.com/kb/en/library/system-troubleshooting-mariadb-columnstore/#multi-node-install-problems-and-how-to-diagnose](../system-troubleshooting-mariadb-columnstore/index#multi-node-install-problems-and-how-to-diagnose)
#### Non-Distributed Install example
This is an example of a Non-Distributed Install of a 2PM system with external storage. MariaDB ColumnStore RPM packages were installed for this example. A password of 'ssh' is provided on the command line, which means that ssh-keys are set up and no password prompt will be required.
This example was also done on a system that had the GlusterFS third party package installed, so it shows the storage option for MariaDB ColumnStore Data Replication. If GlusterFS wasn't installed, it would not show up as a storage option.
```
# /usr/local/mariadb/columnstore/bin/postConfigure -n -p ssh
This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.
IMPORTANT: This tool should only be run on the Parent OAM Module
which is a Performance Module, preferred Module #1
Prompting instructions:
Press 'enter' to accept a value in (), if available or
Enter one of the options within [], if available, or
Enter a new value
===== Setup System Server Type Configuration =====
There are 2 options when configuring the System Server Type: single and multi
'single' - Single-Server install is used when there will only be 1 server configured
on the system. It can also be used for production systems, if the plan is
to stay single-server.
'multi' - Multi-Server install is used when you want to configure multiple servers now or
in the future. With Multi-Server install, you can still configure just 1 server
now and add on addition servers/modules in the future.
Select the type of System Server install [1=single, 2=multi] (2) >
===== Setup System Module Type Configuration =====
There are 2 options when configuring the System Module Type: separate and combined
'separate' - User and Performance functionality on separate servers.
'combined' - User and Performance functionality on the same server
Select the type of System Module Install [1=separate, 2=combined] (2) >
Separate Server Installation will be performed.
NOTE: The MariaDB ColumnStore Schema Sync feature will replicate all of the
schemas and InnoDB tables across the User Module nodes. This feature can be enabled
or disabled, for example, if you wish to configure your own replication post installation.
MariaDB ColumnStore Schema Sync feature, do you want to enable? [y,n] (y) >
NOTE: MariaDB ColumnStore Replication Feature is enabled
Enter System Name (columnstore-1) > mymcs-1
===== Setup Storage Configuration =====
----- Setup Performance Module DBRoot Data Storage Mount Configuration -----
There are 3 options when configuring the storage: internal or external
'internal' - This is specified when a local disk is used for the DBRoot storage.
High Availability Server Failover is not Supported in this mode
'external' - This is specified when the DBRoot directories are mounted.
High Availability Server Failover is Supported in this mode.
Select the type of Data Storage [1=internal, 2=external] (1) > 2
===== Setup Memory Configuration =====
NOTE: Setting 'NumBlocksPct' to 70%
Setting 'TotalUmMemory' to 50%
===== Setup the Module Configuration =====
----- Performance Module Configuration -----
Enter number of Performance Modules [1,1024] (1) > 2
*** Parent OAM Module Performance Module #1 Configuration ***
Enter Nic Interface #1 Host Name (ip-172-30-0-161.us-west-2.compute.internal) >
Enter Nic Interface #1 IP Address of ip-172-30-0-161.us-west-2.compute.internal (172.30.0.161) >
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) >
*** Performance Module #2 Configuration ***
Enter Nic Interface #1 Host Name (unassigned) > pm2-hostname
Enter Nic Interface #1 IP Address of pm2-hostname (0.0.0.0) > 172.30.0.152
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm2' () > 2
===== Running the MariaDB ColumnStore MariaDB ColumnStore setup scripts =====
post-mysqld-install Successfully Completed
post-mysql-install Successfully Completed
===== Checking MariaDB ColumnStore System Logging Functionality =====
The MariaDB ColumnStore system logging is setup and working on local server
MariaDB ColumnStore System Configuration and Installation is Completed
===== MariaDB ColumnStore System Startup =====
System Installation is complete. If any part of the install failed,
the problem should be investigated and resolved before continuing.
Non-Distributed Install: make sure all other modules have MariaDB ColumnStore
package installed and the associated service started.
----- Starting MariaDB ColumnStore on local server -----
MariaDB ColumnStore successfully started
MariaDB ColumnStore Database Platform Starting, please wait ......... DONE
System Catalog Successfully Created
MariaDB ColumnStore Install Successfully Completed, System is Active
Enter the following command to define MariaDB ColumnStore Alias Commands
. /usr/local/mariadb/columnstore/bin/columnstoreAlias
Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console
Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console
#
```
IMPORTANT: If postConfigure fails at any point, use the following guide to help troubleshoot the issue. Once the issue has been fixed, you must re-run postConfigure until it completes successfully.
[https://mariadb.com/kb/en/library/system-troubleshooting-mariadb-columnstore/#multi-node-install-problems-and-how-to-diagnose](../system-troubleshooting-mariadb-columnstore/index#multi-node-install-problems-and-how-to-diagnose)
#### Data Replication Install example
This is an example of a Distributed Install of a 1UM/2PM system with Data Replication storage. The root user password is passed in as a command-line argument.
As part of preparing for the install, the GlusterFS third-party package must already have been installed on all of the PM servers in this system. While running postConfigure, you will be prompted for some Data Replication options based on the type of system you want, for example the number of copies of the database to maintain across the PM servers.
This example also enables the Local Query feature.
```
/usr/local/mariadb/columnstore/bin/postConfigure -p 'root-user-password'
This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.
IMPORTANT: This tool should only be run on the Parent OAM Module
which is a Performance Module, preferred Module #1
Prompting instructions:
Press 'enter' to accept a value in (), if available or
Enter one of the options within [], if available, or
Enter a new value
===== Setup System Server Type Configuration =====
There are 2 options when configuring the System Server Type: single and multi
'single' - Single-Server install is used when there will only be 1 server configured
on the system. It can also be used for production systems, if the plan is
to stay single-server.
'multi' - Multi-Server install is used when you want to configure multiple servers now or
in the future. With Multi-Server install, you can still configure just 1 server
now and add on addition servers/modules in the future.
Select the type of System Server install [1=single, 2=multi] (2) >
===== Setup System Module Type Configuration =====
There are 2 options when configuring the System Module Type: separate and combined
'separate' - User and Performance functionality on separate servers.
'combined' - User and Performance functionality on the same server
Select the type of System Module Install [1=separate, 2=combined] (2) > 1
Seperate Server Installation will be performed.
NOTE: Local Query Feature allows the ability to query data from a single Performance
Module. Check MariaDB ColumnStore Admin Guide for additional information.
Enable Local Query feature? [y,n] (n) > y
NOTE: Local Query Feature is enabled
NOTE: MariaDB ColumnStore Replication Feature is enabled
Enter System Name (columnstore-1) > mymcs-1
===== Setup Storage Configuration =====
----- Setup Performance Module DBRoot Data Storage Mount Configuration -----
There are 3 options when configuring the storage: internal, external, or DataRedundancy
'internal' - This is specified when a local disk is used for the DBRoot storage.
High Availability Server Failover is not Supported in this mode
'external' - This is specified when the DBRoot directories are mounted.
High Availability Server Failover is Supported in this mode.
'DataRedundancy' - This is specified when gluster is installed and you want
the DBRoot directories to be controlled by ColumnStore Data Redundancy.
High Availability Server Failover is Supported in this mode.
NOTE: glusterd service must be running and enabled on all PMs.
Select the type of Data Storage [1=internal, 2=external, 3=DataRedundancy] (1) > 3
===== Setup Memory Configuration =====
NOTE: Setting 'NumBlocksPct' to 70%
Setting 'TotalUmMemory' to 50%
===== Setup the Module Configuration =====
----- User Module Configuration -----
Enter number of User Modules [1,1024] (1) >
*** User Module #1 Configuration ***
Enter Nic Interface #1 Host Name (unassigned) > um1-hostname
Enter Nic Interface #1 IP Address of um1-hostname (0.0.0.0) > 172.30.0.59
Enter Nic Interface #2 Host Name (unassigned) >
----- Performance Module Configuration -----
Enter number of Performance Modules [1,1024] (1) > 2
*** Parent OAM Module Performance Module #1 Configuration ***
Enter Nic Interface #1 Host Name (ip-172-30-0-161.us-west-2.compute.internal) >
Enter Nic Interface #1 IP Address of ip-172-30-0-161.us-west-2.compute.internal (172.30.0.161) >
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) >
*** Performance Module #2 Configuration ***
Enter Nic Interface #1 Host Name (unassigned) > pm2-hostname
Enter Nic Interface #1 IP Address of pm2-hostname (0.0.0.0) > 172.30.0.152
Enter Nic Interface #2 Host Name (unassigned) >
Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm2' () > 2
===== System Installation =====
System Configuration is complete.
Performing System Installation.
Performing a MariaDB ColumnStore System install using RPM packages
located in the /root directory.
===== Running the MariaDB ColumnStore MariaDB ColumnStore setup scripts =====
post-mysqld-install Successfully Completed
post-mysql-install Successfully Completed
----- Performing Install on 'um1 / um1-hostname' -----
Install log file is located here: /tmp/um1_rpm_install.log
----- Performing Install on 'pm2 / pm2-hostname' -----
Install log file is located here: /tmp/pm2_rpm_install.log
MariaDB ColumnStore Package being installed, please wait ... DONE
===== Configuring MariaDB ColumnStore Data Redundancy Functionality =====
Only 2 PMs configured. Setting number of copies at 2.
----- Setup Data Redundancy Network Configuration -----
'existing' - This is specified when using previously configured network devices. (NIC Interface #1)
No additional network configuration is required with this option.
'dedicated' - This is specified when it is desired for Data Redundancy traffic to use
a separate network than one previously configured for ColumnStore.
You will be prompted to provide Hostname and IP information for each PM.
Select the data redundancy network [1=existing, 2=dedicated] (1) >
----- Performing Data Redundancy Configuration -----
gluster peer probe 172.30.0.161
gluster peer probe 172.30.0.152
Gluster create and start volume dbroot1...DONE
Gluster create and start volume dbroot2...DONE
----- Data Redundancy Configuration Complete -----
===== Checking MariaDB ColumnStore System Logging Functionality =====
The MariaDB ColumnStore system logging is setup and working on local server
===== MariaDB ColumnStore System Startup =====
System Installation is complete. If any part of the install failed,
the problem should be investigated and resolved before continuing.
Would you like to startup the MariaDB ColumnStore System? [y,n] (y) >
----- Starting MariaDB ColumnStore on local server -----
MariaDB ColumnStore successfully started
MariaDB ColumnStore Database Platform Starting, please wait .......... DONE
System Catalog Successfully Created
MariaDB ColumnStore Install Successfully Completed, System is Active
Enter the following command to define MariaDB ColumnStore Alias Commands
. /usr/local/mariadb/columnstore/bin/columnstoreAlias
Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console
Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console
#
```
IMPORTANT: If postConfigure fails at any point, use the following guide to help troubleshoot the issue. Once the issue has been fixed, you must re-run postConfigure until it completes successfully.
[https://mariadb.com/kb/en/library/system-troubleshooting-mariadb-columnstore/#multi-node-install-problems-and-how-to-diagnose](../system-troubleshooting-mariadb-columnstore/index#multi-node-install-problems-and-how-to-diagnose)
MariaDB Columnstore Memory Configuration
----------------------------------------
During the installation process, postConfigure sets the 2 main memory configuration settings based on the amount of memory detected on the local node.
The 2 settings are in the MariaDB ColumnStore configuration file, /usr/local/mariadb/columnstore/etc/Columnstore.xml. These 2 settings are:
```
'NumBlocksPct'  - Performance Module data cache memory setting
'TotalUmMemory' - User Module memory setting, used as temporary memory for joins
```
On a system that has the Performance Module and User Module functionality combined on the same server, these are the default settings:
```
NumBlocksPct - 50% of total memory
TotalUmMemory - 25% of total memory, with a default maximum of 16G
```
On a system that has the Performance Module and User Module functionality on different servers, these are the default settings:
```
NumBlocksPct - This setting is NOT configured, and the default that the applications will then use is 70%
TotalUmMemory - 50% of total memory
```
You can change these settings after the install is completed, for instance to give joins more memory to work with. On a single-server or combined UM/PM server, it is recommended that these 2 settings combined not exceed 75% of total memory.
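The combined-server defaults above can be sketched as a small calculation. This is illustrative only: the 32 GB total is an assumed example value, and postConfigure performs the detection itself.

```shell
# Sketch of the combined UM/PM defaults, computed from total memory in MB
# (TOTAL_MB is an assumed example; postConfigure detects the real value)
TOTAL_MB=32768
NUMBLOCKS_MB=$(( TOTAL_MB * 50 / 100 ))   # NumBlocksPct: 50% of total memory
UM_MB=$(( TOTAL_MB * 25 / 100 ))          # TotalUmMemory: 25% of total memory
if [ "$UM_MB" -gt 16384 ]; then UM_MB=16384; fi   # capped at 16G by default
echo "NumBlocksPct=${NUMBLOCKS_MB}MB TotalUmMemory=${UM_MB}MB"
```

For a 32 GB server this yields 16 GB of PM data cache and 8 GB of UM join memory, which together stay under the recommended 75% ceiling.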
Preparing for ColumnStore Installation - 1.0.X
==============================================
### Prerequisite
With the GA version of MariaDB ColumnStore, there should be no versions of MariaDB Server or MySQL pre-installed on the OS before a MariaDB ColumnStore binary or RPM package is installed on the system. If you have an existing MariaDB Server installation, uninstall it before proceeding.
#### Configuration preparation
Before installing MariaDB ColumnStore, some preparation is necessary. You will need to determine the following; refer to the MariaDB ColumnStore Architecture document for additional information.
* How many User Modules (UMs) will your system need?
* How many Performance Modules (PMs) will your system need?
* How much disk space will your system need?
##### OS information
MariaDB ColumnStore is certified to run on:

* RHEL/CentOS v6, v7
* Ubuntu 16.04 LTS
* Debian v8
* SUSE 12

It should, however, run on any recent Linux system.
Make sure the same OS is installed on all the servers for a multi-node system.
Make sure the locale settings on all servers are the same.
To set the locale to en\_US with UTF-8 encoding, run:
```
# localedef -i en_US -f UTF-8 en_US.UTF-8
```
##### System administration information
Information your system administrator must provide you before you start installing MariaDB ColumnStore:
* The hostnames of each interface on each node (optional).
* The IP address of each interface on each node.
* The root/non-root password for the nodes (all nodes must have the same root/non-root password or root/non-root ssh keys must be set up between servers). MariaDB ColumnStore can be installed as root or a non-root user.
For example, for a 3 PM / 1 UM system, these are the steps required to configure PM-1 for passwordless ssh. The equivalent steps must be repeated on every PM in the system, and on every UM if the MariaDB ColumnStore Data Replication feature is enabled during the install process.
```
[root@pm-1 ~]$ ssh-keygen
[root@pm-1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm-1
[root@pm-1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm-2
[root@pm-1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm-3
[root@pm-1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um-1
```
##### Network configuration
MariaDB ColumnStore is quite flexible regarding networking. Some options are as follows:
* The interconnect between UMs and PMs can be one or more private VLANs. In this case MariaDB ColumnStore will automatically trunk the individual LANs together to provide greater effective bandwidth between the UMs and PMs.
* The PMs do not require a public LAN access as they only need to communicate with the UMs.
* The UMs most likely require at least one public interface to access the MySQL server front end from the site LAN. This interface can be a separate physical or logical connection from the PM interconnect.
* You can use whatever security your site requires on the public access to the MySQL server front end on the UMs. By default it is listening on port 3306.
* MariaDB ColumnStore software only requires a TCP/IP stack to be present to function. You can use any physical layer you desire.
#### MariaDB ColumnStore port usage
The MariaDB ColumnStore daemon utilizes port 3306.
You must reserve the following ports to run the MariaDB ColumnStore software: 8600 - 8630, 8700, and 8800
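Before installing, it can be worth verifying that none of the reserved ports are already bound. The sketch below uses `ss`; substitute `netstat -tln` on older systems, and note that listing formats vary by distribution.

```shell
# Check that none of the ports ColumnStore requires are already in use
# (uses ss; substitute netstat -tln on older systems)
PORTS="3306 8700 8800 $(seq 8600 8630)"
for p in $PORTS; do
  ss -tln 2>/dev/null | grep -q ":$p " && echo "port $p already in use"
done
echo "checked $(echo $PORTS | wc -w) ports"
```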
#### Storage and Database files (DBRoots)
DBRoots are the MariaDB ColumnStore data file containers, or directories. For example, on a root install they are /usr/local/mariadb/columnstore/data<N>, where N is the DBRoot number.
IMPORTANT: When using storage (extX, NFS, etc.), set up mounts for the MariaDB front-end data and the DBRoot back-end data files. Do not set up a mount where the MariaDB ColumnStore package as a whole is mounted, i.e. /usr/local/mariadb or /usr/local/mariadb/columnstore.
This would include mounts for:
* /usr/local/mariadb/columnstore/mysql/db # optional for the front-end schemas and non-Columnstore data
* /usr/local/mariadb/columnstore/dataX # DBroots, Columnstore data
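The data<N> convention can be expressed as a tiny helper; `dbroot_path` is an illustrative name, and the path prefix assumes the default root install location:

```shell
# Illustrative helper for the default root-install path of DBRoot number N
dbroot_path() { echo "/usr/local/mariadb/columnstore/data$1"; }
dbroot_path 1   # first DBRoot
dbroot_path 2   # second DBRoot
```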
##### Local database files
If you are using local disk to store the database files, the DBRoot directories will be created under the installation directory, for example /usr/local/mariadb/columnstore/data<N>, where N is the DBRoot number. You should configure the system with 1 DBRoot per Performance Module.
Use of soft-links for the data: if you want the data stored in a separate directory because you have a limited amount of local storage, this can be done. It is recommended that the soft-links be set up at the data directory levels, i.e. mariadb/columnstore/data and mariadb/columnstore/dataX. With this setup, you can perform upgrades using any of the package types: rpm, debian, or binary. In the case where you prefer, or have, to set a soft-link at the top directory, such as /usr/local/mariadb, you will need to install using the binary package. If you install using the rpm package and tool, this soft-link will be deleted when you perform the upgrade process and the upgrade will fail.
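A minimal sketch of the recommended data-directory-level soft-link, demonstrated under a scratch prefix so it can run anywhere; on a real install the link would live at /usr/local/mariadb/columnstore/dataX and point at the larger disk (all paths below are illustrative).

```shell
# Relocate a DBRoot via a soft-link at the data-directory level.
# Demonstrated under a scratch prefix; substitute the real install paths.
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/bigdisk/data1" "$PREFIX/columnstore"
ln -s "$PREFIX/bigdisk/data1" "$PREFIX/columnstore/data1"
ls -ld "$PREFIX/columnstore/data1"
```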
##### SAN mounted database files
If you are using a SAN to store the database files, the following must be taken into account:
* Each of these DBRoots must be a separate, mountable partition/directory
* You might have more than 1 DBRoot assigned to a Performance Module, and you can have a different number of DBRoots per Performance Module, but it is recommended to have the same number, and the same physical size of storage device, on each Performance Module. Here is an example: if you set up 1 Performance Module and have 2 separate devices that aren't striped, you would configure 2 DBRoots for this 1 Performance Module and set up /etc/fstab with 2 mounts.
* MariaDB ColumnStore will run on most Linux filesystems, but we test most heavily with EXT2. You should have no problems with EXT3 or EXT4, but the journaling in these filesystems can be expensive for a database application. You should carefully evaluate the write characteristics of your chosen filesystem to make sure they meet your specific business needs. In any event, MariaDB ColumnStore writes relatively few, very large (64MB) files. You should consult with your Linux system administrator to see if configuring a larger bytes-per-inode setting than the default is available in your chosen filesystem.
* MariaDB ColumnStore supports High Availability failover when a Performance Module goes down and you are using SAN storage devices. To support this, all of the SAN devices must be mountable by all of the Performance Modules. For example, in a system with 2 Performance Modules, each with 1 SAN device: if 1 of the Performance Modules went offline, the system would automatically detect this and remount the SAN device from the downed module onto the remaining active Performance Module, and the system would continue to operate while the module remains offline.
* The fstab file (/etc/fstab) must be set up. Entries need to be added to each PM for all the DBRoots used on all PMs. The 'noauto' option indicates that all DBRoots are associated with every PM but are not automatically mounted at server startup. The DBRoots assigned to each PM are mounted on that PM at ColumnStore startup.
The following example shows an /etc/fstab setup with 2 DBRoots in total across all PMs; any disk type can be used:
```
/dev/sda1 /usr/local/mariadb/columnstore/data1 ext2 noatime,nodiratime,noauto 0 0
/dev/sdd1 /usr/local/mariadb/columnstore/data2 ext2 noatime,nodiratime,noauto 0 0
```
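For additional DBRoots, lines of the same shape can be generated. The helper below simply reproduces the format shown above; `fstab_line` is an illustrative name and the device names are examples.

```shell
# Emit an fstab line matching the recommended format for a given
# device and DBRoot number (ext2 with the recommended mount options)
fstab_line() {
  printf '%s /usr/local/mariadb/columnstore/data%s ext2 noatime,nodiratime,noauto 0 0\n' "$1" "$2"
}
fstab_line /dev/sda1 1
fstab_line /dev/sdd1 2
```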
#### Performance optimization considerations
There are optimizations that should be made when using MariaDB ColumnStore; these are listed below. As always, please consult with your network administrator for additional optimization considerations for your specific installation needs.
##### GbE NIC settings:
* Modify /etc/rc.d/rc.local to include the following:
```
/sbin/ifconfig eth0 txqueuelen 10000
```
* Modify /etc/sysctl.conf for the following:
```
# increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
# for 10 GigE, use this
net.core.netdev_max_backlog = 30000
```
* Cache memory settings: To optimize Linux to cache directories and inodes, vm.vfs\_cache\_pressure can be set to a value lower than 100 so the kernel tries to retain inode and directory caches. This helps improve read performance. A value of 10 is suggested. The following commands must all be run as the root user or with sudo.
To check the current value:
```
cat /proc/sys/vm/vfs_cache_pressure
```
To set the current value until the next reboot:
```
sysctl -w vm.vfs_cache_pressure=10
```
To set the value permanently across reboots, add the following to /etc/sysctl.conf:
```
vm.vfs_cache_pressure = 10
```
#### System settings considerations
##### umask setting
The default setting of 022 in /etc/profile is what is recommended. It is required that the setting not end with a 7, such as 077. For example, on a root install, mysqld runs as the 'mysql' user and needs to be able to read the MariaDB ColumnStore configuration file, Columnstore.xml; a last digit of 7 would prevent this and cause the install to fail.
The current umask can be determined:
```
umask
```
A value of 022 can be set in the current session or in /etc/profile as follows:
```
umask 022
```
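The "last digit must not be 7" rule can be sketched as a small check; `bad_umask` is an illustrative helper name, not a ColumnStore tool.

```shell
# Flag umask values whose last digit is 7 (e.g. 077), which prevent the
# 'mysql' user from reading Columnstore.xml and break the install
bad_umask() { case "$1" in *7) return 0;; *) return 1;; esac; }
bad_umask 077 && echo "077 would break the install"
bad_umask 022 || echo "022 is safe"
```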
#### Firewall considerations
MariaDB ColumnStore utilizes ports 3306, 8600 - 8630, 8700, and 8800, so on multi-node installs these ports must be accessible between the servers. If any firewall software running on the system could block these ports, either disable that firewall as shown below, or configure the ports listed above to allow both input and output on all servers covered by the firewall software. You will also want to allow these ports to be passed through on any routers that might be connected between the servers.
To disable any local firewalls and SELinux on multi-node installs, you must be the root user.
CentOS 6 and systems using iptables
```
#service iptables save (saves your existing iptables rules)
#service iptables stop (disables the firewall temporarily)
```
To disable it permanently:
```
#chkconfig iptables off
```
CentOS 7 and systems using systemctl with firewalld installed
```
#systemctl status firewalld
#systemctl stop firewalld
#systemctl disable firewalld
```
Ubuntu and systems using ufw
```
#service ufw stop (disables the firewall temporarily)
```
To disable it permanently:
```
#ufw disable
```
SUSE
```
#/sbin/rcSuSEfirewall2 status
#/sbin/rcSuSEfirewall2 stop
```
To disable SELinux,
```
edit the file "/etc/selinux/config" and find the line:
SELINUX=enforcing
Replace it with:
SELINUX=disabled
```
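The same edit can be made non-interactively with sed. The sketch below runs against a scratch copy so it is safe to try anywhere; on a real system the target file is /etc/selinux/config (edit as root, then reboot for it to take full effect).

```shell
# Non-interactive version of the SELinux edit, demonstrated on a scratch copy;
# on a real system the target is /etc/selinux/config
CONF=$(mktemp)
printf 'SELINUX=enforcing\n' > "$CONF"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CONF"
cat "$CONF"
```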
### Package dependencies
#### Boost libraries
MariaDB ColumnStore requires that the boost libraries, version 1.53 or newer, are installed.
For CentOS 7, Ubuntu 16, Debian 8, SUSE 12, and other newer OSes, you can install the boost packages via yum or apt-get.
```
# yum -y install boost
```
or
```
# apt-get -y install libboost-all-dev
```
For CentOS 6, you can either download and install the MariaDB ColumnStore CentOS 6 boost library package, or download the boost 1.55 source and build it to generate the required libraries. Both the build and the install machines require this.
To download it, go to the binaries download page and download "centos6\_boost\_1\_55.tar.gz":
<https://mariadb.com/downloads/columnstore>
Click All Versions -> 1.0.x -> centos -> x86\_64.
Install the package on each server in the cluster:
```
wget https://downloads.mariadb.com/ColumnStore/1.0.x/centos/x86_64/centos6_boost_1_55.tar.gz
tar xfz centos6_boost_1_55.tar.gz -C /usr/lib
ldconfig
```
Alternatively, you can download and build the boost libraries yourself.
NOTE: This requires that the "Development Tools" group install be done beforehand:
```
yum groupinstall "Development Tools"
yum install cmake
```
Here is the procedure to download and build the boost source:
```
cd /usr/
wget http://sourceforge.net/projects/boost/files/boost/1.55.0/boost_1_55_0.tar.gz
tar zxvf boost_1_55_0.tar.gz
cd boost_1_55_0
./bootstrap.sh --with-libraries=atomic,date_time,exception,filesystem,iostreams,locale,program_options,regex,signals,system,test,thread,timer,log --prefix=/usr
./b2 install
ldconfig
```
For SUSE 12, you will need to install the boost-devel package, which is part of the SLE-SDK package.
```
SUSEConnect -p sle-sdk/12.2/x86_64
zypper install boost-devel
```
#### Other packages
Make sure these packages are installed on the nodes where the MariaDB ColumnStore packages will be installed:
#### Centos 6/7
```
# yum -y install epel-release
# yum -y install expect perl perl-DBI openssl zlib file sudo libaio rsync snappy net-tools perl-DBD-MySQL jemalloc
```
#### Ubuntu 16
```
# apt-get -y install tzdata libtcl8.6 expect perl openssl file sudo libdbi-perl libboost-all-dev libreadline-dev rsync libsnappy1v5 net-tools libdbd-mysql-perl libjemalloc1
```
#### Debian 8
```
# apt-get -y install expect perl openssl file sudo libdbi-perl libboost-all-dev libreadline-dev rsync libsnappy1 net-tools libdbd-mysql-perl libjemalloc1
```
#### SUSE 12
```
zypper addrepo https://download.opensuse.org/repositories/network:cluster/SLE_12_SP3/network:cluster.repo
zypper refresh
zypper install expect perl perl-DBI openssl zlib file sudo libaio rsync boost snappy net-tools perl-DBD-mysql jemalloc
```
#### System Logging Package
MariaDB ColumnStore utilizes the system logging applications for generating logs, so one of the system logging applications below should be installed on all servers in the ColumnStore system:
* syslog
* rsyslog
* syslog-ng
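A rough way to confirm that one of these daemons is present is to look for its process; the process names below are the usual defaults and may differ by distribution, and this check is not ColumnStore-specific.

```shell
# Report whether one of the supported logging daemons appears to be running
check_syslog() {
  ps -e 2>/dev/null | grep -Eq 'rsyslogd|syslog-ng|syslogd' && echo running || echo missing
}
status=$(check_syslog)
echo "syslog daemon: $status"
```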
### Choosing the type of initial download/install
Installing MariaDB ColumnStore with the use of soft-links: if you want the data stored in a separate directory because you have a limited amount of local storage, this can be done. It is recommended that the soft-links be set up at the data directory levels, i.e. mariadb/columnstore/data and mariadb/columnstore/dataX. With this setup, you can perform upgrades using any of the package types: rpm, debian, or binary. In the case where you prefer, or have, to set a soft-link at the top directory, such as /usr/local/mariadb, you will need to install using the binary package. If you install using the rpm package and tool, this soft-link will be deleted when you perform the upgrade process and the upgrade will fail.
IMPORTANT: Make sure no other versions of MariaDB Server are installed; if any are, they will need to be uninstalled before installing MariaDB ColumnStore.
### Root user installs
#### Initial download/install of MariaDB ColumnStore RPMs
1. Install MariaDB ColumnStore as user root (use 'su -' to establish a login shell if you access the box using another account):
Note: MariaDB ColumnStore installation will install with a single MariaDB userid of root with no password. You may setup users and permissions for a MariaDB ColumnStore-Mysql account just as you would in MySQL.
Note: The packages will be installed into /usr/local. This is required for root user installs
**Download the package mariadb-columnstore-release#.x86\_64.tar.gz (RHEL5 64-BIT) to the server where you are installing MariaDB ColumnStore and place in the /root directory.** Unpack the tarball, which will generate multiple RPMs that will reside in the /root/ directory.
`tar -zxf mariadb-columnstore-release#.x86_64.tar.gz`
**Install the RPMs.** The MariaDB ColumnStore software will be installed in /usr/local/.
`rpm -ivh mariadb-columnstore*release#*.rpm`
#### Initial download/install of MariaDB ColumnStore binary package
Install MariaDB ColumnStore as user root on the server designated as PM1. Note: You may set up users and permissions for a MariaDB ColumnStore account just as you would in MariaDB.
**For root user installs, MariaDB Columnstore needs to run in /usr/local. You can either install directly into /usr/local or install elsewhere and then setup a softlink to /usr/local. Here is an example of setting up a soft-link if you install the binary package in /mnt/mariadb**
```
# ln -s /mnt/mariadb /usr/local
```
* Download the package into /root/ on the server where you are installing MariaDB ColumnStore, then copy it to the /usr/local directory:
```
cp /root/mariadb-columnstore-release#.x86_64.bin.tar.gz /usr/local/
```
* Unpack the tarball, which will generate the /usr/local/ directory.
`tar -zxvf mariadb-columnstore-release#.x86_64.bin.tar.gz`
Run the post-install script:
```
/usr/local/mariadb/columnstore/bin/post-install
```
#### Initial download/install of MariaDB ColumnStore DEB package
DEB package installs are not supported in the current version, but there is an Ubuntu 16.04 binary package that you can use to install; just follow the binary package instructions above.
Install MariaDB ColumnStore on a Debian or Ubuntu OS as user root: Note: You may setup users and permissions for an MariaDB ColumnStore account just as you would in MariaDB.
1. Download the package mariadb-columnstore-release#.amd64.deb.tar.gz
(DEB 64- BIT) into the /root directory of the server where you are installing MariaDB ColumnStore.
2. Unpack the tarball, which will generate DEBs.
`tar -zxf mariadb-columnstore-release#.amd64.deb.tar.gz`
3. Install the MariaDB ColumnStore DEBs. The MariaDB ColumnStore software will be installed in /usr/local/.
`dpkg -i mariadb-columnstore*release#*.deb`
### Non-root user installs
MariaDB Columnstore can be installed to run as a non-root user using the binary tar file installation. These procedures will also allow you to change the installation from the default install directory into a user-specified directory. These procedures will need to be run on all the MariaDB ColumnStore Servers.
For the purpose of these instructions, the following assumptions are:
* Non-root user "guest" is used in this example
* Installation directory is /home/guest/mariadb/columnstore
Tasks involved:
* Create the non-root user and group of the same name (by root user)
* Update sudo configuration (by root user)
* Set the user file limits (by root user)
* Modify fstab if using SAN Mounted files (by root user)
* Uninstall existing MariaDB Columnstore installation if needed (by root user)
* Update permissions on certain directories that MariaDB Columnstore writes (by root user)
* Set up defaults file
* MariaDB Columnstore Installation (by non-root user)
* Enable MariaDB Columnstore to start automatically at boot time
#### Creation of the non-root user (by root user)
Before beginning the binary tar file installation, you will need your system administrator to set up accounts for you on every MariaDB ColumnStore node. The account name must be the same on every node, and the password used must be the same on every node. If you subsequently change the password on one node, you must change it on every node. The user ID must be the same on every node as well. In the examples below we will use the account name 'guest' and the password 'mariadb'. Additionally, every node must have a basic Linux server package setup and have expect (and all its dependencies) installed.
* Create the new user. The user ID (1000 in this example) can be different, but needs to be the same on all servers in the cluster:
`adduser guest -u 1000`
* create group
```
addgroup guest
usermod -g guest guest
```
The value for user-id must be the same for all nodes.
* Assign password to newly created user
`passwd guest`
* Log in as user guest
`su - guest`
* Choose an installation directory in which the non-root user has full read-write access. The installation directory must be the same on every node. In the examples below we will use the path '/home/guest/mariadb/columnstore'.
On each host, the install process will update $HOME/.bashrc with the following:
```
export COLUMNSTORE_INSTALL_DIR=$HOME/mariadb/columnstore
export PATH=$COLUMNSTORE_INSTALL_DIR/bin:$COLUMNSTORE_INSTALL_DIR/mysql/bin:/usr/sbin:$PATH
export LD_LIBRARY_PATH=$COLUMNSTORE_INSTALL_DIR/lib:$COLUMNSTORE_INSTALL_DIR/mysql/lib/mysql
```
Note that these commands must be available to non-interactive shells. Once changes have been made, verify by running 'ssh user@host env' to ensure these values are displayed.
You must log off and log back in for these environment variables to be effective.
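As a sanity check, the same environment block can be generated for any target directory. The `emit_columnstore_env` helper below is a hypothetical sketch, not part of the ColumnStore tooling:

```shell
# Hypothetical helper: print the environment block that post-install
# appends to $HOME/.bashrc, for a given install directory.
emit_columnstore_env() {
  local dir="$1"
  printf 'export COLUMNSTORE_INSTALL_DIR=%s\n' "$dir"
  printf 'export PATH=%s/bin:%s/mysql/bin:/usr/sbin:$PATH\n' "$dir" "$dir"
  printf 'export LD_LIBRARY_PATH=%s/lib:%s/mysql/lib/mysql\n' "$dir" "$dir"
}

emit_columnstore_env /home/guest/mariadb/columnstore
```

Comparing this output against the non-interactive `ssh user@host env` check above is a quick way to catch a node whose .bashrc was not updated.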
#### Update sudo configuration (by root user)
The sudo configuration file on each node needs to be modified to add the non-root user. The recommended way is to use the visudo command. The following example adds the 'guest' user. Run visudo, then:
* Add the following line for the non-root user:
```
guest ALL=(ALL) NOPASSWD: ALL
```
* Comment out the following line, which will allow the user to log in without 'tty':
```
#Defaults requiretty
```
#### Set the user file limits (by root user)
ColumnStore needs the open file limit to be increased for the specified user. To do this, edit the /etc/security/limits.conf file and make the following additions at the end of the file:
```
guest hard nofile 65536
guest soft nofile 65536
```
If you are already logged in as 'guest' you will need to log out and back in again for this change to take effect.
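After logging back in, the new limit can be verified from the shell. A minimal sketch, where `check_nofile_limit` is a hypothetical helper:

```shell
# Hypothetical check: is the current soft open-file limit at least
# the required minimum?
check_nofile_limit() {
  local want="$1" have
  have=$(ulimit -Sn)
  [ "$have" -ge "$want" ] 2>/dev/null
}

check_nofile_limit 65536 && echo "open file limit OK" \
  || echo "open file limit too low: $(ulimit -Sn)"
```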
#### Modify fstab if using SAN Mounted Database Files (by root user)
If you are using a SAN to store the database files, a 'users' option will need to be added to the fstab entries (by the root user). For more information, please see the "SAN Mounted Database Files" section earlier in this guide.
Example entries:
/dev/sda1 /home/guest/mariadb/columnstore/data1 ext2 noatime,nodiratime,noauto,users 0 0
/dev/sdd1 /home/guest/mariadb/columnstore/data2 ext2 noatime,nodiratime,noauto,users 0 0
The disk device being used will need to have its user permissions set to the non-root user name. The following example, run as the root user, sets the ownership of dbroot /dev/sda1 to the non-root user 'guest':
```
mke2fs /dev/sda1
mount /dev/sda1 /tmpdir
chown -R guest:guest /tmpdir
umount /tmpdir
```
#### Uninstall existing MariaDB Columnstore installation, if needed (by root user)
If MariaDB Columnstore has ever before been installed on any of the planned hosts as a root user install, you must have the system administrator verify that no remnants of that installation exist. The non-root installation will not be successful if there are MariaDB Columnstore files owned by root on any of the hosts.
* Verify the MariaDB Columnstore installation directory does not exist:
The /usr/local/mariadb/columnstore directory should not exist at all unless it is your target directory, in which case it must be completely empty and owned by the non-root user.
* Verify the /etc/fstab entries are correct for the new installation.
* Verify the /etc/default/columnstore directory does not exist.
* Verify the /var/lock/subsys/mysql-Columnstore file does not exist.
* Verify the /tmp/StopColumnstore file does not exist.
* Verify there are no files or directories owned by root in the /tmp directory.
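The checks above can be scripted. A minimal pre-flight sketch, assuming the default paths listed above (`check_no_remnants` is a hypothetical helper):

```shell
# Hypothetical pre-flight check: fail if any of the given leftover
# paths from a previous root install still exist.
check_no_remnants() {
  local p
  for p in "$@"; do
    if [ -e "$p" ]; then
      echo "found leftover: $p"
      return 1
    fi
  done
  return 0
}

# Paths from the verification list above:
check_no_remnants /etc/default/columnstore \
                  /var/lock/subsys/mysql-Columnstore \
                  /tmp/StopColumnstore \
  || echo "previous install remnants present"
```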
#### Update permissions on certain directories that MariaDB Columnstore writes (by root user)
These directories are written to by the MariaDB ColumnStore applications, and their permissions need to allow file creation. Set them as follows as the root user:
```
chmod 777 /tmp
chmod 777 /dev/shm
```
#### MariaDB Columnstore installation (by non-root user)
You should be familiar with the general MariaDB ColumnStore installation instructions in this guide, as you will be asked the same questions during installation.
* Log in as the non-root user ('guest' in our example). Note: ensure you are in your home directory before proceeding to the next step.
* Now place the MariaDB Columnstore binary tar file in your home directory on the host you will be using as PM1. Untar the binary distribution package to the /home/guest directory: tar -xf mariadb-columnstore-release#.x86\_64.bin.tar.gz
* Run post installation:
```
./mariadb/columnstore/bin/post-install --installdir=$HOME/mariadb/columnstore
```
* Run the three commands output by the previous post-install command; they will look like the following. See the "MariaDB ColumnStore Configuration" section in this guide for more information:
```
export COLUMNSTORE_INSTALL_DIR=/home/guest/mariadb/columnstore
export LD_LIBRARY_PATH=/home/guest/mariadb/columnstore/lib:/home/guest/mariadb/columnstore/mysql/lib/mysql
/home/guest/mariadb/columnstore/bin/postConfigure -i /home/guest/mariadb/columnstore
```
a. When prompted for the package type, enter 'binary': Enter the Package Type being installed to other servers [rpm,deb,binary] (rpm) > binary
b. When prompted for a password, enter the non-root user account password, or just hit enter if you have set up the non-root user with password-less ssh keys on all nodes. (Please see the "System Administration Information" section earlier in this guide for more information on ssh keys.)
#### Post-installation (by root user)
Optional items to assist in MariaDB Columnstore auto-start and logging:
* To configure MariaDB ColumnStore to start automatically at boot time, perform the following steps on each MariaDB ColumnStore server:
* Add the following to the /etc/rc.local or /etc/rc.d/rc.local (CentOS 7) file:
su - guest -l -c "/home/guest/mariadb/columnstore/bin/columnstore start"
or
sudo runuser -l mariadb-user -c "/home/mariadb-user/mariadb/columnstore/bin/columnstore start"
Note: Make sure the above entry is added to the rc.local file that gets executed at boot time. Depending on the OS installation, rc.local could be in a different location.
* MariaDB ColumnStore will set up and log using your current system logging application in the directory /var/log/mariadb/columnstore. Perform the following if you want the MariaDB ColumnStore logs archived daily and deleted after 7 days (the default setting):
* cp /home/guest/mariadb/columnstore/bin/columnstoreLogRotate /etc/logrotate.d/columnstore (note that the file is renamed to 'columnstore')
### ColumnStore Cluster Test Tool
This tool can be run before performing a single-server or multi-node installation. It verifies the setup of all servers that are going to be used in the ColumnStore system.
[https://mariadb.com/kb/en/mariadb/mariadb-columnstore-cluster-test-tool](../../mariadb/mariadb-columnstore-cluster-test-tool)
The next step is to run the install script postConfigure; see the Single Server or Multi-Server Install guide.
### ColumnStore Configuration and Installation Tool
The MariaDB ColumnStore System Configuration and Installation tool, 'postConfigure', configures the MariaDB ColumnStore system and performs a package installation on all of the servers within the system being configured. It prompts the user for configuration information such as server, storage, and system features, and updates the MariaDB ColumnStore system configuration file, Columnstore.xml. It also executes MariaDB Server setup scripts on the servers where User Module functionality will reside. At the end, it starts up the ColumnStore system.
NOTE: This tool is always run on the Performance Module #1.
Example uses of this script are shown in the Single and Multi Server Installations Guides.
```
# /usr/local/mariadb/columnstore/bin/postConfigure -h
This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System based on Operator inputs and
will perform a Package Installation of all of the Modules within the
System that is being configured.
IMPORTANT: This tool should only be run on a Performance Module Server,
preferably Module #1
Instructions:
Press 'enter' to accept a value in (), if available or
Enter one of the options within [], if available, or
Enter a new value
Usage: postConfigure [-h][-c][-u][-p][-s][-port][-i][-n]
-h Help
-c Config File to use to extract configuration data, default is Columnstore.xml.rpmsave
-u Upgrade, Install using the Config File from -c, default to Columnstore.xml.rpmsave
If ssh-keys aren't setup, you should provide passwords as command line arguments
-p Unix Password, used with no-prompting option
-s Single Threaded Remote Install
-port MariaDB ColumnStore Port Address
-i Non-root Install directory, Only use for non-root installs
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Backup and Restore for MariaDB ColumnStore 1.1.0 onwards Backup and Restore for MariaDB ColumnStore 1.1.0 onwards
========================================================
Backup and Restore package
==========================
The Backup and Restore is part of the MariaDB ColumnStore Tools package. It can be downloaded from:
<https://mariadb.com/downloads/mariadb-ax/tools-ax>
Installing MariaDB ColumnStore Tools package
============================================
The package is available as rpm, deb and binary. Follow the instructions to install the associated package:
RPM
---
```
rpm -ivh mariadb-columnstore-tools-x.x.x-x.rpm
```
DEB
---
```
dpkg -i mariadb-columnstore-tools-x.x.x-x.deb
```
BINARY
------
```
tar zxvf mariadb-columnstore-tools-x.x.x-x.tar.gz
```
Backup Overview
===============
The high level steps involved in performing a full backup of MariaDB ColumnStore are:
* Suspend write activity on the system.
* Backup the MariaDB Server data files.
* Backup the ColumnStore data files.
* Resume write activity on the system.
columnstoreBackup
=================
In MariaDB ColumnStore 1.1.0, a tool, columnstoreBackup, is available to automate backup and restore across the MariaDB ColumnStore nodes.
Note: the columnstoreBackup tool is only for ColumnStore data backups. Other engines may not be fully backed up, and data could be lost when restoring.
### Backup Setup
To run columnstoreBackup you'll need to set up a backup server with passwordless ssh login available for the user account that installed MariaDB ColumnStore (default: root). It will need passwordless ssh login to all MariaDB ColumnStore modules.
Copy the executable [columnstoreBackup](https://mariadb.com/downloads/mariadb-ax/tools-ax) onto the backup server. Create a target directory on the backup server to store the files. This directory will need to have enough space to store all ColumnStore data files. Example:
```
Backup Executable: /home/user/columnstoreBackup
Backup Data Directory: /home/user/columnstoreBackupData/
```
There is an optional columnstoreBackup.config file that, when placed in the same directory as the columnstoreBackup executable, allows you to configure incremental backups using the rsync link-dest option. These are stored in backup.1 through backup.[n-1], from newest to oldest. The columnstoreBackup.config file should contain only a single line:
```
NUMBER_BACKUPS=[n]
```
Where "n" is the number of incremental backups to store. (Default: 3)
### Running columnstoreBackup
columnstoreBackup must be run as the root user, either by logging in as root or via the sudo command.
```
Usage: [sudo] ./columnstoreBackup [options] activeParentOAM backupServerLocation
activeParentOAM IP address of ColumnStore server
(Active parent OAM module on multi-node install)
backupServerLocation Path to the directory for storing backup files.
OPTIONS:
-h,--help Prints help and exits.
-v,--verbose Print more verbose execution details.
-d,--dry-run Dry run and executes rsync dry run with stats.
-z,--compress Utilize the compression option for rsync.
-n [value] Maximum number parallel rsync commands. (Default: 5)
--user=[user] Change the user performing remote sessions. (Default: root)
--install-dir=[PATH] Change the install directory of ColumnStore.
Default: /usr/local/mariadb/columnstore
```
Example:
```
Running from the directory /home/user/:
sudo ./columnstoreBackup -zv 192.168.1.2 /home/user/columnstoreBackupData
```
This will execute a backup of the system whose parent OAM module is located at 192.168.1.2 and store all backup files in the directory /home/user/columnstoreBackupData. Option -v prints more verbose logging of the commands executed, and option -z lets rsync use compression for file transfers.
### Backup Logging
Logging is output to the console as well as to a columnstoreBackup.log file located in the directory from which columnstoreBackup is executed. The log contains extra details on some issues. Log rotation is left to the user.
### Backup Return Codes
```
0 - success
1 - command line parameter or config file issue detected
2 - missing rsync or xmllint
3 - detected issue with disk space
4 - detected bad configuration file settings
5 - rsync command failed with an error
255 - could not connect via passwordless ssh
```
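Scripts that drive columnstoreBackup can branch on these codes. A small sketch that maps an exit code to the message in the table above (`describe_backup_rc` is a hypothetical helper, not part of the tool):

```shell
# Hypothetical helper: translate a columnstoreBackup exit code into
# the corresponding message from the return-code table.
describe_backup_rc() {
  case "$1" in
    0)   echo "success" ;;
    1)   echo "command line parameter or config file issue detected" ;;
    2)   echo "missing rsync or xmllint" ;;
    3)   echo "detected issue with disk space" ;;
    4)   echo "detected bad configuration file settings" ;;
    5)   echo "rsync command failed with an error" ;;
    255) echo "could not connect via passwordless ssh" ;;
    *)   echo "unknown exit code ($1)" ;;
  esac
}

# Example (not executed here):
#   sudo ./columnstoreBackup -zv 192.168.1.2 /home/user/columnstoreBackupData
#   echo "backup finished: $(describe_backup_rc $?)"
```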
### Backup Operation Notes
columnstoreBackup will create the following directories inside the Backup Data Directory:
```
backup.[1-n] (n incremental backups)
cnf (my.cnf and my.cnf.d)
pm[moduleID]dbroot[DBRootID] (pm1dbroot1 contains PM data from dbroot 1 on pm 1)
um[moduleID] (NOTE: When UM/PM are combined on nodes UM1 is the mysql/db directory for PM1)
```
These directories are created if they do not exist and can be created prior to execution by the user.
The columnstoreBackup option -n [value] limits the number of parallel rsync commands executed at a given time. The default of 5 means up to 5 DBRoots will kick off rsync commands to the various PMs, and the backup will wait until all are complete and verified successful before kicking off another 5 DBRoots. The progress indicator reflects the percentage of total completion, not individual rsync commands. The value can be set higher via -n, but if the number of DBRoots in the system is large enough, there may be a performance hit on system processing or network bandwidth.
columnstoreRestore
==================
The tool is designed to be run on the system storing the backups. This will automate restoring from backups created by the columnstoreBackup tool.
### Restore Setup
To run columnstoreRestore you'll need to set up a backup server with passwordless ssh login available for the user account that installed MariaDB ColumnStore (default: root).
columnstoreRestore must be run as root or with sudo.
columnstoreRestore expects MariaDB ColumnStore to be shut down and in a fresh install state.
Take the following steps to prepare system for columnstoreRestore:
* On the active parent OAM module execute the command
```
mcsadmin shutdownsystem y
```
* Run on all PM modules:
```
rm -rf [INSTALL_DIR]/data*/000.dir
rm -rf [INSTALL_DIR]/data1/systemFiles/dbrm/*
```
* Run on all UM or combo PM front-end nodes
```
cd [INSTALL_DIR]/mysql/db
delete all directories except:
calpontsys
infinidb_querystats
infinidb_vtable
mysql
performance_schema
test
```
* On the active parent OAM module execute the command
```
[INSTALL_DIR]/bin/clearShm
```
* On the backup system run columnstoreRestore script
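The front-end cleanup step above is easy to get wrong, so a dry run helps. The sketch below only lists what would be deleted rather than deleting it; `list_dirs_to_delete` is a hypothetical helper, and the keep list is the set of directories named in the steps above:

```shell
# Hypothetical helper: list schema directories under mysql/db that are
# NOT in the keep list from the restore preparation steps.
list_dirs_to_delete() {
  local dbdir="$1" d name
  for d in "$dbdir"/*/; do
    [ -d "$d" ] || continue
    name=$(basename "$d")
    case "$name" in
      calpontsys|infinidb_querystats|infinidb_vtable|mysql|performance_schema|test) ;;
      *) printf '%s\n' "$name" ;;
    esac
  done
}

# Review the output before removing anything, e.g.:
#   list_dirs_to_delete /usr/local/mariadb/columnstore/mysql/db
```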
### Running columnstoreRestore
columnstoreRestore must be run as the root user, either by logging in as root or via the sudo command.
```
Usage: ./columnstoreRestore [options] backupServerLocation restoreServerPM1
restoreServerPM1 IP address of ColumnStore server
(Assumes PM1 = Active Parent OAM Module)
backupServerLocation Path to the directory for storing backup files.
OPTIONS:
-h,--help Print this message and exit.
-v,--verbose Print more verbose execution details.
-d,--dry-run Dry run and executes rsync dry run with stats.
-z,--compress Utilize the compression option for rsync.
-n [value] Maximum number parallel rsync commands. (Default: 5).
--user=[user] Change the user performing remote sessions. (Default: root)
--install-dir=[PATH] Change the install directory of ColumnStore.
Default: /usr/local/mariadb/columnstore
```
EXAMPLE: Running from the directory /home/user/ with the columnstoreBackupData directory created in the columnstoreBackup example above:
```
sudo ./columnstoreRestore -zv /home/user/columnstoreBackupData 192.168.1.100
```
This will execute a restore of the MariaDB ColumnStore system whose parent OAM module is located at 192.168.1.100, from the directory /home/user/columnstoreBackupData. Option -v prints more verbose logging of the commands executed, and option -z lets rsync use compression for file transfers.
### Restore Logging
Logging is output to the console as well as to a columnstoreRestore.log file located in the directory from which columnstoreRestore is executed. The log contains extra details on some issues. Log rotation is left to the user.
### Restore Return Codes
```
0 - success
1 - command line parameter or config file issue detected
2 - missing rsync or xmllint
3 - detected issue with disk space
4 - detected bad configuration file settings
5 - rsync command failed with an error
255 - could not connect via passwordless ssh
```
### Restore Operation Notes
columnstoreRestore will create a restoreConfig directory inside the backupServerLocation defined on the command line. This stores a copy of the restored system's version and configuration file, to verify that the restore is possible.
The columnstoreRestore option -n [value] limits the number of parallel rsync commands executed at a given time. The default of 5 means up to 5 DBRoots will kick off rsync commands to the various PMs, and the restore will wait until all are complete and verified successful before kicking off another 5 DBRoots. The progress indicator reflects the percentage of total completion, not individual rsync commands. The value can be set higher via -n, but if the number of DBRoots in the system is large enough, there may be a performance hit on system processing or network bandwidth.
mariadb libMariaDB libMariaDB
===========
| Title | Description |
| --- | --- |
mariadb Authentication with Pluggable Authentication Modules (PAM) Authentication with Pluggable Authentication Modules (PAM)
===========================================================
| Title | Description |
| --- | --- |
| [Authentication Plugin - PAM](../authentication-plugin-pam/index) | Uses the Pluggable Authentication Module (PAM) framework to authenticate MariaDB users. |
| [User and Group Mapping with PAM](../user-and-group-mapping-with-pam/index) | Configure PAM to map a given PAM user or group to a different MariaDB user. |
| [Configuring PAM Authentication and User Mapping with Unix Authentication](../configuring-pam-authentication-and-user-mapping-with-unix-authentication/index) | Walkthrough configuration of PAM authentication and user mapping with Unix authentication. |
| [Configuring PAM Authentication and User Mapping with LDAP Authentication](../configuring-pam-authentication-and-user-mapping-with-ldap-authentication/index) | Configuring PAM authentication and user mapping with LDAP authentication. |
mariadb GeomCollFromWKB GeomCollFromWKB
===============
A synonym for [ST\_GeomCollFromWKB](../st_geomcollfromwkb/index).
mariadb Compound (Composite) Indexes Compound (Composite) Indexes
============================
A mini-lesson in "compound indexes" ("composite indexes")
---------------------------------------------------------
This document starts out trivial and perhaps boring, but builds up to more interesting information, perhaps things you did not realize about how MariaDB and MySQL indexing works.
This also explains [EXPLAIN](../explain/index) (to some extent).
(Most of this applies to non-MySQL brands of databases, too.)
The query to discuss
--------------------
The question is "When was Andrew Johnson president of the US?".
The available table `Presidents` looks like:
```
+-----+------------+----------------+-----------+
| seq | last_name | first_name | term |
+-----+------------+----------------+-----------+
| 1 | Washington | George | 1789-1797 |
| 2 | Adams | John | 1797-1801 |
...
| 7 | Jackson | Andrew | 1829-1837 |
...
| 17 | Johnson | Andrew | 1865-1869 |
...
| 36 | Johnson | Lyndon B. | 1963-1969 |
...
```
("Andrew Johnson" was picked for this lesson because of the duplicates.)
What index(es) would be best for that question? More specifically, what would be best for
```
SELECT term
FROM Presidents
WHERE last_name = 'Johnson'
AND first_name = 'Andrew';
```
Some INDEXes to try...
* No indexes
* INDEX(first\_name), INDEX(last\_name) (two separate indexes)
* "Index Merge Intersect"
* INDEX(last\_name, first\_name) (a "compound" index)
* INDEX(last\_name, first\_name, term) (a "covering" index)
* Variants
No indexes
----------
Well, I am fudging a little here. I have a PRIMARY KEY on `seq`, but that has no advantage on the query we are studying.
```
mysql> SHOW CREATE TABLE Presidents \G
CREATE TABLE `presidents` (
`seq` tinyint(3) unsigned NOT NULL AUTO_INCREMENT,
`last_name` varchar(30) NOT NULL,
`first_name` varchar(30) NOT NULL,
`term` varchar(9) NOT NULL,
PRIMARY KEY (`seq`)
) ENGINE=InnoDB AUTO_INCREMENT=45 DEFAULT CHARSET=utf8
mysql> EXPLAIN SELECT term
FROM Presidents
WHERE last_name = 'Johnson'
AND first_name = 'Andrew';
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
| 1 | SIMPLE | Presidents | ALL | NULL | NULL | NULL | NULL | 44 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+------+-------------+
# Or, using the other form of display: EXPLAIN ... \G
id: 1
select_type: SIMPLE
table: Presidents
type: ALL <-- Implies table scan
possible_keys: NULL
key: NULL <-- Implies that no index is useful, hence table scan
key_len: NULL
ref: NULL
rows: 44 <-- That's about how many rows in the table, so table scan
Extra: Using where
```
Implementation details
----------------------
First, let's describe how InnoDB stores and uses indexes.
* The data and the PRIMARY KEY are "clustered" together in one BTree.
* A BTree lookup is quite fast and efficient. For a million-row table there might be 3 levels of BTree, and the top two levels are probably cached.
* Each secondary index is in another BTree, with the PRIMARY KEY at the leaf.
* Fetching 'consecutive' (according to the index) items from a BTree is very efficient because they are stored consecutively.
* For the sake of simplicity, we can count each BTree lookup as 1 unit of work, and ignore scans for consecutive items. This approximates the number of disk hits for a large table in a busy system.
For MyISAM, the PRIMARY KEY is not stored with the data, so think of it as being a secondary key (over-simplified).
INDEX(first\_name), INDEX(last\_name)
-------------------------------------
The novice, once he learns about indexing, decides to index lots of columns, one at a time. But...
MySQL rarely uses more than one index at a time in a query. So, it will analyze the possible indexes.
* first\_name -- there are 2 possible rows (one BTree lookup, then scan consecutively)
* last\_name -- there are 2 possible rows

Let's say it picks last\_name. Here are the steps for doing the SELECT:

1. Using INDEX(last\_name), find 2 index entries with last\_name = 'Johnson'.
2. Get the PRIMARY KEY (implicitly added to each secondary index in InnoDB); get (17, 36).
3. Reach into the data using seq = (17, 36) to get the rows for Andrew Johnson and Lyndon B. Johnson.
4. Use the rest of the WHERE clause to filter out all but the desired row.
5. Deliver the answer (1865-1869).
```
mysql> EXPLAIN SELECT term
FROM Presidents
WHERE last_name = 'Johnson'
AND first_name = 'Andrew' \G
select_type: SIMPLE
table: Presidents
type: ref
possible_keys: last_name, first_name
key: last_name
key_len: 92 <-- VARCHAR(30) utf8 may need 2+3*30 bytes
ref: const
rows: 2 <-- Two 'Johnson's
Extra: Using where
```
"Index Merge Intersect"
-----------------------
OK, so you get really smart and decide that MySQL should be smart enough to use both name indexes to get the answer. This is called "Intersect".

1. Using INDEX(last\_name), find 2 index entries with last\_name = 'Johnson'; get (17, 36).
2. Using INDEX(first\_name), find 2 index entries with first\_name = 'Andrew'; get (7, 17).
3. "And" the two lists together: (17,36) & (7,17) = (17).
4. Reach into the data using seq = (17) to get the row for Andrew Johnson.
5. Deliver the answer (1865-1869).
```
id: 1
select_type: SIMPLE
table: Presidents
type: index_merge
possible_keys: first_name,last_name
key: first_name,last_name
key_len: 92,92
ref: NULL
rows: 1
Extra: Using intersect(first_name,last_name); Using where
```
The EXPLAIN fails to give the gory details of how many rows collected from each index, etc.
INDEX(last\_name, first\_name)
------------------------------
This is called a "compound" or "composite" index since it has more than one column. 1. Drill down the BTree for the index to get to exactly the index row for Johnson+Andrew; get seq = (17). 2. Reach into the data using seq = (17) to get the row for Andrew Johnson. 3. Deliver the answer (1865-1869). This is much better. In fact this is usually the "best".
```
ALTER TABLE Presidents
(drop old indexes and...)
ADD INDEX compound(last_name, first_name);
id: 1
select_type: SIMPLE
table: Presidents
type: ref
possible_keys: compound
key: compound
key_len: 184 <-- The length of both fields
ref: const,const <-- The WHERE clause gave constants for both
rows: 1 <-- Goodie! It homed in on the one row.
Extra: Using where
```
"Covering": INDEX(last\_name, first\_name, term)
------------------------------------------------
Surprise! We can actually do a little better. A "Covering" index is one in which \_all\_ of the fields of the SELECT are found in the index. It has the added bonus of not having to reach into the "data" to finish the task.

1. Drill down the BTree for the index to get to exactly the index row for Johnson+Andrew; get seq = (17).
2. Deliver the answer (1865-1869).

The "data" BTree is not touched; this is an improvement over "compound".
```
... ADD INDEX covering(last_name, first_name, term);
id: 1
select_type: SIMPLE
table: Presidents
type: ref
possible_keys: covering
key: covering
key_len: 184
ref: const,const
rows: 1
Extra: Using where; Using index <-- Note
```
Everything is similar to using "compound", except for the addition of "Using index".
Variants
--------
* What would happen if you shuffled the fields in the WHERE clause? Answer: The order of ANDed things does not matter.
* What would happen if you shuffled the fields in the INDEX? Answer: It may make a huge difference. More in a minute.
* What if there are extra fields on the end? Answer: Minimal harm; possibly a lot of good (eg, 'covering').
* Redundancy? That is, what if you have both of these: INDEX(a), INDEX(a,b)? Answer: Redundancy costs something on INSERTs; it is rarely useful for SELECTs.
* Prefix? That is, INDEX(last\_name(5), first\_name(5)) Answer: Don't bother; it rarely helps, and often hurts. (The details are another topic.)
More examples:
--------------
```
INDEX(last, first)
... WHERE last = '...' -- good (even though `first` is unused)
... WHERE first = '...' -- index is useless
INDEX(first, last), INDEX(last, first)
... WHERE first = '...' -- 1st index is used
... WHERE last = '...' -- 2nd index is used
... WHERE first = '...' AND last = '...' -- either could be used equally well
INDEX(last, first)
Both of these are handled by that one INDEX:
... WHERE last = '...'
... WHERE last = '...' AND first = '...'
INDEX(last), INDEX(last, first)
In light of the above example, don't bother including INDEX(last).
```
Postlog
-------
Refreshed -- Oct, 2012; more links -- Nov 2016
See also
--------
* [Cookbook on designing the best index for a SELECT](http://mysql.rjweb.org/doc.php/index_cookbook_mysql)
* [Sheeri's discussing of Indexes](http://technocation.org/files/doc/2013_02_MySQLindexes.pdf)
* [Slides on EXPLAIN](http://www.slideshare.net/phpcodemonkey/mysql-explain-explained)
* [Mysql manual page on range accesses in composite indexes](http://dev.mysql.com/doc/refman/5.7/en/range-optimization.html#range-access-multi-part)
* [Overhead of Composite Indexes](http://stackoverflow.com/questions/32418812/overhead-of-composite-indexes)
* [Size and other limits on Indexes](http://mysql.rjweb.org/doc.php/limits)
Rick James graciously allowed us to use this article in the Knowledge Base.
[Rick James' site](http://mysql.rjweb.org/) has other useful tips, how-tos, optimizations, and debugging tips.
Original source: <http://mysql.rjweb.org/doc.php/index1>
| programming_docs |
mariadb Global Transaction ID Global Transaction ID
=====================
The terms *master* and *slave* have historically been used in replication, but the terms *primary* and *replica* are now preferred. The old terms are still used in parts of the documentation, and in MariaDB commands, although [MariaDB 10.5](../what-is-mariadb-105/index) has begun the process of renaming. The documentation process is ongoing. See [MDEV-18777](https://jira.mariadb.org/browse/MDEV-18777) to follow progress on this effort.
Note that MariaDB and MySQL have different GTID implementations, and that these are not compatible with each other.
Overview
--------
MariaDB replication in general works as follows (see [Replication overview](../replication-overview/index) for more information):
On a primary server, all updates to the database (DML and DDL) are written into the [binary log](../binary-log/index) as binlog events. A replica server connects to the primary and reads the binlog events, then applies the events locally to replicate the same changes as done on the primary. A server can be both a primary and a replica at the same time, and it is thus possible for binlog events to be replicated through multiple levels of servers.
A replica server keeps track of the position in the primary's binlog of the last event applied on the replica. This allows the replica server to re-connect and resume from where it left off after replication has been temporarily stopped. It also allows a replica to disconnect, be cloned and then have the new replica resume replication from the same primary.
Global transaction ID introduces a new event attached to each event group in the binlog. (An event group is a collection of events that are always applied as a unit. They are best thought of as a "transaction", though they also include non-transactional DML statements, as well as DDL). As an event group is replicated from primary server to replica server, the global transaction ID is preserved. Since the ID is globally unique across the entire group of servers, this makes it easy to uniquely identify the same binlog events on different servers that replicate each other (this was not easily possible before [MariaDB 10.0.2](https://mariadb.com/kb/en/mariadb-1002-release-notes/)).
Benefits
--------
Using global transaction ID provides two main benefits:
1. Easy to change a replica server to connect to and replicate from a different primary server.
The replica remembers the global transaction ID of the last event group applied from the old primary. This makes it easy to know where to resume replication on the new primary, since the global transaction IDs are known throughout the entire replication hierarchy. This is not the case when using old-style replication; in this case the replica knows only the specific file name and offset of the old primary server of the last event applied. There is no simple way to guess from this the correct file name and offset on a new primary.
2. The state of the replica is recorded in a crash-safe way.
The replica keeps track of its current position (the global transaction ID of the last transaction applied) in the [mysql.gtid\_slave\_pos](../mysqlgtid_slave_pos-table/index) system table. If this table is using a transactional storage engine (such as InnoDB, which is the default), then updates to the state are done in the same transaction as the updates to the data. This makes the state crash-safe; if the replica server crashes, crash recovery on restart will make sure that the recorded replication position matches the changes that were actually replicated. This is not the case for old-style replication, where the state is recorded in a file relay-log.info, which is updated independently of the actual data changes and can easily get out of sync if the replica server crashes. (This works for DML to transactional tables; non-transactional tables and DDL in general are not crash-safe in MariaDB.)
Because of these two benefits, it is generally recommended to use global transaction ID for any replication setups based on [MariaDB 10.0.2](https://mariadb.com/kb/en/mariadb-1002-release-notes/) or later. However, old-style replication continues to work as always, so there is no pressing need to change existing setups. Global transaction ID integrates smoothly with old-style replication, and the two can be used freely together in the same replication hierarchy. There is no special configuration needed of the server to start using global transaction ID. However, it must be explicitly set for a replica server with the appropriate [CHANGE MASTER](../change-master-to/index) option; by default old-style replication is used by a replication replica, to maintain backwards compatibility.
Implementation
--------------
A global transaction ID, or GTID for short, consists of three numbers separated with dashes '-'. For example:
`0-1-10`
* The first number 0 is the domain ID, which is specific for global transaction ID (more on this below). It is a 32-bit unsigned integer.
* The second number is the server ID, the same as is also used in old-style replication. It is a 32-bit unsigned integer.
* The third number is the sequence number. This is a 64-bit unsigned integer that is monotonically increasing for each new event group logged into the binlog.
The server ID is set to the server ID of the server where the event group is first logged into the binlog. The sequence number is increased on a server for every event group logged. Since server IDs must be unique for every server, this makes the (server\_id, sequence\_number) pair, and hence the whole GTID, globally unique.
Using a 64-bit number provides an ample range, so there should be no risk of it overflowing in the foreseeable future. However, one should not artificially inject (by setting `gtid_seq_no`) a GTID with a sequence number close to the 64-bit limit.
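The next sequence number to be written can be set explicitly through the session variable `gtid_seq_no`; a minimal sketch (the value here is arbitrary, and per the caution above it should never be set near the 64-bit limit):

```
-- Force the next event group written to the binlog to use this sequence number
SET SESSION gtid_seq_no = 1000;
```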
### The Domain ID
When events are replicated from a primary server to a replica server, the events are always logged into the replica's binlog in the same order that they were read from the primary's binlog. Thus, if there is only ever a single primary server receiving (non-replication) updates at a time, then the binlog order will be identical on every server in the replication hierarchy.
This consistent binlog order is used by the replica to keep track of its current position in the replication. Basically, the replica remembers the GTID of the last event group replicated from the primary. When reconnecting to a primary, whether the same one or a new one, it sends this GTID position to the primary, and the primary starts sending events from the first event after the corresponding event group.
However, if user updates are done independently on multiple servers at the same time, then in general it is not possible for binlog order to be identical across all servers. This can happen when using multi-source replication, with multi-primary ring topologies, or just if manual updates are done on a replica that is replicating from an active primary. If the binlog order on the new primary differs from the order on the old primary, then it is not sufficient for the replica to keep track of a single GTID to completely record the current state.
The domain ID, the first component of the GTID, is used to handle this.
In general, the binlog is not a single ordered stream. Rather, it consists of a number of different streams, each one identified by its own domain ID. Within each stream, GTIDs always have the same order in every server binlog. However, different streams can be interleaved in different ways on different servers.
A replica server then keeps track of its replication position by recording the last GTID applied within each replication stream. When connecting to a new primary, the replica can start replication from a different point in the binlog for each domain ID.
For more details on using multi-primary setups and multiple domain IDs, see [Use with multi-source replication and other multi-primary setups](#use-with-multi-source-replication-and-other-multi-master-setups).
Simple replication setups only have a single primary being updated by the application at any one time. In such setups, there is only a single replication stream needed. Then domain ID can be ignored, and left as the default of 0 on all servers.
Using Global Transaction IDs
----------------------------
Global transaction ID is enabled automatically. Each event group logged to the binlog receives a GTID event, as can be seen with [mysqlbinlog](../mysqlbinlog/index) or [SHOW BINLOG EVENTS](../show-binlog-events/index).
The replica automatically keeps track of the GTID of the last applied event group, as can be seen from the [gtid\_slave\_pos](#gtid_slave_pos) variable:
```
SELECT @@GLOBAL.gtid_slave_pos;
+-------------------------+
| @@GLOBAL.gtid_slave_pos |
+-------------------------+
| 0-1-1                   |
+-------------------------+
```
When a replica connects to a primary, it can use either global transaction ID or old-style filename/offset to decide where in the primary binlogs to start replicating from. To use global transaction ID, use the [CHANGE MASTER](../change-master-to/index) *master\_use\_gtid* option:
`CHANGE MASTER TO master_use_gtid = { slave_pos | current_pos | no }`
A replica is configured to use GTID by `CHANGE MASTER TO master_use_gtid=slave_pos`. When the replica connects to the primary, it will start replication at the position of the last GTID replicated to the replica, which can be seen in the variable [gtid\_slave\_pos](#gtid_slave_pos). Since GTIDs are the same across all replication servers, the replica can then be pointed to a different primary, and the correct position will be determined automatically.
But suppose that we set up two servers A and B and let A be the primary and B the replica. It runs for a while. Then at some point we take down A, and B becomes the new primary. Then later we want to add A back, this time as a replica.
Since A was never a replica before, it does not have any prior replicated GTIDs, and [gtid\_slave\_pos](#gtid_slave_pos) will be empty. To allow A to be added as a replica automatically, `master_use_gtid=current_pos` can be used. This will connect using the value of the variable [gtid\_current\_pos](#gtid_current_pos) instead of [gtid\_slave\_pos](#gtid_slave_pos), which also takes into account GTIDs written into the binlog when the server was a primary.
When using `master_use_gtid=current_pos` there is no need to consider whether a server was a primary or a replica prior to using [CHANGE MASTER](../change-master-to/index). But care must be taken not to inject extra transactions into the binlog on the replica server that are not intended to be replicated to other servers. If such an extra transaction is the most recent when the replica starts, it will be used as the starting point of replication. This will probably fail because that transaction is not present on the primary. To prevent local changes on a replica server from going into the binlog, set [sql\_log\_bin](../replication-and-binary-log-system-variables/index#sql_log_bin) to 0.
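For example, a local maintenance statement on the replica can be kept out of the binlog like this (a sketch; the table name is hypothetical):

```
SET SESSION sql_log_bin = 0;
-- This local change is not written to the replica's binlog,
-- so it cannot later be mistaken for the replication starting point.
DELETE FROM test.scratch_data WHERE created < NOW() - INTERVAL 30 DAY;
SET SESSION sql_log_bin = 1;
```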
If it is undesirable that changes to the binlog on the replica affects the GTID replication position, then `master_use_gtid=slave_pos` should be used. Then the replica will always connect to the primary at the position of the last replicated GTID. This may avoid some surprises for users that expect behavior consistent with traditional replication, where the replication position is never changed by local changes done on a server.
When [GTID strict mode](#gtid_strict_mode) is enabled (by setting `@@GLOBAL.gtid_strict_mode` to 1), it is normally best to use `current_pos`. In strict mode, extra transactions that would make binlogs differ between servers are disallowed.
If a replica is configured with the binlog disabled, `current_pos` and `slave_pos` are equivalent.
Even when a replica is configured to connect with the old-style binlog filename and offset (`CHANGE MASTER TO master_log_file=..., master_log_pos=...`), it will still keep track of the current GTID position in `@@GLOBAL.gtid_slave_pos`. This means that an existing replica previously configured and running can be changed to connect with GTID (to the same or a new master) simply with:
`CHANGE MASTER TO master_use_gtid = slave_pos`
The replica remembers that `master_use_gtid=slave_pos|current_pos` was specified and will use it also for subsequent connects, until it is explicitly changed by specifying `master_log_file/pos=...` or `master_use_gtid=no`. The current value can be seen as the field Using\_Gtid of SHOW SLAVE STATUS:
```
SHOW SLAVE STATUS\G
...
Using_Gtid: Slave_pos
```
The replica server internally uses the [mysql.gtid\_slave\_pos table](../mysqlgtid_slave_pos-table/index) to store the GTID position (and so preserve the value of `@@GLOBAL.gtid_slave_pos` across server restarts). After upgrading a server to 10.0, it is necessary to run [mysql\_upgrade](../mysql_upgrade/index) (as always) to get the table created.
In order to be crash-safe, this table must use a transactional storage engine such as InnoDB. When MariaDB is first installed (or upgraded to 10.0.2+) the table is created using the default storage engine - which itself defaults to InnoDB. If there is a need to change the storage engine for this table (to make it transactional on a system configured with [MyISAM](../myisam/index) as the default storage engine, for example), use [ALTER TABLE](../alter-table/index):
`ALTER TABLE mysql.gtid_slave_pos ENGINE = InnoDB`
The [mysql.gtid\_slave\_pos table](../mysqlgtid_slave_pos-table/index) should not be modified in any other way. In particular, do not try to update the rows in the table to change the replica's idea of the current GTID position; instead use
`SET GLOBAL gtid_slave_pos = '0-1-1'`
Starting from [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/), the server variable [gtid\_pos\_auto\_engines](#gtid_pos_auto_engines) can preferably be set to make the server handle this automatically. See the description of the [mysql.gtid\_slave\_pos table](../mysqlgtid_slave_pos-table/index) for details.
### Using `current_pos` vs. `slave_pos`
When setting the [MASTER\_USE\_GTID](../change-master-to/index#master_use_gtid) replication parameter, you have the option of enabling Global Transaction IDs to use either the `current_pos` or `slave_pos` values.
Using the value `current_pos` causes the replica to set its position based on the [gtid\_current\_pos](#gtid_current_pos) system variable, which is a union of [gtid\_binlog\_pos](#gtid_binlog_pos) and [gtid\_slave\_pos](#gtid_slave_pos). Using the value `slave_pos` causes the replica to instead set its position based on the [gtid\_slave\_pos](#gtid_slave_pos) system variable.
You may run into issues when you use the value `current_pos` if you write any local transactions on the replica. For instance, if you issue an [INSERT](../insert/index) statement or otherwise write to a table while the [replica threads](../replication-threads/index#threads-on-the-slave) are stopped, then new local GTIDs may be generated in [gtid\_binlog\_pos](#gtid_binlog_pos), which will affect the replica's value of [gtid\_current\_pos](#gtid_current_pos). This may cause errors when the [replica threads](../replication-threads/index#threads-on-the-slave) are restarted, since the local GTIDs will be absent from the primary.
You can correct this issue by setting the [MASTER\_USE\_GTID](../change-master-to/index#master_use_gtid) replication parameter to `slave_pos` instead of `current_pos`. For example:
```
CHANGE MASTER TO MASTER_USE_GTID = slave_pos;
START SLAVE;
```
### Using GTIDs with Parallel Replication
If [parallel replication](../parallel-replication/index) is in use, then events that were logged with GTIDs with different [gtid\_domain\_id](#gtid_domain_id) values can be applied in parallel in an [out-of-order](../parallel-replication/index#out-of-order-parallel-replication) manner.
### Using GTIDs with MariaDB Galera Cluster
Starting with [MariaDB 10.1.4](https://mariadb.com/kb/en/mariadb-1014-release-notes/), MariaDB Galera Cluster has limited support for GTIDs. See [Using MariaDB GTIDs with MariaDB Galera Cluster](../using-mariadb-gtids-with-mariadb-galera-cluster/index) for more information.
Setting up a New Replica Server with Global Transaction ID
----------------------------------------------------------
Setting up a new replica server with global transaction ID is not much different from setting up an old-style replica. The basic steps are:
1. Set up the new server and load it with the initial data.
2. Start the replica replicating from the appropriate point in the primary's binlog.
### Setting up a New Replica with an Empty Server
The simplest way for testing purposes is probably to set up a new, empty replica server and replicate all of the primary's binlogs from the start (this is usually not feasible in a realistic production setup, as the initial binlog files will probably have been purged or would take too long to apply).
The replica server is installed in the normal way. By default, the GTID position for a newly installed server is empty, which makes the replica replicate from the start of the primary's binlogs. But if the replica was used for other purposes before, the initial position can be explicitly set to empty first:
`SET GLOBAL gtid_slave_pos = "";`
Next, point the replica to the primary with [CHANGE MASTER](../change-master-to/index). Specify master\_host etc. as usual. But instead of specifying master\_log\_file and master\_log\_pos manually, use `master_use_gtid=current_pos` (or `slave_pos`) to have GTID determine the position automatically:
```
CHANGE MASTER TO master_host="127.0.0.1", master_port=3310, master_user="root", master_use_gtid=current_pos;
START SLAVE;
```
### Setting up a New Replica From a Backup
The normal way to set up a new replication replica is to take a backup from an existing server (either a primary or a replica in the replication topology), restore that backup on the server acting as the new replica, and then configure it to start replicating from the appropriate position in the primary's binary log.
It is important that the position at which replication is started corresponds exactly to the state of the data at the point in time that the backup was taken. Otherwise, the replica can end up with different data than the primary because of missing or duplicated transactions. Of course, if there are no writes to the server being backed up during the backup process, then a simple [SHOW MASTER STATUS](../show-master-status/index) will give the correct position.
See the description of the specific backup tool to determine how to get the binary log position that corresponds to the backup.
Once the current binary log position for the backup has been obtained, in the form of a binary log file name and position, the corresponding GTID position can be obtained from [BINLOG\_GTID\_POS()](../binlog_gtid_pos/index) on the server that was backed up:
```
SELECT BINLOG_GTID_POS("master-bin.000001", 600);
```
The new replica can then start replicating from the primary by setting the correct value for [gtid\_slave\_pos](#gtid_slave_pos), and then executing [CHANGE MASTER](../change-master-to/index) with the relevant values for the primary, and then starting the [replica threads](../replication-threads/index#threads-on-the-slave) by executing [START SLAVE](../start-slave/index). For example:
```
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO master_host="127.0.0.1", master_port=3310, master_user="root", master_use_gtid=slave_pos;
START SLAVE;
```
This method is particularly useful when setting up a new replica from a backup of the primary. Remember to ensure that the value of [server\_id](../replication-and-binary-log-system-variables/index#server_id) configured on the new replica is different from that of any other server in the replication topology.
If the backup was taken of an existing replica server, then the new replica should already have the correct GTID position stored in the [mysql.gtid\_slave\_pos](../mysqlgtid_slave_pos-table/index) table. This is assuming that this table was backed up and that it was backed up in a consistent manner with changes to other tables. In this case, there is no need to explicitly look up the GTID position on the old server and set it on the new replica - it will be already correctly loaded from the [mysql.gtid\_slave\_pos](../mysqlgtid_slave_pos-table/index) table. This however does not work if the backup was taken from the primary - because then the current GTID position is contained in the binary log, not in the [mysql.gtid\_slave\_pos](../mysqlgtid_slave_pos-table/index) table or any other table.
#### Setting up a New Replica with Mariabackup
A new replica can easily be set up with [Mariabackup](../mariabackup/index), which is a fork of [Percona XtraBackup](../backup-restore-and-import-clients-percona-xtrabackup/index). See [Setting up a Replica with Mariabackup](../setting-up-a-replication-slave-with-mariabackup/index) for more information.
#### Setting up a New Replica with mysqldump
A new replica can also be set up with [mysqldump](../mysqldump/index).
Starting with [MariaDB 10.0.13](https://mariadb.com/kb/en/mariadb-10013-release-notes/), [mysqldump](../mysqldump/index) automatically includes the GTID position as a comment in the backup file if either the [--master-data](../mysqldump/index#options) or [--dump-slave](../mysqldump/index#options) option is used. It also automatically includes the commands to set [gtid\_slave\_pos](#gtid_slave_pos) and execute [CHANGE MASTER](../change-master-to/index) in the backup file if the [--gtid](../mysqldump/index#options) option is used with either the [--master-data](../mysqldump/index#options) or [--dump-slave](../mysqldump/index#options) option.
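A sketch of such a dump invocation (connection options and file name are placeholders):

```
mysqldump --master-data=2 --gtid --single-transaction --all-databases > backup.sql
```

With `--master-data=2`, the position information is written as comments in the dump file, so the `SET GLOBAL gtid_slave_pos` and `CHANGE MASTER` statements can be reviewed before being applied manually on the new replica.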
### Switching An Existing Old-Style Replica To Use GTID
If there is already an existing replica running using old-style binlog filename/offset positions, then it can be changed to use GTID directly. This can be useful for upgrades, for example, or where there are already tools to set up new replicas using old-style binlog positions.
When a replica connects to a primary using old-style binlog positions, and the primary supports GTID (i.e. is [MariaDB 10.0.2](https://mariadb.com/kb/en/mariadb-1002-release-notes/) or later), then the replica automatically downloads the GTID position at connect and updates it during replication. Thus, once a replica has connected to the GTID-aware primary at least once, it can be switched to using GTID without any other actions needed:
```
STOP SLAVE;
CHANGE MASTER TO master_host="127.0.0.1", master_port=3310, master_user="root", master_use_gtid=current_pos;
START SLAVE;
```
(A later version will probably add a way to setup the replica so that it will connect with old-style binlog file/offset the first time, and automatically switch to using GTID on subsequent connects.)
Changing a Replica to Replicate From a Different Primary
--------------------------------------------------------
Once replication is running with GTID (master\_use\_gtid=current\_pos|slave\_pos), the replica can be pointed to a new primary simply by specifying in CHANGE MASTER the new master\_host (and if required master\_port, master\_user, and master\_password):
```
STOP SLAVE;
CHANGE MASTER TO master_host='127.0.0.1', master_port=3312;
START SLAVE;
```
The replica has a record of the GTID of the last applied transaction from the old primary, and since GTIDs are identical across all servers in a replication hierarchy, the replica will just continue from the appropriate point in the new primary's binlog.
It is important to understand how this change of primary works. The binlog is an ordered stream of events (or multiple streams, one per replication domain; see [Use with multi-source replication and other multi-primary setups](#use-with-multi-source-replication-and-other-multi-master-setups)). Events within the stream are always applied in the same order on every replica that replicates it. The MariaDB GTID relies on this ordering, so that it is sufficient to remember just a single point within the stream. Since event order is the same on every server, switching to the point of the same GTID in the binlog of another server will give the same result.
This translates into some responsibility for the user. The MariaDB GTID replication is fully asynchronous, and fully flexible in how it can be configured. This makes it possible to use it in ways where the assumption that binlog sequence is the same on all servers is violated. In such cases, when changing primary, GTID will still attempt to continue from the point of the current GTID in the new primary's binlog.
The most common way that binlog sequences come to differ between servers is when the user/DBA does updates directly on a replica server (and these updates are written into the replica's binlog). This results in events in the replica's binlog that are not present on the primary or any other replicas. This can be avoided by setting the session variable sql\_log\_bin to 0 while doing such updates, so they do not go into the binlog.
It is normally best to avoid any differences in binlogs between servers. That being said, MariaDB replication is designed for maximum flexibility, and there can be valid reasons for introducing such differences from time to time. In this case, it just needs to be understood that the GTID position is a single point in each binlog stream (one per replication domain), and how this affects the user's particular setup.
Differences can also occur when two primaries are active at the same time in a replication hierarchy. This happens when using a multi-primary ring. But it can also occur in a simple primary-replica setup, during a switch to a new primary, if changes on the old primary are not allowed to fully replicate to all replica servers before switching. Normally, to switch primary, writes to the old primary should first be stopped, then one should wait for all changes to be replicated to the new primary, and only then should writes begin on the new primary. Deliberately using multiple active primaries is also supported; this is described in the next section.
The [GTID strict mode](#gtid_strict_mode) can be used to enforce identical binlogs across servers. When it is enabled, most actions that would cause differences are rejected with an error.
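Strict mode can be enabled dynamically, for example:

```
-- Reject actions that would make this server's binlog diverge
-- from the rest of the replication hierarchy
SET GLOBAL gtid_strict_mode = 1;
```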
Use With Multi-Source Replication and Other Multi-Primary Setups
----------------------------------------------------------------
MariaDB global transaction ID supports having multiple primaries active at the same time. Typically this happens with either multi-source replication or multi-primary ring setups.
In such setups, each active primary must be configured with its own distinct replication domain ID, [gtid\_domain\_id](#gtid_domain_id). The binlog will then in effect consist of multiple independent streams, one per active primary. Within one replication domain, binlog order is always the same on every server. But two different streams can be interleaved differently in different server binlogs.
The GTID position of a given replica is then not a single GTID. Rather, it becomes the GTID of the last event group applied for each value of domain ID, in effect the position reached in each binlog stream. When the replica connects to a primary, it can continue from one stream in a different binlog position than another stream. Since order within one stream is consistent across all servers, this is sufficient to always be able to continue replication at the correct point in any new primary server(s).
Domain IDs are assigned by the DBA, according to the need of the application. The default value of @@GLOBAL.gtid\_domain\_id is 0. This is appropriate for most replication setups, where only a single primary is active at a time. The MariaDB server will never by itself introduce new domain\_id values into the binlog.
When using multi-source replication, where a single replica connects to multiple primaries at the same time, each such primary should be configured with its own distinct domain ID.
Similarly, in a multi-primary ring topology, where all primaries in the ring are updated by the application concurrently (with some mechanism to avoid conflicts), a distinct domain ID should be configured for each server. (In a multi-primary ring where the application is careful to only do updates on one primary at a time, a single domain ID is sufficient.)
Normally, a replica server should not receive direct updates (as this creates binlog differences compared to the primary). Thus it does not matter what value of gtid\_domain\_id is set on a replica, though it may make sense to make it the same as the primary (if not using multi-primary) to make it easy to promote the replica as a new primary. Of course, if a replica is itself an active primary, as in a multi-primary ring topology, the domain ID should be set according to the server's role as active primary.
Note that domain ID and server ID are distinct concepts. It is possible to use a different domain ID on each server, but this is normally not desirable. It makes the current GTID position (@@global.gtid\_slave\_pos) more complicated to understand and work with, and loses the concept of a single ordered binlog stream across all servers. It is recommended only to configure as many domain IDs as there are primary servers actively being updated by the application at the same time.
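As an illustration, two concurrently updated primaries might be configured with distinct domain IDs in their option files (the values are arbitrary, as long as they differ):

```
# my.cnf on the first active primary
[mysqld]
gtid_domain_id = 1

# my.cnf on the second active primary
[mysqld]
gtid_domain_id = 2
```

The variable can also be changed at runtime with `SET GLOBAL gtid_domain_id = 1;`, which takes effect for new connections.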
It is not an error in itself to configure domain IDs incorrectly (for example, not configuring them at all). For example, this will be typical in an upgrade scenario where a multi-primary ring using 5.5 is upgraded to 10.0. The ring will continue to work as before even though everything is configured to use the default domain ID 0. It is even possible to use GTID for replication between the servers. However, care must be taken when switching a replica to a different primary. If the binlog order between the old and the new primary differs, then a single GTID position to start replication from in the new primary's binlog may not be sufficient.
New Syntax For Global Transaction ID
------------------------------------
### CHANGE MASTER
[CHANGE MASTER](../change-master-to/index) has a new option, `master_use_gtid=[current_pos|slave_pos|no]`. When enabled (set to *current\_pos* or *slave\_pos*), the replica will connect to the primary using the GTID position. When disabled (set to "no"), the old-style binlog filename/offset position is used to decide where to start replicating when connecting. Unlike in old-style replication, when GTID is enabled the values of the [MASTER\_LOG\_FILE](../change-master-to/index#master_log_file) and [MASTER\_LOG\_POS](../change-master-to/index#master_log_pos) options are not updated for each received event in the [master\_info\_file](../mysqld-options/index#-master-info-file) file.
The value of `master_use_gtid` is saved across server restarts (in master.info). The current value can be seen as the field Using\_Gtid in the output of SHOW SLAVE STATUS.
For a detailed look at the difference between the *current\_pos* and *slave\_pos* options, see [Using global transaction IDs](#using-global-transaction-ids)
### START SLAVE UNTIL master\_gtid\_pos=xxx
When starting replication with [START SLAVE](../start-slave/index), it is possible to request the replica to run only until a specific GTID position is reached. Once that position is reached, the replica will stop.
The syntax for this is:
`START SLAVE UNTIL master_gtid_pos = <GTID position>`
The replica will start replication from the current GTID position, run up to and including the event with the GTID specified, and then stop. Note that this stops both the IO thread and the SQL thread (unlike START SLAVE UNTIL MASTER\_LOG\_FILE/MASTER\_LOG\_POS, which stops only the SQL thread).
If multiple GTIDs are specified, then they must be with distinct replication domain ID, for example:
`START SLAVE UNTIL master_gtid_pos = "1-11-100,2-21-50"`
With multiple domains in the UNTIL condition, each domain runs only up to and including the specified position, so it is possible for different domains to stop at different places in the binlog (each domain will resume from the stopped position when the replica is started the next time).
Not specifying a replication domain at all in the UNTIL condition means that the domain is stopped immediately; nothing is replicated from that domain. In particular, specifying the empty string will stop the replica immediately.
When using `START SLAVE UNTIL master_gtid_pos = XXX`, if the UNTIL position is present in the primary's binlog then it is permissible for the start position to be missing on the primary. In this case, replication for the associated domains stop immediately.
Both replica threads must be already stopped when using UNTIL master\_gtid\_pos, otherwise an error occurs. It is also an error if the replica is not configured to use GTID (`CHANGE MASTER TO master_use_gtid=current_pos|slave_pos`). And both threads must be started at the same time, the `IO_THREAD` or `SQL_THREAD` options can not be used to start only one of them.
`START SLAVE UNTIL master_gtid_pos=XXX` is particularly useful for promoting a new primary among a set of replicas when the old primary goes away and replicas may have reached different positions in the old primary's binlog. The new primary needs to be ahead of all the other replicas to avoid losing events. This can be achieved by picking one server, say S1, and replicating any missing events from each other server S2, S3, ..., Sn:
```
CHANGE MASTER TO master_host="S2";
START SLAVE UNTIL master_gtid_pos = "<S2 GTID position>";
...
CHANGE MASTER TO master_host="Sn";
START SLAVE UNTIL master_gtid_pos = "<Sn GTID position>";
```
Once this is completed, S1 will have all events present on any of the servers. It can now be selected as the new primary, and all the other servers set to replicate from it.
### BINLOG\_GTID\_POS()
The [BINLOG\_GTID\_POS()](../binlog_gtid_pos/index) function takes as input an old-style [binary log](../binary-log/index) position in the form of a file name and a file offset. It looks up the position in the current binlog, and returns a string representation of the corresponding GTID position. If the position is not found in the current binlog, NULL is returned.
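For example, an old-style position can be looked up like this (the file name and offset are illustrative):

```
SELECT BINLOG_GTID_POS('master-bin.000001', 600);
```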
### MASTER\_GTID\_WAIT
The [MASTER\_GTID\_WAIT](../master_gtid_wait/index) function is useful in replication for controlling primary/replica synchronization, and blocks until the replica has read and applied all updates up to the specified position in the primary log. See [MASTER\_GTID\_WAIT](../master_gtid_wait/index) for details.
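For example, to block until the replica has applied everything up to GTID 0-1-100, waiting at most 10 seconds (the position and timeout are illustrative):

```
SELECT MASTER_GTID_WAIT('0-1-100', 10);
```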
System Variables
----------------
#### `gtid_slave_pos`
This system variable contains the GTID of the last transaction applied to the database by the server's [replica threads](../replication-threads/index#threads-on-the-slave) for each replication domain. This system variable's value is automatically updated whenever a [replica thread](../replication-threads/index#threads-on-the-slave) applies an event group. This system variable's value can also be manually changed by users, so that the user can change the GTID position of the [replica threads](../replication-threads/index#threads-on-the-slave).
When using [multi-source replication](../multi-source-replication/index), the same GTID position is shared by all replica connections. In this case, different primaries should use different replication domains by configuring different [gtid\_domain\_id](#gtid_domain_id) values. If one primary was using a [gtid\_domain\_id](#gtid_domain_id) value of `1`, and if another primary was using a [gtid\_domain\_id](#gtid_domain_id) value of `2`, then any replicas replicating from both primaries would have GTIDs with both [gtid\_domain\_id](#gtid_domain_id) values in `gtid_slave_pos`.
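For example, the two primaries could be given distinct domain IDs in their option files (a sketch; the values are illustrative):

```
# on the first primary
[mariadb]
gtid_domain_id = 1

# on the second primary
[mariadb]
gtid_domain_id = 2
```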
This system variable's value can be changed manually by executing [SET GLOBAL](../set/index#global-session), but all replica threads must first be stopped with [STOP SLAVE](../stop-slave/index). For example:
```
STOP ALL SLAVES;
SET GLOBAL gtid_slave_pos = "1-10-100,2-20-500";
START ALL SLAVES;
```
This system variable's value can be reset by manually changing its value to the empty string. For example:
```
SET GLOBAL gtid_slave_pos = '';
```
The GTID position defined by `gtid_slave_pos` can be used as a replica's starting replication position by setting [MASTER\_USE\_GTID=slave\_pos](../change-master-to/index#master_use_gtid) when the replica is configured with the [CHANGE MASTER TO](../change-master-to/index) statement. As an alternative, the [gtid\_current\_pos](#gtid_current_pos) system variable can also be used as a replica's starting replication position.
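A minimal sketch of pointing a replica at a primary and starting from the `gtid_slave_pos` position (the host name is illustrative, and a real setup also needs connection options such as port and credentials):

```
CHANGE MASTER TO master_host='primary1', master_use_gtid=slave_pos;
START SLAVE;
```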
If a user sets the value of the `gtid_slave_pos` system variable while [gtid\_binlog\_pos](#gtid_binlog_pos) contains later GTIDs for certain replication domains, then [gtid\_current\_pos](#gtid_current_pos) will contain the GTIDs from [gtid\_binlog\_pos](#gtid_binlog_pos) for those replication domains. To protect users in this scenario, if a user sets the `gtid_slave_pos` system variable to a GTID position that is behind the GTID position in [gtid\_binlog\_pos](#gtid_binlog_pos), the server will give the user a warning.
This can help protect the user when the replica is configured to use [gtid\_current\_pos](#gtid_current_pos) as its replication position. This can also help protect the user when a server has been rolled back to restart replication from an earlier point in time, but the user has forgotten to reset [gtid\_binlog\_pos](#gtid_binlog_pos) with [RESET MASTER](../reset-master/index).
The [mysql.gtid\_slave\_pos](../mysqlgtid_slave_pos-table/index) system table is used to store the contents of global.gtid\_slave\_pos and preserve it over restarts.
* **Commandline:** None
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default:** Null
#### `gtid_binlog_pos`
This variable is the GTID of the last event group written to the binary log, for each replication domain.
Note that when the binlog is empty (such as on a fresh install or after [RESET MASTER](../reset-master/index)), there are no event groups written in any replication domain, so in this case the value of `gtid_binlog_pos` will be the empty string.
The value is read-only, but it is updated whenever a DML or DDL statement is written to the binary log. The value can be reset by executing [RESET MASTER](../reset-master/index), which will also delete all binary logs. However, note that [RESET MASTER](../reset-master/index) does not also reset [gtid\_slave\_pos](#gtid_slave_pos). Since [gtid\_current\_pos](#gtid_current_pos) is the union of [gtid\_slave\_pos](#gtid_slave_pos) and `gtid_binlog_pos`, that means that new GTIDs added to `gtid_binlog_pos` can lag behind those in [gtid\_current\_pos](#gtid_current_pos) if [gtid\_slave\_pos](#gtid_slave_pos) contains GTIDs in the same domain with higher sequence numbers. If you want to reset [gtid\_current\_pos](#gtid_current_pos) for a specific GTID domain in cases like this, then you will also have to change [gtid\_slave\_pos](#gtid_slave_pos) in addition to executing [RESET MASTER](../reset-master/index). See [gtid\_slave\_pos](#gtid_slave_pos) for notes on how to change its value.
* **Commandline:** None
* **Scope:** Global
* **Dynamic:** Read-only
* **Data Type:** `string`
* **Default:** Null
#### `gtid_binlog_state`
The variable gtid\_binlog\_state holds the internal state of the binlog. The state consists of the last GTID ever logged to the binary log for every combination of domain\_id and server\_id. This information is used by the primary to determine whether a given GTID has been logged to the binlog in the past, even if it has later been deleted due to binlog purge. For each domain\_id, the last entry in @@gtid\_binlog\_state is the last GTID logged into the binlog, i.e. the value that appears in @@gtid\_binlog\_pos.
Normally this internal state is not needed by users, as @@gtid\_binlog\_pos is more useful in most cases. The main usage of @@gtid\_binlog\_state is to restore the state of the binlog after RESET MASTER (or equivalently if the binlog files are lost). If the value of @@gtid\_binlog\_state is saved before RESET MASTER and restored afterwards, the primary will retain information about past history, same as if PURGE BINARY LOGS had been used (of course the actual events in the binary logs are still deleted).
Note that to set the value of @@gtid\_binlog\_state, the binary log must be empty; that is, it must not contain any GTID events, and the previous value of @@gtid\_binlog\_state must be the empty string. If not, RESET MASTER must be used first to erase the binary log.
The value of @@gtid\_binlog\_state is preserved by the server across restarts by writing a file MASTER-BIN.state, where MASTER-BIN is the base name of the binlog set with the --log-bin option. This file is written at server shutdown and re-read at the next server start. (In case of a server crash, the data in MASTER-BIN.state may not be correct, and the server instead recovers the correct value during binlog crash recovery by scanning the binlog files and recording each GTID found.)
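A save-and-restore of the binlog state around RESET MASTER can be sketched as follows (the state value is illustrative):

```
SELECT @@GLOBAL.gtid_binlog_state;   -- e.g. '0-1-100,0-2-50'; save this value
RESET MASTER;
SET GLOBAL gtid_binlog_state = '0-1-100,0-2-50';
```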
For completeness, note that setting @@gtid\_binlog\_state internally executes a RESET MASTER. This is normally not noticeable as it can only be changed when the binlog is empty of GTID events. However, if executed e.g. immediately after upgrading to MariaDB 10, it is possible that the binlog is non-empty but without any GTID events, in which case all such events will be deleted, just as if RESET MASTER had been run.
* **Commandline:** None
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default:** Null
#### `gtid_current_pos`
This system variable contains the GTID of the last transaction applied to the database for each replication domain.
The value of this system variable is constructed from the values of the [gtid\_binlog\_pos](#gtid_binlog_pos) and [gtid\_slave\_pos](#gtid_slave_pos) system variables. It gets GTIDs of transactions executed locally from the value of the [gtid\_binlog\_pos](#gtid_binlog_pos) system variable. It gets GTIDs of replicated transactions from the value of the [gtid\_slave\_pos](#gtid_slave_pos) system variable.
For each replication domain, if the [server\_id](../replication-and-binary-log-system-variables/index#server_id) of the corresponding GTID in [gtid\_binlog\_pos](#gtid_binlog_pos) is equal to the servers own [server\_id](../replication-and-binary-log-system-variables/index#server_id), *and* the sequence number is higher than the corresponding GTID in [gtid\_slave\_pos](#gtid_slave_pos), then the GTID from [gtid\_binlog\_pos](#gtid_binlog_pos) will be used. Otherwise the GTID from [gtid\_slave\_pos](#gtid_slave_pos) will be used for that domain.
GTIDs from [gtid\_binlog\_pos](#gtid_binlog_pos) in which the [server\_id](../replication-and-binary-log-system-variables/index#server_id) of the GTID is **not** equal to the server's own [server\_id](../replication-and-binary-log-system-variables/index#server_id) are effectively ignored. If [gtid\_binlog\_pos](#gtid_binlog_pos) contains a GTID for a given replication domain, but the [server\_id](../replication-and-binary-log-system-variables/index#server_id) of the GTID is **not** equal to the server's own [server\_id](../replication-and-binary-log-system-variables/index#server_id), and [gtid\_slave\_pos](#gtid_slave_pos) does **not** contain a GTID for that given replication domain, then `gtid_current_pos` will **not** contain any GTID for that replication domain.
Thus, `gtid_current_pos` contains the most recent GTID executed on the server, whether this was done as a primary or as a replica.
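The combination rules above can be illustrated with a hypothetical example:

```
-- Suppose the server's own server_id is 1. Then:
--   @@gtid_binlog_pos  = '1-1-100'          (domain 1, written locally)
--   @@gtid_slave_pos   = '1-2-90,2-2-50'    (replicated from server_id 2)
--   @@gtid_current_pos = '1-1-100,2-2-50'   (domain 1 taken from the binlog,
--                                            domain 2 taken from the slave position)
```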
The GTID position defined by `gtid_current_pos` can be used as a replica's starting replication position by setting [MASTER\_USE\_GTID=current\_pos](../change-master-to/index#master_use_gtid) when the replica is configured with the [CHANGE MASTER TO](../change-master-to/index) statement. As an alternative, the [gtid\_slave\_pos](#gtid_slave_pos) system variable can also be used as a replica's starting replication position.
The value of `gtid_current_pos` is read-only, but it is updated whenever a transaction is written to the binary log and/or replicated by a replica thread, and that transaction's GTID is considered *newer* than the current GTID for that domain. See above for the rules on how to determine if a GTID would be considered *newer*.
If you need to reset the value, see the notes on resetting [gtid\_slave\_pos](#gtid_slave_pos) and [gtid\_binlog\_pos](#gtid_binlog_pos), since `gtid_current_pos` is formed from the values of those variables.
* **Commandline:** None
* **Scope:** Global
* **Dynamic:** Read-only
* **Data Type:** `string`
* **Default:** Null
#### `gtid_strict_mode`
The GTID strict mode is an optional setting that can be used to help the DBA enforce a strict discipline about keeping binlogs identical across multiple servers replicating using global transaction ID.
When GTID strict mode is enabled, some additional errors are enabled for situations that could otherwise cause differences between binlogs on different servers in a replication hierarchy:
1. If a replica server tries to replicate a GTID with a sequence number lower than what is already in the binlog for that replication domain, the SQL thread stops with an error (this indicates an extra transaction in the replica binlog not present on the primary).
2. Similarly, an attempt to manually binlog a GTID with a lower sequence number (by setting `@@SESSION.gtid_seq_no`) is rejected with an error.
3. If the replica tries to connect starting at a GTID that is missing in the primary's binlog, this is an error in GTID strict mode even if a GTID exists with a higher sequence number (this indicates a GTID on the replica missing on the primary). Note that this error is controlled by the setting of GTID strict mode on the connecting replica server.
GTID strict mode is off by default; this is needed to preserve backwards compatibility with existing replication setups (older versions of the server did not enforce any strict mode for binlog order). Global transaction ID is designed to work correctly even when strict mode is not enabled. However, with strict mode enforced, the semantics are simpler and thus easier to understand, because binlog order is always identical across servers and sequence numbers are always strictly increasing within each replication domain. This can also make automated scripting of large replication setups easier to implement correctly.
When GTID strict mode is enabled, the replica will stop with an error when a problem is encountered. This allows the DBA to become aware of the problem and take corrective actions to avoid similar issues in the future. One way to recover from such an error is to temporarily disable GTID strict mode on the offending replica, to be able to replicate past the problem point (perhaps using `START SLAVE UNTIL master_gtid_pos=XXX`).
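Such a recovery could be sketched as follows (the GTID position is a placeholder for the point just past the problem):

```
SET GLOBAL gtid_strict_mode = 0;
START SLAVE UNTIL master_gtid_pos = '0-1-300';
-- once the problem point has been passed:
SET GLOBAL gtid_strict_mode = 1;
START SLAVE;
```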
* **Commandline:** `--gtid-strict-mode[={0|1}]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default:** `Off`
---
#### `gtid_domain_id`
* **Description:** This variable is used to decide which replication domain new GTIDs are logged in for a primary server. See [Use with multi-source replication and other multi-primary setups](#use-with-multi-source-replication-and-other-multi-master-setups) for details. This variable can also be set on the session level by a user with the SUPER privilege. This is used by [mysqlbinlog](../mysqlbinlog/index) to preserve the domain ID of GTID events.
* **Commandline:** `--gtid-domain-id=#`
* **Scope:** Global, Session
* **Dynamic:** Yes
* **Data Type:** `numeric (32-bit unsigned integer)`
* **Default Value:** `0`
* **Range:** `0` to `4294967295`
---
#### `last_gtid`
* **Description:** Holds the GTID that was assigned to the last transaction or statement logged to the [binary log](../binary-log/index). If the binary log is disabled, or if no transaction or statement has been executed in the session yet, the value is an empty string.
* **Scope:** Session
* **Dynamic:** Read-only
* **Data Type:** `string`
---
#### `server_id`
* **Description:** `server_id` can be set on the session level to change which server\_id value is logged in binlog events (both GTID and other events). This is used by [mysqlbinlog](../mysqlbinlog/index) to preserve the server ID of GTID events.
* **Scope:** Global, Session
* **Dynamic:** Yes
* **Data Type:** numeric (32-bit unsigned integer)
---
#### `gtid_seq_no`
* **Description:** gtid\_seq\_no can be set on the session level to change which sequence number is logged in the following GTID event. The variable, along with [@@gtid\_domain\_id](#gtid_domain_id) and [@@server\_id](#server_id), is typically used by [mysqlbinlog](../mysqlbinlog/index) to set up the gtid value of the transaction being decoded into the output.
* **Commandline:** None
* **Scope:** Session
* **Dynamic:** Yes
* **Data Type:** `numeric (64-bit unsigned integer)`
* **Default:** Null
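A mysqlbinlog-style use of these session variables could look like this (the values are illustrative):

```
SET SESSION gtid_domain_id = 0;
SET SESSION server_id = 1;
SET SESSION gtid_seq_no = 300;
-- the next event group written to the binlog is assigned GTID 0-1-300
```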
---
#### `gtid_ignore_duplicates`
* **Description:** When set, different primary connections in multi-source replication are allowed to receive and process event groups with the same GTID (when using GTID mode). Only one will be applied; any others will be ignored. Within a given replication domain, just the sequence number is used to decide whether a given GTID has already been applied; this means it is the responsibility of the user to ensure that GTID sequence numbers are strictly increasing. With gtid\_ignore\_duplicates=OFF, a duplicate event (based on domain ID and sequence number) will be executed.
* **Commandline:** `--gtid-ignore-duplicates=#`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default:** `OFF`
---
#### `gtid_pos_auto_engines`
This variable is used to enable multiple versions of the [mysql.gtid\_slave\_pos](../mysqlgtid_slave_pos-table/index) table, one for each transactional storage engine in use. This can improve replication performance if a server is using multiple different storage engines in different transactions.
The value is a list of engine names, separated by commas (','). Replication of transactions using these engines will automatically create new versions of the mysql.gtid\_slave\_pos table in the same engine and use that for future transactions (table creation takes place in a background thread). This avoids introducing a cross-engine transaction to update the GTID position. Only transactional storage engines are supported for gtid\_pos\_auto\_engines (this currently means [InnoDB](../innodb/index), [TokuDB](../tokudb/index), or [MyRocks](../myrocks/index)).
The variable can be changed dynamically, but replica SQL threads should be stopped when changing it, and it will take effect when the replicas are running again.
When setting the variable on the command line or in a configuration file, it is possible to specify engines that are not enabled in the server. The server will then still start if, for example, that engine is no longer used. Attempting to set a non-enabled engine dynamically in a running server (with SET GLOBAL gtid\_pos\_auto\_engines) will still result in an error.
Removing a storage engine from the variable will have no effect once the new tables have been created - as long as these tables are detected, they will be used.
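For example, per-engine GTID position tables could be enabled for InnoDB and MyRocks like this (the engine list is illustrative):

```
SET GLOBAL gtid_pos_auto_engines = 'innodb,rocksdb';
```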
* **Commandline:** `--gtid-pos-auto-engines=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string` (comma-separated list of engine names)
* **Default:** empty
* **Introduced:** [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/)
---
#### `gtid_cleanup_batch_size`
* **Description:** Normally does not need tuning. Specifies how many old rows must accumulate in the [mysql.gtid\_slave\_pos table](../mysqlgtid_slave_pos-table/index) before a background job is run to delete them. Can be increased to reduce the number of commits when using many different engines with [gtid\_pos\_auto\_engines](#gtid_pos_auto_engines), or to reduce CPU overhead when using a huge number of different [gtid\_domain\_ids](#gtid_domain_id). Can be decreased to reduce the number of old rows in the table.
* **Commandline:** `--gtid-cleanup-batch-size=#`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default:** `64`
* **Range:** `0` to `2147483647`
* **Introduced:** [MariaDB 10.4.1](https://mariadb.com/kb/en/mariadb-1041-release-notes/)
---
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Optimizing GROUP BY and DISTINCT Clauses in Subqueries Optimizing GROUP BY and DISTINCT Clauses in Subqueries
======================================================
A DISTINCT clause and a GROUP BY without a corresponding HAVING clause have no meaning in IN/ALL/ANY/SOME/EXISTS subqueries. The reason is that IN/ALL/ANY/SOME/EXISTS only check whether an outer row satisfies some condition with respect to all or any row in the subquery result. Therefore it doesn't matter whether the subquery has duplicate result rows: if some condition is true for some row of the subquery, it will be true for all duplicates of that row. Notice that GROUP BY without a corresponding HAVING clause is equivalent to DISTINCT.
[MariaDB 5.3](../what-is-mariadb-53/index) and later versions automatically remove DISTINCT and GROUP BY without HAVING if these clauses appear in an IN/ALL/ANY/SOME/EXISTS subquery. For instance:
```
select * from t1
where t1.a > ALL(select distinct b from t2 where t2.c > 100)
```
is transformed to:
```
select * from t1
where t1.a > ALL(select b from t2 where t2.c > 100)
```
Removing these unnecessary clauses allows the optimizer to find more efficient query plans because it doesn't need to take care of post-processing the subquery result to satisfy DISTINCT / GROUP BY.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb About Buildbot About Buildbot
==============
Overview
--------
The goal of MariaDB Foundation Buildbot is to ensure that the MariaDB Server is being thoroughly tested on all supported platforms and environments. We are currently running 100 different configurations for the following platforms:
* x64 and x86
* aarch64
* ppc64le
* s390x
and operating systems:
* Debian 9, 10, 11 and Sid
* Ubuntu 16.04, 18.04, 20.04 and 21.04
* Fedora 33 and 34
* CentOS 7 and 8
* RHEL 7 and 8
* SLES 12 and 15
* OpenSUSE 15 and 42
* Windows
* AIX-7.2
Moreover, we run other ecosystem tests for:
* PHP
* DBdeployer
* MySqlJS
* PyMySQL
Packages built by buildbot can be downloaded from [here](https://ci.mariadb.org/).
What is Buildbot?
-----------------
The MariaDB Foundation uses Buildbot, a continuous integration and testing framework to test and create MariaDB Server packages. It is hosted on <https://buildbot.mariadb.org/> and ensures that each push to the MariaDB Server GitHub repository is properly tested.
Who uses Buildbot?
------------------
Buildbot should be used by each MariaDB developer to ensure that new changes are properly tested on all supported platforms and environments. In order to enforce this, Buildbot is used for [branch protection](../branch-protection-using-buildbot/index). However, even though branch protection is enabled, **only a select few** builders are part of it. So, it is the developer's responsibility to monitor all the builders and make sure that everything is fine before making the final push to the main MariaDB branch.
Buildbot keywords
-----------------
* Changes/Repository - Any change that occurs in the source code (commit)
* Build Master - The main process that runs on a dedicated machine. It checks for changes in the source code and is in charge of scheduling builds
* Build - The actual configuration that is tested. It consists of a sequence of steps that define the actual test (e.g. get source code, compile, run tests, etc)
* Buildbot Worker - The process which waits for commands from the Build Master in order to run a build
How does the Buildbot work?
---------------------------
As for the Buildbot Master, we use a multi-master configuration. This means that we have multiple running master processes: a dedicated master for the user interface, and several others that look for changes and schedule builds.
Each time a push is made to the MariaDB Server Repository, it is detected by the buildbot master which schedules all the builds. Each build defines a different test configuration. We mainly use Docker Latent Workers which means that for each build, the master starts a Docker container on a remote machine. The container is configured to run the buildbot-worker process on startup. This process can now receive instructions from the master. In this way, by using latent workers there isn’t a buildbot-worker process continuously running on the worker machine. Instead, for each build a separate container is started.
Below, you can find a detailed step by step overview of what happens after a push is made to the MariaDB Server repository:
* Step 1: Detect a new change in the MariaDB Server repository
+ Trigger source tarball creation
* Step 2: Tarball creation
+ Clone the repository and create a source tarball corresponding to the latest changes
+ Trigger bintar builds
* Step 3: Bintar builds
+ Fetch the source tarball previously created
+ Compile
+ Test (mysql-test-run)
+ Save bintar
+ Trigger package creation builds
+ Trigger ecosystem builds
* Step 4.1: Package creation
+ Fetch source
+ Create packages
+ Save packages
+ Trigger installation builds
* Step 4.2: Ecosystem tests
* Step 5: Installation builds
+ Fetch packages
+ Test if they install successfully
+ Trigger upgrade builds
* Step 6: Upgrade builds
+ Test if the previous MariaDB Server version can be successfully upgraded to the current one
The information below refers to the old Buildbot (<http://buildbot.askmonty.org/>), and not the new Buildbot (<https://buildbot.mariadb.org/>). The information is old, outdated, or otherwise currently incorrect.
Overview
--------
The current state of the MariaDB trees with respect to build or test failures is always available from the [Buildbot setup](http://buildbot.askmonty.org/buildbot/) page.
* [MariaDB-5.5 waterfall status page.](http://buildbot.askmonty.org/buildbot/waterfall?branch=5.5)
* [MariaDB-10.0 waterfall status page.](http://buildbot.askmonty.org/buildbot/waterfall?branch=10.0)
* [MariaDB-10.1 waterfall status page.](http://buildbot.askmonty.org/buildbot/waterfall?branch=10.1)
The BuildBot setup polls the Launchpad trees every 5 minutes for changes. Whenever a new push is found in one of our trees, the new code is compiled and run through the test suite.
If all platforms are green after this, everything is good. If not, it means there is a problem with the push, and someone needs to look into it ASAP. If it was your push, then the someone who needs to look at it is you!
BuildBot is a generic, GPL'ed program providing a continuous integration test framework. For more information on BuildBot, see [the BuildBot project homepage](http://buildbot.net/trac).
Volunteering to Run a Build Slave
---------------------------------
Many of our build hosts are run by [community](../community/index) members, and we are always looking for additional volunteers to help us cover additional platforms or build options in BuildBot.
If you are able to provide a spare machine for this purpose, your help is greatly appreciated! This is a good way to get involved without having to spend a lot of time on it. Get started by writing an email to 'maria-developers (at) lists.launchpad.net' with an offer to run a BuildBot slave.
Setting up the Slave BuildBot
-----------------------------
See [buildbot-setup](../buildbot-setup/index).
### Pausing mysql-test-run.pl
Sometimes you need to work when your computer is busy running tests for buildbot. We've added a new feature to the mysql-test-run.pl script which allows you to stop it temporarily so you can use your computer and then restart the tests when you're ready.
To do this, define the environment variable "MTR\_STOP\_FILE". Whenever the file specified by this environment variable exists, the mysql-test-run.pl script will stop as soon as it is able to do so (i.e. it won't stop immediately). When the file is removed, the mysql-test-run.pl script will continue from where it left off.
If you plan on using this feature you should also set the "MTR\_STOP\_KEEP\_ALIVE" environment variable with a value of 120. This will make the script print messages to buildbot every 2 minutes which will prevent a timeout.
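Putting the two variables together, a pause-and-resume session could look like this (the file path is arbitrary; any path visible to the test process works):

```shell
# Pause and resume mysql-test-run.pl via the MTR_STOP_FILE mechanism.
export MTR_STOP_FILE=/tmp/mtr_stop
export MTR_STOP_KEEP_ALIVE=120   # print keep-alive messages every 2 minutes while paused

touch "$MTR_STOP_FILE"   # mysql-test-run.pl pauses as soon as it notices the file
# ... use the machine for other work ...
rm "$MTR_STOP_FILE"      # mysql-test-run.pl resumes from where it left off
```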
Database with Test Results
--------------------------
Buildbot saves the results of test runs in a database, to be used for enhanced reporting on web pages without need to change the Buildbot code, and for data mining when investigating test failures.
The database schema is documented under [Buildbot Database Schema](../buildbot_database_schema/index). The schema is likely to evolve as we gradually add more kinds of information.
For now, the data is not externally available. But the plan is to set up a slave database to replicate the data, and provide access (e.g. remote database accounts) to members of the community with interesting ideas about how to present or mine this data, or who are just curious to play with it. If anyone has an interest in this, or wants to volunteer a slave host for this purpose, please send a mail to [[email protected]](mailto:[email protected]). The more people show interest in this, the faster it is likely to happen!
Reports
-------
We are developing new reports fed off the test results in the database. These reports will be located [here](http://buildbot.askmonty.org/buildbot/reports/). The first report is the [Cross Reference](http://buildbot.askmonty.org/buildbot/reports/cross_reference) report. This report allows all test failures to be searched.
Buildbot Maintenance
--------------------
Here is some information on how our Buildbot installation is set up and maintained:
* The configuration file is included in the [Tools for MariaDB](https://github.com/MariaDB/mariadb.org-tools) repository.
* The building and testing of binary packages is documented on the [package-testing-with-buildbot-and-kvm](../package-testing-with-buildbot-and-kvm/index) page.
* We developed a small tool, [runvm](../buildbot_runvm/index), which is used to do some of the builds inside a virtual machine, mostly to test builds of binary packages.
* The [BuildBot Development](../buildbot_development/index) page describes how we developed some of the enhancements to BuildBot that we use and have contributed upstream.
See Also
--------
* The [Buildbot ToDo](../buildbot-todo/index) page
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb HashiCorp Vault and MariaDB HashiCorp Vault and MariaDB
===========================
Vault is open source software for secret management provided by HashiCorp. It is designed to avoid sharing secrets of various types, like passwords and private keys. When building automation, Vault is a good solution to avoid storing secrets in plain text in a repository.
MariaDB and Vault may relate to each other in several ways:
* MariaDB has a [Hashicorp Key Management plugin](../hashicorp-key-management-plugin/index), to manage and rotate SSH keys.
* Users passwords can be stored in Vault.
* MariaDB (and MySQL) can be used as a secret engine, a component which stores, generates, or encrypts data.
* MariaDB (and MySQL) can be used as a backend storage, providing durability for Vault data.
For information about how to install Vault, see [Install Vault](https://www.vaultproject.io/docs/install).
Vault Features
--------------
Vault is used via an HTTP/HTTPS API.
Vault is identity-based. Users login and Vault sends them a token that is valid for a certain amount of time, or until certain conditions occur. Users with a valid token may request to obtain secrets for which they have proper permissions.
Vault encrypts the secrets it stores.
Vault can optionally audit changes to secrets and secrets requests by the users.
Vault Architecture
------------------
Vault is a server. This allows decoupling the secrets management logic from the clients, which only need to login and keep a token until it expires.
The server can actually be a cluster of servers, to implement high availability.
The main Vault components are:
* **Storage Backend**: This is where the secrets are stored. Vault only sends encrypted data to the storage backend.
* **HTTP API**: This API is used by the clients, and provides an access to Vault server.
* **Barrier**: Similarly to an actual barrier, it protects all inner Vault components. The HTTP API and the storage backend are outside of the barrier and could be accessed by anyone. All communications from and to these components have to pass through the barrier. The barrier verifies data and encrypts it. The barrier can have two states: *sealed* or *unsealed*. Data can only pass through when the barrier is unsealed. All the following components are located inside the barrier.
* **Auth Method**: Handles login attempts from clients. When a login succeeds, the auth method returns a list of security policies to Vault core.
* **Token Store**: This is where the tokens generated as a result of a successful login are stored.
* **Secrets Engines**: These components manage secrets. They can have different levels of complexity. Some of them simply expect to receive a key, and return the corresponding secret. Others may generate secrets, including one-time-passwords.
* **Audit Devices**: These components log the requests received by Vault and the responses sent back to the clients. There may be multiple devices, in which case an **Audit Broker** sends the request or response to the proper device.
Dev Mode
--------
It is possible to start Vault in dev mode:
```
vault server -dev
```
Dev mode is useful for learning Vault, or for running experiments with particular features. It is extremely insecure, because dev mode is equivalent to starting Vault with several insecure options, so Vault should never run in dev mode in production. All the regular Vault features are, however, available in dev mode.
Dev mode simplifies all operations. Actually, no configuration is necessary to get Vault up and running in dev mode. It makes it possible to communicate with the Vault API from the shell without any authentication. Data is stored in memory by default. Vault is unsealed by default, and if explicitly sealed, it can be unsealed using only one key.
For more details, see ["Dev" Server Mode](https://www.vaultproject.io/docs/concepts/dev-server) in Vault documentation.
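As a sketch of the dev-mode workflow described above (the secret path and values are illustrative, and this assumes the `vault` binary is installed):

```
# Terminal 1: start a dev server (insecure; for learning only)
vault server -dev

# Terminal 2: point the CLI at the dev server and use the KV secrets engine
export VAULT_ADDR='http://127.0.0.1:8200'
vault kv put secret/mariadb/app db_password=s3cret
vault kv get -field=db_password secret/mariadb/app
```

No authentication step is needed here because dev mode logs the CLI in automatically with the root token.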
Vault Resources and References
------------------------------
* [Documentation](https://www.vaultproject.io/docs).
* [MySQL/MariaDB Database Secrets Engine](https://www.vaultproject.io/docs/secrets/databases/mysql-maria).
* [MySQL Storage Backend](https://www.vaultproject.io/docs/configuration/storage/mysql).
---
Content initially contributed by [Vettabase Ltd](https://vettabase.com/).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb MariaDB ColumnStore software upgrade 1.1.2 GA to 1.1.3 GA MariaDB ColumnStore software upgrade 1.1.2 GA to 1.1.3 GA
=========================================================
MariaDB ColumnStore software upgrade 1.1.2 GA to 1.1.3 GA
---------------------------------------------------------
Additional Dependency Packages exist for 1.1.3, so make sure you install those based on the "Preparing for ColumnStore Installation" Guide.
Note: Columnstore.xml modifications you made manually are not automatically carried forward on an upgrade. These modifications will need to be incorporated back into Columnstore.xml once the upgrade has occurred.
The previous configuration file will be saved as /usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave.
If you have specified a root database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user home directory with 600 file permissions with the following content (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
### Choosing the type of upgrade
As noted in the Preparing guide, you can install MariaDB ColumnStore with the use of soft-links. If the soft-links are set up at the data directory level, like mariadb/columnstore/data and mariadb/columnstore/dataX, then your upgrade will happen without any issues. In the case where you have a soft-link at the top directory, like /usr/local/mariadb, you will need to upgrade using the binary package. If you upgrade using the RPM package and tool, this soft-link will be deleted when you perform the upgrade process and the upgrade will fail.
#### Root User Installs
#### Upgrading MariaDB ColumnStore using RPMs
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.1.3-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.
* Shut down the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
```
# tar -zxf mariadb-columnstore-1.1.3-1-centos#.x86_64.rpm.tar.gz
```
* Upgrade the RPMs. The MariaDB ColumnStore software will be installed in /usr/local/.
```
# rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
# rpm -ivh mariadb-columnstore-*1.1.3*rpm
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
For RPM Upgrade, the previous configuration file will be saved as:
/usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave
### Initial download/install of MariaDB ColumnStore binary package
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.1.3-1.x86\_64.bin.tar.gz (Binary 64-BIT) into the /usr/local directory on the server where you are installing MariaDB ColumnStore.
* Shut down the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# /usr/local/mariadb/columnstore/bin/pre-uninstall
```
* Unpack the tarball, in the /usr/local/ directory.
```
# tar -zxvf mariadb-columnstore-1.1.3-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# /usr/local/mariadb/columnstore/bin/post-install
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
### Upgrading MariaDB ColumnStore using the DEB package
A DEB upgrade would be done on a system that supports DEBs, like Debian or Ubuntu.
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /root directory
mariadb-columnstore-1.1.3-1.amd64.deb.tar.gz
(DEB 64-BIT) to the server where you are installing MariaDB ColumnStore.
* Shut down the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate DEBs.
```
# tar -zxf mariadb-columnstore-1.1.3-1.amd64.deb.tar.gz
```
* Remove, purge and install all MariaDB ColumnStore debs
```
# cd /root/
# dpkg -r $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg -P $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg --install mariadb-columnstore-*1.1.3-1*deb
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
#### Non-Root User Installs
### Initial download/install of MariaDB ColumnStore binary package
Upgrade MariaDB ColumnStore as the non-root user on the server designated as PM1:
* Download the package mariadb-columnstore-1.1.3-1.x86\_64.bin.tar.gz (Binary 64-BIT) into the /home/'non-root-user' directory on the server where you are installing MariaDB ColumnStore.
* Shut down the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# $HOME/mariadb/columnstore/bin/pre-uninstall --installdir=/home/guest/mariadb/columnstore
```
* Unpack the tarball in the $HOME/ directory.
```
# tar -zxvf mariadb-columnstore-1.1.3-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# $HOME/mariadb/columnstore/bin/post-install --installdir=/home/guest/mariadb/columnstore
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# $HOME/mariadb/columnstore/bin/postConfigure -u -i /home/guest/mariadb/columnstore
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb LOCALTIME LOCALTIME
=========
Syntax
------
```
LOCALTIME
LOCALTIME([precision])
```
Description
-----------
`LOCALTIME` and `LOCALTIME()` are synonyms for `[NOW()](../now/index)`.
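For example (the returned values depend on the current date and time, so no sample output is shown):

```
SELECT LOCALTIME, LOCALTIME(6), NOW();
```

`LOCALTIME(6)` returns the current date and time with microsecond precision, just like `NOW(6)`.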
See Also
--------
* [Microseconds in MariaDB](../microseconds-in-mariadb/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb PBXT System Variables PBXT System Variables
=====================
**MariaDB until [5.3](../what-is-mariadb-53/index)**PBXT is no longer maintained, and is not part of [MariaDB 5.5](../what-is-mariadb-55/index) or later.
This page documents system variables related to the [PrimeBase XT storage engine (PBXT)](../pbxt/index). PBXT is no longer maintained, and is not part of [MariaDB 5.5](../what-is-mariadb-55/index) or later.
See [Server System Variables](../server-system-variables/index) for a complete list of system variables and instructions on setting them.
See also the [Full list of MariaDB options, system and status variables](../full-list-of-mariadb-options-system-and-status-variables/index).
Variables that specify a number of bytes may include a unit indication after the value. For example: 100KB, 64MB, etc. There should be no space between the number and the unit. Units are case insensitive (KB = Kb = kb). If no unit is specified then bytes is assumed. The recognized units are:
* **KB** (or **K**) - Kilobyte, 1024 bytes
* **MB** (or **M**) - Megabyte, 1024 KB
* **GB** (or **G**) - Gigabyte, 1024 MB
* **TB** (or **T**) - Terabyte, 1024 GB
* **PB** (or **P**) - Petabyte, 1024 TB
Variables which use this type of value are: `pbxt_index_cache_size`, `pbxt_record_cache_size`, `pbxt_log_cache_size`, `pbxt_log_file_threshold`, `pbxt_checkpoint_frequency`, `pbxt_data_log_threshold`, `pbxt_log_buffer_size`, `pbxt_data_file_grow_size`, and `pbxt_row_file_grow_size`.
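The unit rules above (case-insensitive suffixes, 1024-based multipliers, bytes when no unit is given) can be illustrated with a short parser. This is an illustrative Python sketch, not part of MariaDB or PBXT:

```python
# Multipliers for the recognized PBXT byte units (1024-based).
UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4, "P": 1024**5}

def parse_byte_value(value):
    """Parse values like '100KB', '64MB', '32M', or '4096' into bytes."""
    s = value.strip().upper()
    # Drop the trailing 'B' of KB/MB/GB/TB/PB so only the multiplier letter remains.
    if s.endswith("B") and len(s) > 1 and not s[-2].isdigit():
        s = s[:-1]
    if s and s[-1] in UNITS:
        return int(s[:-1]) * UNITS[s[-1]]
    return int(s)  # no unit: plain bytes
```

For instance, `parse_byte_value("100KB")` and `parse_byte_value("100kb")` both yield 102400, matching the rule that units are case insensitive.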
#### PBXT Data Log Variables
PBXT stores part of the database in the data logs. This is mostly data from rows containing long VARCHAR fields or BLOB data. The data logs are managed by the "compactor" thread. When a record is deleted from a data log, the data is marked as garbage. When the total garbage in a data log reaches a certain threshold, the compactor thread compacts the data log by copying the valid data to a new data log, and deleting the old data log.
#### Options for PBXT
| Option | Default Value |
| --- | --- |
| `--pbxt` | `ON` |
| `--pbxt-max-threads` | `0` |
| `--pbxt-statistics` | `ON` |
#### `pbxt_auto_increment_mode`
* **Description:** The parameter determines how PBXT manages auto-increment values. Possible values are '`0`' (MySQL standard) or '`1`' (Previous IDs are never re-used).
In the standard 'MySQL' mode it is possible that an auto-increment value is re-issued. This occurs when the maximum auto-increment value is deleted, and then MariaDB is restarted. This occurs because the next auto-increment value to be issued is determined at startup by retrieving the current maximum auto-increment value from the table.
In mode 1, auto-increment values are never re-issued because PBXT automatically increments the table-level AUTO\_INCREMENT table option. The AUTO\_INCREMENT value is incremented in steps of 100. Since this requires the table file to be flushed to disk, this can influence performance.
* **Commandline:** `--pbxt-auto-increment-mode=#`
* **Default Value:** `0`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_checkpoint_frequency`
* **Description:** The amount of data written to the transaction log before a checkpoint is performed.
* **Commandline:** `--pbxt-checkpoint-frequency=#`
* **Default Value:** `24MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_data_file_grow_size`
* **Description:** The grow size of the handle data (.xtd) files.
* **Commandline:** `--pbxt-data-file-grow-size=#`
* **Default Value:** `2MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_data_log_threshold`
* **Description:** The maximum size of a data log file. PBXT can create a maximum of 32000 data logs, which are used by all tables. So the value of this variable can be increased to increase the total amount of data that can be stored in the database.
* **Commandline:** `--pbxt-data-log-threshold=#`
* **Default Value:** `64MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_flush_log_at_trx_commit`
* **Description:** This variable specifies the durability of recently committed transactions. By reducing the durability, the speed of write operations can be increased.
'`0`' - Lowest durability, the transaction log is not written or flushed on transaction commit. In this case it is possible to lose transactions if the server executable crashes.
'`1`' - Full durability, the transaction log is written and flushed on every transaction commit.
'`2`' - Medium durability, the transaction log is written, but not flushed on transaction commit. In this case it is possible to lose transactions if the server machine crashes (for example, a power failure).
In all cases, the transaction log is flushed at least once every second. This means that it is only ever possible to lose database changes that occurred within the last second.
* **Commandline:** `--pbxt-flush-log-at-trx-commit=#`
* **Default Value:** `1`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_garbage_threshold`
* **Description:** The percentage of garbage in a data log file before it is compacted. This is a value between 1 and 99.
* **Commandline:** `--pbxt-garbage-threshold=#`
* **Default Value:** `50`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_index_cache_size`
* **Description:** The amount of memory allocated to the index cache. The memory allocated here is used only for caching index pages (.xti files).
* **Commandline:** `--pbxt-index-cache-size=#`
* **Default Value:** `32MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_log_buffer_size`
* **Description:** This is the size of the buffer used when writing a data log. The engine allocates one buffer per thread, but only if the thread is required to write a data log.
* **Commandline:** `--pbxt-log-buffer-size=#`
* **Default Value:** `256MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_log_cache_size`
* **Description:** The amount of memory allocated to the transaction log cache, used to cache transaction log data.
* **Commandline:** `--pbxt-log-cache-size=#`
* **Default Value:** `32MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_log_file_count`
* **Description:** The number of transaction log files on disk before logs that are no longer required are deleted, default value is 3. The number of transaction logs on disk may exceed this number if the logs are still being read.
If a transaction log has been read (i.e. the log is offline), it will be recycled for writing again, unless it must be deleted because the number of logs on disk exceeds this threshold. Recycling logs is an optimization, because writing to a pre-allocated file is faster than writing to the end of a file.
Note: an exception to this rule is Mac OS X. On Mac OS X, old log files are not recycled because writing to a pre-allocated file is slower than writing to the end of a file.
* **Commandline:** `--pbxt-log-file-count=#`
* **Default Value:** `3`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_log_file_threshold`
* **Description:** The size of a transaction log file (xlog-\*.xt files) before "rollover", and a new log file is created.
* **Commandline:** `--pbxt-log-file-threshold=#`
* **Default Value:** `32MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_offline_log_function`
* **Description:** This variable determines what happens to a transaction log when it is offline. A log is offline if PBXT is no longer reading or writing to the log. There are 3 possibilities:
'`0`' - Recycle log (default). This means the log is renamed and written again.
'`1`' - Delete log (default on Mac OS X).
'`2`' - Keep log. The logs can be used to repeat all operations that were applied to the database.
* **Commandline:** `--pbxt-offline-log-function=#`
* **Default Value:** `0`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_record_cache_size`
* **Description:** This is the amount of memory allocated to the record cache used to cache table data. This memory is used to cache changes to the handle data (.xtd) and row index (.xtr) files.
* **Commandline:** `--pbxt-record-cache-size=#`
* **Default Value:** `32MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_row_file_grow_size`
* **Description:** The grow size of the row index (.xtr) files.
* **Commandline:** `--pbxt-row-file-grow-size=#`
* **Default Value:** `256KB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_sweeper_priority`
* **Description:** Determines the priority of the background Sweeper thread. Possible values are '`0`' (Low), '`1`' (Normal), or '`2`' (High). The Sweeper is responsible for removing deleted records and index entries (deleted records also result from UPDATE statements). If many old deleted records accumulate, search operations become slower. Therefore it may improve performance to increase the priority of the Sweeper on a machine with 4 or more cores.
* **Commandline:** `--pbxt-sweeper-priority=#`
* **Default Value:** `0`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_support_xa`
* **Description:** This variable determines if XA (2-phase commit) support is enabled.
* **Commandline:** `--pbxt-support-xa=#`
* **Default Value:** `TRUE`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
#### `pbxt_transaction_buffer_size`
* **Description:** The size of the global transaction log buffer (the engine allocates 2 buffers of this size). Data to be written to a transaction log file is first written to the transaction log buffer. Since the buffer is flushed on transaction commit, it only makes sense to use a large transaction log buffer if you have longer-running transactions, or many transactions running in parallel.
* **Commandline:** `--pbxt-transaction-buffer-size=#`
* **Default Value:** `1MB`
* **Removed:** `[MariaDB 5.5](../what-is-mariadb-55/index)`
---
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb USER USER
====
Syntax
------
```
USER()
```
Description
-----------
Returns the current MariaDB user name and host name, given when authenticating to MariaDB, as a string in the utf8 [character set](../data-types-character-sets-and-collations/index).
Note that the value of USER() may differ from the value of [CURRENT\_USER()](../current_user/index), which is the user used to authenticate the current client. `[CURRENT\_ROLE()](../current_role/index)` returns the current active role.
`SYSTEM_USER()` and `SESSION_USER()` are synonyms for `USER()`.
Statements using the `USER()` function or one of its synonyms are not [safe for statement level replication](../unsafe-statements-for-replication/index).
Examples
--------
```
shell> mysql --user="anonymous"
SELECT USER(),CURRENT_USER();
+---------------------+----------------+
| USER() | CURRENT_USER() |
+---------------------+----------------+
| anonymous@localhost | @localhost |
+---------------------+----------------+
```
To select only the IP address, use [SUBSTRING\_INDEX()](../substring_index/index),
```
SELECT SUBSTRING_INDEX(USER(), '@', -1);
+----------------------------------+
| SUBSTRING_INDEX(USER(), '@', -1) |
+----------------------------------+
| 192.168.0.101 |
+----------------------------------+
```
See Also
--------
* [CURRENT\_USER()](../current_user/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb RETURN RETURN
======
Syntax
------
```
RETURN expr
```
The `RETURN` statement terminates execution of a [stored function](../stored-functions/index) and returns the value *`expr`* to the function caller. There must be at least one `RETURN` statement in a stored function. If the function has multiple exit points, all exit points must have a `RETURN`.
This statement is not used in [stored procedures](../stored-procedures/index), [triggers](../triggers/index), or [events](../events/index). [LEAVE](../leave/index) can be used instead.
The following example shows that `RETURN` can return the result of a [scalar subquery](../subqueries-scalar-subqueries/index):
```
CREATE FUNCTION users_count() RETURNS BOOL
READS SQL DATA
BEGIN
RETURN (SELECT COUNT(DISTINCT User) FROM mysql.user);
END;
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb mysql.help_topic Table mysql.help\_topic Table
=======================
`mysql.help_topic` is one of the four tables used by the [HELP command](../help-command/index). It is populated when the server is installed by the `fill_help_table.sql` script. The other help tables are [help\_relation](../mysqlhelp_relation-table/index), [help\_category](../mysqlhelp_category-table/index) and [help\_keyword](../mysqlhelp_keyword-table/index).
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
The `mysql.help_topic` table contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `help_topic_id` | `int(10) unsigned` | NO | PRI | `NULL` | |
| `name` | `char(64)` | NO | UNI | `NULL` | |
| `help_category_id` | `smallint(5) unsigned` | NO | | `NULL` | |
| `description` | `text` | NO | | `NULL` | |
| `example` | `text` | NO | | `NULL` | |
| `url` | `char(128)` | NO | | `NULL` | |
Example
-------
```
SELECT * FROM help_topic\G;
...
*************************** 704. row ***************************
help_topic_id: 692
name: JSON_DEPTH
help_category_id: 41
description: JSON functions were added in MariaDB 10.2.3.
Syntax
------
JSON_DEPTH(json_doc)
Description
-----------
Returns the maximum depth of the given JSON document, or
NULL if the argument is null. An error will occur if the
argument is an invalid JSON document.
Scalar values or empty arrays or objects have a depth of 1.
Arrays or objects that are not empty but contain only
elements or member values of depth 1 will have a depth of 2.
In other cases, the depth will be greater than 2.
Examples
--------
SELECT JSON_DEPTH('[]'), JSON_DEPTH('true'),
JSON_DEPTH('{}');
+------------------+--------------------+------------------+
| JSON_DEPTH('[]') | JSON_DEPTH('true') |
JSON_DEPTH('{}') |
+------------------+--------------------+------------------+
| 1 | 1 | 1 |
+------------------+--------------------+------------------+
SELECT JSON_DEPTH('[1, 2, 3]'), JSON_DEPTH('[[], {},
[]]');
+-------------------------+----------------------------+
| JSON_DEPTH('[1, 2, 3]') | JSON_DEPTH('[[], {}, []]') |
+-------------------------+----------------------------+
| 2 | 2 |
+-------------------------+----------------------------+
SELECT JSON_DEPTH('[1, 2, [3, 4, 5, 6], 7]');
+---------------------------------------+
| JSON_DEPTH('[1, 2, [3, 4, 5, 6], 7]') |
+---------------------------------------+
| 3 |
+---------------------------------------+
URL: https://mariadb.com/kb/en/json_depth/
example:
url: https://mariadb.com/kb/en/json_depth/
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb FORCE INDEX FORCE INDEX
===========
Description
-----------
Forcing an index to be used is mostly useful when the optimizer decides to do a table scan even if you know that using an index would be better. (The optimizer could decide to do a table scan even if there is an available index when it believes that most or all rows will match and it can avoid the overhead of using the index).
FORCE INDEX works by only considering the given indexes (as with USE INDEX), but in addition it tells the optimizer to regard a table scan as something very expensive. However, if none of the 'forced' indexes can be used, then a table scan will be used anyway.
FORCE INDEX cannot force an [ignored index](../ignored-indexes/index) to be used - it will be treated as if it doesn't exist.
Example
-------
```
CREATE INDEX Name ON City (Name);
EXPLAIN SELECT Name,CountryCode FROM City FORCE INDEX (Name)
WHERE name>="A" and CountryCode >="A";
```
This produces:
```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE City range Name Name 35 NULL 4079 Using where
```
### Index Prefixes
When using index hints (USE, FORCE or IGNORE INDEX), the index name value can also be an unambiguous prefix of an index name.
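For example, with the `Name` index created above, the hint could reference the index by a prefix such as `Na`, assuming no other index name on the table starts with those letters:

```
EXPLAIN SELECT Name, CountryCode FROM City FORCE INDEX (Na)
WHERE Name >= "A" AND CountryCode >= "A";
```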
See Also
--------
* [Index Hints: How to Force Query Plans](../index-hints-how-to-force-query-plans/index) for more details
* [USE INDEX](../use-index/index)
* [IGNORE INDEX](../ignore-index/index)
* [Ignored Indexes](../ignored-indexes/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb The Essentials of an Index The Essentials of an Index
==========================
Imagine you've created a table with the following rows (this is the same table as used in the [More Advanced Joins](../more-advanced-joins/index) tutorial).
```
+----+------------+-----------+-------------------------+---------------------------+--------------+
| ID | First_Name | Last_Name | Position | Home_Address | Home_Phone |
+----+------------+-----------+-------------------------+---------------------------+--------------+
| 1 | Mustapha | Mond | Chief Executive Officer | 692 Promiscuous Plaza | 326-555-3492 |
| 2 | Henry | Foster | Store Manager | 314 Savage Circle | 326-555-3847 |
| 3 | Bernard | Marx | Cashier | 1240 Ambient Avenue | 326-555-8456 |
| 4 | Lenina | Crowne | Cashier | 281 Bumblepuppy Boulevard | 328-555-2349 |
| 5 | Fanny | Crowne | Restocker | 1023 Bokanovsky Lane | 326-555-6329 |
| 6 | Helmholtz | Watson | Janitor | 944 Soma Court | 329-555-2478 |
+----+------------+-----------+-------------------------+---------------------------+--------------+
```
Now, imagine you've been asked to return the home phone of Fanny Crowne. Without indexes, the only way to do it is to go through every row until you find the matching first name and surname. Now imagine there are millions of records and you can see that, even for a speedy database server, this is highly inefficient.
The answer is to sort the records. If they were stored in alphabetical order by surname, even a human could quickly find a record amongst a large number. But we can't sort the entire table by surname. What if we also want to look up a record by ID, or by first name? The answer is to create separate indexes for each column we wish to sort by. An index simply contains the sorted data (such as surname), and a link to the original record.
For example, an index on Last\_Name:
```
+-----------+----+
| Last_Name | ID |
+-----------+----+
| Crowne | 4 |
| Crowne | 5 |
| Foster | 2 |
| Marx | 3 |
| Mond | 1 |
| Watson | 6 |
+-----------+----+
```
and an index on Position
```
+-------------------------+----+
| Position | ID |
+-------------------------+----+
| Cashier | 3 |
| Cashier | 4 |
| Chief Executive Officer | 1 |
| Janitor | 6 |
| Restocker | 5 |
| Store Manager | 2 |
+-------------------------+----+
```
would allow you to quickly find the phone numbers of all the cashiers, or the phone number of the employee with the surname Marx, very quickly.
Where possible, you should create an index for each column that you search for records by, to avoid having the server read every row of a table.
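For example, assuming the table above is named `Employees` (the tutorial's actual table name may differ), the two indexes shown could be created with:

```
CREATE INDEX Last_Name_idx ON Employees (Last_Name);
CREATE INDEX Position_idx ON Employees (Position);
```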
See [CREATE INDEX](../create-index/index) and [Getting Started with Indexes](../getting-started-with-indexes/index) for more information.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 a sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 a
==========================================================
sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 key cache partitions off, 8, and 32 and key buffer size 400
MariaDB sysbench benchmark comparison for key\_cache\_partitions, in %, with key\_buffer\_size = 400MB
Each test was run 3 times for 15 minutes with 3 minutes warmup.
```
Number of threads
1 4 8 16 32 64 128
sysbench test
oltp_complex_ro
8 / off -0.78 -0.42 -0.18 -0.49 -1.03 -0.64 1.08
32 / off -0.38 -0.30 0.55 -0.39 -0.75 -0.05 2.49
oltp_simple
8 / off -1.19 -2.20 -0.74 -2.74 -1.54 0.28 -1.46
32 / off -1.24 -1.22 0.33 -0.13 0.11 2.09 -1.34
select
8 / off -0.71 -1.68 -1.48 -2.05 0.94 -2.93 -0.18
32 / off -0.71 -1.33 -2.11 -0.63 -0.40 -19.68* -11.45*
update_index
8 / off -1.30 4.37 -14.69* -2.56 17.69* -1.14 2.82
32 / off -1.47 7.03* 0.71 -0.72 15.61* 1.61 0.33
( 8/off*100)-100
(32/off*100)-100
* means due to unusual high STDEV (see OO.org spreadsheet for details)
off means key_cache_partitions off
8 means key_cache_partitions = 8
32 means key_cache_partitions = 32
```
Benchmark was run on perro: Linux openSUSE 11.1 (x86\_64), single socket dual-core Intel 3.2GHz. with 1MB L2 cache, 2GB RAM, data\_dir on 2 disk software RAID 0
MariaDB and MySQL were compiled with
```
BUILD/compile-amd64-max
```
MariaDB revision was:
```
revno: 2742
committer: Igor Babaev <[email protected]>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
WL#86: Partitioned key cache for MyISAM.
This is the base patch for the task.
```
sysbench was run with the following parameters:
```
--oltp-table-size=20000000 \ # 20 million rows.
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000 \
--rand-seed=303
```
and these variable parameters:
```
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
```
Configuration used for MariaDB:
```
--no-defaults \
--datadir=/mnt/data/sysbench/data \
--language=./sql/share/english \
--key_buffer_size=400M \
--key_cache_partitions=32 \ # Off | 8 | 32
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=/tmp/mysql.sock \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=/mnt/data/sysbench
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Building Cassandra Storage Engine
=================================
THIS PAGE IS OBSOLETE: it describes how to build a branch of MariaDB 5.5 with Cassandra SE. Cassandra SE is part of [MariaDB 10.0](../what-is-mariadb-100/index), which uses a different approach to building.
This page describes how to build the [Cassandra Storage Engine](../cassandra-storage-engine/index).
Getting the source code
-----------------------
The code is in a bazaar branch at [lp:~maria-captains/maria/5.5-cassandra](https://code.launchpad.net/~maria-captains/maria/5.5-cassandra).
Alternatively, you can download a tarball from <http://ftp.osuosl.org/pub/mariadb/mariadb-5.5.27/cassandra-preview/>
Building
--------
The build process is not fully streamlined yet. It is
* known to work on Fedora 15 and OpenSUSE
* known not to work on Ubuntu Oneiric Ocelot (see [MDEV-501](https://jira.mariadb.org/browse/MDEV-501)).
* known to work on Ubuntu Precise Pangolin
The build process is as follows:
* Install Cassandra (we tried 1.1.3 ... 1.1.5, 1.2 beta versions should work but haven't been tested)
* Install the Thrift library (we used 0.8.0 and [0.9.0-trunk](https://dist.apache.org/repos/dist/release/thrift/0.9.0/thrift-0.9.0.tar.gz)), only the C++ backend is needed.
+ we have installed it by compiling the source tarball downloaded from [thrift.apache.org](http://thrift.apache.org/)
* edit `storage/cassandra/CMakeLists.txt` and modify the `INCLUDE_DIRECTORIES` directive to point to Thrift's include directory.
* `export LIBS="-lthrift"` (on another machine, `-lthrift -ldl` was needed)
* `export LDFLAGS=-L/path/to/thrift/libs`
* Build the server
+ we used the BUILD/compile-pentium-max script (the name is historical; it actually builds an optimized amd64 binary)
Running the server
------------------
The Cassandra storage engine is linked into the server (i.e., it is not a plugin). All you need to do is make sure Thrift's libthrift.so can be found by the loader. This may require adjusting the LD\_LIBRARY\_PATH variable.
Running tests
-------------
There is a basic test suite. In order to run it, one needs to:
* Start Cassandra on localhost
* Set PATH so that `cqlsh` and `cassandra-cli` binaries can be found
* From the build directory, run
```
cd mysql-test
./mysql-test-run t/cassandra.test
```
CREATE SERVER
=============
Syntax
------
```
CREATE [OR REPLACE] SERVER [IF NOT EXISTS] server_name
FOREIGN DATA WRAPPER wrapper_name
OPTIONS (option [, option] ...)
option:
{ HOST character-literal
| DATABASE character-literal
| USER character-literal
| PASSWORD character-literal
| SOCKET character-literal
| OWNER character-literal
| PORT numeric-literal }
```
Description
-----------
This statement creates the definition of a server for use with the [Spider](../spider/index), [Connect](../connect/index), [FEDERATED](../federated-storage-engine/index) or [FederatedX](../federatedx/index) storage engine. The CREATE SERVER statement creates a new row within the [servers](../mysqlservers-table/index) table within the mysql database. This statement requires the [SUPER](../grant/index#super) privilege or, from [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), the [FEDERATED ADMIN](../grant/index#federated-admin) privilege.
The server\_name should be a unique reference to the server. Server definitions are global within the scope of the server; it is not possible to qualify the server definition to a specific database. server\_name has a maximum length of 64 characters (longer names are silently truncated) and is case-insensitive. You may specify the name as a quoted string.
The wrapper\_name may be quoted with single quotes. Supported values are:
* `mysql`
* `mariadb` (in [MariaDB 10.3](../what-is-mariadb-103/index) and later)
For each option you must specify either a character literal or a numeric literal. Character literals are UTF-8, support a maximum length of 64 characters, and default to a blank (empty) string. String literals are silently truncated to 64 characters. Numeric literals must be a number between 0 and 9999; the default value is 0.
**Note**: The `OWNER` option is currently not applied, and has no effect on the ownership or operation of the server connection that is created.
The CREATE SERVER statement creates an entry in the [mysql.servers](../mysqlservers-table/index) table that can later be used with the CREATE TABLE statement when creating a [Spider](../spider/index), [Connect](../connect/index), [FederatedX](../federatedx/index) or [FEDERATED](../federated-storage-engine/index) table. The options that you specify will be used to populate the columns in the mysql.servers table. The table columns are Server\_name, Host, Db, Username, Password, Port and Socket.
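As an illustration of that flow, a server definition can later be referenced from a table definition. The sketch below uses hypothetical server and table names, with the `CONNECTION` syntax of the FEDERATED/FederatedX engines:

```
CREATE SERVER remote_srv
  FOREIGN DATA WRAPPER mysql
  OPTIONS (USER 'remote_user', HOST '192.168.1.106', DATABASE 'test', PORT 3306);

CREATE TABLE t1 (
  id INT NOT NULL
) ENGINE=FEDERATED CONNECTION='remote_srv/t1';
```

Here `'remote_srv/t1'` tells the engine to take the connection details from the `mysql.servers` row for `remote_srv` and to access the table `t1` on the remote server.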
[DROP SERVER](../drop-server/index) removes a previously created server definition.
CREATE SERVER is not written to the [binary log](../binary-log/index), irrespective of the [binary log format](../binary-log-formats/index) being used. From [MariaDB 10.1.13](https://mariadb.com/kb/en/mariadb-10113-release-notes/), [Galera](../galera/index) replicates the CREATE SERVER, [ALTER SERVER](../alter-server/index) and [DROP SERVER](../drop-server/index) statements.
For valid identifiers to use as server names, see [Identifier Names](../identifier-names/index).
#### OR REPLACE
If the optional `OR REPLACE` clause is used, it acts as a shortcut for:
```
DROP SERVER IF EXISTS server_name;
CREATE SERVER server_name ...;
```
#### IF NOT EXISTS
If the `IF NOT EXISTS` clause is used, MariaDB will return a warning instead of an error if the server already exists. It cannot be used together with `OR REPLACE`.
Examples
--------
```
CREATE SERVER s
FOREIGN DATA WRAPPER mysql
OPTIONS (USER 'Remote', HOST '192.168.1.106', DATABASE 'test');
```
OR REPLACE and IF NOT EXISTS:
```
CREATE SERVER s
FOREIGN DATA WRAPPER mysql
OPTIONS (USER 'Remote', HOST '192.168.1.106', DATABASE 'test');
ERROR 1476 (HY000): The foreign server, s, you are trying to create already exists
CREATE OR REPLACE SERVER s
FOREIGN DATA WRAPPER mysql
OPTIONS (USER 'Remote', HOST '192.168.1.106', DATABASE 'test');
Query OK, 0 rows affected (0.00 sec)
CREATE SERVER IF NOT EXISTS s
FOREIGN DATA WRAPPER mysql
OPTIONS (USER 'Remote', HOST '192.168.1.106', DATABASE 'test');
Query OK, 0 rows affected, 1 warning (0.00 sec)
SHOW WARNINGS;
+-------+------+----------------------------------------------------------------+
| Level | Code | Message |
+-------+------+----------------------------------------------------------------+
| Note | 1476 | The foreign server, s, you are trying to create already exists |
+-------+------+----------------------------------------------------------------+
```
See Also
--------
* [Identifier Names](../identifier-names/index)
* [ALTER SERVER](../alter-server/index)
* [DROP SERVER](../drop-server/index)
* [Spider Storage Engine](../spider/index)
* [Connect Storage Engine](../connect/index)
TRUE FALSE
==========
Description
-----------
The constants TRUE and FALSE evaluate to 1 and 0, respectively. The constant names can be written in any lettercase.
Examples
--------
```
SELECT TRUE, true, FALSE, false;
+------+------+-------+-------+
| TRUE | TRUE | FALSE | FALSE |
+------+------+-------+-------+
| 1 | 1 | 0 | 0 |
+------+------+-------+-------+
```
ColumnStore Alter Table
=======================
The ALTER TABLE statement modifies existing tables. This includes adding, deleting and renaming columns as well as renaming tables.
Syntax
------
```
ALTER TABLE tbl_name
alter_specification [, alter_specification] ...
alter_specification:
table_option ...
| ADD [COLUMN] col_name column_definition
| ADD [COLUMN] (col_name column_definition,...)
| ALTER [COLUMN] col_name {SET DEFAULT literal | DROP DEFAULT}
| CHANGE [COLUMN] old_col_name new_col_name column_definition
| DROP [COLUMN] col_name
| RENAME [TO] new_tbl_name
column_definition:
data_type
[NOT NULL | NULL]
[DEFAULT default_value]
[COMMENT '[compression=0|1];']
table_options:
table_option [[,] table_option] ... (see CREATE TABLE options)
```
ADD
---
The ADD clause allows you to add columns to a table. You must specify the data type after the column name. The following statement adds a priority column with an integer datatype to the orders table:
```
ALTER TABLE orders ADD COLUMN priority INTEGER;
```
* Compression level (0 for no compression, 1 for compression) can be set at the system level. If a session default exists, this will override the system default. In turn, this can be overridden by the table level compression comment, and finally a compression comment at the column level.
### Online alter table add column
ColumnStore fully supports online DDL (one session can be adding columns to a table while another session is querying that table). Since MySQL 5.1 did not support online ALTER TABLE, MariaDB ColumnStore provides its own syntax for this purpose. It is intended for adding columns to a table, one at a time only. Do not attempt to use it for any other purpose. Follow the example below as closely as possible.
Scenario: add an INT column named col7 to the existing table foo:
```
select calonlinealter('alter table foo add column col7 int;');
alter table foo add column col7 int comment 'schema sync only';
```
The select statement may take several tens of seconds to run, depending on how many rows are currently in the table. Regardless, other sessions can select against the table during this time (but they won’t be able to see the new column yet). The alter table statement will take less than 1 second (depending on how busy MariaDB is) and during this brief time interval, other table reads will be held off.
CHANGE
------
The CHANGE clause allows you to rename a column in a table.
Notes to CHANGE COLUMN:
* You cannot currently use CHANGE COLUMN to change the definition of that column.
* You can only change a single column at a time. The following example renames the order\_qty field to quantity in the orders table:
```
ALTER TABLE orders CHANGE COLUMN order_qty quantity
INTEGER;
```
DROP
----
The DROP clause allows you to drop columns. All associated data is removed when the column is dropped. You can DROP COLUMN (column\_name). The following example alters the orders table to drop the priority column:
```
ALTER TABLE orders DROP COLUMN priority;
```
RENAME
------
The RENAME clause allows you to rename a table. The following example renames the orders table:
```
ALTER TABLE orders RENAME TO customer_orders;
```
Information Schema ROCKSDB\_CFSTATS Table
=========================================
The [Information Schema](../information_schema/index) `ROCKSDB_CFSTATS` table is included as part of the [MyRocks](../myrocks/index) storage engine.
The `PROCESS` [privilege](../grant/index) is required to view the table.
It contains the following columns:
| Column | Description |
| --- | --- |
| `CF_NAME` | |
| `STAT_TYPE` | |
| `VALUE` | |
Performance Schema performance\_timers Table
============================================
Description
-----------
The `performance_timers` table lists available event timers.
It contains the following columns:
| Column | Description |
| --- | --- |
| `TIMER_NAME` | Timer name, used in the [setup\_timers](../performance-schema-setup_timers-table/index) table. |
| `TIMER_FREQUENCY` | Number of timer units per second. Dependent on the processor speed. |
| `TIMER_RESOLUTION` | Number of timer units by which timed values increase each time. |
| `TIMER_OVERHEAD` | Minimum timer overhead, determined during initialization by calling the timer 20 times and selecting the smallest value. Total overhead will be at least double this, as the timer is called at the beginning and end of each timed event. |
Any `NULL` values indicate that that particular timer is not available on your platform. Any timer names with a non-NULL value can be used in the [setup\_timers](../performance-schema-setup_timers-table/index) table.
Example
-------
```
SELECT * FROM performance_timers;
+-------------+-----------------+------------------+---------------------+
| TIMER_NAME | TIMER_FREQUENCY | TIMER_RESOLUTION | TIMER_OVERHEAD |
+-------------+-----------------+------------------+---------------------+
| CYCLE | 2293651741 | 1 | 28 |
| NANOSECOND | 1000000000 | 1 | 48 |
| MICROSECOND | 1000000 | 1 | 52 |
| MILLISECOND | 1000 | 1000 | 9223372036854775807 |
| TICK | 106 | 1 | 496 |
+-------------+-----------------+------------------+---------------------+
```
Information Schema INNODB\_CHANGED\_PAGES Table
===============================================
The [Information Schema](../information_schema/index) `INNODB_CHANGED_PAGES` table contains data about modified pages from the bitmap file. It is updated at checkpoints by the log tracking thread parsing the log, so it does not contain real-time data.
The number of records is limited by the value of the [innodb\_max\_changed\_pages](../xtradbinnodb-server-system-variables/index#innodb_max_changed_pages) system variable.
The `PROCESS` [privilege](../grant/index) is required to view the table.
It has the following columns:
| Column | Description |
| --- | --- |
| `SPACE_ID` | Modified page space id |
| `PAGE_ID` | Modified page id |
| `START_LSN` | Interval start after which page was changed (equal to checkpoint LSN) |
| `END_LSN` | Interval end before which page was changed (equal to checkpoint LSN) |
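As a sketch (the LSN value below is hypothetical), the table can be queried like any other Information Schema table, typically restricting the LSN interval of interest:

```
SELECT space_id, page_id, start_lsn, end_lsn
  FROM information_schema.INNODB_CHANGED_PAGES
  WHERE start_lsn > 1000000;
```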
ColumnStore Compression Mode
============================
MariaDB ColumnStore has the ability to compress data and this is controlled through a compression mode. This compression mode may be set as a default for the instance or set at the session level.
To set the compression mode at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
```
set infinidb_compression_type = n
```
where n is:
* `0` - compression is turned off. Any subsequent CREATE TABLE statements will have compression turned off for that table unless a statement override has been performed. Any ALTER statements run to add a column will have compression turned off for that column unless a statement override has been performed.
* `2` - compression is turned on. Any subsequent CREATE TABLE statements will have compression turned on for that table unless a statement override has been performed. Any ALTER statements run to add a column will have compression turned on for that column unless a statement override has been performed. ColumnStore uses snappy compression in this mode.
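For example, a session could turn compression on before creating a table (the table name below is hypothetical); tables created later in the same session inherit the setting:

```
SET infinidb_compression_type = 2;

CREATE TABLE orders_c (
  order_id INT,
  qty INT
) ENGINE=ColumnStore;
```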
MAKE\_SET
=========
Syntax
------
```
MAKE_SET(bits,str1,str2,...)
```
Description
-----------
Returns a set value (a string containing substrings separated by "," characters) consisting of the strings that have the corresponding bit in bits set. *`str1`* corresponds to bit 0, *`str2`* to bit 1, and so on. NULL values in *`str1`*, *`str2`*, ... are not appended to the result.
Examples
--------
```
SELECT MAKE_SET(1,'a','b','c');
+-------------------------+
| MAKE_SET(1,'a','b','c') |
+-------------------------+
| a |
+-------------------------+
SELECT MAKE_SET(1 | 4,'hello','nice','world');
+----------------------------------------+
| MAKE_SET(1 | 4,'hello','nice','world') |
+----------------------------------------+
| hello,world |
+----------------------------------------+
SELECT MAKE_SET(1 | 4,'hello','nice',NULL,'world');
+---------------------------------------------+
| MAKE_SET(1 | 4,'hello','nice',NULL,'world') |
+---------------------------------------------+
| hello |
+---------------------------------------------+
SELECT QUOTE(MAKE_SET(0,'a','b','c'));
+--------------------------------+
| QUOTE(MAKE_SET(0,'a','b','c')) |
+--------------------------------+
| '' |
+--------------------------------+
```
CREATE USER
===========
Syntax
------
```
CREATE [OR REPLACE] USER [IF NOT EXISTS]
user_specification [,user_specification ...]
[REQUIRE {NONE | tls_option [[AND] tls_option ...] }]
[WITH resource_option [resource_option ...] ]
[lock_option] [password_option]
user_specification:
username [authentication_option]
authentication_option:
IDENTIFIED BY 'password'
| IDENTIFIED BY PASSWORD 'password_hash'
| IDENTIFIED {VIA|WITH} authentication_rule [OR authentication_rule ...]
authentication_rule:
authentication_plugin
| authentication_plugin {USING|AS} 'authentication_string'
| authentication_plugin {USING|AS} PASSWORD('password')
tls_option:
SSL
| X509
| CIPHER 'cipher'
| ISSUER 'issuer'
| SUBJECT 'subject'
resource_option:
MAX_QUERIES_PER_HOUR count
| MAX_UPDATES_PER_HOUR count
| MAX_CONNECTIONS_PER_HOUR count
| MAX_USER_CONNECTIONS count
| MAX_STATEMENT_TIME time
password_option:
PASSWORD EXPIRE
| PASSWORD EXPIRE DEFAULT
| PASSWORD EXPIRE NEVER
| PASSWORD EXPIRE INTERVAL N DAY
lock_option:
ACCOUNT LOCK
| ACCOUNT UNLOCK
```
Description
-----------
The `CREATE USER` statement creates new MariaDB accounts. To use it, you must have the global [CREATE USER](../grant/index#create-user) privilege or the [INSERT](../grant/index#table-privileges) privilege for the [mysql](../the-mysql-database-tables/index) database. For each account, `CREATE USER` creates a new row in [mysql.user](../mysqluser-table/index) (until [MariaDB 10.3](../what-is-mariadb-103/index) this is a table, from [MariaDB 10.4](../what-is-mariadb-104/index) it's a view) or [mysql.global\_priv\_table](../mysqlglobal_priv-table/index) (from [MariaDB 10.4](../what-is-mariadb-104/index)) that has no privileges.
If any of the specified accounts, or any permissions for the specified accounts, already exist, then the server returns `ERROR 1396 (HY000)`. If an error occurs, `CREATE USER` will still create the accounts that do not result in an error. Only one error is produced for all users which have not been created:
```
ERROR 1396 (HY000):
Operation CREATE USER failed for 'u1'@'%','u2'@'%'
```
`CREATE USER`, [DROP USER](../drop-user/index), [CREATE ROLE](../create-role/index), and [DROP ROLE](../drop-role/index) all produce the same error code when they fail.
See [Account Names](#account-names) below for details on how account names are specified.
OR REPLACE
----------
If the optional `OR REPLACE` clause is used, it is basically a shortcut for:
```
DROP USER IF EXISTS name;
CREATE USER name ...;
```
For example:
```
CREATE USER foo2@test IDENTIFIED BY 'password';
ERROR 1396 (HY000): Operation CREATE USER failed for 'foo2'@'test'
CREATE OR REPLACE USER foo2@test IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)
```
IF NOT EXISTS
-------------
When the `IF NOT EXISTS` clause is used, MariaDB will return a warning instead of an error if the specified user already exists.
For example:
```
CREATE USER foo2@test IDENTIFIED BY 'password';
ERROR 1396 (HY000): Operation CREATE USER failed for 'foo2'@'test'
CREATE USER IF NOT EXISTS foo2@test IDENTIFIED BY 'password';
Query OK, 0 rows affected, 1 warning (0.00 sec)
SHOW WARNINGS;
+-------+------+----------------------------------------------------+
| Level | Code | Message |
+-------+------+----------------------------------------------------+
| Note | 1973 | Can't create user 'foo2'@'test'; it already exists |
+-------+------+----------------------------------------------------+
```
Authentication Options
----------------------
### IDENTIFIED BY 'password'
The optional `IDENTIFIED BY` clause can be used to provide an account with a password. The password should be specified in plain text. It will be hashed by the [PASSWORD](../password/index) function prior to being stored in the [mysql.user](../mysqluser-table/index)/[mysql.global\_priv\_table](../mysqlglobal_priv-table/index) table.
For example, if our password is `mariadb`, then we can create the user with:
```
CREATE USER foo2@test IDENTIFIED BY 'mariadb';
```
If you do not specify a password with the `IDENTIFIED BY` clause, the user will be able to connect without a password. A blank password is not a wildcard to match any password. The user must connect without providing a password if no password is set.
The only [authentication plugins](../authentication-plugins/index) that this clause supports are [mysql\_native\_password](../authentication-plugin-mysql_native_password/index) and [mysql\_old\_password](../authentication-plugin-mysql_old_password/index).
### IDENTIFIED BY PASSWORD 'password\_hash'
The optional `IDENTIFIED BY PASSWORD` clause can be used to provide an account with a password that has already been hashed. The password should be specified as a hash that was provided by the [PASSWORD](../password/index) function. It will be stored in the [mysql.user](../mysqluser-table/index)/[mysql.global\_priv\_table](../mysqlglobal_priv-table/index) table as-is.
For example, if our password is `mariadb`, then we can find the hash with:
```
SELECT PASSWORD('mariadb');
+-------------------------------------------+
| PASSWORD('mariadb') |
+-------------------------------------------+
| *54958E764CE10E50764C2EECBB71D01F08549980 |
+-------------------------------------------+
1 row in set (0.00 sec)
```
And then we can create a user with the hash:
```
CREATE USER foo2@test IDENTIFIED BY PASSWORD '*54958E764CE10E50764C2EECBB71D01F08549980';
```
If you do not specify a password with the `IDENTIFIED BY PASSWORD` clause, the user will be able to connect without a password. A blank password is not a wildcard to match any password. The user must connect without providing a password if no password is set.
The only [authentication plugins](../authentication-plugins/index) that this clause supports are [mysql\_native\_password](../authentication-plugin-mysql_native_password/index) and [mysql\_old\_password](../authentication-plugin-mysql_old_password/index).
### IDENTIFIED {VIA|WITH} authentication\_plugin
The optional `IDENTIFIED VIA authentication_plugin` allows you to specify that the account should be authenticated by a specific [authentication plugin](../authentication-plugins/index). The plugin name must be an active authentication plugin as per [SHOW PLUGINS](../show-plugins/index). If it doesn't show up in that output, then you will need to install it with [INSTALL PLUGIN](../install-plugin/index) or [INSTALL SONAME](../install-soname/index).
For example, this could be used with the [PAM authentication plugin](../authentication-plugin-pam/index):
```
CREATE USER foo2@test IDENTIFIED VIA pam;
```
Some authentication plugins allow additional arguments to be specified after a `USING` or `AS` keyword. For example, the [PAM authentication plugin](../authentication-plugin-pam/index) accepts a [service name](../authentication-plugin-pam/index#configuring-the-pam-service):
```
CREATE USER foo2@test IDENTIFIED VIA pam USING 'mariadb';
```
The exact meaning of the additional argument would depend on the specific authentication plugin.
**MariaDB starting with [10.4.0](https://mariadb.com/kb/en/mariadb-1040-release-notes/)**The `USING` or `AS` keyword can also be used to provide a plain-text password to a plugin if it's provided as an argument to the [PASSWORD()](../password/index) function. This is only valid for [authentication plugins](../authentication-plugins/index) that have implemented a hook for the [PASSWORD()](../password/index) function. For example, the [ed25519](../authentication-plugin-ed25519/index) authentication plugin supports this:
```
CREATE USER safe@'%' IDENTIFIED VIA ed25519 USING PASSWORD('secret');
```
**MariaDB starting with [10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/)**One can specify multiple authentication plugins; they all work as alternative ways of authenticating a user:
```
CREATE USER safe@'%' IDENTIFIED VIA ed25519 USING PASSWORD('secret') OR unix_socket;
```
By default, when you create a user without specifying an authentication plugin, MariaDB uses the [mysql\_native\_password](../authentication-plugin-mysql_native_password/index) plugin.
TLS Options
-----------
By default, MariaDB transmits data between the server and clients without encrypting it. This is generally acceptable when the server and client run on the same host or in networks where security is guaranteed through other means. However, in cases where the server and client exist on separate networks or they are in a high-risk network, the lack of encryption does introduce security concerns as a malicious actor could potentially eavesdrop on the traffic as it is sent over the network between them.
To mitigate this concern, MariaDB allows you to encrypt data in transit between the server and clients using the Transport Layer Security (TLS) protocol. TLS was formerly known as Secure Sockets Layer (SSL), but strictly speaking the SSL protocol is a predecessor to TLS, and that version of the protocol is now considered insecure. The documentation still often uses the term SSL, and for compatibility reasons TLS-related server system and status variables still use the prefix `ssl_`, but internally, MariaDB only supports its secure successors.
See [Secure Connections Overview](../secure-connections-overview/index) for more information about how to determine whether your MariaDB server has TLS support.
You can set certain TLS-related restrictions for specific user accounts. For instance, you might use this with user accounts that require access to sensitive data while sending it across networks that you do not control. These restrictions can be enabled for a user account with the [CREATE USER](index), [ALTER USER](../alter-user/index), or [GRANT](../grant/index) statements. The following options are available:
| Option | Description |
| --- | --- |
| `REQUIRE NONE` | TLS is not required for this account, but can still be used. |
| `REQUIRE SSL` | The account must use TLS, but no valid X509 certificate is required. This option cannot be combined with other TLS options. |
| `REQUIRE X509` | The account must use TLS and must have a valid X509 certificate. This option implies `REQUIRE SSL`. This option cannot be combined with other TLS options. |
| `REQUIRE ISSUER 'issuer'` | The account must use TLS and must have a valid X509 certificate. Also, the Certificate Authority must be the one specified via the string `issuer`. This option implies `REQUIRE X509`. This option can be combined with the `SUBJECT`, and `CIPHER` options in any order. |
| `REQUIRE SUBJECT 'subject'` | The account must use TLS and must have a valid X509 certificate. Also, the certificate's Subject must be the one specified via the string `subject`. This option implies `REQUIRE X509`. This option can be combined with the `ISSUER`, and `CIPHER` options in any order. |
| `REQUIRE CIPHER 'cipher'` | The account must use TLS, but no valid X509 certificate is required. Also, the encryption used for the connection must use a specific cipher method specified in the string `cipher`. This option implies `REQUIRE SSL`. This option can be combined with the `ISSUER`, and `SUBJECT` options in any order. |
The `REQUIRE` keyword must be used only once for all specified options, and the `AND` keyword can be used to separate individual options, but it is not required.
For example, you can create a user account that requires these TLS options with the following:
```
CREATE USER 'alice'@'%'
REQUIRE SUBJECT '/CN=alice/O=My Dom, Inc./C=US/ST=Oregon/L=Portland'
AND ISSUER '/C=FI/ST=Somewhere/L=City/ O=Some Company/CN=Peter Parker/[email protected]'
AND CIPHER 'SHA-DES-CBC3-EDH-RSA';
```
If any of these options are set for a specific user account, then any client who tries to connect with that user account will have to be configured to connect with TLS.
See [Securing Connections for Client and Server](../securing-connections-for-client-and-server/index) for information on how to enable TLS on the client and server.
Resource Limit Options
----------------------
**MariaDB starting with [10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/)**[MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/) introduced a number of resource limit options.
It is possible to set per-account limits for certain server resources. The following table shows the values that can be set per account:
| Limit Type | Description |
| --- | --- |
| `MAX_QUERIES_PER_HOUR` | Number of statements that the account can issue per hour (including updates) |
| `MAX_UPDATES_PER_HOUR` | Number of updates (not queries) that the account can issue per hour |
| `MAX_CONNECTIONS_PER_HOUR` | Number of connections that the account can start per hour |
| `MAX_USER_CONNECTIONS` | Number of simultaneous connections that can be accepted from the same account; if it is 0, `max_connections` will be used instead; if `max_connections` is 0, there is no limit for this account's simultaneous connections. |
| `MAX_STATEMENT_TIME` | Timeout, in seconds, for statements executed by the user. See also [Aborting Statements that Exceed a Certain Time to Execute](../aborting-statements/index). |
If any of these limits are set to `0`, then there is no limit for that resource for that user.
Here is an example showing how to create a user with resource limits:
```
CREATE USER 'someone'@'localhost' WITH
MAX_USER_CONNECTIONS 10
MAX_QUERIES_PER_HOUR 200;
```
The resources are tracked per account, which means `'user'@'server'`, not per user name or per connection.
The count can be reset for all users using [FLUSH USER\_RESOURCES](../flush/index), [FLUSH PRIVILEGES](../flush/index) or [mysqladmin reload](../mysqladmin/index).
Per account resource limits are stored in the [user](../mysqluser-table/index) table, in the [mysql](../the-mysql-database-tables/index) database. Columns used for resources limits are named `max_questions`, `max_updates`, `max_connections` (for `MAX_CONNECTIONS_PER_HOUR`), and `max_user_connections` (for `MAX_USER_CONNECTIONS`).
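Conceptually, the per-hour limits behave like counters that reset when a new hour window begins. The following is a toy Python model of that accounting (the class and its names are illustrative, not the server's actual implementation):

```python
class HourlyLimiter:
    """Toy model of MAX_QUERIES_PER_HOUR-style accounting."""

    def __init__(self, max_per_hour):
        self.max_per_hour = max_per_hour  # 0 means "no limit"
        self.window = None                # current hour window
        self.count = 0                    # statements seen this hour

    def allow(self, now_hour):
        # Reset the counter when a new hour window starts.
        if self.window != now_hour:
            self.window, self.count = now_hour, 0
        if self.max_per_hour == 0:
            return True
        if self.count >= self.max_per_hour:
            return False
        self.count += 1
        return True
```

With a limit of 2, a third statement inside the same hour is rejected, while the first statement of the next hour window succeeds again.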
Account Names
-------------
Account names have both a user name component and a host name component, and are specified as `'user_name'@'host_name'`.
The user name and host name may be unquoted, quoted as strings using double quotes (`"`) or single quotes (`'`), or quoted as identifiers using backticks (```). You must use quotes when using special characters (such as a hyphen) or wildcard characters. If you quote, you must quote the user name and host name separately (for example `'user_name'@'host_name'`).
### Host Name Component
If the host name is not provided, it is assumed to be `'%'`.
Host names may contain the wildcard characters `%` and `_`. They are matched as if by the [LIKE](../like/index) clause. If you need to use a wildcard character literally (for example, to match a domain name with an underscore), prefix the character with a backslash. See `LIKE` for more information on escaping wildcard characters.
Host name matches are case-insensitive. Host names can match either domain names or IP addresses. Use `'localhost'` as the host name to allow only local client connections.
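The `LIKE`-style host matching described above can be modelled by translating `%`, `_`, and backslash escapes into a case-insensitive regular expression. This is a simplified Python sketch, not the server's actual matcher:

```python
import re

def host_pattern_to_regex(pattern):
    """Translate a host pattern with %/_ wildcards into a regex."""
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == '\\' and i + 1 < len(pattern):
            out.append(re.escape(pattern[i + 1]))  # escaped wildcard is literal
            i += 2
            continue
        if c == '%':
            out.append('.*')   # % matches any sequence of characters
        elif c == '_':
            out.append('.')    # _ matches any single character
        else:
            out.append(re.escape(c))
        i += 1
    # host name matches are case-insensitive
    return re.compile('^' + ''.join(out) + '$', re.IGNORECASE)

host_pattern_to_regex('192.168.0.%').match('192.168.0.3')    # matches
host_pattern_to_regex(r'my\_host.com').match('myxhost.com')  # no match
```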
You can use a netmask to match a range of IP addresses using `'base_ip/netmask'` as the host name. A user with an IP address *ip\_addr* will be allowed to connect if the following condition is true:
```
ip_addr & netmask = base_ip
```
For example, given a user:
```
CREATE USER 'maria'@'247.150.130.0/255.255.255.0';
```
the IP addresses satisfying this condition range from 247.150.130.0 to 247.150.130.255.
Using `255.255.255.255` is equivalent to not using a netmask at all. Netmasks cannot be used for IPv6 addresses.
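The netmask condition is plain bitwise arithmetic on the 32-bit address values, as this small Python sketch shows:

```python
import ipaddress

def netmask_match(client_ip, base_ip, netmask):
    """True if client_ip & netmask == base_ip (IPv4 only)."""
    ip = int(ipaddress.IPv4Address(client_ip))
    base = int(ipaddress.IPv4Address(base_ip))
    mask = int(ipaddress.IPv4Address(netmask))
    return (ip & mask) == base

netmask_match('247.150.130.77', '247.150.130.0', '255.255.255.0')  # True
netmask_match('247.150.131.1', '247.150.130.0', '255.255.255.0')   # False
```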
Note that the credentials added when creating a user with the `'%'` wildcard host will not grant access in all cases. For example, some systems come with an anonymous localhost user, and when connecting from localhost this will take precedence.
Before [MariaDB 10.6](../what-is-mariadb-106/index), the host name component could be up to 60 characters in length. Starting from [MariaDB 10.6](../what-is-mariadb-106/index), it can be up to 255 characters.
### User Name Component
User names must match exactly, including case. A user name that is empty is known as an anonymous account and is allowed to match a login attempt with any user name component. These are described more in the next section.
For valid identifiers to use as user names, see [Identifier Names](../identifier-names/index).
It is possible for more than one account to match when a user connects. MariaDB selects the first matching account after sorting according to the following criteria:
* Accounts with an exact host name are sorted before accounts using a wildcard in the host name. Host names using a netmask are considered to be exact for sorting.
* Accounts with a wildcard in the host name are sorted according to the position of the first wildcard character. Those with a wildcard character later in the host name sort before those with a wildcard character earlier in the host name.
* Accounts with a non-empty user name sort before accounts with an empty user name.
* Accounts with an empty user name are sorted last. As mentioned previously, these are known as anonymous accounts. These are described more in the next section.
The following table shows a list of example accounts as sorted by these criteria:
```
+---------+-------------+
| User    | Host        |
+---------+-------------+
| joffrey | 192.168.0.3 |
|         | 192.168.0.% |
| joffrey | 192.168.%   |
|         | 192.168.%   |
+---------+-------------+
```
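The sorting criteria above can be expressed as a sort key. The following is a simplified Python model (it ignores netmask hosts, which contain no wildcard characters and therefore already sort as exact):

```python
def account_sort_key(user, host):
    """Sort key mirroring the account-matching order described above."""
    wildcard_positions = [host.find(c) for c in '%_' if host.find(c) != -1]
    if not wildcard_positions:
        host_rank = (0, 0)                  # exact hosts sort first
    else:
        # a later first-wildcard position sorts earlier
        host_rank = (1, -min(wildcard_positions))
    user_rank = 0 if user else 1            # anonymous accounts sort last
    return (host_rank, user_rank)

accounts = [('', '192.168.%'), ('joffrey', '192.168.%'),
            ('', '192.168.0.%'), ('joffrey', '192.168.0.3')]
ordered = sorted(accounts, key=lambda a: account_sort_key(*a))
```

Sorting the four accounts from the table above with this key reproduces the order shown in the table.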
Once connected, you only have the privileges granted to the account that matched, not all accounts that could have matched. For example, consider the following commands:
```
CREATE USER 'joffrey'@'192.168.0.3';
CREATE USER 'joffrey'@'%';
GRANT SELECT ON test.t1 to 'joffrey'@'192.168.0.3';
GRANT SELECT ON test.t2 to 'joffrey'@'%';
```
If you connect as joffrey from `192.168.0.3`, you will have the `SELECT` privilege on the table `test.t1`, but not on the table `test.t2`. If you connect as joffrey from any other IP address, you will have the `SELECT` privilege on the table `test.t2`, but not on the table `test.t1`.
Before [MariaDB 10.6](../what-is-mariadb-106/index), user names could be up to 80 characters long. Starting from [MariaDB 10.6](../what-is-mariadb-106/index), they can be up to 128 characters.
### Anonymous Accounts
Anonymous accounts are accounts where the user name portion of the account name is empty. These accounts act as special catch-all accounts. If a user attempts to log into the system from a host, and an anonymous account exists with a host name portion that matches the user's host, then the user will log in as the anonymous account if there is no more specific account match for the user name that the user entered.
For example, here are some anonymous accounts:
```
CREATE USER ''@'localhost';
CREATE USER ''@'192.168.0.3';
```
#### Fixing a Legacy Default Anonymous Account
On some systems, the [mysql.db](../mysqldb-table/index) table has some entries for the `''@'%'` anonymous account by default. Unfortunately, there is no matching entry in the [mysql.user](../mysqluser-table/index)/[mysql.global\_priv\_table](../mysqlglobal_priv-table/index) table, which means that this anonymous account doesn't exactly exist, but it does have privileges, usually on the default `test` database created by [mysql\_install\_db](../mysql_install_db/index). These account-less privileges are a legacy leftover from a time when MySQL's privilege system was less advanced.
This situation means that you will run into errors if you try to create a `''@'%'` account. For example:
```
CREATE USER ''@'%';
ERROR 1396 (HY000): Operation CREATE USER failed for ''@'%'
```
The fix is to [DELETE](../delete/index) the row in the [mysql.db](../mysqldb-table/index) table and then execute [FLUSH PRIVILEGES](../flush/index):
```
DELETE FROM mysql.db WHERE User='' AND Host='%';
FLUSH PRIVILEGES;
```
And then the account can be created:
```
CREATE USER ''@'%';
Query OK, 0 rows affected (0.01 sec)
```
See [MDEV-13486](https://jira.mariadb.org/browse/MDEV-13486) for more information.
Password Expiry
---------------
**MariaDB starting with [10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/)**Besides automatic password expiry, as determined by [default\_password\_lifetime](../server-system-variables/index#default_password_lifetime), password expiry times can be set on an individual user basis, overriding the global setting, for example:
```
CREATE USER 'monty'@'localhost' PASSWORD EXPIRE INTERVAL 120 DAY;
```
See [User Password Expiry](../user-password-expiry/index) for more details.
Account Locking
---------------
**MariaDB starting with [10.4.2](https://mariadb.com/kb/en/mariadb-1042-release-notes/)**Account locking permits privileged administrators to lock/unlock user accounts. No new client connections will be permitted if an account is locked (existing connections are not affected). For example:
```
CREATE USER 'marijn'@'localhost' ACCOUNT LOCK;
```
See [Account Locking](../account-locking/index) for more details.
From [MariaDB 10.4.7](https://mariadb.com/kb/en/mariadb-1047-release-notes/) and [MariaDB 10.5.8](https://mariadb.com/kb/en/mariadb-1058-release-notes/), the *lock\_option* and *password\_option* clauses can occur in either order.
See Also
--------
* [Troubleshooting Connection Issues](../troubleshooting-connection-issues/index)
* [Authentication from MariaDB 10.4](../authentication-from-mariadb-104/index)
* [Identifier Names](../identifier-names/index)
* [GRANT](../grant/index)
* [ALTER USER](../alter-user/index)
* [DROP USER](../drop-user/index)
* [SET PASSWORD](../set-password/index)
* [SHOW CREATE USER](../show-create-user/index)
* [mysql.user table](../mysqluser-table/index)
* [mysql.global\_priv\_table](../mysqlglobal_priv-table/index)
* [Password Validation Plugins](../password-validation-plugins/index) - permits the setting of basic criteria for passwords
* [Authentication Plugins](../authentication-plugins/index) - allow various authentication methods to be used, and new ones to be developed.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
EXTRACTVALUE
============
Syntax
------
```
EXTRACTVALUE(xml_frag, xpath_expr)
```
Description
-----------
The `EXTRACTVALUE()` function takes two string arguments: a fragment of XML markup and an XPath expression, also known as a locator. It returns the text (that is, the CDATA) of the first text node which is a child of the element or elements matching the XPath expression.
In cases where a valid XPath expression does not match any text nodes in a valid XML fragment (including the implicit `/text()` expression), the `EXTRACTVALUE()` function returns an empty string.
### Invalid Arguments
When either the XML fragment or the XPath expression is `NULL`, the `EXTRACTVALUE()` function returns `NULL`. When the XML fragment is invalid, it raises a warning Code 1525:
```
Warning (Code 1525): Incorrect XML value: 'parse error at line 1 pos 11: unexpected END-OF-INPUT'
```
When the XPath value is invalid, it generates an Error 1105:
```
ERROR 1105 (HY000): XPATH syntax error: ')'
```
### Explicit text() Expressions
This function is the equivalent of performing a match using the XPath expression after appending `/text()`. In other words:
```
SELECT
EXTRACTVALUE('<cases><case>example</case></cases>', '/cases/case')
AS 'Base Example',
EXTRACTVALUE('<cases><case>example</case></cases>', '/cases/case/text()')
AS 'text() Example';
+--------------+----------------+
| Base Example | text() Example |
+--------------+----------------+
| example      | example        |
+--------------+----------------+
```
### Count Matches
When `EXTRACTVALUE()` returns multiple matches, it returns the content of the first child text node of each matching element, in the matched order, as a single, space-delimited string.
By design, the `EXTRACTVALUE()` function makes no distinction between a match on an empty element and no match at all. If you need to determine whether no matching element was found in the XML fragment or if an element was found that contained no child text nodes, use the XPath `count()` function.
For instance, when looking for a value that exists, but contains no child text nodes, you would get a count of the number of matching instances:
```
SELECT
EXTRACTVALUE('<cases><case/></cases>', '/cases/case')
AS 'Empty Example',
EXTRACTVALUE('<cases><case/></cases>', 'count(/cases/case)')
AS 'count() Example';
+---------------+-----------------+
| Empty Example | count() Example |
+---------------+-----------------+
|               |               1 |
+---------------+-----------------+
```
Alternatively, when looking for a value that doesn't exist, `count()` returns 0.
```
SELECT
EXTRACTVALUE('<cases><case/></cases>', '/cases/person')
AS 'No Match Example',
EXTRACTVALUE('<cases><case/></cases>', 'count(/cases/person)')
AS 'count() Example';
+------------------+-----------------+
| No Match Example | count() Example |
+------------------+-----------------+
|                  |               0 |
+------------------+-----------------+
```
### Matches
**Important**: The `EXTRACTVALUE()` function only returns CDATA. It does not return tags that the element might contain or the text that these child elements contain.
```
SELECT
EXTRACTVALUE('<cases><case>Person<email>[email protected]</email></case></cases>', '/cases')
AS Case;
+--------+
| Case   |
+--------+
| Person |
+--------+
```
Note that in the above example, while the XPath expression matches the parent `<cases>` element, it does not return the contained `<email>` tag or its content.
Examples
--------
```
SELECT
ExtractValue('<a>ccc<b>ddd</b></a>', '/a') AS val1,
ExtractValue('<a>ccc<b>ddd</b></a>', '/a/b') AS val2,
ExtractValue('<a>ccc<b>ddd</b></a>', '//b') AS val3,
ExtractValue('<a>ccc<b>ddd</b></a>', '/b') AS val4,
ExtractValue('<a>ccc<b>ddd</b><b>eee</b></a>', '//b') AS val5;
+------+------+------+------+---------+
| val1 | val2 | val3 | val4 | val5 |
+------+------+------+------+---------+
| ccc  | ddd  | ddd  |      | ddd eee |
+------+------+------+------+---------+
```
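The behavior in these examples can be approximated in Python with the standard-library `xml.etree.ElementTree` module. This is a simplified model: it uses ElementTree's relative path syntax instead of full XPath, and joins the first text node of each matching element with spaces, as `EXTRACTVALUE()` does:

```python
import xml.etree.ElementTree as ET

def extract_value(xml_frag, et_path):
    """Rough EXTRACTVALUE() model using ElementTree paths."""
    root = ET.fromstring(xml_frag)
    matches = [root] if et_path == '.' else root.findall(et_path)
    # first child text node of each matching element, space-joined
    texts = [(m.text or '').strip() for m in matches]
    return ' '.join(t for t in texts if t)

extract_value('<a>ccc<b>ddd</b></a>', '.')               # 'ccc', like '/a'
extract_value('<a>ccc<b>ddd</b></a>', 'b')               # 'ddd', like '/a/b'
extract_value('<a>ccc<b>ddd</b><b>eee</b></a>', './/b')  # 'ddd eee', like '//b'
```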
DISJOINT
========
Syntax
------
```
Disjoint(g1,g2)
```
Description
-----------
Returns `1` or `0` to indicate whether `g1` is spatially disjoint from (does not intersect) `g2`.
DISJOINT() tests the opposite relationship to [INTERSECTS()](../intersects/index).
DISJOINT() is based on the original MySQL implementation and uses object bounding rectangles, while [ST\_DISJOINT()](../st_disjoint/index) uses object shapes.
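Because `DISJOINT()` works on minimum bounding rectangles, its result can be sketched with a simple rectangle-overlap test. In this minimal Python sketch, the `(xmin, ymin, xmax, ymax)` tuple layout is illustrative:

```python
def mbr_disjoint(r1, r2):
    """1 if the two bounding rectangles do not intersect, else 0."""
    x1min, y1min, x1max, y1max = r1
    x2min, y2min, x2max, y2max = r2
    apart = (x1max < x2min or x2max < x1min or
             y1max < y2min or y2max < y1min)
    return 1 if apart else 0

mbr_disjoint((0, 0, 1, 1), (2, 2, 3, 3))  # 1: rectangles are apart
mbr_disjoint((0, 0, 2, 2), (1, 1, 3, 3))  # 0: rectangles overlap
```

Note that rectangles which merely touch still share boundary points, so they are not disjoint under this test.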
table\_exists
=============
Syntax
------
```
table_exists(in_db_name,in_table_name, out_table_type)
# in_db_name VARCHAR(64)
# in_table_name VARCHAR(64)
# out_table_type ENUM('', 'BASE TABLE', 'VIEW', 'TEMPORARY')
```
Description
-----------
`table_exists` is a [stored procedure](../stored-procedures/index) available with the [Sys Schema](../sys-schema/index).
Given a database *in\_db\_name* and table name *in\_table\_name*, returns the table type in the OUT parameter *out\_table\_type*. The return value is an ENUM field containing one of:
* '' - the table does not exist
* 'BASE TABLE' - a regular table
* 'VIEW' - a view
* 'TEMPORARY' - a temporary table
Examples
--------
```
CALL sys.table_exists('mysql', 'time_zone', @table_type); SELECT @table_type;
+-------------+
| @table_type |
+-------------+
| BASE TABLE  |
+-------------+
CALL sys.table_exists('mysql', 'user', @table_type); SELECT @table_type;
+-------------+
| @table_type |
+-------------+
| VIEW        |
+-------------+
```
Buildbot Setup for Solaris x86
==============================
The following steps were used to create a Solaris 10 x86 BuildSlave.
I started with a default install of Solaris 10.
First I added a new user with:
```
groupadd sudo
useradd -d /export/home/${username} -m -s /bin/bash -g staff -G sudo ${username}
passwd ${username}
```
I then logged in as the new user and set up an SSH key.
Now to install the software.
Prior to actually installing the software, I adjusted the global profile so that the /usr/local/ dirs were included in the various PATHs:
```
vi /etc/profile
# Add the following lines:
LD_LIBRARY_PATH=/opt/csw/lib:/usr/local/lib:/usr/sfw/lib:$LD_LIBRARY_PATH # Add required libraries
PYTHONPATH=/usr/local/lib/python2.5/site-packages:$PYTHONPATH
PATH=/usr/local/bin:/usr/bin:/usr/sbin:/etc:/usr/sfw/bin:$PATH # Puts "local" packages in your path
export LOGNAME PATH PYTHONPATH LD_LIBRARY_PATH
```
I downloaded the extra software from sunfreeware:
```
cd /tmp
ftp ftp.sunfreeware.com
anonymous
none
bin
cd pub/freeware/intel/10/
mget python-2.5.1-sol10-x86-local.gz sudo-1.7.4p4-sol10-x86-local.gz libintl-3.4.0-sol10-x86-local.gz libgcc-3.4.6-sol10-x86-local.gz libiconv-1.13.1-sol10-x86-local.gz
mget automake-1.9-sol10-intel-local.gz autogen-5.9.8-sol10-x86-local.gz autoconf-2.68-sol10-x86-local.gz gcc-4.5.1-sol10-x86-local.gz
mget m4-1.4.15-sol10-x86-local.gz libtool-2.4-sol10-x86-local.gz gmp-4.2.1-sol10-x86-local.gz
mget md5-6142000-sol10-intel-local.gz openssl-1.0.0c-sol10-x86-local.gz libsigsegv-2.9-sol10-x86-local.gz tcl-8.5.9-sol10-x86-local.gz tk-8.5.9-sol10-x86-local.gz perl-5.12.2-sol10-x86-local.gz
mget libtool-2.4-sol10-x86-local.gz sed-4.2.1-sol10-x86-local.gz zlib-1.2.5-sol10-x86-local.gz binutils-2.21-sol10-x86-local.gz groff-1.21-sol10-x86-local.gz bzip2-1.0.6-sol10-x86-local.gz
mget make-3.82-sol10-x86-local.gz sed-4.2.1-sol10-x86-local.gz gdb-6.8-sol10-x86-local.gz coreutils-8.9-sol10-x86-local.gz cmake-2.6.0-sol10-x86-local.gz
quit
```
With all of the software downloaded, I next set up and configured sudo and python:
```
su
gunzip -v python-2.5.1-sol10-x86-local.gz
pkgadd -d python-2.5.1-sol10-x86-local
gunzip -v libintl-3.4.0-sol10-x86-local.gz libgcc-3.4.6-sol10-x86-local.gz libiconv-1.13.1-sol10-x86-local.gz sudo-1.7.4p4-sol10-x86-local.gz
pkgadd -d libintl-3.4.0-sol10-x86-local
pkgadd -d libgcc-3.4.6-sol10-x86-local
pkgadd -d libiconv-1.13.1-sol10-x86-local
pkgadd -d sudo-1.7.4p4-sol10-x86-local
mkdir -p /usr/local/var/lib/
/usr/local/sbin/visudo
```
With sudo now working, I logged out and then back in. I then installed the other packages:
```
cd /tmp
gunzip -v *.gz
sudo pkgadd -d autoconf-2.68-sol10-x86-local
sudo pkgadd -d autogen-5.9.8-sol10-x86-local
sudo pkgadd -d automake-1.9-sol10-intel-local
sudo pkgadd -d binutils-2.21-sol10-x86-local
sudo pkgadd -d gcc-4.5.1-sol10-x86-local
sudo pkgadd -d groff-1.21-sol10-x86-local
sudo pkgadd -d libsigsegv-2.9-sol10-x86-local
sudo pkgadd -d make-3.82-sol10-x86-local
sudo pkgadd -d m4-1.4.15-sol10-x86-local
sudo pkgadd -d md5-6142000-sol10-intel-local
sudo pkgadd -d openssl-1.0.0c-sol10-x86-local
sudo pkgadd -d perl-5.12.2-sol10-x86-local
sudo pkgadd -d tcl-8.5.9-sol10-x86-local
sudo pkgadd -d tk-8.5.9-sol10-x86-local
sudo pkgadd -d zlib-1.2.5-sol10-x86-local
sudo pkgadd -d bzip2-1.0.6-sol10-x86-local
sudo pkgadd -d libtool-2.4-sol10-x86-local
sudo pkgadd -d sed-4.2.1-sol10-x86-local
sudo pkgadd -d gdb-6.8-sol10-x86-local
sudo pkgadd -d coreutils-8.9-sol10-x86-local
sudo pkgadd -d gmp-4.2.1-sol10-x86-local
sudo pkgadd -d cmake-2.6.0-sol10-x86-local
```
With those packages installed it was time to install the pieces of software which don't have pre-built packages:
Install Zope Interface:
```
cd /tmp
wget http://www.zope.org/Products/ZopeInterface/3.3.0/zope.interface-3.3.0.tar.gz
gunzip -v zope.interface-3.3.0.tar.gz
gtar -xf zope.interface-3.3.0.tar
cd zope.interface-3.3.0/
python setup.py build
sudo python setup.py install
```
Install the latest Twisted framework:
```
cd /tmp
wget http://tmrc.mit.edu/mirror/twisted/Twisted/10.2/Twisted-10.2.0.tar.bz2
bunzip2 Twisted-10.2.0.tar.bz2
gtar -xf Twisted-10.2.0.tar
cd Twisted-10.2.0
sudo python setup.py install
```
Install Bazaar:
```
cd /tmp
wget http://launchpad.net/bzr/2.2/2.2.2/+download/bzr-2.2.2.tar.gz
gunzip -v bzr-2.2.2.tar.gz
gtar -xf bzr-2.2.2.tar
cd bzr-2.2.2
sudo python setup.py install
```
Install ccache:
```
cd /tmp
wget http://samba.org/ftp/ccache/ccache-3.1.4.tar.gz
gunzip ccache-3.1.4.tar.gz
gtar xvf ccache-3.1.4.tar
cd ccache-3.1.4
./configure --prefix /usr
make
sudo make install
```
Configure and start NTP:
```
sudo cp /etc/inet/ntp.server /etc/inet/ntp.conf
sudo vi /etc/inet/ntp.conf
#
# Comment out the following lines:
#server 127.127.XType.0
#fudge 127.127.XType.0 stratum 0
#broadcast 224.0.1.1 ttl 4
#
# Add in the following lines:
server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org
# save the file and quit back to the command prompt
sudo touch /var/ntp/ntp.drift
sudo ntpdate 0.us.pool.ntp.org
sudo svcadm enable svc:/network/ntp
```
Check out and make a test build of MariaDB:
```
cd
mkdir src
cd src/
bzr branch lp:maria trunk
cd trunk/
BUILD/compile-solaris-amd64
```
Add a user for buildbot:
```
sudo useradd -d /export/home/buildbot -m buildbot
```
Install Buildbot:
```
cd /tmp
wget http://buildbot.googlecode.com/files/buildbot-slave-0.8.3.tar.gz
gunzip -v buildbot-slave-0.8.3.tar.gz
gtar -xf buildbot-slave-0.8.3.tar
cd buildbot-slave-0.8.3/
sudo python setup.py install
```
Create the buildbot as the buildbot user:
On the build master, add a new entry to /etc/buildbot/maria-master-private.cfg.
Remember the ${slave-name} and ${password} configured above; they're used in the next step.
Back on the solaris machine:
```
sudo su - buildbot
buildslave create-slave --usepty=0 /export/home/buildbot/maria-slave \
hasky.askmonty.org:9989 ${slavename} ${password}
echo '${contact-email-address}' > /export/home/buildbot/maria-slave/info/admin
echo 'A host running Solaris 10 x86.' > /export/home/buildbot/maria-slave/info/host
exit
```
Now start the slave:
```
sudo su - buildbot
buildslave start maria-slave
```
That's the basic process.
About Galera Replication
========================
In MariaDB Cluster, the Server replicates a transaction at commit time by broadcasting the write set associated with the transaction to every node in the cluster. The client connects directly to the DBMS and experiences behavior that is similar to native MariaDB in most cases. The wsrep API (write set replication API) defines the interface between Galera replication and MariaDB.
Synchronous vs. Asynchronous Replication
----------------------------------------
The basic difference between synchronous and asynchronous replication is that "synchronous" replication guarantees that if a change happened on one node in the cluster, then the change will happen on the other nodes in the cluster "synchronously", or at the same time. "Asynchronous" replication gives no guarantees about the delay between applying changes on the "master" node and the propagation of changes to the "slave" nodes. The delay with "asynchronous" replication can be short or long. This also implies that if the master node crashes in an "asynchronous" replication topology, then some of the latest changes may be lost.
Theoretically, synchronous replication has a number of advantages over asynchronous replication:
* Clusters utilizing synchronous replication are always highly available. If one of the nodes crashed, then there would be no data loss. Additionally, all cluster nodes are always consistent.
* Clusters utilizing synchronous replication allow transactions to be executed on all nodes in parallel.
* Clusters utilizing synchronous replication can guarantee causality across the whole cluster. This means that if a `SELECT` is executed on one cluster node after a transaction is executed on another cluster node, it should see the effects of that transaction.
However, in practice, synchronous database replication has traditionally been implemented via the so-called "2-phase commit" or distributed locking which proved to be very slow. Low performance and complexity of implementation of synchronous replication led to a situation where asynchronous replication remains the dominant means for database performance scalability and availability. Widely adopted open-source databases such as MySQL or PostgreSQL offer only asynchronous or semi-synchronous replication solutions.
Galera's replication is not completely synchronous. It is sometimes called **virtually synchronous** replication.
Certification-Based Replication Method
--------------------------------------
An alternative approach to synchronous replication that uses Group Communication and transaction ordering techniques was suggested by a number of researchers. For example:
* [Database State Machine Approach](http://library.epfl.ch/theses/?nr=2090)
* [Don't Be Lazy, Be Consistent](http://www.cs.mcgill.ca/~kemme/papers/vldb00.html)
Prototype implementations have shown a lot of promise. We combined our experience in synchronous database replication and the latest research in the field to create the Galera Replication library and the wsrep API.
Galera replication is a **highly transparent**, **scalable**, and **virtually synchronous** replication solution for database clustering to achieve high availability and improved performance. Galera-based clusters are:
* Highly available
* Highly transparent
* Highly scalable (near linear scalability may be reached depending on the application)
Generic Replication Library
---------------------------
Galera replication functionality is implemented as a shared library and can be linked with any transaction processing system which implements the wsrep API hooks.
The Galera replication library is a protocol stack providing functionality for preparing, replicating and applying of transaction write sets. It consists of:
* **wsrep API** specifies the interface — responsibilities for DBMS and replication provider
* **wsrep hooks** is the wsrep integration in the DBMS engine.
* **Galera provider** implements the wsrep API for Galera library
* **certification** layer takes care of preparing write sets and performing certification
* **replication** manages replication protocol and provides total ordering capabilities
* **GCS framework** provides plugin architecture for group communication systems
* many GCS implementations can be adapted; we have experimented with Spread and our in-house implementations: vsbes and gemini
Many components in the Galera replication library were redesigned and improved with the introduction of [MariaDB 10.4](../what-is-mariadb-104/index), which includes Galera 4.
Galera Slave Threads
--------------------
Although the **Galera provider** certifies the write set associated with a transaction at commit time on each node in the cluster, this write set is not necessarily applied on that cluster node immediately. Instead, the write set is placed in the cluster node's receive queue, and it is eventually applied by one of the cluster node's Galera slave threads.
The number of Galera slave threads can be configured with the [wsrep\_slave\_threads](../galera-cluster-system-variables/index#wsrep_slave_threads) system variable.
The Galera slave threads are able to determine which write sets are safe to apply in parallel. However, if your cluster nodes seem to have frequent consistency problems, then setting the value to `1` will probably fix the problem.
When a cluster node's state, as seen by [wsrep\_local\_state\_comment](../galera-cluster-status-variables/index#wsrep_local_state_comment), is `JOINED`, then increasing the number of slave threads may help the cluster node catch up with the cluster more quickly. In this case, it may be useful to set the number of threads to twice the number of CPUs on the system.
Streaming Replication
---------------------
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**Streaming replication was introduced in Galera 4, and so is only available from [MariaDB 10.4](../what-is-mariadb-104/index).
In older versions of MariaDB Cluster there was a 2GB limit on the size of the transaction you could run. The node waits on the transaction commit before performing replication and certification. With large transactions, long running writes, and changes to huge data-sets, there was a greater possibility of a conflict forcing rollback on an expensive operation.
Using Streaming replication, the node breaks huge transactions up into smaller, more manageable fragments, and then replicates these fragments to the cluster as it works, instead of waiting for the commit. Once certified, a fragment can no longer be aborted by conflicting transactions. As this can have performance consequences both during execution and in the event of rollback, it is recommended that you only use it with large transactions that are unlikely to experience conflict.
For more information on Streaming Replication, see the [Galera](https://galeracluster.com/library/documentation/streaming-replication.html) documentation.
Group Commits
-------------
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**Group Commit support for MariaDB Cluster was introduced in Galera 4, and so is only available from [MariaDB 10.4](../what-is-mariadb-104/index).
In MariaDB Group Commit, groups of transactions are flushed together to disk to improve performance. Prior to [MariaDB 10.4](../what-is-mariadb-104/index), this feature was not available in MariaDB Cluster as it interfered with the global-ordering of transactions for replication. Beginning in 10.4, MariaDB Cluster can take advantage of Group Commit.
For more information on Group Commit, see the [Galera](https://galeracluster.com/library/kb/group-commit.html) documentation.
See Also
--------
* [Galera Cluster: Galera Replication](https://galeracluster.com/products/)
* [What is MariaDB Galera Cluster?](../what-is-mariadb-galera-cluster/index)
* [Galera Use Cases](../galera-use-cases/index)
* [Getting Started with MariaDB/Galera Cluster](../getting-started-with-mariadb-galera-cluster/index)
EXPLAIN FORMAT=JSON Differences From MySQL
==========================================
[EXPLAIN FORMAT=JSON](../explain-formatjson/index) output in MySQL and MariaDB.
MariaDB's EXPLAIN JSON output is different from MySQL's. Here's a list of differences. (Currently they come in no particular order).
Attached Conditions are Prettier
--------------------------------
MySQL prints conditions with too many quotes and braces. Also, subqueries are printed in full (even though you also get a separate plan for each subquery). You see something like this:
```
"attached_condition": "((`test`.`t1`.`a` < (/* select#2 */ select min(`test`.`t10`.`b`) from `test`.`t10`)) or (`test`.`t1`.`a` > (/* select#3 */ select max(`test`.`t10`.`b`) from `test`.`t10`)))",
"attached_condition": "((`test`.`t20`.`col1` > `test`.`t20`.`col2`) or (`test`.`t20`.`col3` = 4))"
```
in MariaDB, the same conditions are printed like this:
```
"attached_condition": "((t1.a < (subquery#2)) or (t1.a > (subquery#3)))"
"attached_condition": "((t20.col1 > t20.col2) or (t20.col3 = 4))"
```
JSON Pretty-printer is Smarter
------------------------------
MySQL's JSON pretty-printer is pretty dumb:
```
"possible_keys": [
"a"
],
"key": "a",
"used_key_parts": [
"a"
],
```
MariaDB's JSON pretty-printer is a bit smarter:
```
"possible_keys": ["a"],
"key": "a",
"key_length": "5",
"used_key_parts": ["a"],
```
Index Merge Shows used\_key\_parts
----------------------------------
For multi-part keys, tabular EXPLAIN shows the key\_length column and leaves the user to do column-size arithmetic to figure out how many key parts are used.
MySQL's EXPLAIN FORMAT=JSON may show a used\_key\_parts member, which shows which key parts are used. For range access, key\_length is also provided:
```
"access_type": "range",
"possible_keys": [
"col1"
],
"key": "col1",
"used_key_parts": [
"col1",
"col2"
],
"key_length": "10",
```
But if you are using index\_merge, you will still have to decode key\_length:
```
"table": {
"table_name": "t22",
"access_type": "index_merge",
"possible_keys": [
"col1",
"col3"
],
"key": "sort_union(col1,col3)",
"key_length": "10,5",
"rows": 2398,
```
In MariaDB, you get used\_key\_parts for all parts of index\_merge:
```
"table_name": "t22",
"access_type": "index_merge",
"possible_keys": ["col1", "col3"],
"key_length": "10,5",
"index_merge": {
"sort_union": {
"range": {
"key": "col1",
"used_key_parts": ["col1", "col2"]
},
"range": {
"key": "col3",
"used_key_parts": ["col3"]
}
}
```
Range Checked for Each Record
-----------------------------
In MySQL, you need to decode hex index number bitmaps (like in the tabular form):
```
"table": {
"table_name": "t2",
"access_type": "ALL",
"possible_keys": [
"key1",
"key3"
],
"rows": 1000,
"filtered": 100,
"range_checked_for_each_record": "index map: 0x5"
}
```
In MariaDB, the keys are shown explicitly:
```
"range-checked-for-each-record": {
"keys": ["key1", "key3"],
"table": {
"table_name": "t2",
"access_type": "ALL",
"possible_keys": ["key1", "key3"],
"rows": 1000,
"filtered": 100
}
```
Also, the structure of the display ("range checked ..." embeds the table access) is closer to the query plan's structure. (TODO: should we move "range-checked-for-each-record" inside the "table"?)
Full Scan on NULL Key
---------------------
Tabular EXPLAIN shows "Full scan on NULL key" in the Extra column. MySQL has made a direct translation to JSON:
```
"table": {
"table_name": "t1",
"access_type": "ref_or_null",
...
...
"rows": 2,
"filtered": 100,
"using_index": true,
"full_scan_on_NULL_key": true,
...
}
```
This is not appropriate for MariaDB, which needs a place for ANALYZE to show the number of loops for each construct. It is also illogical: an attribute at the end effectively says "by the way, all of the above is not used in some cases". Because of that, MariaDB uses:
```
"full-scan-on-null_key": {
"table": {
"table_name": "t1",
"access_type": "ref_or_null",
"possible_keys": ["a"],
"key": "a",
...
}
```
Join Buffer Plan is Shown in Greater Detail
--------------------------------------------
MySQL displays "using join buffer" as just another kind of table access. It doesn't separate reading from the join buffer and writing to it.
```
"nested_loop": [
{
"table": {
"table_name": "A",
"access_type": "ALL",
"rows": 10,
"filtered": 100,
"attached_condition": "(`test`.`A`.`b` = 3)"
}
},
{
"table": {
"table_name": "B",
"access_type": "ALL",
"rows": 20,
"filtered": 100,
"using_join_buffer": "Block Nested Loop",
"attached_condition": "((`test`.`B`.`b` = 4) and ((`test`.`A`.`a` + `test`.`B`.`a`) < 3))"
}
}
```
MariaDB shows what is really going on:
```
"table": {
"table_name": "A",
"access_type": "ALL",
"rows": 10,
"filtered": 100,
"attached_condition": "(A.b = 3)"
},
"block-nl-join": {
"table": {
"table_name": "B",
"access_type": "ALL",
"rows": 10,
"filtered": 100,
"attached_condition": "(B.b = 4)"
},
"buffer_type": "flat",
"buffer_size": "128Kb",
"join_type": "BNL",
"attached_condition": "((A.a + B.a) < 3)"
}
```
TODO: other differences
-----------------------
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Building MariaDB on Ubuntu
==========================
In the event that you are using the Linux-based operating system Ubuntu or any of its derivatives and would like to compile MariaDB from source code, you can do so using the MariaDB source repository for the release that interests you.
Before you begin, install the `software-properties-common`, `devscripts` and `equivs` packages.
```
$ sudo apt-get install software-properties-common \
devscripts \
equivs
```
Installing Build Dependencies
-----------------------------
MariaDB requires a number of packages to compile from source. Fortunately, you can use the MariaDB repositories to retrieve the necessary code for the version you want. Use the [Repository Configuration](https://downloads.mariadb.org/mariadb/repositories/) tool to determine how to set up the MariaDB repository for your release of Ubuntu, the version of MariaDB that you want to install, and the mirror that you want to use.
First add the authentication key for the repository, then add the repository.
```
$ sudo apt-key adv --recv-keys \
--keyserver hkp://keyserver.ubuntu.com:80 \
0xF1656F24C74CD1D8
$ sudo add-apt-repository --update --yes --enable-source \
'deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.3/ubuntu '$(lsb_release -sc)' main'
```
Once the repository is set up, you can use `apt-get` to retrieve the build dependencies. MariaDB packages supplied by Ubuntu and packages supplied by the MariaDB repository share the same base name, `mariadb-server`, so you need to specify the particular version you want to retrieve.
```
$ sudo apt-get build-dep mariadb-10.3
```
Building MariaDB
----------------
Once you have the base dependencies installed, you can retrieve the source code and start building MariaDB. The source code is available on GitHub. Use the `--branch` option to specify the particular version of MariaDB you want to build.
```
$ git clone --branch 10.3 https://github.com/MariaDB/server.git
```
The source code includes scripts to install the remaining build dependencies. For Ubuntu, they're located in the `debian/` directory. Navigate into the repository and run the `autobake-deb.sh` script:
```
$ cd server/
$ ./debian/autobake-deb.sh
```
### Further Dependencies
In the event that there are still build dependencies that are not satisfied, use `mk-build-deps` to generate a build dependency `deb` to use in installing the remaining packages.
```
$ mk-build-deps debian/control
$ apt-get install ./mariadb-*build-deps_*.deb
```
Then, call the `autobake-deb.sh` script again to build MariaDB.
```
$ ./debian/autobake-deb.sh
```
### After Building
After building the packages, it is a good idea to put them in a repository. See the [Creating a Debian Repository](../creating_a_debian_repository/index) page for instructions.
ColumnStore Security Vulnerabilities
====================================
This page is about security vulnerabilities that have been fixed for or still affect MariaDB ColumnStore. In addition, links are included to fixed security vulnerabilities in MariaDB Server, since MariaDB ColumnStore is based on MariaDB Server.
Sensitive security issues can be sent directly to the persons responsible for MariaDB security: security [AT] mariadb (dot) org.
About CVEs
----------
CVE® stands for *"**C**ommon **V**ulnerabilities and **E**xposures"*. It is a publicly available and free to use database of known software vulnerabilities maintained at <https://cve.mitre.org/>
CVEs fixed in ColumnStore
-------------------------
The appropriate release notes listed [here](https://mariadb.com/kb/en/columnstore-release-notes/) document CVEs fixed within a given release. Additional information can also be found at [Security Vulnerabilities Fixed in MariaDB](../security/index).
There are no known CVEs on ColumnStore specific infrastructure outside of the MariaDB Server at this time.
Information on Plugins
=======================
| Title | Description |
| --- | --- |
| [List of Plugins](../list-of-plugins/index) | List of plugins included in MariaDB, ordered by their maturity. |
| [Information Schema PLUGINS Table](../plugins-table-information-schema/index) | Information Schema table containing information on plugins installed on a server. |
| [Information Schema ALL\_PLUGINS Table](../all-plugins-table-information-schema/index) | Information about server plugins, whether installed or not. |
SQL Statements
===============
Complete list of SQL statements for data definition, data manipulation, etc.
| Title | Description |
| --- | --- |
| [Account Management SQL Commands](../account-management-sql-commands/index) | CREATE/DROP USER, GRANT, REVOKE, SET PASSWORD etc. |
| [Administrative SQL Statements](../administrative-sql-statements/index) | SQL statements for setting, flushing and displaying server variables and resources. |
| [Data Definition](../data-definition/index) | SQL commands for defining data, such as ALTER, CREATE, DROP, RENAME etc. |
| [Data Manipulation](../data-manipulation/index) | SQL commands for querying and manipulating data, such as SELECT, UPDATE, DELETE etc. |
| [Prepared Statements](../prepared-statements/index) | Prepared statements from any client using the text based prepared statement interface. |
| [Programmatic & Compound Statements](../programmatic-compound-statements/index) | Compound SQL statements for stored routines and in general. |
| [Stored Routine Statements](../stored-routine-statements/index) | SQL statements related to creating and using stored routines. |
| [Table Statements](../table-statements/index) | Documentation on creating, altering, analyzing and maintaining tables. |
| [Transactions](../transactions/index) | Sequence of statements that are either completely successful, or have no effect on any schemas |
| [HELP Command](../help-command/index) | The HELP command will retrieve syntax and help within the mysql client. |
| [Comment Syntax](../comment-syntax/index) | Comment syntax and style. |
| [Built-in Functions](../built-in-functions/index) | Functions and procedures in MariaDB. |
Basic SQL Queries: A Quick SQL Cheat Sheet
==========================================
This page lists the most important SQL statements and contains links to their documentation pages. If you need a basic tutorial on how to use the MariaDB database server and how to execute simple commands, see [A MariaDB Primer](../a-mariadb-primer/index).
Also see [Common MariaDB Queries](../common-mariadb-queries/index) for examples of commonly-used queries.
Defining How Your Data Is Stored
--------------------------------
* [CREATE DATABASE](../create-database/index) is used to create a new, empty database.
* [DROP DATABASE](../drop-database/index) is used to completely destroy an existing database.
* [USE](../use/index) is used to select a default database.
* [CREATE TABLE](../create-table/index) is used to create a new table, which is where your data is actually stored.
* [ALTER TABLE](../alter-table/index) is used to modify an existing table's definition.
* [DROP TABLE](../drop-table/index) is used to completely destroy an existing table.
* [DESCRIBE](../describe/index) shows the structure of a table.
Manipulating Your Data
----------------------
* [SELECT](../select/index) is used when you want to read (or select) your data.
* [INSERT](../insert/index) is used when you want to add (or insert) new data.
* [UPDATE](../update/index) is used when you want to change (or update) existing data.
* [DELETE](../delete/index) is used when you want to remove (or delete) existing data.
* [REPLACE](../replace/index) is used when you want to add or change (or replace) new or existing data.
* [TRUNCATE](../truncate-table/index) is used when you want to empty (or delete) all data from the table.
Transactions
------------
* [START TRANSACTION](../start-transaction/index) is used to begin a transaction.
* [COMMIT](../commit/index) is used to apply changes and end the transaction.
* [ROLLBACK](../rollback/index) is used to discard changes and end the transaction.
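Using the `mytable` table from the example below, the three transaction statements can be sketched together (values are hypothetical):

```sql
START TRANSACTION;
UPDATE mytable SET name = 'Bill' WHERE id = 1;
ROLLBACK;   -- discard the UPDATE and end the transaction

START TRANSACTION;
UPDATE mytable SET name = 'Bill' WHERE id = 1;
COMMIT;     -- make the UPDATE permanent and end the transaction
```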
### A Simple Example
```
CREATE DATABASE mydb;
USE mydb;
CREATE TABLE mytable ( id INT PRIMARY KEY, name VARCHAR(20) );
INSERT INTO mytable VALUES ( 1, 'Will' );
INSERT INTO mytable VALUES ( 2, 'Marry' );
INSERT INTO mytable VALUES ( 3, 'Dean' );
SELECT id, name FROM mytable WHERE id = 1;
UPDATE mytable SET name = 'Willy' WHERE id = 1;
SELECT id, name FROM mytable;
DELETE FROM mytable WHERE id = 1;
SELECT id, name FROM mytable;
DROP DATABASE mydb;
SELECT count(1) FROM mytable; -- gives the number of records in the table
```
*The first version of this article was copied, with permission, from <http://hashmysql.org/wiki/Basic_SQL_Statements> on 2012-10-05.*
Performance Schema memory\_summary\_by\_account\_by\_event\_name Table
======================================================================
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**The memory\_summary\_by\_account\_by\_event\_name table was introduced in [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/).
There are five memory summary tables in the Performance Schema that share a number of fields in common. These include:
* memory\_summary\_by\_account\_by\_event\_name
* [memory\_summary\_by\_host\_by\_event\_name](../performance-schema-memory_summary_by_host_by_event_name-table/index)
* [memory\_summary\_by\_thread\_by\_event\_name](../performance-schema-memory_summary_by_thread_by_event_name-table/index)
* [memory\_summary\_by\_user\_by\_event\_name](../performance-schema-memory_summary_by_user_by_event_name-table/index)
* [memory\_global\_by\_event\_name](../performance-schema-memory_global_by_event_name-table/index)
The `memory_summary_by_account_by_event_name` table contains memory usage statistics aggregated by account and event.
The table contains the following columns:
| Field | Type | Null | Default | Description |
| --- | --- | --- | --- | --- |
| USER | char(32) | YES | NULL | User portion of the account. |
| HOST | char(60) | YES | NULL | Host portion of the account. |
| EVENT\_NAME | varchar(128) | NO | NULL | Event name. |
| COUNT\_ALLOC | bigint(20) unsigned | NO | NULL | Total number of allocations to memory. |
| COUNT\_FREE | bigint(20) unsigned | NO | NULL | Total number of attempts to free the allocated memory. |
| SUM\_NUMBER\_OF\_BYTES\_ALLOC | bigint(20) unsigned | NO | NULL | Total number of bytes allocated. |
| SUM\_NUMBER\_OF\_BYTES\_FREE | bigint(20) unsigned | NO | NULL | Total number of bytes freed. |
| LOW\_COUNT\_USED | bigint(20) | NO | NULL | Lowest number of allocated blocks (lowest value of CURRENT\_COUNT\_USED). |
| CURRENT\_COUNT\_USED | bigint(20) | NO | NULL | Currently allocated blocks that have not been freed (COUNT\_ALLOC minus COUNT\_FREE). |
| HIGH\_COUNT\_USED | bigint(20) | NO | NULL | Highest number of allocated blocks (highest value of CURRENT\_COUNT\_USED). |
| LOW\_NUMBER\_OF\_BYTES\_USED | bigint(20) | NO | NULL | Lowest number of bytes used. |
| CURRENT\_NUMBER\_OF\_BYTES\_USED | bigint(20) | NO | NULL | Current number of bytes used (total allocated minus total freed). |
| HIGH\_NUMBER\_OF\_BYTES\_USED | bigint(20) | NO | NULL | Highest number of bytes used. |
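A typical use of this table is to find which accounts and instruments currently hold the most memory. A query sketch using the columns above (rows appear only if the relevant memory instrumentation is enabled):

```sql
SELECT USER, HOST, EVENT_NAME,
       CURRENT_COUNT_USED,
       CURRENT_NUMBER_OF_BYTES_USED
FROM performance_schema.memory_summary_by_account_by_event_name
WHERE CURRENT_NUMBER_OF_BYTES_USED > 0
ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
LIMIT 5;
```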
Geometry Constructors
======================
| Title | Description |
| --- | --- |
| [BUFFER](../buffer/index) | Synonym for ST\_BUFFER. |
| [CONVEXHULL](../convexhull/index) | Synonym for ST\_CONVEXHULL. |
| [GEOMETRYCOLLECTION](../geometrycollection/index) | Constructs a WKB GeometryCollection. |
| [LINESTRING](../linestring/index) | Constructs a WKB LineString value from a number of WKB Point arguments. |
| [MULTILINESTRING](../multilinestring/index) | Constructs a MultiLineString value. |
| [MULTIPOINT](../multipoint/index) | Constructs a WKB MultiPoint value. |
| [MULTIPOLYGON](../multipolygon/index) | Constructs a WKB MultiPolygon. |
| [POINT](../point/index) | Constructs a WKB Point. |
| [PointOnSurface](../pointonsurface/index) | Synonym for ST\_PointOnSurface. |
| [POLYGON](../polygon/index) | Constructs a WKB Polygon value from a number of WKB LineString arguments. |
| [ST\_BUFFER](../st_buffer/index) | A new geometry with a buffer added to the original geometry. |
| [ST\_CONVEXHULL](../st_convexhull/index) | The minimum convex geometry enclosing all geometries within the set. |
| [ST\_INTERSECTION](../st_intersection/index) | The intersection, or shared portion, of two geometries. |
| [ST\_POINTONSURFACE](../st_pointonsurface/index) | Returns a POINT guaranteed to intersect a surface. |
| [ST\_SYMDIFFERENCE](../st_symdifference/index) | Portions of two geometries that don't intersect. |
| [ST\_UNION](../st_union/index) | Union of two geometries. |
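As a quick illustration of the constructors listed above, a sketch combining POINT and LINESTRING with ST\_AsText, which converts the WKB result back to readable WKT:

```sql
SELECT ST_AsText(POINT(1, 1));
-- POINT(1 1)

SELECT ST_AsText(LINESTRING(POINT(0, 0), POINT(1, 1), POINT(2, 2)));
-- LINESTRING(0 0,1 1,2 2)
```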
BIT\_COUNT
==========
Syntax
------
```
BIT_COUNT(N)
```
Description
-----------
Returns the number of bits that are set in the argument N.
Examples
--------
```
SELECT BIT_COUNT(29), BIT_COUNT(b'101010');
+---------------+----------------------+
| BIT_COUNT(29) | BIT_COUNT(b'101010') |
+---------------+----------------------+
| 4 | 3 |
+---------------+----------------------+
```
BDB
===
BDB was a storage engine included in old versions of MySQL.
This storage engine was removed from the 5.1 tree at some point, before the birth of MariaDB.
BDB permitted the use of a patched version of the Berkeley DB key-value store. It was a transactional storage engine. Locking was performed at page level.
Limitations
-----------
* Indexes per table: 31
* Columns per index: 16
* Index size: 1024 bytes
CONNECT Table Types - Data Files
================================
Most of the tables processed by CONNECT are just plain DOS or UNIX data files, logically regarded as tables thanks to the description given when creating the table. This description comes from the `[CREATE TABLE](../create-table/index)` statement. Depending on the application, these tables can already exist as data files, used as is by CONNECT, or can have been physically made by CONNECT as the result of a `CREATE TABLE ... SELECT ...` and/or INSERT statement(s).
The file *path/name* is given by the `FILE_NAME` option. If it is a relative path/name, it will be relative to the database directory, the one containing the table `.FRM` file.
Unless specified, the maturity of file table types is stable.
Multiple File Tables
--------------------
A **multiple** file table is one that is physically contained in several files of the same type instead of just one. These files are processed sequentially while a query is executed, and the result is the same as if all the table files were merged into one. This is great for processing files coming from different sources (such as cash register log files) or made at different time periods (such as bank monthly reports) regarded as one table. Note that operations on such files are restricted to sequential Select and Update, and that VEC multiple tables are not supported by CONNECT. The file list depends on the setting of the **multiple** option of the `CREATE TABLE` statement for that table.
Multiple tables are specified by the option MULTIPLE=*n*, which can take four values:
| | |
| --- | --- |
| 0 | Not a multiple table (the default). This can be used in an [ALTER TABLE](../alter-table/index) statement. |
| 1 | The table is made from files located in the same directory. The FILE\_NAME option is a pattern such as `'cash*.log'` that all the table file path/names verify. |
| 2 | The FILE\_NAME gives the name of a file that contains the path/names of all the table files. This file can be made using a DIR table. |
| 3 | Like multiple=1 but also including eligible files from the directory sub-folders. |
The `FILEID` special column, described [here](../using-connect-virtual-and-special-columns/index), allows query pruning by filtering the file list or doing some grouping on the files that make a multiple table.
**Note:** Multiple was not initially implemented for XML tables. This restriction was removed in version 1.02.
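A sketch of a multiple CSV table, assuming all the log files share one layout (the table name, file pattern and columns are hypothetical):

```sql
-- MULTIPLE=1: all files in the database directory matching 'cash*.log'
-- are read sequentially as if they were one table.
CREATE TABLE cashlogs (
  station INT,
  amount DOUBLE(8,2)
) ENGINE=CONNECT table_type=CSV
  file_name='cash*.log' multiple=1;
```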
Record Format
-------------
This characteristic applies to table files handled by the operating system input/output functions. It is **fixed** for table types [FIX](../connect-dos-and-fix-table-types/index), [BIN](../connect-bin-table-type/index), [DBF](../connect-dbf-table-type/index) and [VEC](../connect-vec-table-type/index), and it is variable for [DOS](../connect-dos-and-fix-table-types/index), VCT, [FMT](../connect-csv-and-fmt-table-types/index) and some [JSON](../connect-json-table-type/index) tables.
For fixed tables, most I/O operations are done in blocks of BLOCK\_SIZE rows. This reduces the number of I/Os and enables block indexing.
Starting with CONNECT version 1.6.6, the BLOCK\_SIZE option can also be specified for variable tables. CONNECT then creates a file, similar to the block indexing file, that gives the size in bytes of each block of BLOCK\_SIZE rows. This enables the use of block I/Os and block indexing for variable tables. It also enables CONNECT to return the exact row number for info commands.
File Mapping
------------
For file-based tables of reasonable size, processing time can be greatly enhanced under Windows(TM) and some flavors of UNIX or Linux by using the technique of “file mapping”, in which a file is processed as if it were entirely in memory. Mapping is specified when creating the table by the use of the `MAPPED=YES` option. This does not apply to tables not handled by system I/O functions (`[XML](../connect-xml-table-type/index)` and `[INI](../connect-ini-table-type/index)`).
Big File Tables
---------------
Because all files are handled by the standard input/output functions of the operating system, their size is limited to 2GB, the maximum size handled by standard functions. For some table types, CONNECT can deal with files that are larger than 2GB, or prone to become larger than this limit. These are the [FIX](../connect-dos-and-fix-table-types/index), [BIN](../connect-bin-table-type/index) and [VEC](../connect-vec-table-type/index) types. To tell connect to use input/output functions dealing with big files, specify the option `huge=1` or `huge=YES` for that table. Note however that CONNECT cannot randomly access tables having more than 2G records.
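A sketch of a big-file FIX table (the table, columns and file name are hypothetical):

```sql
-- HUGE=1 tells CONNECT to use big-file I/O functions,
-- allowing the data file to grow past 2GB.
CREATE TABLE bigfix (
  id   INT,
  name CHAR(20)
) ENGINE=CONNECT table_type=FIX
  file_name='big.dat' huge=1;
```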
Compressed File Tables
----------------------
CONNECT can make and process some tables whose data file is compressed. The only supported compression format is the gzlib format. Zip and zlib formats are supported differently. The table types that can be compressed are [DOS](../connect-dos-and-fix-table-types/index), [FIX](../connect-dos-and-fix-table-types/index), [BIN](../connect-bin-table-type/index), [CSV](../connect-csv-and-fmt-table-types/index) and [FMT](../connect-csv-and-fmt-table-types/index). This can save some disk space at the cost of a somewhat longer processing time.
Some restrictions apply to compressed tables:
* Compressed tables are not indexable.
* Update and partial delete are not supported.
Use the numeric **compress** option to specify a compressed table:
| | |
| --- | --- |
| 0 | Not compressed (the default). |
| 1 | Compressed in gzlib format. |
| 2 | Made of compressed blocks of block\_size records (enabling block indexing). |
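For example, a gzlib-compressed DOS table could be declared like this (the table, column and file names are hypothetical):

```sql
-- COMPRESS=1: the data file is stored in gzlib format.
-- Remember: compressed tables are not indexable.
CREATE TABLE complog (
  msg VARCHAR(80)
) ENGINE=CONNECT table_type=DOS
  file_name='complog.gz' compress=1;
```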
Relational Formatted Tables
---------------------------
These are based on files whose records represent one table row. Only the column representation within each record can differ. The following relational formatted tables are supported:
* [DOS and FIX Table Types](../connect-dos-and-fix-table-types/index)
* [DBF Table Type](../connect-dbf-table-type/index)
* [BIN Table Type](../connect-bin-table-type/index)
* [VEC Table Type](../connect-vec-table-type/index)
* [CSV and FMT Table Types](../connect-csv-and-fmt-table-types/index)
NoSQL Table Types
-----------------
These are based on files that do not match the relational format but often represent hierarchical data. CONNECT can handle JSON, INI-CFG, XML and some HTML files.
The way this is done is different from what PostgreSQL does. In addition to including in a table some column values of a specific data format (JSON, XML) to be handled by specific functions, CONNECT can directly use JSON, XML or INI files that can be produced by other applications, and it is the table definition that describes where and how the contained information must be retrieved.
This is also different from what MariaDB does with [dynamic columns](../dynamic-columns/index), which is close to what MySQL and PostgreSQL do with the JSON column type.
The following NoSQL types are supported:
* [XML Table Type](../connect-xml-table-type/index)
* [JSON Table Type](../connect-json-table-type/index)
* [INI Table Type](../connect-ini-table-type/index)
Fair Choice Between Range and Index\_merge Optimizations
========================================================
`index_merge` is a method used by the optimizer to retrieve rows from a single table using several index scans. The results of the scans are then merged.
When using [EXPLAIN](../explain/index), if `index_merge` is the plan chosen by the optimizer, it will show up in the "type" column. For example:
```
MariaDB [ontime]> SELECT COUNT(*) FROM ontime;
+--------+
|count(*)|
+--------+
| 1578171|
+--------+
MySQL [ontime]> EXPLAIN SELECT * FROM ontime WHERE (Origin='SEA' OR Dest='SEA');
+--+-----------+------+-----------+-------------+-----------+-------+----+-----+--------------------------------------+
|id|select_type|table |type |possible_keys|key |key_len|ref |rows |Extra |
+--+-----------+------+-----------+-------------+-----------+-------+----+-----+--------------------------------------+
| 1|SIMPLE |ontime|index_merge|Origin,Dest |Origin,Dest|6,6 |NULL|92800|Using union (Origin,Dest); Using where|
+--+-----------+------+-----------+-------------+-----------+-------+----+-----+--------------------------------------+
```
The "rows" column gives us a way to compare efficiency between `index_merge` and other plans.
It is sometimes necessary to discard index\_merge in favor of a different plan to avoid a combinatorial explosion of possible range and/or index\_merge strategies. But, the old logic in MySQL for when index\_merge was rejected caused some good index\_merge plans to not even be considered. Specifically, additional `AND` predicates in `WHERE` clauses could cause an index\_merge plan to be rejected in favor of a less efficient plan. The slowdown could be anywhere from 10x to over 100x. Here are two examples (based on the previous query) using MySQL:
```
MySQL [ontime]> EXPLAIN SELECT * FROM ontime WHERE (Origin='SEA' OR Dest='SEA') AND securitydelay=0;
+--+-----------+------+----+-------------------------+-------------+-------+-----+------+-----------+
|id|select_type|table |type|possible_keys |key |key_len|ref |rows |Extra |
+--+-----------+------+----+-------------------------+-------------+-------+-----+------+-----------+
| 1|SIMPLE |ontime|ref |Origin,Dest,SecurityDelay|SecurityDelay|5 |const|791546|Using where|
+--+-----------+------+----+-------------------------+-------------+-------+-----+------+-----------+
MySQL [ontime]> EXPLAIN SELECT * FROM ontime WHERE (Origin='SEA' OR Dest='SEA') AND depdelay < 12*60;
+--+-----------+------+----+--------------------+----+-------+----+-------+-----------+
|id|select_type|table |type|possible_keys |key |key_len|ref |rows |Extra |
+--+-----------+------+----+--------------------+----+-------+----+-------+-----------+
| 1|SIMPLE |ontime|ALL |Origin,DepDelay,Dest|NULL|NULL |NULL|1583093|Using where|
+--+-----------+------+----+--------------------+----+-------+----+-------+-----------+
```
In the above output, the "rows" column shows that the first is almost 10x less efficient and the second is over 15x less efficient than `index_merge`.
Starting in [MariaDB 5.3](../what-is-mariadb-53/index), the optimizer will delay discarding potential `index_merge` plans until the point where it is really necessary. See [MWL#24](http://askmonty.org/worklog/?tid=24) for more information.
By not discarding potential `index_merge` plans until absolutely necessary, the two queries stay just as efficient as the original:
```
MariaDB [ontime]> EXPLAIN SELECT * FROM ontime WHERE (Origin='SEA' or Dest='SEA');
+--+-----------+------+-----------+-------------+-----------+-------+----+-----+-------------------------------------+
|id|select_type|table |type |possible_keys|key |key_len|ref |rows |Extra |
+--+-----------+------+-----------+-------------+-----------+-------+----+-----+-------------------------------------+
| 1|SIMPLE |ontime|index_merge|Origin,Dest |Origin,Dest|6,6 |NULL|92800|Using union(Origin,Dest); Using where|
+--+-----------+------+-----------+-------------+-----------+-------+----+-----+-------------------------------------+
MariaDB [ontime]> EXPLAIN SELECT * FROM ontime WHERE (Origin='SEA' or Dest='SEA') AND securitydelay=0;
+--+-----------+------+-----------+-------------------------+-----------+-------+----+-----+-------------------------------------+
|id|select_type|table |type |possible_keys |key |key_len|ref |rows |Extra |
+--+-----------+------+-----------+-------------------------+-----------+-------+----+-----+-------------------------------------+
| 1|SIMPLE |ontime|index_merge|Origin,Dest,SecurityDelay|Origin,Dest|6,6 |NULL|92800|Using union(Origin,Dest); Using where|
+--+-----------+------+-----------+-------------------------+-----------+-------+----+-----+-------------------------------------+
MariaDB [ontime]> EXPLAIN SELECT * FROM ontime WHERE (Origin='SEA' or Dest='SEA') AND depdelay < 12*60;
+--+-----------+------+-----------+--------------------+-----------+-------+----+-----+-------------------------------------+
|id|select_type|table |type |possible_keys |key |key_len|ref |rows |Extra |
+--+-----------+------+-----------+--------------------+-----------+-------+----+-----+-------------------------------------+
| 1|SIMPLE |ontime|index_merge|Origin,DepDelay,Dest|Origin,Dest|6,6 |NULL|92800|Using union(Origin,Dest); Using where|
+--+-----------+------+-----------+--------------------+-----------+-------+----+-----+-------------------------------------+
```
This new behavior is always on and there is no need to enable it. There are no known issues or gotchas with this new optimization.
See Also
--------
* [What is MariaDB 5.3](../what-is-mariadb-53/index)
Buildbot Setup for Virtual Machines - Ubuntu 10.04 (alpha), i386 and amd64
==========================================================================
Base install
------------
```
qemu-img create -f qcow2 vm-lucid-amd64-serial.qcow2 8G
kvm -m 1024 -hda vm-lucid-amd64-serial.qcow2 -cdrom /kvm/lucid-server-amd64.iso -redir tcp:2238::22 -boot d -smp 2 -cpu qemu64 -net nic,model=virtio -net user
# Install, picking default options mostly, only adding openssh server.
kvm -m 1024 -hda vm-lucid-amd64-serial.qcow2 -cdrom /kvm/lucid-server-amd64.iso -redir tcp:2238::22 -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user -nographic
ssh -p 2238 localhost
# edit /boot/grub/menu.lst and visudo, see below
ssh -t -p 2238 localhost "mkdir .ssh; sudo addgroup $USER sudo"
scp -P 2238 authorized_keys localhost:.ssh/
echo $'Buildbot\n\n\n\n\ny' | ssh -p 2238 localhost 'chmod -R go-rwx .ssh; sudo adduser --disabled-password buildbot; sudo addgroup buildbot sudo; sudo mkdir ~buildbot/.ssh; sudo cp .ssh/authorized_keys ~buildbot/.ssh/; sudo chown -R buildbot:buildbot ~buildbot/.ssh; sudo chmod -R go-rwx ~buildbot/.ssh'
scp -P 2238 ttyS0.conf buildbot@localhost:
ssh -p 2238 buildbot@localhost 'sudo cp ttyS0.conf /etc/init/; rm ttyS0.conf; sudo shutdown -h now'
```
```
qemu-img create -f qcow2 vm-lucid-i386-serial.qcow2 8G
kvm -m 1024 -hda vm-lucid-i386-serial.qcow2 -cdrom /kvm/lucid-server-i386.iso -redir tcp:2239::22 -boot d -smp 2 -cpu qemu32,-nx -net nic,model=virtio -net user
# Install, picking default options mostly, only adding openssh server.
kvm -m 1024 -hda vm-lucid-i386-serial.qcow2 -cdrom /kvm/lucid-server-i386.iso -redir tcp:2239::22 -boot c -smp 2 -cpu qemu32,-nx -net nic,model=virtio -net user -nographic
ssh -p 2239 localhost
# edit /boot/grub/menu.lst and visudo, see below
ssh -t -p 2239 localhost "mkdir .ssh; sudo addgroup $USER sudo"
scp -P 2239 authorized_keys localhost:.ssh/
echo $'Buildbot\n\n\n\n\ny' | ssh -p 2239 localhost 'chmod -R go-rwx .ssh; sudo adduser --disabled-password buildbot; sudo addgroup buildbot sudo; sudo mkdir ~buildbot/.ssh; sudo cp .ssh/authorized_keys ~buildbot/.ssh/; sudo chown -R buildbot:buildbot ~buildbot/.ssh; sudo chmod -R go-rwx ~buildbot/.ssh'
scp -P 2239 ttyS0.conf buildbot@localhost:
ssh -p 2239 buildbot@localhost 'sudo cp ttyS0.conf /etc/init/; rm ttyS0.conf; sudo shutdown -h now'
```
Enabling passwordless sudo:
```
sudo VISUAL=vi visudo
# uncomment `%sudo ALL=NOPASSWD: ALL' line in `visudo`, and move to end.
```
Editing /boot/grub/menu.lst:
```
sudo vi /etc/default/grub
# Add/edit these entries:
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
sudo update-grub
```
VMs for building .debs
----------------------
```
for i in 'vm-lucid-amd64-serial.qcow2 2238 qemu64' 'vm-lucid-i386-serial.qcow2 2239 qemu32,-nx' ; do \
set $i; \
runvm --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/build/')" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get -y build-dep mysql-server-5.1" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y devscripts hardening-wrapper fakeroot doxygen texlive-latex-base ghostscript libevent-dev libssl-dev zlib1g-dev libreadline5-dev" ; \
done
```
VMs for install testing
------------------------
See above for how to obtain my.seed and sources.append.
```
cat >sources.append <<END
deb file:///home/buildbot/buildbot/debs binary/
deb-src file:///home/buildbot/buildbot/debs source/
END
for i in 'vm-lucid-amd64-serial.qcow2 2238 qemu64' 'vm-lucid-i386-serial.qcow2 2239 qemu32,-nx' ; do \
set $i; \
runvm --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/install/')" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y debconf-utils" \
"= scp -P $2 my.seed sources.append buildbot@localhost:/tmp/" \
"sudo debconf-set-selections /tmp/my.seed" \
"sudo sh -c 'cat /tmp/sources.append >> /etc/apt/sources.list'"; \
done
```
VMs for upgrade testing
-----------------------
```
for i in 'vm-lucid-amd64-install.qcow2 2238 qemu64' 'vm-lucid-i386-install.qcow2 2239 qemu32,-nx' ; do \
set $i; \
runvm --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/install/upgrade/')" \
'sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server-5.1' \
'mysql -uroot -prootpass -e "create database mytest; use mytest; create table t(a int primary key); insert into t values (1); select * from t"' ;\
done
```
Identifier Case-sensitivity
===========================
Whether objects are case-sensitive or not is partly determined by the underlying operating system. Unix-based systems are case-sensitive, Windows is not, while Mac OS X is usually case-insensitive by default, but devices can be configured as case-sensitive using Disk Utility.
Database, table, table alias and [trigger](../triggers/index) names are affected by the system's case-sensitivity, while index, column, column alias, [stored routine](../stored-programs-and-views/index) and [event](../events/index) names are never case-sensitive.
Log file group names are case-sensitive.
The [lower\_case\_table\_names](../server-system-variables/index#lower_case_table_names) server system variable plays a key role. It determines whether table names, aliases and database names are compared in a case-sensitive manner. If set to 0 (the default on Unix-based systems), table names and aliases and database names are compared in a case-sensitive manner. If set to 1 (the default on Windows), names are stored in lowercase and not compared in a case-sensitive manner. If set to 2 (the default on Mac OS X), names are stored as declared, but compared in lowercase.
It is thus possible to make Unix-based systems behave like Windows and ignore case-sensitivity, but the reverse is not true, as the underlying Windows filesystem cannot support it.
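The effect of the three `lower_case_table_names` modes on name comparison can be sketched with a small Python helper (purely illustrative — the server implements this internally, and mode 1 additionally stores names in lowercase):

```python
def compare_names(a: str, b: str, lower_case_table_names: int) -> bool:
    """Sketch of how table/database name comparison depends on the setting.

    0: names stored and compared as declared (case-sensitive)
    1: names stored in lowercase, compared case-insensitively
    2: names stored as declared, but compared in lowercase
    """
    if lower_case_table_names == 0:
        return a == b
    # In modes 1 and 2 the comparison itself is case-insensitive.
    return a.lower() == b.lower()

print(compare_names("a_table", "A_table", 0))  # False
print(compare_names("a_table", "A_table", 1))  # True
```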
Even on case-insensitive systems, you are required to use the same case consistently within the same statement. The following statement fails, as it refers to the table name in a different case.
```
SELECT * FROM a_table WHERE A_table.id>10;
```
For a full list of identifier naming rules, see [Identifier Names](../identifier-names/index).
Please note that [lower\_case\_table\_names](../server-system-variables/index#lower_case_table_names) is a database initialization parameter. This means that, along with [innodb\_page\_size](../server-system-variables/index#innodb_page_size), this variable must be set before running [mysql\_install\_db](../mysql_install_db/index), and will not change the behavior of servers unless applied before the creation of core system databases.
Plans for MariaDB 10.10
=======================
[MariaDB 10.10](../what-is-mariadb-1010/index) is an upcoming major development release.
JIRA
----
We manage our development plans in JIRA, so the definitive list will be there. [This search](https://jira.mariadb.org/issues/?jql=project+%3D+MDEV+AND+issuetype+%3D+Task+AND+fixVersion+in+%2810.10%29+ORDER+BY+priority+DESC) shows what we **currently** plan for 10.10. It shows all tasks with the **Fix-Version** being 10.10. Not all these tasks will really end up in 10.10 but tasks with the "red" priorities have a much higher chance of being done in time for 10.10. Practically, you can think of these tasks as "features that **will** be in 10.10". Tasks with the "green" priorities probably won't be in 10.10. Think of them as "bonus features that would be **nice to have** in 10.10".
Contributing
------------
If you want to be part of developing any of these features, see [Contributing to the MariaDB Project](../contributing-to-the-mariadb-project/index). You can also add new features to this list or to [JIRA](../jira-project-planning-and-tracking/index).
See Also
--------
* [Current tasks for 10.10](https://jira.mariadb.org/issues/?jql=project%20%3D%20MDEV%20AND%20issuetype%20%3D%20Task%20AND%20fixVersion%20in%20(10.10)%20ORDER%20BY%20priority%20DESC)
* [10.10 Features/fixes by vote](https://jira.mariadb.org/issues/?jql=project%20%3D%20MDEV%20AND%20issuetype%20%3D%20Task%20AND%20fixVersion%20in%20(10.10)%20ORDER%20BY%20votes%20DESC%2C%20priority%20DESC)
* [What is MariaDB 10.10?](../what-is-mariadb-1010/index)
* [What is MariaDB 10.9?](../what-is-mariadb-109/index)
* [What is MariaDB 10.8?](../what-is-mariadb-108/index)
* [What is MariaDB 10.6?](../what-is-mariadb-106/index)
MyRocks and START TRANSACTION WITH CONSISTENT SNAPSHOT
======================================================
FB/MySQL (Facebook's MySQL branch) has added new syntax:
```
START TRANSACTION WITH CONSISTENT ROCKSDB|INNODB SNAPSHOT;
```
The statement returns the binlog coordinates pointing at the snapshot.
MariaDB (and Percona Server) support an extension to the regular
```
START TRANSACTION WITH CONSISTENT SNAPSHOT;
```
syntax as documented in [Enhancements for START TRANSACTION WITH CONSISTENT SNAPSHOT](../enhancements-for-start-transaction-with-consistent-snapshot/index).
After issuing the statement, one can examine the [binlog\_snapshot\_file](../replication-and-binary-log-status-variables/index#binlog_snapshot_file) and [binlog\_snapshot\_position](../replication-and-binary-log-status-variables/index#binlog_snapshot_position) status variables to see the binlog position that corresponds to the snapshot.
See Also
--------
* [START TRANSACTION](../start-transaction/index)
* [Enhancements for START TRANSACTION WITH CONSISTENT SNAPSHOT](../enhancements-for-start-transaction-with-consistent-snapshot/index)
MariaDB Replication
====================
The terms *master* and *slave* have historically been used in replication, but the terms *primary* and *replica* are now preferred. The old terms are still used in parts of the documentation, and in MariaDB commands, although [MariaDB 10.5](../what-is-mariadb-105/index) has begun the process of renaming. The documentation process is ongoing. See [MDEV-18777](https://jira.mariadb.org/browse/MDEV-18777) to follow progress on this effort.
Replication is a feature allowing the contents of one or more primary servers to be mirrored on one or more replica servers.
| Title | Description |
| --- | --- |
| [Replication Overview](../replication-overview/index) | Allow the contents of one or more primary servers to be mirrored on one or more replicas. |
| [Replication Commands](../replication-commands/index) | List of replication-related commands. |
| [Setting Up Replication](../setting-up-replication/index) | Getting replication working involves steps on both the primary server/s and the replica server/s. |
| [Setting up a Replica with Mariabackup](../setting-up-a-replica-with-mariabackup/index) | Setting up a replica with Mariabackup. |
| [Read-Only Replicas](../read-only-replicas/index) | Making replicas read-only. |
| [Replication as a Backup Solution](../replication-as-a-backup-solution/index) | Replication can be used to support the backup strategy. |
| [Multi-Source Replication](../multi-source-replication/index) | Using replication with many masters. |
| [Replication Threads](../replication-threads/index) | Types of threads that are used to enable replication. |
| [Global Transaction ID](../gtid/index) | Improved replication using global transaction IDs. |
| [Parallel Replication](../parallel-replication/index) | Executing queries replicated from the primary in parallel on the replica. |
| [Replication and Binary Log System Variables](../replication-and-binary-log-system-variables/index) | Replication and binary log system variables. |
| [Replication and Binary Log Status Variables](../replication-and-binary-log-status-variables/index) | Replication and binary log status variables. |
| [Binary Log](../binary-log/index) | Contains a record of all changes to the databases, both data and structure |
| [Unsafe Statements for Statement-based Replication](../unsafe-statements-for-statement-based-replication/index) | Statements that are not safe for statement-based replication. |
| [Replication and Foreign Keys](../replication-and-foreign-keys/index) | Cascading deletes or updates based on foreign key relations are not written to the binary log |
| [Relay Log](../relay-log/index) | Event log created by the replica from the primary binary log. |
| [Enhancements for START TRANSACTION WITH CONSISTENT SNAPSHOT](../enhancements-for-start-transaction-with-consistent-snapshot/index) | Enhancements for START TRANSACTION WITH CONSISTENT SNAPSHOT. |
| [Group Commit for the Binary Log](../group-commit-for-the-binary-log/index) | Optimization when the server is run with innodb\_flush\_logs\_at\_trx\_commit or sync\_binlog. |
| [Selectively Skipping Replication of Binlog Events](../selectively-skipping-replication-of-binlog-events/index) | @@skip\_replication and --replicate-events-marked-for-skip. |
| [Binlog Event Checksums](../binlog-event-checksums/index) | Including a checksum in binlog events. |
| [Binlog Event Checksum Interoperability](../binlog-event-checksum-interoperability/index) | Replicating between servers with differing binlog checksum availability |
| [Annotate\_rows\_log\_event](../annotate_rows_log_event/index) | Annotate\_rows events accompany row events and describe the query which caused the row event. |
| [Row-based Replication With No Primary Key](../row-based-replication-with-no-primary-key/index) | MariaDB improves on row-based replication of tables with no primary key |
| [Replication Filters](../replication-filters/index) | Replication filters allow users to configure replication slaves to intentio... |
| [Running Triggers on the Replica for Row-based Events](../running-triggers-on-the-replica-for-row-based-events/index) | Running triggers on the replica for row-based events. |
| [Semisynchronous Replication](../semisynchronous-replication/index) | Semisynchronous replication. |
| [Using MariaDB Replication with MariaDB Galera Cluster](../using-mariadb-replication-with-mariadb-galera-cluster/index) | Information on using MariaDB replication with MariaDB Galera Cluster. |
| [Delayed Replication](../delayed-replication/index) | Specify that a slave should lag behind the master by (at least) a specified amount of time. |
| [Replication When the Primary and Replica Have Different Table Definitions](../replication-when-the-primary-and-replica-have-different-table-definitions/index) | Slave and the primary table definitions can differ while replicating. |
| [Restricting Speed of Reading Binlog from Primary by a Replica](../restricting-speed-of-reading-binlog-from-primary-by-a-replica/index) | The read\_binlog\_speed\_limit option can be used to reduce load on the primary. |
| [Changing a Replica to Become the Primary](../changing-a-replica-to-become-the-primary/index) | How to change a replica to primary and old primary as a replica for the new primary. |
| [Replication with Secure Connections](../replication-with-secure-connections/index) | Enabling TLS encryption in transit for MariaDB replication. |
| [Obsolete Replication Information](../obsolete-replication-information/index) | This section is for replication-related items that are obsolete |
Database Theory
================
Just as we may take movie special effects for granted until we see what the state of the art was in previous eras, we can't fully appreciate the power of relational databases without seeing what preceded them.
Relational databases allow any table to relate to any other table by means of common fields. It is a highly flexible system, and most modern databases are relational.
| Title | Description |
| --- | --- |
| [Introduction to Relational Databases](../introduction-to-relational-databases/index) | Brief introduction to the concept of a relational database. |
| [Exploring Early Database Models](../exploring-early-database-models/index) | Before relational databases there were a number of other models |
| [Understanding the Hierarchical Database Model](../understanding-the-hierarchical-database-model/index) | The earliest model was the hierarchical database model, resembling an upside-down tree. |
| [Understanding the Network Database Model](../understanding-the-network-database-model/index) | A progression from the hierarchical model designed to solve some of its problems |
| [Understanding the Relational Database Model](../understanding-the-relational-database-model/index) | The relational database model was a huge leap forward from the network data... |
| [Relational Databases: Basic Terms](../relational-databases-basic-terms/index) | The relational database model uses certain terms to describe its components |
| [Relational Databases: Table Keys](../relational-databases-table-keys/index) | A key, or index, unlocks access to the tables |
| [Relational Databases: Foreign Keys](../relational-databases-foreign-keys/index) | Foreign keys are the primary key in a foreign table |
| [Relational Databases: Views](../relational-databases-views/index) | Views are virtual tables |
| [Database Design](../database-design/index) | Articles about the database design process |
| [Database Normalization](../database-normalization/index) | Normalization is a powerful tool for designing databases |
| [ACID: Concurrency Control with Transactions](../acid-concurrency-control-with-transactions/index) | Ensuring data integrity. |
Extended Keys
=============
Syntax
------
Enable:
```
set optimizer_switch='extended_keys=on';
```
Disable:
```
set optimizer_switch='extended_keys=off';
```
Description
-----------
Extended Keys is an optimization set with the [optimizer\_switch](../server-system-variables/index#optimizer_switch) system variable, which makes use of existing components of InnoDB keys to generate more efficient execution plans. Using these components in many cases allows the server to generate execution plans which employ index-only look-ups. It is enabled by default.
Extended keys can be used with:
* ref and eq-ref accesses
* range scans
* index-merge scans
* loose scans
* min/max optimizations
Examples
--------
An example of how extended keys could be employed for a query built over a [DBT-3/TPC-H database](http://www.tpc.org/tpch/specs.asp) with one added index defined on `p_retailprice`:
```
select o_orderkey
from part, lineitem, orders
where p_retailprice > 2095 and o_orderdate='1992-07-01'
and o_orderkey=l_orderkey and p_partkey=l_partkey;
```
The above query asks for the `orderkeys` of the orders placed on 1992-07-01 which contain parts with a retail price greater than $2095.
Using Extended Keys, the query could be executed by the following execution plan:
1. Scan the entries of the index `i_p_retailprice` where `p_retailprice>2095` and read `p_partkey` values from the extended keys.
2. For each value `p_partkey` make an index look-up into the table lineitem employing index `i_l_partkey` and fetch the values of `l_orderkey` from the extended index.
3. For each fetched value of `l_orderkey`, append it to the date `'1992-07-01'` and use the resulting key for an index look-up by index `i_o_orderdate` to fetch the values of `o_orderkey` from the found index entries.
None of the access methods in this plan touch table rows, which results in much better performance.
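The three-step, index-only plan can be sketched in Python, with in-memory dicts and sets standing in for the indexes (toy data, purely illustrative — the server of course performs these look-ups inside InnoDB):

```python
# Stand-ins for the indexes; keys/values mirror the columns the plan reads.
# i_p_retailprice: p_retailprice -> p_partkey (the PK appended to the extended key)
i_p_retailprice = {2100.00: 1, 2050.00: 2, 2200.00: 3}
# i_l_partkey: p_partkey -> list of l_orderkey values fetched from the extended index
i_l_partkey = {1: [10, 11], 3: [12]}
# i_o_orderdate: set of (o_orderdate, o_orderkey) entries
i_o_orderdate = {("1992-07-01", 10), ("1992-07-01", 12), ("1992-08-01", 11)}

result = []
# Step 1: range scan on i_p_retailprice, reading p_partkey from the extended key.
for price, p_partkey in i_p_retailprice.items():
    if price <= 2095:
        continue
    # Step 2: index look-up into lineitem by p_partkey, fetching l_orderkey.
    for l_orderkey in i_l_partkey.get(p_partkey, []):
        # Step 3: probe i_o_orderdate with (date, orderkey) to fetch o_orderkey.
        if ("1992-07-01", l_orderkey) in i_o_orderdate:
            result.append(l_orderkey)

print(sorted(result))  # [10, 12]
```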
Here is the explain output for the above query:
```
MariaDB [dbt3sf10]> explain
-> select o_orderkey
-> from part, lineitem, orders
-> where p_retailprice > 2095 and o_orderdate='1992-07-01'
-> and o_orderkey=l_orderkey and p_partkey=l_partkey\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: part
type: range
possible_keys: PRIMARY,i_p_retailprice
key: i_p_retailprice
key_len: 9
ref: NULL
rows: 100
Extra: Using where; Using index
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: lineitem
type: ref
possible_keys: PRIMARY,i_l_suppkey_partkey,i_l_partkey,i_l_orderkey,i_l_orderkey_quantity
key: i_l_partkey
key_len: 5
ref: dbt3sf10.part.p_partkey
rows: 15
Extra: Using index
*************************** 3. row ***************************
id: 1
select_type: SIMPLE
table: orders
type: ref
possible_keys: PRIMARY,i_o_orderdate
key: i_o_orderdate
key_len: 8
ref: const,dbt3sf10.lineitem.l_orderkey
rows: 1
Extra: Using index
3 rows in set (0.00 sec)
```
See Also
--------
* [MWL#247](http://askmonty.org/worklog/?tid=247)
* [Blog post about the development of this feature](http://igors-notes.blogspot.com/2011/12/3-way-join-that-touches-only-indexes.html)
CONV
====
Syntax
------
```
CONV(N,from_base,to_base)
```
Description
-----------
Converts numbers between different number bases. Returns a string representation of the number *`N`*, converted from base *`from_base`* to base *`to_base`*.
Returns `NULL` if any argument is `NULL`, or if the second or third argument is not in the allowed range.
The argument *`N`* is interpreted as an integer, but may be specified as an integer or a string. The minimum base is 2 and the maximum base is 36. If *`to_base`* is a negative number, *`N`* is regarded as a signed number. Otherwise, *`N`* is treated as unsigned. `CONV()` works with 64-bit precision.
Some shortcuts for this function are also available: `[BIN()](../bin/index)`, `[OCT()](../oct/index)`, `[HEX()](../hex/index)`, `[UNHEX()](../unhex/index)`. Also, MariaDB allows [binary](../binary-literals/index) literal values and [hexadecimal](../hexadecimal-literals/index) literal values.
Examples
--------
```
SELECT CONV('a',16,2);
+----------------+
| CONV('a',16,2) |
+----------------+
| 1010 |
+----------------+
SELECT CONV('6E',18,8);
+-----------------+
| CONV('6E',18,8) |
+-----------------+
| 172 |
+-----------------+
SELECT CONV(-17,10,-18);
+------------------+
| CONV(-17,10,-18) |
+------------------+
| -H |
+------------------+
SELECT CONV(12+'10'+'10'+0xa,10,10);
+------------------------------+
| CONV(12+'10'+'10'+0xa,10,10) |
+------------------------------+
| 42 |
+------------------------------+
```
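The base conversion that `CONV()` performs can be sketched in Python (a rough equivalent for illustration only — it does not reproduce the server's 64-bit wraparound for out-of-range unsigned values):

```python
import string

DIGITS = string.digits + string.ascii_uppercase  # digits for bases up to 36

def conv(n, from_base: int, to_base: int) -> str:
    """Rough Python sketch of CONV(N, from_base, to_base)."""
    value = int(str(n), from_base)   # parse N as an integer in from_base
    to_base = abs(to_base)           # a negative to_base means a signed result
    sign = ""
    if value < 0:
        sign, value = "-", -value
    out = ""
    while True:
        value, rem = divmod(value, to_base)
        out = DIGITS[rem] + out
        if value == 0:
            break
    return sign + out

print(conv("a", 16, 2))     # 1010
print(conv("6E", 18, 8))    # 172
print(conv(-17, 10, -18))   # -H
```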
Atomic Write Support
====================
Partial Write Operations
------------------------
When InnoDB writes to the filesystem, there is generally no guarantee that a given write operation will be complete (not partial) in the case of a power failure, or if the operating system crashes at the exact moment a write is being done.
Without detection or prevention of partial writes, the integrity of the database can be compromised after recovery.
innodb\_doublewrite - an Imperfect Solution
-------------------------------------------
Since its inception, InnoDB has had a mechanism to detect and ignore partial writes via the [InnoDB Doublewrite Buffer](../xtradbinnodb-doublewrite-buffer/index) (innodb\_checksum can also be used to detect a partial write).
Doublewrite, controlled by the [innodb\_doublewrite](../xtradbinnodb-server-system-variables/index#innodb_doublewrite) system variable, comes with its own set of problems. Especially on SSDs, writing each page twice can have detrimental effects, such as additional wear on the device.
Atomic Write - a Faster Alternative to innodb\_doublewrite
----------------------------------------------------------
A better solution is to directly ask the filesystem to provide an atomic (all or nothing) write guarantee. Currently this is only available on [a few SSD cards](index#devices-that-support-atomic-writes-with-mariadb).
Enabling Atomic Writes from [MariaDB 10.2](../what-is-mariadb-102/index)
------------------------------------------------------------------------
When starting, [MariaDB 10.2](../what-is-mariadb-102/index) and beyond automatically detect if any of the supported SSD cards are used.
When opening an InnoDB table, there is a check if the tablespace for the table is [on a device that supports atomic writes](index#devices-that-support-atomic-writes-with-mariadb) and if yes, it will automatically enable atomic writes for the table. If atomic writes support is not detected, the doublewrite buffer will be used.
One can disable atomic write support for all cards by setting the variable [innodb-use-atomic-writes](../xtradbinnodb-server-system-variables/index#innodb_use_atomic_writes) to `OFF` in your my.cnf file. It's `ON` by default.
Enabling Atomic Writes in [MariaDB 5.5](../what-is-mariadb-55/index) to [MariaDB 10.1](../what-is-mariadb-101/index)
--------------------------------------------------------------------------------------------------------------------
To use atomic writes instead of the doublewrite buffer, add:
```
innodb_use_atomic_writes = 1
```
to the my.cnf config file.
Note that atomic writes are only supported on [Fusion-io devices that use the NVMFS file system](../fusion-io-introduction/index#atomic-writes) in these versions of MariaDB.
### About innodb\_use\_atomic\_writes (in [MariaDB 5.5](../what-is-mariadb-55/index) to [MariaDB 10.1](../what-is-mariadb-101/index))
The following happens when atomic writes are enabled
* if [innodb\_flush\_method](../xtradbinnodb-server-system-variables/index#innodb_flush_method) is neither `O_DIRECT`, `ALL_O_DIRECT`, or `O_DIRECT_NO_FSYNC`, it is switched to `O_DIRECT`
* [innodb\_use\_fallocate](../xtradbinnodb-server-system-variables/index#innodb_use_fallocate) is switched `ON` (files are extended using `posix_fallocate` rather than writing zeros behind the end of file)
* Whenever an Innodb datafile is opened, a special `ioctl()` is issued to switch on atomic writes. If the call fails, an error is logged and returned to the caller. This means that if the system tablespace is not located on an atomic write capable device or filesystem, InnoDB/XtraDB will refuse to start.
* if [innodb\_doublewrite](../xtradbinnodb-server-system-variables/index#innodb_doublewrite) is set to `ON`, `innodb_doublewrite` will be switched `OFF` and a message written to the error log.
A flowchart in the original article shows how atomic writes work inside InnoDB.
Devices that Support Atomic Writes with MariaDB
-----------------------------------------------
MariaDB currently supports atomic writes on the following devices:
* [Fusion-io devices with the NVMFS file system](../fusion-io-introduction/index#atomic-writes) . [MariaDB 5.5](../what-is-mariadb-55/index) and above.
* [Shannon SSD](http://www.shannon-sys.com). [MariaDB 10.2](../what-is-mariadb-102/index) and above.
INET\_ATON
==========
Syntax
------
```
INET_ATON(expr)
```
Description
-----------
Given the dotted-quad representation of an IPv4 network address as a string, returns an integer that represents the numeric value of the address. Addresses may be 4- or 8-byte addresses.
Returns NULL if the argument is not understood.
Examples
--------
```
SELECT INET_ATON('192.168.1.1');
+--------------------------+
| INET_ATON('192.168.1.1') |
+--------------------------+
| 3232235777 |
+--------------------------+
```
This is calculated as follows: 192 × 256^3 + 168 × 256^2 + 1 × 256 + 1
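The same calculation can be sketched in Python for a dotted-quad address (illustrative only; the SQL function also accepts short-form addresses):

```python
def inet_aton(addr: str) -> int:
    """Sketch of INET_ATON for a four-part dotted-quad IPv4 address."""
    a, b, c, d = (int(part) for part in addr.split("."))
    return a * 256**3 + b * 256**2 + c * 256 + d

print(inet_aton("192.168.1.1"))  # 3232235777
```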
See Also
--------
* [INET6\_ATON()](../inet6_aton/index)
* [INET\_NTOA()](../inet_ntoa/index)
ASIN
====
Syntax
------
```
ASIN(X)
```
Description
-----------
Returns the arc sine of X, that is, the value whose sine is X. Returns NULL if X is not in the range -1 to 1.
Examples
--------
```
SELECT ASIN(0.2);
+--------------------+
| ASIN(0.2) |
+--------------------+
| 0.2013579207903308 |
+--------------------+
SELECT ASIN('foo');
+-------------+
| ASIN('foo') |
+-------------+
| 0 |
+-------------+
SHOW WARNINGS;
+---------+------+-----------------------------------------+
| Level | Code | Message |
+---------+------+-----------------------------------------+
| Warning | 1292 | Truncated incorrect DOUBLE value: 'foo' |
+---------+------+-----------------------------------------+
```
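The equivalent computation is available in Python's standard library; note that where the SQL function returns NULL for an argument outside -1 to 1, `math.asin` raises an exception instead:

```python
import math

print(math.asin(0.2))  # 0.2013579207903308

# ASIN returns NULL outside [-1, 1]; Python raises ValueError instead.
try:
    math.asin(1.5)
except ValueError:
    print("out of range")
```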
SQL Server and MariaDB Types Comparison
=======================================
This page helps to map each SQL Server type to the matching MariaDB type.
Numbers
-------
In MariaDB, numeric types can be declared as `SIGNED` or `UNSIGNED`. By default, numeric columns are `SIGNED`, so not specifying either will not break compatibility with SQL Server.
When using `UNSIGNED` values, there is a potential problem with subtractions. When subtracting one `UNSIGNED` value from another, the result is usually also of an `UNSIGNED` type, so if the result would be negative, an error is raised. To solve this problem, we can enable the [NO\_UNSIGNED\_SUBTRACTION](../sql-mode/index#no_unsigned_subtraction) flag in sql\_mode.
For more information see [Numeric Data Type Overview](../numeric-data-type-overview/index).
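A minimal sketch of the unsigned-subtraction problem and the `NO_UNSIGNED_SUBTRACTION` fix (table and column names are illustrative):

```
CREATE TABLE t (a INT UNSIGNED, b INT UNSIGNED);
INSERT INTO t VALUES (1, 2);

SELECT a - b FROM t;
-- ERROR 1690 (22003): BIGINT UNSIGNED value is out of range

SET SESSION sql_mode = CONCAT(@@sql_mode, ',NO_UNSIGNED_SUBTRACTION');

SELECT a - b FROM t;  -- now returns -1 as a signed result
```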
### Integer Numbers
| SQL Server Types | Size (bytes) | MariaDB Types | Size (bytes) | Notes |
| --- | --- | --- | --- | --- |
| `tinyint` | 1 | [TINYINT](../tinyint/index) | 1 | |
| `smallint` | 2 | [SMALLINT](../smallint/index) | 2 | |
| | | [MEDIUMINT](../mediumint/index) | 3 | Takes 3 bytes on disk, but 4 bytes in memory |
| `int` | 4 | [INT](../int/index) / [INTEGER](../integer/index) | 4 | |
| `bigint` | 8 | [BIGINT](../bigint/index) | 8 | |
### Real Numbers (approximated)
| SQL Server Types | Precision | Size | MariaDB Types | Size |
| --- | --- | --- | --- | --- |
| `float(1-24)` | 7 digits | 4 | [FLOAT(0-23)](../float/index) | 4 |
| `float(25-53)` | 15 digits | 8 | [FLOAT(24-53)](../float/index) | 8 |
MariaDB supports an alternative syntax: `FLOAT(M, D)`. M is the total number of digits, and D is the number of digits after the decimal point.
See also: [Floating-point Accuracy](../floating-point-accuracy/index).
#### Aliases
In SQL Server `real` is an alias for `float(24)`.
In MariaDB [DOUBLE](../double/index), and [DOUBLE PRECISION](../double-precision/index) are aliases for `FLOAT(24-53)`.
Normally, `REAL` is also a synonym for `FLOAT(24-53)`. However, the [sql\_mode](../sql-mode/index) variable can be set with the `REAL_AS_FLOAT` flag to make `REAL` a synonym for `FLOAT(0-23)`.
### Real Numbers (Exact)
| SQL Server Types | Precision | Size (bytes) | MariaDB Types | Precision | Size (bytes) |
| --- | --- | --- | --- | --- | --- |
| `decimal` | 0 - 38 | Up to 17 | [DECIMAL](../decimal/index) | 0 - 38 | [See table](../data-type-storage-requirements/index#decimal) |
MariaDB supports this syntax: `DECIMAL(M, D)`. M and D are both optional. M is the total number of digits (10 by default), and D is the number of digits after the decimal point (0 by default). In SQL Server, defaults are 18 and 0, respectively. The reason for this difference is that SQL standard imposes a default of 0 for D, but it leaves the implementation free to choose any default for M.
SQL Server `DECIMAL` is equivalent to MariaDB `DECIMAL(18)`.
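To illustrate the differing defaults (a sketch; table and column names are arbitrary):

```
CREATE TABLE amounts (
  a DECIMAL,        -- DECIMAL(10,0) in MariaDB
  b DECIMAL(18),    -- equivalent to SQL Server's plain DECIMAL
  c DECIMAL(18,4)   -- explicit precision and scale behave the same in both
);
```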
#### Aliases
The following [aliases](../dec-numeric-fixed/index) for `DECIMAL` are recognized in both SQL Server and MariaDB: `DEC`, `NUMERIC`. MariaDB also allows one to use `FIXED`.
### Money
SQL Server `money` and `smallmoney` types represent real numbers guaranteeing a very low level of approximation (four decimal digits are accurate), optionally associated with one of the supported currencies.
MariaDB doesn't have monetary types. To represent amounts of money:
* Store the currency in a separate column, if necessary. It's possible to use a foreign key to a currencies table, or the [ENUM](../enum/index) type.
* Use a non-approximated type:
+ [DECIMAL](../decimal/index) is very convenient, as it allows one to store the number as-is. But calculations are potentially slower.
+ An integer type is faster for calculations. It is possible to store, for example, the amount of money multiplied by 100.
There is a small incompatibility that users should be aware of. `money` and `smallmoney` are accurate to four decimal digits. This means that, if you use enough decimal digits, operations on these types may produce different results than they would on MariaDB types.
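A possible mapping for a `money` column, assuming four accurate decimal digits are sufficient (names and the default currency are illustrative):

```
CREATE TABLE prices (
  amount   DECIMAL(19,4) NOT NULL,         -- exact, no approximation
  currency CHAR(3) NOT NULL DEFAULT 'EUR'  -- ISO 4217 code in a separate column
);
```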
### Bits
The [BIT](../bit/index) type is supported in MariaDB. Its maximum size is `BIT(64)`. The `BIT` type has a fixed length. If we insert a value which requires fewer bits than the ones that are allocated, zero-bits are padded on the left.
In MariaDB, binary values can be written in one of the following ways:
* `b'value'`
* `0bvalue`
where `value` is a sequence of 0 and 1 digits. Hexadecimal syntax can also be used. For more details, see [Binary Literals](../binary-literals/index) and [Hexadecimal Literals](../hexadecimal-literals/index).
MariaDB and SQL Server have different sets of bitwise operators. See [Bit Functions and Operators](../bit-functions-and-operators/index).
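A short sketch of a `BIT` column and binary literals:

```
CREATE TABLE flags (f BIT(8));

INSERT INTO flags VALUES (b'101'), (0b11111111), (x'0F');

-- Adding 0 forces a numeric result instead of a binary string
SELECT f + 0 FROM flags;  -- 5, 255, 15
```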
BOOLEAN Pseudo-Type
-------------------
In SQL Server, it is common to use `bit` to represent boolean values. In MariaDB it is possible to do the same, but this is not a common practice.
A column can also be defined as [BOOLEAN](../boolean/index) or `BOOL`, which is just a synonym for [TINYINT](../tinyint/index). `TRUE` and `FALSE` keywords also exist, but they are synonyms for 1 and 0. To understand what this implies, see [Boolean Literals](../sql-language-structure-boolean-literals/index).
In MariaDB `'True'` and `'False'` are always strings.
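A sketch of the `BOOLEAN` pseudo-type in practice:

```
CREATE TABLE opts (enabled BOOLEAN);  -- stored as TINYINT(1)

INSERT INTO opts VALUES (TRUE), (FALSE);

SELECT enabled FROM opts;  -- returns 1 and 0, not 'True'/'False'
```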
Date and Time
-------------
| SQL Server Types | Range | Size (bytes) | Precision | MariaDB Types | Range | Size (bytes) | Precision | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| `date` | 0001-01-01 - 9999-12-31 | 3 | / | [DATE](../date/index) | 0001-01-01 - 9999-12-31 | 3 | / | They cover the same range |
| `datetime` | 1753-01-01 - 9999-12-31 | 8 | 0 to 3, rounded | [DATETIME](../datetime/index) | 0001-01-01 - 9999-12-31 | 8 | 0 to 6 | MariaDB values are not approximated, see below. |
| `datetime2` | 0001-01-01 - 9999-12-31 | 6 to 8 | 0 to 7 | [DATETIME](../datetime/index) | 0001-01-01 - 9999-12-31 | 8 | 0 to 6 | MariaDB values are not approximated, see below. |
| `smalldatetime` | | | | [DATETIME](../datetime/index) | | | | |
| `datetimeoffset` | | | | [DATETIME](../datetime/index) | | | | |
| `time` | | | | [TIME](../time/index) | | | | |
You may also consider the following MariaDB types:
* [TIMESTAMP](../timestamp/index) has little to do with SQL Server's `timestamp`. In MariaDB it is the number of seconds elapsed since the beginning of 1970-01-01, with a decimal precision up to 6 digits (0 by default). The value can optionally be automatically set to the current timestamp on insert, on update, or both. It is not meant to be a unique row identifier.
* [YEAR](../year/index) is a 1-byte type representing years between 1901 and 2155, as well as 0000.
### Zero Values
MariaDB allows a special value where all the parts of a date are zeroes: `'0000-00-00'`. This can be disallowed by setting [sql\_mode=NO\_ZERO\_DATE](../sql-mode/index#no_zero_date).
It is also possible to use values where only some date parts are zeroes, for example `'1994-01-00'` or `'1994-00-00'`. These values can be disallowed by setting [sql\_mode=NO\_ZERO\_IN\_DATE](../sql-mode/index#no_zero_in_date). They are not affected by `NO_ZERO_DATE`.
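A sketch of zero-date behavior (note that without strict mode these flags produce warnings rather than errors):

```
CREATE TABLE d (dt DATE);

SET SESSION sql_mode = '';
INSERT INTO d VALUES ('0000-00-00'), ('1994-01-00');  -- both accepted

SET SESSION sql_mode = 'STRICT_ALL_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE';
INSERT INTO d VALUES ('0000-00-00');  -- now rejected with an error
```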
### Syntax
Several different date formats are understood. Typically used formats are `'YYYY-MM-DD'` and `YYYYMMDD`. Several separators are accepted.
The syntaxes defined in standard SQL and ODBC are understood - for example, `DATE '1994-01-01'` and `{d '1994-01-01'}`. Using these eliminates possible ambiguities in contexts where a temporal value could be interpreted as a string or as an integer.
See [Date and Time Literals](../date-and-time-literals/index) for the details.
### Precision
For temporal types that include a time of day, MariaDB allows a precision from 0 to 6 (microseconds), 0 being the default. The subsecond part is never approximated. It adds up to 3 bytes to the storage requirements. See [Data Type Storage Requirements](../data-type-storage-requirements/index#microseconds) for the details.
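Declaring the precision explicitly (a sketch; table and column names are illustrative):

```
CREATE TABLE events (
  ts0 DATETIME,      -- second precision (the default)
  ts6 DATETIME(6)    -- microsecond precision, up to 3 extra bytes
);

INSERT INTO events VALUES (NOW(), NOW(6));
```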
String and Binary
-----------------
### Binary Strings
| SQL Server Types | Size (bytes) | MariaDB Types | Notes |
| --- | --- | --- | --- |
| `binary` | 1 to 8000 | [VARBINARY](../varbinary/index) or [BLOB](../blob-and-text-data-types/index) | See below for `BLOB` types |
| `varbinary` | 1 to 8000 | [VARBINARY](../varbinary/index) or [BLOB](../blob-and-text-data-types/index) | See below for `BLOB` types |
| `image` | 2^31-1 | [VARBINARY](../varbinary/index) or [BLOB](../blob-and-text-data-types/index) | See below for `BLOB` types |
The `VARBINARY` type is similar to `VARCHAR`, but stores binary byte strings, just like SQL Server `binary` does.
For large binary strings, MariaDB has four `BLOB` types, with different sizes. See [BLOB and TEXT Data Types](../blob-and-text-data-types/index) for more information.
### Character Strings
One important difference between SQL Server and MariaDB is that **in MariaDB character sets do not depend on types and collations**. Character sets can be set at database, table or column level. If this is not done, the default character set applies, which is specified by the [character\_set\_server](../server-system-variables/index#character_set_server) system variable.
To create a MariaDB table that is identical to a SQL Server table, **it may be necessary to specify a character set for each string column**. However, in many cases using UTF-8 will work.
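For example, the character set can be pinned down per column (a sketch; `utf8mb4` and `latin1` are illustrative choices):

```
CREATE TABLE translated (
  id   INT PRIMARY KEY,
  name VARCHAR(100) CHARACTER SET utf8mb4,  -- roughly nvarchar(100)
  code CHAR(10) CHARACTER SET latin1        -- roughly char(10) with a non-unicode collation
) DEFAULT CHARACTER SET utf8mb4;
```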
| SQL Server Types | Size (bytes) | MariaDB Types | Size (bytes) | Character set |
| --- | --- | --- | --- | --- |
| `char` | 1 to 8000 | [CHAR](../char/index) | 0 to 255 | `utf8mb4` (1, 4) |
| `varchar` | 1 to 8000 | [VARCHAR](../varchar/index) | 0 to 65,532 (2) | `utf8mb4` (1) |
| `text` | 2^31-1 | [TEXT](../blob-and-text-data-types/index) | 2^31-1 | `ucs2` |
| `nchar` | 2 to 8000 | [CHAR](../char/index) | 0 to 255 | `utf16` or `ucs2` (3, 4) |
| `nvarchar` | 2 to 8000 | [VARCHAR](../varchar/index) | 0 to 65,532 (2) (5) | `utf16` or `ucs2` (1) (3) |
| `ntext` | 2^30 - 1 | [TEXT](../blob-and-text-data-types/index) | 2^31-1 | `ucs2` |
**Notes:**
1) If SQL Server uses a non-unicode collation, a subset of UTF-8 is used. So it is possible to use a smaller character set on MariaDB too.
2) [InnoDB](../innodb/index) has a maximum row length of 65,535 bytes. [TEXT](../blob-and-text-data-types/index) columns do not contribute to the row size, because they are stored separately (except for the first 12 bytes).
3) In SQL Server, UTF-16 is used if data contains Supplementary Characters, otherwise UCS-2 is used. If not sure, use `utf16` in MariaDB.
4) In SQL Server, the value of `ANSI_PADDING` determines if `char` values should be padded with spaces to their maximum length. In MariaDB, this depends on the [PAD\_CHAR\_TO\_FULL\_LENGTH](../sql-mode/index#pad_char_to_full_length) sql\_mode flag.
5) See JSON, below.
SQL Server Special Types
------------------------
### rowversion
MariaDB does not have the `rowversion` type.
If the only purpose is to check if a row has been modified since its last read, a [TIMESTAMP](../timestamp/index) column can be used instead. Its default value should be `ON UPDATE CURRENT_TIMESTAMP`. In this way, the timestamp will be updated whenever the column is modified.
A way to preserve much more information is to use a [temporal table](../temporal-data-tables/index). Past versions of the row will be preserved.
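A sketch of the `TIMESTAMP` alternative described above (names are illustrative):

```
CREATE TABLE doc (
  id      INT PRIMARY KEY,
  body    TEXT,
  changed TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                    ON UPDATE CURRENT_TIMESTAMP  -- refreshed whenever the row changes
);
```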
### sql\_variant
MariaDB does not support the `sql_variant` type.
MariaDB is quite flexible about implicit and explicit [type conversions](../type-conversion/index). Therefore, for most cases storing the values as a string should be equivalent to using `sql_variant`.
Be aware that the maximum length of an `sql_variant` value is 8,000 bytes. In MariaDB, you may need to use `BLOB` to accommodate the longest values.
### uniqueidentifier
MariaDB does not support the `uniqueidentifier` type.
`uniqueidentifier` columns contain 16-byte GUIDs. MariaDB can generate unique values with the [UUID()](../uuid/index) or [UUID\_SHORT()](../uuid_short/index) functions, and store them in `BIT(128)` or `BIT(64)` columns, respectively.
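One simple approach, assuming the textual form of the UUID is acceptable (storing the 36-character string rather than packed bits; names are illustrative):

```
CREATE TABLE items (
  id   CHAR(36) NOT NULL PRIMARY KEY,
  name VARCHAR(100)
);

INSERT INTO items VALUES (UUID(), 'example row');
```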
### xml
MariaDB does not support the `xml` type.
XML data can be stored in string columns. MariaDB supports several XML functions.
JSON
----
With SQL Server, JSON documents are typically stored in `nvarchar` columns in text form.
MariaDB has a [JSON](../json-data-type/index) pseudo-type that maps to [LONGTEXT](../longtext/index). However, from [MariaDB 10.5](../what-is-mariadb-105/index) the `JSON` pseudo-type also checks that the value is a valid JSON document.
MariaDB supports different JSON functions than SQL Server. MariaDB currently has more functions, and SQL Server syntax will not work. See [JSON functions](../json-functions/index) for more information.
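A sketch of JSON storage and extraction in MariaDB (function availability depends on version; `JSON_EXTRACT` has been available since MariaDB 10.2):

```
CREATE TABLE docs (
  id  INT PRIMARY KEY,
  doc JSON  -- an alias for LONGTEXT; validity is checked from MariaDB 10.5
);

INSERT INTO docs VALUES (1, '{"a": 1, "b": [2, 3]}');

SELECT JSON_EXTRACT(doc, '$.a') FROM docs;  -- 1
```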
mariadb Basic MariaDB Articles Basic MariaDB Articles
=======================
These articles are at a basic level. They are more advanced than beginners and less advanced than intermediate developers and administrators.
| Title | Description |
| --- | --- |
| [Basic SQL Debugging](../basic-sql-debugging/index) | An introductory tutorial on debugging MariaDB. |
| [Configuring MariaDB for Remote Client Access](../configuring-mariadb-for-remote-client-access/index) | How to configure MariaDB for remote client access. |
| [Creating & Using Views](../creating-using-views/index) | A tutorial on creating and using views. |
| [Getting Started with Indexes](../getting-started-with-indexes/index) | Extensive tutorial on creating indexes for tables. |
| [Joining Tables with JOIN Clauses](../joining-tables-with-join-clauses/index) | An introductory tutorial on using the JOIN clause. |
| [The Essentials of an Index](../the-essentials-of-an-index/index) | Explains the basics of a table index. |
| [Troubleshooting Connection Issues](../troubleshooting-connection-issues/index) | Common problems when trying to connect to MariaDB. |
mariadb Books on MariaDB Books on MariaDB
=================
| Title | Description |
| --- | --- |
| [Beginner Books](../beginner-books/index) | List of books on MariaDB for newcomers and beginners. |
| [Intermediate and Advanced Books](../intermediate-and-advanced-books/index) | List of books on MariaDB for intermediate and advanced developers and administrators. |
| [Books on MariaDB Code](../books-on-mariadb-code/index) | List of books on coding MariaDB Server and plugins. |
mariadb DEALLOCATE / DROP PREPARE DEALLOCATE / DROP PREPARE
=========================
Syntax
------
```
{DEALLOCATE | DROP} PREPARE stmt_name
```
Description
-----------
To deallocate a prepared statement produced with `[PREPARE](../prepare-statement/index)`, use a `DEALLOCATE PREPARE` statement that refers to the prepared statement name.
A prepared statement is implicitly deallocated when a new `PREPARE` statement with the same name is issued. In that case, there is no need to use `DEALLOCATE`.
Attempting to execute a prepared statement after deallocating it results in an error, as if it had never been prepared:
```
ERROR 1243 (HY000): Unknown prepared statement handler (stmt_name) given to EXECUTE
```
If the specified statement has not been PREPAREd, an error similar to the following will be produced:
```
ERROR 1243 (HY000): Unknown prepared statement handler (stmt_name) given to DEALLOCATE PREPARE
```
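A short end-to-end sketch of the statement lifecycle:

```
PREPARE stmt FROM 'SELECT ? + 1';
SET @a = 1;
EXECUTE stmt USING @a;   -- returns 2

DEALLOCATE PREPARE stmt;

EXECUTE stmt USING @a;
-- ERROR 1243 (HY000): Unknown prepared statement handler (stmt) given to EXECUTE
```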
Example
-------
See [example in PREPARE](../prepare-statement/index#example).
See Also
--------
* [PREPARE Statement](../prepare-statement/index)
* [EXECUTE Statement](../execute-statement/index)
* [EXECUTE IMMEDIATE](../execute-immediate/index)
mariadb Differences Between MyRocks Variants Differences Between MyRocks Variants
====================================
MyRocks is available in
* Facebook's (FB) MySQL branch (originally based on MySQL 5.6)
* MariaDB (from 10.2 and 10.3)
* Percona Server from 5.7
This page lists differences between these variants.
This is a work in progress; the contents are not final.
RocksDB Data Location
---------------------
FB and Percona store RocksDB files in $datadir/`.rocksdb`. MariaDB puts them in $datadir/`#rocksdb`, which is more friendly for packaging and OS scripts.
Compression Algorithms
----------------------
* FB's branch doesn't provide binaries. One needs to compile it with appropriate compression libraries.
* In MariaDB, available compression algorithms can be seen in the [rocksdb\_supported\_compression\_types](../myrocks-system-variables/index#rocksdb_supported_compression_types) variable. From [MariaDB 10.7](../what-is-mariadb-107/index), algorithms can be [installed as a plugin](../compression-plugins/index). In earlier versions, the set of supported compression algorithms depends on the platform.
+ On Ubuntu 16.04 (current LTS) it is `Snappy,Zlib,LZ4,LZ4HC` .
+ On CentOS 7.4 it is `Snappy,Zlib`.
+ In the bintar tarball it is `Snappy,Zlib`.
* Percona Server supports: `Zlib, ZSTD, LZ4 (the default), LZ4HC`. Unsupported algorithms: `Snappy, BZip2, XPress`.
RocksDB Version Information
---------------------------
* FB's branch provides the `rocksdb_git_hash` \*status\* variable.
* MariaDB provides the `@@rocksdb_git_hash` \*system\* variable.
* Percona Server doesn't provide either.
RocksDB Version
---------------
* Facebook's branch uses RocksDB 5.10.0 (the version number can be found in `include/rocksdb/version.h`)
```
commit ba295cda29daee3ffe58549542804efdfd969784
Author: Andrew Kryczka <[email protected]>
Date: Fri Jan 12 11:03:55 2018 -0800
```
* MariaDB currently uses 5.8.0
```
commit 9a970c81af9807071bd690f4c808c5045866291a
Author: Yi Wu <[email protected]>
Date: Wed Sep 13 17:21:35 2017 -0700
```
* Percona Server uses 5.8.0
```
commit ab0542f5ec6e7c7e405267eaa2e2a603a77d570b
Author: Maysam Yabandeh <[email protected]>
Date: Fri Sep 29 07:55:22 2017 -0700
```
Binlog Position in information\_schema.rocksdb\_global\_info
------------------------------------------------------------
* FB branch provides information\_schema.rocksdb\_global\_info type=BINLOG, NAME={FILE, POS, GTID}.
* Percona Server doesn't provide it.
* MariaDB doesn't provide it.
One use of that information is to take the output of `myrocks_hotbackup` and make it a new master.
Gap Lock Detector
-----------------
* FB branch has a "Gap Lock Detector" feature. It is at the SQL layer. It can be controlled with `gap_lock_XXX` variables and is disabled by default (gap-lock-raise-error=false, gap-lock-write-lock=false).
* Percona Server has gap lock checking ON, but doesn't appear to provide any way to control it. Queries that use a gap lock on MyRocks fail with an error like this:
```
mysql> insert into tbl2 select * from tbl1;
ERROR 1105 (HY000): Using Gap Lock without full unique key in multi-table or multi-statement transactions
is not allowed. You need to either rewrite queries to use all unique key columns in WHERE equal conditions,
or rewrite to single-table, single-statement transaction. Query: insert into tbl2 select * from tbl1
```
* MariaDB doesn't include the Gap Lock Detector.
Generated Columns
-----------------
* Both MariaDB and Percona Server support [generated columns](../generated-columns/index), but neither one supports them for the MyRocks storage engine (attempts to create a table will produce an error).
* [Invisible columns](../invisible-columns/index) in [MariaDB 10.3](../what-is-mariadb-103/index) are supported (as they are an SQL layer feature).
rpl\_skip\_tx\_api
------------------
Facebook's branch has a performance feature for replication slaves, `rpl_skip_tx_api`. It is not available in MariaDB or in Percona Server.
Details
-------
The above comparison was made using
* FB/MySQL 5.6.35
* Percona Server 5.7.20-19-log
* [MariaDB 10.2.13](https://mariadb.com/kb/en/mariadb-10213-release-notes/) (MyRocks is beta)
mariadb sysVinit sysVinit
========
[sysVinit](https://en.wikipedia.org/wiki/Init#SysV-style) is one of the most common service managers. On systems that use [sysVinit](https://en.wikipedia.org/wiki/Init#SysV-style), the `[mysql.server](../mysqlserver/index)` script is normally installed to `/etc/init.d/mysql`.
Interacting with the MariaDB Server Process
-------------------------------------------
The service can be interacted with by using the `[service](https://linux.die.net/man/8/service)` command.
### Starting the MariaDB Server Process on Boot
On RHEL/CentOS and other similar distributions, the `[chkconfig](https://linux.die.net/man/8/chkconfig)` command can be used to enable the MariaDB Server process at boot:
```
chkconfig --add mysql
chkconfig --level 345 mysql on
```
On Debian and Ubuntu and other similar distributions, the `[update-rc.d](https://manpages.debian.org/wheezy/sysv-rc/update-rc.d.8.en.html)` command can be used:
```
update-rc.d mysql defaults
```
### Starting the MariaDB Server Process
```
service mysql start
```
### Stopping the MariaDB Server Process
```
service mysql stop
```
### Restarting the MariaDB Server Process
```
service mysql restart
```
### Checking the Status of the MariaDB Server Process
```
service mysql status
```
Manually Installing mysql.server with SysVinit
----------------------------------------------
If you install MariaDB from [source](../compiling-mariadb-from-source/index) or from a [binary tarball](../installing-mariadb-binary-tarballs/index) that does not install `[mysql.server](../mysqlserver/index)` automatically, and if you are on a system that uses [sysVinit](index), then you can manually install `mysql.server` with [sysVinit](index). See [mysql.server: Manually Installing with SysVinit](../mysqlserver/index#manually-installing-with-sysvinit) for more information.
SysVinit and Galera Cluster
---------------------------
### Bootstrapping a New Cluster
When using [Galera Cluster](../galera/index) with sysVinit, the first node in a cluster has to be started with `service mysql bootstrap`. See [Getting Started with MariaDB Galera Cluster: Bootstrapping a New Cluster](../getting-started-with-mariadb-galera-cluster/index#bootstrapping-a-new-cluster) for more information.
mariadb mysql.help_category Table mysql.help\_category Table
==========================
`mysql.help_category` is one of the four tables used by the [HELP command](../help-command/index). It is populated when the server is installed by the `fill_help_table.sql` script. The other help tables are [help\_relation](../mysqlhelp_relation-table/index), [help\_topic](../mysqlhelp_topic-table/index) and [help\_keyword](../mysqlhelp_keyword-table/index).
**MariaDB starting with [10.4](../what-is-mariadb-104/index)**In [MariaDB 10.4](../what-is-mariadb-104/index) and later, this table uses the [Aria](../aria/index) storage engine.
**MariaDB until [10.3](../what-is-mariadb-103/index)**In [MariaDB 10.3](../what-is-mariadb-103/index) and before, this table uses the [MyISAM](../myisam-storage-engine/index) storage engine.
The `mysql.help_category` table contains the following fields:
| Field | Type | Null | Key | Default | Description |
| --- | --- | --- | --- | --- | --- |
| `help_category_id` | `smallint(5) unsigned` | NO | PRI | `NULL` | |
| `name` | `char(64)` | NO | UNI | `NULL` | |
| `parent_category_id` | `smallint(5) unsigned` | YES | | `NULL` | |
| `url` | `char(128)` | NO | | `NULL` | |
Example
-------
```
SELECT * FROM help_category;
+------------------+-----------------------------------------------+--------------------+-----+
| help_category_id | name | parent_category_id | url |
+------------------+-----------------------------------------------+--------------------+-----+
| 1 | Geographic | 0 | |
| 2 | Polygon properties | 34 | |
| 3 | WKT | 34 | |
| 4 | Numeric Functions | 38 | |
| 5 | Plugins | 35 | |
| 6 | MBR | 34 | |
| 7 | Control flow functions | 38 | |
| 8 | Transactions | 35 | |
| 9 | Help Metadata | 35 | |
| 10 | Account Management | 35 | |
| 11 | Point properties | 34 | |
| 12 | Encryption Functions | 38 | |
| 13 | LineString properties | 34 | |
| 14 | Miscellaneous Functions | 38 | |
| 15 | Logical operators | 38 | |
| 16 | Functions and Modifiers for Use with GROUP BY | 35 | |
| 17 | Information Functions | 38 | |
| 18 | Comparison operators | 38 | |
| 19 | Bit Functions | 38 | |
| 20 | Table Maintenance | 35 | |
| 21 | User-Defined Functions | 35 | |
| 22 | Data Types | 35 | |
| 23 | Compound Statements | 35 | |
| 24 | Geometry constructors | 34 | |
| 25 | GeometryCollection properties | 1 | |
| 26 | Administration | 35 | |
| 27 | Data Manipulation | 35 | |
| 28 | Utility | 35 | |
| 29 | Language Structure | 35 | |
| 30 | Geometry relations | 34 | |
| 31 | Date and Time Functions | 38 | |
| 32 | WKB | 34 | |
| 33 | Procedures | 35 | |
| 34 | Geographic Features | 35 | |
| 35 | Contents | 0 | |
| 36 | Geometry properties | 34 | |
| 37 | String Functions | 38 | |
| 38 | Functions | 35 | |
| 39 | Data Definition | 35 | |
+------------------+-----------------------------------------------+--------------------+-----+
```
mariadb ColumnStore System Operations ColumnStore System Operations
=============================
System status
-------------
### Viewing system status
The system status shows the status of the system and all equipped servers. To view the system status, use the *getSystemStatus* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *getSystemStatus* from the operating system prompt.
Example:
```
# mcsadmin getSystemStatus
getsystemstatus Sat Jun 11 01:01:22 2016
System columnstore-1
System and Module statuses
Component Status Last Status Change
------------ -------------------------- ------------------------
System ACTIVE Fri Jun 10 01:50:46 2016
Module pm1 ACTIVE Fri Jun 10 01:50:43 2016
```
The table below shows the available system and server statuses.
| Status | Definition |
| --- | --- |
| Active | The system, server, or Network Interface Card (NIC) is available to process database requests |
| Auto Disabled | Disabled as a result of a server failure. |
| Auto Init | Auto initialization mode during a fault recovery. |
| Auto Offline | The system or server is offline due to a fault. |
| Busy\_Init | The module/system is performing an initialization task at startup time before going to the ACTIVE state. |
| Degraded | The server is active, but the performance is degraded. A server is degraded when a NIC is not working. |
| Down | Communication failure. |
| Failed | A stop/start/restart request for the system or a server failed. |
| Initial | Initial state after a system reboot or install and before any action is taken. |
| Man Disabled | Disabled as a result of executing the altersystem-disableModule command. |
| Man Init | Manual initialization mode during a start or restart command. |
| Man Offline | The system or server was taken offline with the stop or shutdown command. |
| Up | Successfully communicating. |
When all servers are active, then the system status is active. If one or more servers are Man Offline and the others are active, the system is Man Offline. All equipped servers must be active before the system is shown as active.
### Simple external monitoring script
The following starter/reference shell script wraps an mcsadmin call and produces output and return codes matching the Nagios plugin specification. Most monitoring tools can integrate it easily; it typically requires configuring an agent on a ColumnStore node to invoke the script periodically to determine current status.
```
#!/bin/bash
MCS_DIR="/usr/local/mariadb/columnstore"
# capture getSystemStatus and remove first 9 lines and blank lines to just have status table contents
STATUS=$($MCS_DIR/bin/mcsadmin getSystemStatus | tail -n +9 | sed '/^$/d' )
# grab system status line
SYSTEM_STATUS=$(echo "$STATUS" | grep 'System' | awk '{ printf $2; }')
# combine module status lines
MODULE_STATUS=$(echo "$STATUS" | grep 'Module' | awk '{ printf $2 ":" $3 " "; }')
# if system status is ACTIVE, then all good otherwise consider critical failure
if [ "$SYSTEM_STATUS" == "ACTIVE" ]
then
echo "OK - system: $SYSTEM_STATUS, modules: $MODULE_STATUS"
exit 0
else
echo "CRITICAL - system: $SYSTEM_STATUS, modules: $MODULE_STATUS"
exit 2
fi
```
### Viewing process status
To view the process status, use the *getProcessStatus* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *getProcessStatus* from the operating system prompt. The table below shows the available system and server statuses.
Example:
```
[myusr@srv1 ~]# mcsadmin getProcessStatus
getprocessstatus Sat Jun 11 00:59:09 2016
MariaDB Columnstore Process statuses
Process Module Status Last Status Change Process ID
------------------ ------ --------------- ------------------------ ----------
ProcessMonitor pm1 ACTIVE Fri Jun 10 01:50:04 2016 2487
ProcessManager pm1 ACTIVE Fri Jun 10 01:50:10 2016 2673
SNMPTrapDaemon pm1 ACTIVE Fri Jun 10 01:50:16 2016 3534
DBRMControllerNode pm1 ACTIVE Fri Jun 10 01:50:20 2016 3585
ServerMonitor pm1 ACTIVE Fri Jun 10 01:50:22 2016 3625
DBRMWorkerNode pm1 ACTIVE Fri Jun 10 01:50:22 2016 3665
DecomSvr pm1 ACTIVE Fri Jun 10 01:50:26 2016 3742
PrimProc pm1 ACTIVE Fri Jun 10 01:50:28 2016 3770
ExeMgr pm1 ACTIVE Fri Jun 10 01:50:32 2016 3844
WriteEngineServer pm1 ACTIVE Fri Jun 10 01:50:36 2016 3934
DDLProc pm1 ACTIVE Fri Jun 10 01:50:40 2016 3991
DMLProc pm1 ACTIVE Fri Jun 10 01:50:45 2016 4058
mysqld pm1 ACTIVE Fri Jun 10 01:50:22 2016 2975
```
The table below shows the supported process states.
| Status | Definition |
| --- | --- |
| Active | The process is fully functional. |
| Auto Init | Auto initialization mode during a fault recovery |
| Auto Offline | The process is offline due to a fault. |
| Busy Init | The process is performing an initialization task at startup time before going to the ACTIVE state. |
| Failed | A stop/start/restart request for a process failed. |
| Hot Standby | The process is functional in a standby/ready state in case a failover occurs. |
| Initial | State after a system reboot or install and before any action is taken |
| Man Init | Manual initialization mode during a start or restart command |
| Man Offline | The process was taken offline with the stop or shutdown command. |
| Standby Init | Manual initialization mode during a start or restart command of a Hot Standby process. |
System operations
-----------------
### Stopping the system
To stop the system, use the *stopSystem* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *stopSystem* from the operating system prompt.
Stopping the system stops the storage engine database processes. The process that supports the Management Console and System Alarms remains active.
### Starting the system or modules
To start the system, use the *startSystem* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *startSystem* from the operating system prompt
### Restarting the system
To restart the system, use the *restartSystem* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *restartSystem* from the operating system prompt.
### Shutting down the system
To shut down the system completely, including the storage engine database processes as well as the process that supports the Management Console and System Alarms, use the *shutdownSystem* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *shutdownSystem* from the operating system prompt.
### Disabling system modules
A System Module can be disabled when the system is ACTIVE or OFFLINE. To disable a module, use the *alterSystem-disableModule module\_id* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *alterSystem-disableModule module\_id* from the operating system prompt.
Example:
```
mcsadmin alterSystem-disablemodule PM2, PM3
```
The modules PM2 and PM3 will be stopped and disabled.
*Note*: Disabling a module may result in data loss if the data is local to the [PM](../columnstore-performance-module/index). If the data is SAN mounted, the dbroots would need to be moved to other PMs. Please see “Moving DBRoots” of this guide for more information on moving DBRoots.
### Enabling System Modules
To enable a module, use the *alterSystem-enableModule module\_id* command in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *alterSystem-enableModule module\_id* from the operating system prompt.
Example:
```
mcsadmin alterSystem-enablemodule PM2, PM3
```
The modules PM2 and PM3 will be enabled and started.
### Switch Parent OAM Module
The Parent OAM Module is the Performance Module that monitors the overall system, including all the [UM](../columnstore-user-module/index) and [PM](../columnstore-performance-module/index) nodes and their status, and handles PM node failover. In a running system with more than one PM node there will be two Parent OAM Modules - an Active Parent and a Standby Parent.
To switch the Parent OAM Module role to the Standby Parent, use *switchParentOAMModule* in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *switchParentOAMModule* from the operating system prompt. The Standby Parent OAM Module will become active.
To switch the Parent OAM Module role to a specific module, use *switchParentOAMModule module\_id* in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *switchParentOAMModule module\_id* from the operating system prompt.
Example:
```
switchParentOAMModule pm3
```
Performance Module 3 will become the active Parent OAM Module.
System configuration
--------------------
### Viewing network configuration
To view the network configuration of all nodes and their statuses, use *getSystemNetworkConfig* in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *getSystemNetworkConfig* from the operating system prompt.
Example:
```
[myusr@srv1 ~]mcsadmin getSystemNetworkConfig
getsystemnetworkconfig Sat Jun 11 01:34:55 2016
System Network Configuration
Module Name Module Description NIC ID Host Name IP Address Status
----------- ------------------------- ------ --------- --------------- ------------
pm1 Performance Module #1 1 localhost 127.0.0.1 UP
```
### Viewing module configuration
To view the module configuration of all nodes and their statuses, use *getModuleConfig* in [mcsadmin](../mariadb-columnstore-administrative-console/index), or simply use [mcsadmin](../mariadb-columnstore-administrative-console/index) *getModuleConfig* from the operating system prompt.
```
[myusr@srv1 ~]mcsadmin getModuleConfig
getmoduleconfig Sat Jun 11 01:51:37 2016
Module Name Configuration
Module 'um1' Configuration information
ModuleType = um
ModuleDesc = User Module #1
ModuleIPAdd NIC ID 1 = 10.100.7.80
ModuleHostName NIC ID 1 = srvhst2
ModuleIPAdd NIC ID 2 = 10.100.107.81
ModuleHostName NIC ID 2 = srvhst2b
Module 'pm1' Configuration information
ModuleType = pm
ModuleDesc = Performance Module #1
ModuleIPAdd NIC ID 1 = 10.100.7.10
ModuleHostName NIC ID 1 = srvhst1
DBRootIDs assigned = 1
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb ST_AsWKB ST\_AsWKB
=========
A synonym for [ST\_AsBinary()](../st_asbinary/index).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb LevelDB Storage Engine MS1 LevelDB Storage Engine MS1
==========================
This page describes what will be implemented for milestone#1 of [LevelDB Storage Engine](../leveldb-storage-engine/index). *For development after MS1, see [LevelDB Storage Engine Development](../leveldb-storage-engine-development/index)*
Feature Description
===================
How the data is stored in LevelDB
---------------------------------
### One LevelDB instance
We will use one LevelDB instance per mysqld process. LevelDB keys will be prefixed with 'dbname.table\_name.PRIMARY' for the table itself, and 'dbname.table\_name.index\_name' for the secondary indexes. This allows storing an arbitrary number of tables/indexes in one LevelDB instance.
### Data encoding
We will rely on LevelDB's compression to make the storage compact. Data that goes into LevelDB's key will be stored in KeyTupleFormat (which allows mysql's lookup/index ordering functions to work).
Data that goes into LevelDB's value will be stored in table->record[0] format, except blobs. (Blobs will require special storage convention because they store a char\* pointer in table->record[0]).
We will need to support blobs because table `nodetable` has a `mediumtext` field.
### Secondary indexes
Non-unique secondary indexes will be supported.
LevelDB stores {KEY->VALUE} mappings. A non-unique index will still need some unique value for KEY. This can be arranged by using this mapping:
```
KEY = {index_columns, primary_key_columns}.
VALUE = {nothing}
```
Using the primary key as a suffix makes DB::Get() unusable for secondary-index lookups. Instead, we will have to do lookups with:
```
get(secondary_index_key_val)
{
open cursor for (secondary_index_key_val)
read the first record
if (record > secondary_index_key_val)
return NOT_FOUND;
else
return FOUND;
}
```
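The key mapping and the `get()` pseudocode above can be sketched as follows (hypothetical Python, using a sorted list as a stand-in for LevelDB's ordered key space; none of these names are from the actual patch):

```python
# Hypothetical sketch (not the storage engine code): encode a non-unique
# secondary index entry by appending the primary key, and emulate get()
# as a prefix-checked cursor seek over a sorted key space.
import bisect

def index_key(index_cols: bytes, pk_cols: bytes) -> bytes:
    # KEY = {index_columns, primary_key_columns}; VALUE = {nothing}
    return index_cols + pk_cols

class SortedStore:
    """Stand-in for a LevelDB instance: sorted keys, cursor seeks."""
    def __init__(self):
        self.keys = []
    def put(self, key: bytes):
        bisect.insort(self.keys, key)
    def seek(self, prefix: bytes):
        # open a cursor at the first key >= prefix, as the pseudocode does
        i = bisect.bisect_left(self.keys, prefix)
        return self.keys[i] if i < len(self.keys) else None

def get(store: SortedStore, secondary_index_key_val: bytes) -> bool:
    first = store.seek(secondary_index_key_val)
    # FOUND only if the first record actually starts with the lookup value
    return first is not None and first.startswith(secondary_index_key_val)

store = SortedStore()
store.put(index_key(b"city=Oslo|", b"pk=7"))
store.put(index_key(b"city=Oslo|", b"pk=9"))   # non-unique: same index cols
print(get(store, b"city=Oslo|"))    # True
print(get(store, b"city=Bergen|"))  # False
```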
*TODO: we will not support unique secondary indexes in MS1. ALTER/CREATE TABLE statements attempting to add a unique index will fail. Is this ok?*
Concurrency handling
--------------------
We will use what was discussed in the "Pessimistic locking proposal".
Basic idea is: LevelDB's operations do blind overwrites. In order to make sure we're not overwriting anything, we will use this pattern:
```
acquire record lock;
read;
modify as necessary;
write;
release record lock;
```
Record locks can be obtained for {table\_name, primary\_key}. Locks are exclusive: for a given record, only one lock can be obtained at a time. A lock can also be obtained when its record doesn't exist (see INSERT below).
### C1. UPDATE
UPDATE will use the above scheme. It will get row locks for the keys it is reading in order to prevent concurrent updates.
### C.2 INSERT
INSERT will use a row lock to make sure the record of interest does not exist.
### C.3 DELETE
If a DELETE statement has the form of
```
DELETE FROM tbl WHERE tbl.primary_key=const
```
then it theoretically can be translated into a DB::Delete() call, that is, into a write-without-read. In other cases, we will need to do reads and put locks on the rows that we want to delete.
MS1 will only implement the variant with locking DELETE.
### C.4 SELECT
SELECT statements will use a read snapshot. They will not take locks, or check whether any locks exist, for the records they are reading. This is similar to the definition of the `read-committed` isolation level.
We will also support `SELECT FOR UPDATE`. In this mode, the read records will be locked with a write lock until the end of the transaction.
### C.5 Locking mechanism
As specified in previous sections, we will be employing locks on the values of {dbname, tablename, primary\_key\_value}. Locks will be exclusive: only one transaction can hold a lock at a time.
Locks are acquired one-by-one, which allows for deadlocks. There will be no deadlock detection or deadlock prevention systems. Instead, lock waits will time out after @@leveldb\_lock\_wait\_timeout milliseconds. When @@leveldb\_lock\_wait\_timeout==0, the lock system will not wait at all; an attempt to acquire a lock that is occupied will result in an immediate transaction abort.
Locks will be stored in a highly-concurrent hashtable. Current candidate for it is mysys/lf\_hash.
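A minimal sketch of such a lock table, assuming Python's `threading.Lock` as a stand-in for per-record locks (the real engine would use a lock-free hash such as mysys/lf\_hash; all names here are illustrative):

```python
# Hypothetical sketch of per-record exclusive locks with a wait timeout,
# keyed on (dbname, tablename, primary_key_value) as described above.
import threading

class RecordLockTable:
    def __init__(self, lock_wait_timeout_ms: int):
        self.timeout = lock_wait_timeout_ms / 1000.0
        self.guard = threading.Lock()
        self.locks = {}  # key -> threading.Lock (stand-in for lf_hash)

    def acquire(self, key) -> bool:
        with self.guard:
            lock = self.locks.setdefault(key, threading.Lock())
        if self.timeout == 0:
            # timeout == 0: do not wait at all, fail immediately
            return lock.acquire(blocking=False)
        # otherwise wait up to the timeout, then give up (txn would abort)
        return lock.acquire(timeout=self.timeout)

    def release(self, key):
        self.locks[key].release()

tbl = RecordLockTable(lock_wait_timeout_ms=100)
key = ("testdb", "t1", 42)
print(tbl.acquire(key))      # True: first taker succeeds
print(tbl.acquire(key))      # False: second attempt times out
tbl.release(key)
```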
### C.6 Applying transaction changes
The changes made by transaction will be accumulated as a LevelDB batch operation, and applied at transaction commit. This has a consequence:
**the transaction is unable to see its own changes until it commits**
We'll call the above the CANT-SEE-OWN-CHANGES property. The property is contrary to SQL semantics: in SQL, one expects to see the changes made in preceding statements. However, the set of transactions we're targeting can live with CANT-SEE-OWN-CHANGES, so we'll live with the property for MS1.
After MS1, LevelDB SE will make sure that CANT-SEE-OWN-CHANGES is not observed. It will use the following approach:
* keep track of what records have been modified by this transaction in a buffer $R.
* If SQL layer makes a request to read a row, then
+ Consult $R if the record was INSERTed. If yes, return what was inserted.
+ Consult $R if the record was modified. if yes, return what was recorded to be the result of modification
+ Consult $R if the record was deleted. If yes, return "record not found".
+ Finally, try reading the row from the LevelDB.
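The $R read path above can be sketched as follows (hypothetical Python; `$R` becomes a plain dict keyed by primary key, and plain dicts stand in for the LevelDB store):

```python
# Hypothetical sketch of the post-MS1 read path: consult the transaction's
# own write buffer $R before falling back to the store.
DELETED = object()  # tombstone marking a row deleted in this transaction

def read_row(r_buffer: dict, store: dict, key):
    if key in r_buffer:                          # INSERTed or modified here
        val = r_buffer[key]
        return None if val is DELETED else val   # deleted -> "not found"
    return store.get(key)                        # finally, read from LevelDB

store = {"pk1": "old"}
r = {"pk2": "inserted", "pk1": "modified"}
print(read_row(r, store, "pk1"))  # modified
print(read_row(r, store, "pk2"))  # inserted
r["pk1"] = DELETED
print(read_row(r, store, "pk1"))  # None
```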
Table Access methods
--------------------
MS1 will support:
* Full table scan.
* Index lookups and range scans over primary and secondary indexes.
### Optimizer statistics
* Estimate of #records in the table will be obtained from DB::GetApproximateSizes() (see below for details)
* Estimate of #records-in-range will be obtained from DB::GetApproximateSizes()
* There is no acceptable estimate for #rec\_per\_key of secondary indexes (or for prefixes of the primary key). MS1 will perform some trivial guesswork.
Note: DB::GetApproximateSizes() returns the amount of disk space occupied by the data. The number cannot be directly translated to #rows, because
* We do not always know average record length
* Disk data is compressed.
Because of this, the optimizer will have very imprecise input. This is expected to still be sufficient for MS1.
Write-optimized INSERTs
-----------------------
We will need to do fast bulk data loads. During a bulk load, writes-without-reads are OK: the user knows they are not overwriting data and does not care about @@rows\_affected.
These will be achieved as follows:
* there will be a thread-local @@leveldb\_bulk\_load variable.
* Its default value is FALSE.
* When it is set to true, INSERTS (which make ha\_leveldb::write() calls) will work in bulk-loading mode.
Bulk-loading mode means:
* INSERTs will be done in batches of @@leveldb\_bulk\_load\_size each
* INSERTs will take no locks and will do writes without reads. In other words, they will silently overwrite data.
* @@affected\_rows will return a value showing that all records were successfully inserted.
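Bulk-loading mode as described above can be sketched like this (hypothetical Python rather than the engine's C++; a dict stands in for the LevelDB store, and `batch_size` plays the role of @@leveldb\_bulk\_load\_size):

```python
# Hypothetical sketch of bulk-loading mode: writes are buffered and
# flushed in batches, with no locks and no prior reads.
class BulkLoader:
    def __init__(self, store: dict, batch_size: int):
        self.store = store
        self.batch_size = batch_size
        self.batch = []

    def write(self, key, value):
        self.batch.append((key, value))  # no lock taken, no read done
        if len(self.batch) >= self.batch_size:
            self.flush()

    def flush(self):
        for k, v in self.batch:
            self.store[k] = v            # blind overwrite, like the batch op
        self.batch.clear()

store = {}
loader = BulkLoader(store, batch_size=2)
for i in range(5):
    loader.write(i, f"row{i}")
loader.flush()                           # flush the final partial batch
print(len(store))  # 5
```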
What will not be supported
--------------------------
* Non-blocking schema changes will not be supported at all in the first milestone. All DDL-modifying operations will pump all the data from the original table to the table with the new DDL.
* Binlog XA on the master will not be supported.
* Crash-proof slave will not be supported.
* Building server packages (\*.rpm, \*.deb, etc) will not be supported (leveldb dependency may be challenging).
Other details
-------------
* The patch will be against mysql-5.6
* *TODO: How to run DROP TABLE* The only way we implement DROP TABLE is to delete record by record. The size of changes may become too big to be in RAM. If we split into multiple transactions, we'll have to handle crashes in the middle of DROP TABLE. *Q: can we avoid that for the first milestone?*
* *TODO: There is no efficient way to run TRUNCATE TABLE. Is this ok?*
* *TODO: In the above spec, nothing is said about max. transaction size. Is it ok not to have it for MS1?*
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Type Conversion Type Conversion
===============
Implicit type conversion takes place when MariaDB is using operands of different types, in order to make the operands compatible.
It is best practice not to rely upon implicit conversion; rather, use [CAST](../cast/index) to explicitly convert types.
### Rules for Conversion on Comparison
* If either argument is NULL, the result of the comparison is NULL unless the NULL-safe [<=>](../null-safe-equal/index) equality comparison operator is used.
* If both arguments are integers, they are compared as integers.
* If both arguments are strings, they are compared as strings.
* If one argument is decimal and the other argument is decimal or integer, they are compared as decimals.
* If one argument is decimal and the other argument is a floating point, they are compared as floating point values.
* If one argument is string and the other argument is integer, they are compared as decimals. This conversion was added in [MariaDB 10.3.36](https://mariadb.com/kb/en/mariadb-10336-release-notes/). Prior to 10.3.36, this combination was compared as floating point values, which did not always work well for huge 64-bit integers because of a possible precision loss on conversion to double.
* If a hexadecimal argument is not compared to a number, it is treated as a binary string.
* If a constant is compared to a TIMESTAMP or DATETIME, the constant is converted to a timestamp, unless used as an argument to the [IN](../in/index) function.
* In other cases, arguments are compared as floating point, or real, numbers.
Note that if a string column is being compared with a numeric value, MariaDB will not use the index on the column, as there are numerous alternatives that may evaluate as equal (see examples below).
#### Comparison Examples
Converting a string to a number:
```
SELECT 15+'15';
+---------+
| 15+'15' |
+---------+
| 30 |
+---------+
```
Converting a number to a string:
```
SELECT CONCAT(15,'15');
+-----------------+
| CONCAT(15,'15') |
+-----------------+
| 1515 |
+-----------------+
```
Floating point number errors:
```
SELECT '9746718491924563214' = 9746718491924563213;
+---------------------------------------------+
| '9746718491924563214' = 9746718491924563213 |
+---------------------------------------------+
| 1 |
+---------------------------------------------+
```
Numeric equivalence with strings:
```
SELECT '5' = 5;
+---------+
| '5' = 5 |
+---------+
| 1 |
+---------+
SELECT ' 5' = 5;
+------------+
| ' 5' = 5 |
+------------+
| 1 |
+------------+
SELECT ' 5 ' = 5;
+--------------+
| ' 5 ' = 5 |
+--------------+
| 1 |
+--------------+
1 row in set, 1 warning (0.000 sec)
SHOW WARNINGS;
+-------+------+--------------------------------------------+
| Level | Code | Message |
+-------+------+--------------------------------------------+
| Note | 1292 | Truncated incorrect DOUBLE value: ' 5 ' |
+-------+------+--------------------------------------------+
```
As a result of the above, MariaDB cannot use the index when comparing a string with a numeric value in the example below:
```
CREATE TABLE t (a VARCHAR(10), b VARCHAR(10), INDEX idx_a (a));
INSERT INTO t VALUES
('1', '1'), ('2', '2'), ('3', '3'),
('4', '4'), ('5', '5'), ('1', '5');
EXPLAIN SELECT * FROM t WHERE a = '3' \G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: t
type: ref
possible_keys: idx_a
key: idx_a
key_len: 13
ref: const
rows: 1
Extra: Using index condition
EXPLAIN SELECT * FROM t WHERE a = 3 \G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: t
type: ALL
possible_keys: idx_a
key: NULL
key_len: NULL
ref: NULL
rows: 6
Extra: Using where
```
### Rules for Conversion on Dyadic Arithmetic Operations
Implicit type conversion also takes place on dyadic arithmetic operations ([+](../addition-operator/index),[-](../subtraction-operator-/index),[\*](../multiplication-operator/index),[/](../division-operator/index)). MariaDB chooses the minimum data type that is guaranteed to fit the result and converts both arguments to the result data type.
For [addition (+)](../addition-operator/index), [subtraction (-)](../subtraction-operator-/index) and [multiplication (\*)](../multiplication-operator/index), the result data type is chosen as follows:
* If either of the arguments is an approximate number (float, double), the result is double.
* If either of the arguments is a string (char, varchar, text), the result is double.
* If either of the arguments is a decimal number, the result is decimal.
* If either of the arguments is of a temporal type with a non-zero fractional second precision (time(N), datetime(N), timestamp(N)), the result is decimal.
* If either of the arguments is of a temporal type with a zero fractional second precision (time(0), date, datetime(0), timestamp(0)), the result may vary between int, int unsigned, bigint or bigint unsigned, depending on the exact data type combination.
* If both arguments are integer numbers (tinyint, smallint, mediumint, bigint), the result may vary between int, int unsigned, bigint or bigint unsigned, depending of the exact data types and their signs.
For [division (/)](../division-operator/index), the result data type is chosen as follows:
* If either of the arguments is an approximate number (float, double), the result is double.
* If either of the arguments is a string (char, varchar, text), the result is double.
* Otherwise, the result is decimal.
#### Arithmetic Examples
Note that the above rules mean that when an argument of a temporal data type appears in addition or subtraction, it's treated as a number by default.
```
SELECT TIME'10:20:30' + 1;
+--------------------+
| TIME'10:20:30' + 1 |
+--------------------+
| 102031 |
+--------------------+
```
In order to do temporal addition or subtraction instead, use the [DATE\_ADD()](../date_add/index) or [DATE\_SUB()](../date_sub/index) functions, or an [INTERVAL](../date-and-time-units/index) expression as the second argument:
```
SELECT TIME'10:20:30' + INTERVAL 1 SECOND;
+------------------------------------+
| TIME'10:20:30' + INTERVAL 1 SECOND |
+------------------------------------+
| 10:20:31 |
+------------------------------------+
```
```
SELECT "2.2" + 3;
+-----------+
| "2.2" + 3 |
+-----------+
| 5.2 |
+-----------+
SELECT 2.2 + 3;
+---------+
| 2.2 + 3 |
+---------+
| 5.2 |
+---------+
SELECT 2.2 / 3;
+---------+
| 2.2 / 3 |
+---------+
| 0.73333 |
+---------+
SELECT "2.2" / 3;
+--------------------+
| "2.2" / 3 |
+--------------------+
| 0.7333333333333334 |
+--------------------+
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb InnoDB Temporary Tablespaces InnoDB Temporary Tablespaces
============================
**MariaDB starting with [10.2](../what-is-mariadb-102/index)**The use of the temporary tablespaces in InnoDB was introduced in [MariaDB 10.2](../what-is-mariadb-102/index). In earlier versions, temporary tablespaces exist as part of the InnoDB [system](../innodb-system-tablespaces/index) tablespace or were file-per-table depending on the configuration of the [innodb\_file\_per\_table](../innodb-system-variables/index#innodb_file_per_table) system variable.
When the user creates a temporary table using the [CREATE TEMPORARY TABLE](../create-table/index) statement and the engine is set to InnoDB, MariaDB creates a temporary tablespace file. When the table is not compressed, MariaDB writes to a shared temporary tablespace as defined by the [innodb\_temp\_data\_file\_path](../innodb-system-variables/index#innodb_temp_data_file_path) system variable. MariaDB does not allow the creation of ROW\_FORMAT=COMPRESSED temporary tables; all temporary tables will be uncompressed. MariaDB deletes the temporary tablespace when the server shuts down gracefully and recreates it when the server starts again. The temporary tablespace cannot be placed on a raw device.
Internal temporary tables (that is, temporary tables that cannot be kept in memory) use either Aria or MyISAM, depending on the [aria\_used\_for\_temp\_tables](../aria-system-variables/index#aria_used_for_temp_tables) system variable. You can set the default storage engine for user-created temporary tables using the [default\_tmp\_storage\_engine](../server-system-variables/index#default_tmp_storage_engine) system variable.
Sizing Temporary Tablespaces
----------------------------
In order to size temporary tablespaces, use the [innodb\_temp\_data\_file\_path](../innodb-system-variables/index#innodb_temp_data_file_path) system variable. This system variable can be specified as a command-line argument to [mysqld](../mysqld-options/index) or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index). For example:
```
[mariadb]
...
innodb_temp_data_file_path=ibtmp1:32M:autoextend
```
This system variable's syntax is the same as that of the [innodb\_data\_file\_path](../innodb-system-variables/index#innodb_data_file_path) system variable: that is, a file name, size and option. By default, it writes a 12MB autoextending file to `ibtmp1` in the data directory.
To increase the size of the temporary tablespace, you can add a path to an additional tablespace file to the value of the [innodb\_temp\_data\_file\_path](../innodb-system-variables/index#innodb_temp_data_file_path) system variable. Providing additional paths allows you to spread the temporary tablespace between multiple tablespace files. The last file can have the `autoextend` attribute, which ensures that you won't run out of space. For example:
```
[mariadb]
...
innodb_temp_data_file_path=ibtmp1:32M;ibtmp2:32M:autoextend
```
Unlike normal tablespaces, temporary tablespaces are deleted when you stop MariaDB. To shrink temporary tablespaces to their minimum sizes, restart the server.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb sysbench v0.5 - 3x Five Minute Runs on work with 5.2-wl86 sysbench v0.5 - 3x Five Minute Runs on work with 5.2-wl86
=========================================================
3x Five Minute Runs on work with 5.2-wl86 key cache partitions on and off
MariaDB 5.2-wl86 sysbench benchmark comparison with key\_cache\_partitions off and key\_cache\_partitions=7, in %.
Each test was run three times for 5 minutes.
```
Number of threads
1 4 8 16 32 64 128
sysbench test
delete -18.36 -20.66 -11.32 5.42 -2.91 -14.62 -3.47
insert -2.38 -30.11 -1.64 -0.98 -1.19 0.12 -2.37
oltp_complex_ro 0.16 2.61 4.03 2.99 3.10 5.73 20.95
oltp_complex_rw Dup key errors (due to sysbench)
oltp_simple -1.24 1.86 11.14 10.69 16.11 17.16 14.31
select -0.22 2.00 11.42 10.31 15.58 17.10 14.31
update_index -9.34 15.75 -0.36 -10.33 1.94 2.44 41.44
update_non_index 0.73 1.04 11.12 17.32 5.30 -0.24 -9.55
(MariaDB 5.2-wl86 key_cache_partitions off q/s /
MariaDB 5.2-wl86 key_cache_partitions=7 q/s * 100)
key_buffer_size = 32M
```
The benchmark was run on work: Linux openSUSE 11.1 (x86\_64), dual-socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data\_dir on a single disk.
MariaDB and MySQL were compiled with
```
BUILD/compile-amd64-max
```
MariaDB revision was:
```
lp:~maria-captains/maria/maria-5.2-wl86
revno: 2742
committer: Igor Babaev <[email protected]>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
WL#86: Partitioned key cache for MyISAM.
This is the base patch for the task.
```
sysbench was run with the following parameters:
```
--oltp-table-size=20000000 \ # 20 mio rows
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000"
```
and this variable part of parameters
```
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
```
Configuration used for MariaDB:
```
--no-defaults \
--datadir=$DATA_DIR \
--language=./sql/share/english \
--key_buffer_size=32M \
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=$TEMP_DIR"
# --key_cache_partitions=7 \
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb QA Tests QA Tests
========
Optimizer and the Random Query Generator
----------------------------------------
The RQG is used to test various Optimizer features. See the [Optimizer Quality](../optimizer-quality/index) article for more information.
Aria Engine Recovery
--------------------
The [QA - Aria Recovery](../qa-aria-recovery/index) page contains a plan on how to test it.
Upgrade and Installer Testing
-----------------------------
* Upgrades using .deb and RPM packages are tested using very simple tests in BuildBot by the various `bld_kvm*` builders.
### TODO:
* More complex tests around .deb, RPM and tarballs;
* Decide on specific upgrade/downgrade paths (e.g. MySQL 5.5 to [MariaDB 2.2](../what-is-mariadb-22/index)?) and methods (mysqldump, mysql\_upgrade) that we support and test each individually;
* Test the Windows installer and service. NSIS allows for scripted unattended installs by providing a `/SD` argument to functions such as `MessageBox`.
* Test the contents of the Windows package, e.g. if HELP, .test, etc. files are properly placed and runnable;
Linking Testing
---------------
The purpose of these tests is to check that various applications that use `libmysql` can be compiled, linked and run with MariaDB. They are run by the `compile-connectors` builder in BuildBot.
* Perl DBD::mysql
+ We configure and compile the Perl DBI MySQL driver. Then we run the test suite provided with it.
* PHP
+ We configure and compile both the `mysql` and `mysqli` PHP drivers without mysql-nd. For each, we run those tests from the PHP test suite that are known to be good (other tests fail for both MySQL and MariaDB).
### TODO:
* Perl and PHP with the embedded library
Connectors Testing
------------------
The purpose of those tests is to check that the libraries that implement the MySQL protocol can work with MariaDB.
* The `libmysql` library/connector is tested by the MTR test suite (since `mysqltest` links with it).
### TODO:
* PHP with the mysql-nd driver
* Connector C++
* JDBC
Replication Testing
-------------------
Individual applications:
* group commit:
```
perl runall.pl \
--engine=InnoDB \
--grammar=conf/replication/rpl_transactions.yy \
--gendata=conf/replication/rpl_transactions.zz \
--mysqld=--sync_binlog=1 \
--mysqld=--innodb-flush_log_at_trx_commit=1 \
--mysqld=--binlog-dbug_fsync_sleep=100000 \
--mysqld=--default-storage-engine=InnoDB \
--threads=15 \
--queries=1M \
--duration=600 \
--validator=None
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb OCT OCT
===
Syntax
------
```
OCT(N)
```
Description
-----------
Returns a string representation of the octal value of N, where N is a longlong ([BIGINT](../bigint/index)) number. This is equivalent to [CONV(N,10,8)](../conv/index). Returns NULL if N is NULL.
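For illustration, the same computation can be sketched in Python (`format(n, "o")` plays the role of CONV(N,10,8); the 64-bit mask models N being treated as a longlong, and `oct_str` is a hypothetical name, not a MariaDB function):

```python
# Hypothetical sketch of OCT(N): octal digits of N as a string, with N
# treated as a 64-bit longlong and OCT(NULL) returning NULL.
def oct_str(n):
    if n is None:
        return None                               # OCT(NULL) -> NULL
    return format(n & 0xFFFFFFFFFFFFFFFF, "o")    # octal digits, no prefix

print(oct_str(34))  # 42
print(oct_str(12))  # 14
```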
Examples
--------
```
SELECT OCT(34);
+---------+
| OCT(34) |
+---------+
| 42 |
+---------+
SELECT OCT(12);
+---------+
| OCT(12) |
+---------+
| 14 |
+---------+
```
See Also
--------
* [CONV()](../conv/index)
* [BIN()](../bin/index)
* [HEX()](../hex/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Information Schema USER_STATISTICS Table Information Schema USER\_STATISTICS Table
=========================================
The [Information Schema](../information_schema/index) `USER_STATISTICS` table holds statistics about user activity. This is part of the [User Statistics](../user-statistics/index) feature, which is not enabled by default.
You can use this table to find out such things as which user is causing the most load and which users are being abusive. You can also use this table to measure how close to capacity the server may be.
It contains the following columns:
| Field | Type | Notes |
| --- | --- | --- |
| `USER` | `varchar(48)` | The username. The value `'#mysql_system_user#'` appears when there is no username (such as for the slave SQL thread). |
| `TOTAL_CONNECTIONS` | `int(21)` | The number of connections created for this user. |
| `CONCURRENT_CONNECTIONS` | `int(21)` | The number of concurrent connections for this user. |
| `CONNECTED_TIME` | `int(21)` | The cumulative number of seconds elapsed while there were connections from this user. |
| `BUSY_TIME` | `double` | The cumulative number of seconds there was activity on connections from this user. |
| `CPU_TIME` | `double` | The cumulative CPU time elapsed while servicing this user's connections. |
| `BYTES_RECEIVED` | `int(21)` | The number of bytes received from this user's connections. |
| `BYTES_SENT` | `int(21)` | The number of bytes sent to this user's connections. |
| `BINLOG_BYTES_WRITTEN` | `int(21)` | The number of bytes written to the [binary log](../binary-log/index) from this user's connections. |
| `ROWS_READ` | `int(21)` | The number of rows read by this user's connections. |
| `ROWS_SENT` | `int(21)` | The number of rows sent by this user's connections. |
| `ROWS_DELETED` | `int(21)` | The number of rows deleted by this user's connections. |
| `ROWS_INSERTED` | `int(21)` | The number of rows inserted by this user's connections. |
| `ROWS_UPDATED` | `int(21)` | The number of rows updated by this user's connections. |
| `SELECT_COMMANDS` | `int(21)` | The number of `[SELECT](../select/index)` commands executed from this user's connections. |
| `UPDATE_COMMANDS` | `int(21)` | The number of `[UPDATE](../update/index)` commands executed from this user's connections. |
| `OTHER_COMMANDS` | `int(21)` | The number of other commands executed from this user's connections. |
| `COMMIT_TRANSACTIONS` | `int(21)` | The number of `[COMMIT](../commit/index)` commands issued by this user's connections. |
| `ROLLBACK_TRANSACTIONS` | `int(21)` | The number of `[ROLLBACK](../rollback/index)` commands issued by this user's connections. |
| `DENIED_CONNECTIONS` | `int(21)` | The number of connections denied to this user. |
| `LOST_CONNECTIONS` | `int(21)` | The number of this user's connections that were terminated uncleanly. |
| `ACCESS_DENIED` | `int(21)` | The number of times this user's connections issued commands that were denied. |
| `EMPTY_QUERIES` | `int(21)` | The number of times this user's connections sent empty queries to the server. |
| `TOTAL_SSL_CONNECTIONS` | `int(21)` | The number of [TLS connections](../secure-connections/index) created for this user. (>= [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/)) |
| `MAX_STATEMENT_TIME_EXCEEDED` | `int(21)` | The number of times a statement was aborted, because it was executed longer than its `[MAX\_STATEMENT\_TIME](../aborting-statements-that-take-longer-than-a-certain-time-to-execute/index)` threshold. (>= [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/)) |
Example
-------
```
SELECT * FROM information_schema.USER_STATISTICS\G
*************************** 1. row ***************************
USER: root
TOTAL_CONNECTIONS: 1
CONCURRENT_CONNECTIONS: 0
CONNECTED_TIME: 297
BUSY_TIME: 0.001725
CPU_TIME: 0.001982
BYTES_RECEIVED: 388
BYTES_SENT: 2327
BINLOG_BYTES_WRITTEN: 0
ROWS_READ: 0
ROWS_SENT: 12
ROWS_DELETED: 0
ROWS_INSERTED: 13
ROWS_UPDATED: 0
SELECT_COMMANDS: 4
UPDATE_COMMANDS: 0
OTHER_COMMANDS: 3
COMMIT_TRANSACTIONS: 0
ROLLBACK_TRANSACTIONS: 0
DENIED_CONNECTIONS: 0
LOST_CONNECTIONS: 0
ACCESS_DENIED: 0
EMPTY_QUERIES: 1
```
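To spot the accounts generating the most load, the table can be sorted by its counters. A minimal sketch, assuming user statistics collection is enabled (`userstat=1`):

```
SELECT USER, TOTAL_CONNECTIONS, BUSY_TIME, CPU_TIME
FROM information_schema.USER_STATISTICS
ORDER BY BUSY_TIME DESC
LIMIT 5;
```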
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Performance Schema session\_status Table
========================================
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**The `session_status` table was added in [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/).
The `session_status` table contains a list of status variables for the current session. The table only stores status variable statistics for threads which are instrumented, and does not collect statistics for `Com_xxx` variables.
The table contains the following columns:
| Column | Description |
| --- | --- |
| `VARIABLE_NAME` | The session status variable name. |
| `VARIABLE_VALUE` | The session status variable value. |
It is not possible to empty this table with a `TRUNCATE TABLE` statement.
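A single status variable for the current session can be read with an ordinary `SELECT`; a minimal sketch (the variable name shown is just an example):

```
SELECT VARIABLE_VALUE
FROM performance_schema.session_status
WHERE VARIABLE_NAME = 'Bytes_received';
```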
SQLyog: Community Edition
=========================
SQLyog (<https://webyog.com/product/sqlyog/>) is a GUI tool to manage MySQL and MariaDB servers and databases in physical, virtual, and cloud environments. DBAs, developers, and database architects alike, use SQLyog to visually compare, optimize, and document schemas.
Key Features

* Automatically synchronize data
* Visually compare data
* Import external data

Additional Highlights

* Runs on Microsoft Windows with no dependencies on runtimes (such as Microsoft .NET and Java) or database abstraction layers (such as ODBC and JDBC).
* Distributed as a free Community edition and as a paid, proprietary Ultimate edition.

The Community version is available at <https://github.com/webyog/sqlyog-community/blob/master/README.md>.
SHOW PRIVILEGES
===============
Syntax
------
```
SHOW PRIVILEGES
```
Description
-----------
`SHOW PRIVILEGES` shows the list of [system privileges](../grant/index) that the MariaDB server supports. The exact list of privileges depends on the version of your server.
Note that before [MariaDB 10.3.23](https://mariadb.com/kb/en/mariadb-10323-release-notes/), [MariaDB 10.4.13](https://mariadb.com/kb/en/mariadb-10413-release-notes/) and [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), the [Delete history](../grant/index#table-privileges) privilege displays as `Delete versioning rows` ([MDEV-20382](https://jira.mariadb.org/browse/MDEV-20382)).
Example
-------
From [MariaDB 10.5.9](https://mariadb.com/kb/en/mariadb-1059-release-notes/)
```
SHOW PRIVILEGES;
+--------------------------+---------------------------------------+--------------------------------------------------------------------+
| Privilege | Context | Comment |
+--------------------------+---------------------------------------+--------------------------------------------------------------------+
| Alter | Tables | To alter the table |
| Alter routine | Functions,Procedures | To alter or drop stored functions/procedures |
| Create | Databases,Tables,Indexes | To create new databases and tables |
| Create routine | Databases | To use CREATE FUNCTION/PROCEDURE |
| Create temporary tables | Databases | To use CREATE TEMPORARY TABLE |
| Create view | Tables | To create new views |
| Create user | Server Admin | To create new users |
| Delete | Tables | To delete existing rows |
| Delete history | Tables | To delete versioning table historical rows |
| Drop | Databases,Tables | To drop databases, tables, and views |
| Event | Server Admin | To create, alter, drop and execute events |
| Execute | Functions,Procedures | To execute stored routines |
| File | File access on server | To read and write files on the server |
| Grant option | Databases,Tables,Functions,Procedures | To give to other users those privileges you possess |
| Index | Tables | To create or drop indexes |
| Insert | Tables | To insert data into tables |
| Lock tables | Databases | To use LOCK TABLES (together with SELECT privilege) |
| Process | Server Admin | To view the plain text of currently executing queries |
| Proxy | Server Admin | To make proxy user possible |
| References | Databases,Tables | To have references on tables |
| Reload | Server Admin | To reload or refresh tables, logs and privileges |
| Binlog admin | Server | To purge binary logs |
| Binlog monitor | Server | To use SHOW BINLOG STATUS and SHOW BINARY LOG |
| Binlog replay | Server | To use BINLOG (generated by mariadb-binlog) |
| Replication master admin | Server | To monitor connected slaves |
| Replication slave admin | Server | To start/stop slave and apply binlog events |
| Slave monitor | Server | To use SHOW SLAVE STATUS and SHOW RELAYLOG EVENTS |
| Replication slave | Server Admin | To read binary log events from the master |
| Select | Tables | To retrieve rows from table |
| Show databases | Server Admin | To see all databases with SHOW DATABASES |
| Show view | Tables | To see views with SHOW CREATE VIEW |
| Shutdown | Server Admin | To shut down the server |
| Super | Server Admin | To use KILL thread, SET GLOBAL, CHANGE MASTER, etc. |
| Trigger | Tables | To use triggers |
| Create tablespace | Server Admin | To create/alter/drop tablespaces |
| Update | Tables | To update existing rows |
| Set user | Server | To create views and stored routines with a different definer |
| Federated admin | Server | To execute the CREATE SERVER, ALTER SERVER, DROP SERVER statements |
| Connection admin | Server | To bypass connection limits and kill other users' connections |
| Read_only admin | Server | To perform write operations even if @@read_only=ON |
| Usage | Server Admin | No privileges - allow connect only |
+--------------------------+---------------------------------------+--------------------------------------------------------------------+
41 rows in set (0.000 sec)
```
See Also
--------
* [SHOW CREATE USER](../show-create-user/index) shows how the user was created.
* [SHOW GRANTS](../show-grants/index) shows the `GRANTS/PRIVILEGES` for a user.
MariaDB Audit Plugin Options and System Variables
=================================================
There are several options and system variables related to the [MariaDB Audit Plugin](../server_audit-mariadb-audit-plugin/index), once it has been [installed](../mariadb-audit-plugin-entitymdashentity-installation/index). System variables can be displayed using the [SHOW VARIABLES](../show-variables/index) statement like so:
```
SHOW GLOBAL VARIABLES LIKE '%server_audit%';
+-------------------------------+-----------------------+
| Variable_name | Value |
+-------------------------------+-----------------------+
| server_audit_events | CONNECT,QUERY,TABLE |
| server_audit_excl_users | |
| server_audit_file_path | server_audit.log |
| server_audit_file_rotate_now | OFF |
| server_audit_file_rotate_size | 1000000 |
| server_audit_file_rotations | 9 |
| server_audit_incl_users | |
| server_audit_logging | ON |
| server_audit_mode | 0 |
| server_audit_output_type | file |
| server_audit_query_log_limit | 1024 |
| server_audit_syslog_facility | LOG_USER |
| server_audit_syslog_ident | mysql-server_auditing |
| server_audit_syslog_info | |
| server_audit_syslog_priority | LOG_INFO |
+-------------------------------+-----------------------+
```
To change the value of one of these variables, you can use the `SET` statement, or set them at the command-line when starting MariaDB. It's recommended that you set them in the MariaDB configuration for the server like so:
```
[mariadb]
...
server_audit_excl_users='bob,ted'
...
```
### System Variables
Below is a list of all system variables related to the Audit Plugin. See [Server System Variables](../server-system-variables/index) for a complete list of system variables and instructions on setting them. See also the [full list of MariaDB options, system and status variables](../full-list-of-mariadb-options-system-and-status-variables/index).
#### `server_audit_events`
* **Description:** If set, then this restricts audit logging to certain event types. If not set, then every event type is logged to the audit log. For example: *SET GLOBAL server\_audit\_events='connect, query'*
* **Commandline:** `--server-audit-events=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default Value:** Empty string
* **Valid Values:**
+ `CONNECT`, `QUERY`, `TABLE` (MariaDB Audit Plugin < 1.2.0)
+ `CONNECT`, `QUERY`, `TABLE`, `QUERY_DDL`, `QUERY_DML` (MariaDB Audit Plugin >= 1.2.0)
+ `CONNECT`, `QUERY`, `TABLE`, `QUERY_DDL`, `QUERY_DML`, `QUERY_DCL` (MariaDB Audit Plugin >=1.3.0)
+ `CONNECT`, `QUERY`, `TABLE`, `QUERY_DDL`, `QUERY_DML`, `QUERY_DCL`, `QUERY_DML_NO_SELECT` (MariaDB Audit Plugin >= 1.4.4)
+ See [MariaDB Audit Plugin - Versions](../mariadb-audit-plugin-versions/index) to determine which MariaDB releases contain each MariaDB Audit Plugin version.
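For instance, to restrict logging to connection events and DDL statements (a sketch; the `QUERY_DDL` type requires Audit Plugin 1.2.0 or later):

```
SET GLOBAL server_audit_events = 'CONNECT,QUERY_DDL';
SHOW GLOBAL VARIABLES LIKE 'server_audit_events';
```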
---
#### `server_audit_excl_users`
* **Description:** If not empty, it contains the list of users whose activity will NOT be logged. For example: `SET GLOBAL server_audit_excl_users='user_foo, user_bar'`. CONNECT records aren't affected by this variable - they are always logged. The user is still logged if it's specified in [server\_audit\_incl\_users](#server_audit_incl_users).
* **Commandline:** `--server-audit-excl-users=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default Value:** Empty string
* **Size limit:** 1024 characters
---
#### `server_audit_file_path`
* **Description:** When [server\_audit\_output\_type=file](#server_audit_output_type), sets the path and the filename to the log file. If the specified path exists as a directory, then the log will be created inside that directory with the name 'server\_audit.log'. Otherwise the value is treated as a filename. The default value is 'server\_audit.log', which means this file will be created in the database directory.
* **Commandline:** `--server-audit-file-path=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default Value:** `server_audit.log`
---
#### `server_audit_file_rotate_now`
* **Description:** When [server\_audit\_output\_type=file](#server_audit_output_type), the user can force the log file rotation by setting this variable to ON or 1.
* **Commandline:** `--server-audit-rotate-now[={0|1}]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default Value:** `OFF`
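For example, a rotation of the current audit log file can be forced at any time:

```
SET GLOBAL server_audit_file_rotate_now = ON;
```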
---
#### `server_audit_file_rotate_size`
* **Description:** When [server\_audit\_output\_type=file](#server_audit_output_type), this limits the size of the log file to the given number of bytes. When the limit is reached, the log is rotated: the current log file is renamed 'file\_path.1' and an empty 'file\_path' is created to continue logging into. The default value is 1000000.
* **Commandline:** `--server-audit-rotate-size=#`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `1000000`
* **Range:** `100` to `9223372036854775807`
---
#### `server_audit_file_rotations`
* **Description:** When [server\_audit\_output\_type=file](#server_audit_output_type), this specifies the number of rotations to save. If set to 0, the log never rotates. The default value is 9.
* **Commandline:** `--server-audit-rotations=#`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `9`
* **Range:** `0` to `999`
---
#### `server_audit_incl_users`
* **Description:** If not empty, it contains a comma-delimited list of users whose activity will be logged. For example: `SET GLOBAL server_audit_incl_users='user_foo, user_bar'`. CONNECT records aren't affected by this variable - they are always logged. This setting has higher priority than [server\_audit\_excl\_users](#server_audit_excl_users). So if the same user is specified both in incl\_ and excl\_ lists, they will still be logged.
* **Commandline:** `--server-audit-incl-users=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default Value:** Empty string
* **Size limit:** 1024 characters
---
#### `server_audit_loc_info`
* **Description:** Used by plugin internals. It has no useful meaning to users.
+ In earlier versions, users see it as a read-only variable.
+ In later versions, it is hidden from the user.
* **Commandline:** N/A
* **Scope:** Global
* **Dynamic:** No
* **Data Type:** `string`
* **Default Value:** Empty string
* **Introduced:** [MariaDB 10.1.12](https://mariadb.com/kb/en/mariadb-10112-release-notes/), [MariaDB 10.0.24](https://mariadb.com/kb/en/mariadb-10024-release-notes/), [MariaDB 5.5.48](https://mariadb.com/kb/en/mariadb-5548-release-notes/)
* **Hidden:** [MariaDB 10.1.18](https://mariadb.com/kb/en/mariadb-10118-release-notes/), [MariaDB 10.0.28](https://mariadb.com/kb/en/mariadb-10028-release-notes/), [MariaDB 5.5.53](https://mariadb.com/kb/en/mariadb-5553-release-notes/)
---
#### `server_audit_logging`
* **Description:** Enables/disables the logging. Expected values are ON/OFF. For example: `SET GLOBAL server_audit_logging=ON`. If [server\_audit\_output\_type](#server_audit_output_type) is `file`, enabling logging creates/opens the log file, so [server\_audit\_file\_path](#server_audit_file_path) should be properly specified beforehand; the same applies to the SYSLOG-related parameters. Logging is turned off by default.
* **Commandline:** `--server-audit-logging[={0|1}]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `boolean`
* **Default Value:** `OFF`
---
#### `server_audit_mode`
* **Description:** This variable doesn't have any distinctive meaning for a user. Its value mostly reflects the server version with which the plugin was started and is intended to be used by developers for testing.
* **Commandline:** `--server-audit-mode[=#]`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `0`
* **Range:** `0` to `1`
---
#### `server_audit_output_type`
* **Description:** Specifies the desired output type, either `SYSLOG` or `FILE`. For example: `SET GLOBAL server_audit_output_type=file`. With `file`, log records are saved to the rotating log file whose name is set by the [server\_audit\_file\_path](#server_audit_file_path) variable; with `syslog`, log records are sent to the local syslogd daemon through the standard <syslog.h> API. The default value is `file`.
* **Commandline:** `--server-audit-output-type=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `enum`
* **Default Value:** `file`
* **Valid Values:** `SYSLOG`, `FILE`
---
#### `server_audit_query_log_limit`
* **Description:** Limit on the length of the query string in a record.
* **Commandline:** `--server-audit-query-log-limit=#`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `numeric`
* **Default Value:** `1024`
* **Range:** `0` to `2147483647`
---
#### `server_audit_syslog_facility`
* **Description:** SYSLOG-mode variable. It defines the 'facility' of the records that will be sent to the syslog. Later the log can be filtered by this parameter.
* **Commandline:** `--server-audit-syslog-facility=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `enum`
* **Default Value:** `LOG_USER`
* **Valid Values:** `LOG_USER`, `LOG_MAIL`, `LOG_DAEMON`, `LOG_AUTH`, `LOG_SYSLOG`, `LOG_LPR`, `LOG_NEWS`, `LOG_UUCP`, `LOG_CRON`, `LOG_AUTHPRIV`, `LOG_FTP`, and `LOG_LOCAL0`–`LOG_LOCAL7`.
---
#### `server_audit_syslog_ident`
* **Description:** SYSLOG-mode variable. String value for the 'ident' part of each syslog record. Default value is 'mysql-server\_auditing'. New value becomes effective only after restarting the logging.
* **Commandline:** `--server-audit-syslog-ident=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default Value:** `mysql-server_auditing`
---
#### `server_audit_syslog_info`
* **Description:** SYSLOG-mode variable. The 'info' string to be added to the syslog records. Can be changed any time.
* **Commandline:** `--server-audit-syslog-info=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `string`
* **Default Value:** Empty string
---
#### `server_audit_syslog_priority`
* **Description:** SYSLOG-mode variable. Defines the priority of the log records for the syslogd.
* **Commandline:** `--server-audit-syslog-priority=value`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** `enum`
* **Default Value:** `LOG_INFO`
* **Valid Values:** `LOG_EMERG`, `LOG_ALERT`, `LOG_CRIT`, `LOG_ERR`, `LOG_WARNING`, `LOG_NOTICE`, `LOG_INFO`, `LOG_DEBUG`
---
### Options
#### `server_audit`
* **Description:** Controls how the server should treat the plugin when the server starts up.
+ Valid values are:
- `OFF` - Disables the plugin without removing it from the `[mysql.plugin](../mysqlplugin-table/index)` table.
- `ON` - Enables the plugin. If the plugin cannot be initialized, then the server will still continue starting up, but the plugin will be disabled.
- `FORCE` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error.
- `FORCE_PLUS_PERMANENT` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error. In addition, the plugin cannot be uninstalled with `[UNINSTALL SONAME](../uninstall-soname/index)` or `[UNINSTALL PLUGIN](../uninstall-plugin/index)` while the server is running.
+ See [MariaDB Audit Plugin - Installation: Prohibiting Uninstallation](../mariadb-audit-plugin-installation/index#prohibiting-uninstallation) for more information on one use case.
+ See [Plugin Overview: Configuring Plugin Activation at Server Startup](../plugin-overview/index#configuring-plugin-activation-at-server-startup) for more information.
* **Commandline:** `--server-audit=val`
* **Data Type:** `enumerated`
* **Default Value:** `ON`
* **Valid Values:** `OFF`, `ON`, `FORCE`, `FORCE_PLUS_PERMANENT`
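For example, to load the plugin at startup and protect it from being uninstalled at runtime, the option can be placed in an option file (a sketch; `plugin_load_add` is one way to load the plugin, see the installation page):

```
[mariadb]
...
plugin_load_add = server_audit
server_audit = FORCE_PLUS_PERMANENT
```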
---
General Query Log
=================
The general query log is a log of every SQL query received from a client, as well as each client connect and disconnect. Since it's a record of every query received by the server, it can grow large quite quickly.
However, if you only want a record of queries that change data, it might be better to use the [binary log](../binary-log/index) instead. One important difference is that the [binary log](../binary-log/index) only logs a query when the transaction is committed by the server, but the general query log logs a query immediately when it is received by the server.
Enabling the General Query Log
------------------------------
The general query log is disabled by default.
To enable the general query log, set the `[general\_log](../server-system-variables/index#general_log)` system variable to `1`. It can be changed dynamically with `[SET GLOBAL](../set/index#global-session)`. For example:
```
SET GLOBAL general_log=1;
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
general_log
```
Configuring the General Query Log Filename
------------------------------------------
By default, the general query log is written to `${hostname}.log` in the `[datadir](../server-system-variables/index#datadir)` directory. However, this can be changed.
One way to configure the general query log filename is to set the `[general\_log\_file](../server-system-variables/index#general_log_file)` system variable. It can be changed dynamically with `[SET GLOBAL](../set/index#global-session)`. For example:
```
SET GLOBAL general_log_file='mariadb.log';
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
general_log
general_log_file=mariadb.log
```
If it is a relative path, then the `[general\_log\_file](../server-system-variables/index#general_log_file)` is relative to the `[datadir](../server-system-variables/index#datadir)` directory.
However, the `[general\_log\_file](../server-system-variables/index#general_log_file)` system variable can also be an absolute path. For example:
```
[mariadb]
...
general_log
general_log_file=/var/log/mysql/mariadb.log
```
Another way to configure the general query log filename is to set the `[log-basename](../mysqld-options/index#-log-basename)` option, which configures MariaDB to use a common prefix for all log files (e.g. general query log, [slow query log](../slow-query-log/index), [error log](../error-log/index), [binary logs](../binary-log/index), etc.). The general query log filename will be built by adding a `.log` extension to this prefix. This option cannot be set dynamically. It can be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
log-basename=mariadb
general_log
```
The `[log-basename](../mysqld-options/index#-log-basename)` cannot be an absolute path. The log file name is relative to the `[datadir](../server-system-variables/index#datadir)` directory.
Choosing the General Query Log Output Destination
-------------------------------------------------
The general query log can either be written to a file on disk, or it can be written to the `[general\_log](../mysqlgeneral_log-table/index)` table in the `[mysql](../the-mysql-database-tables/index)` database. To choose the general query log output destination, set the `[log\_output](../server-system-variables/index#log_output)` system variable.
### Writing the General Query Log to a File
The general query log is output to a file by default. However, it can be explicitly chosen by setting the `[log\_output](../server-system-variables/index#log_output)` system variable to `FILE`. It can be changed dynamically with `[SET GLOBAL](../set/index#global-session)`. For example:
```
SET GLOBAL log_output='FILE';
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
log_output=FILE
general_log
general_log_file=queries.log
```
### Writing the General Query Log to a Table
The general query log can be written to the `[general\_log](../mysqlgeneral_log-table/index)` table in the `[mysql](../the-mysql-database-tables/index)` database by setting the `[log\_output](../server-system-variables/index#log_output)` system variable to `TABLE`. It can be changed dynamically with `[SET GLOBAL](../set/index#global-session)`. For example:
```
SET GLOBAL log_output='TABLE';
```
It can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
log_output=TABLE
general_log
```
Some rows in this table might look like this:
```
SELECT * FROM mysql.general_log\G
*************************** 1. row ***************************
event_time: 2014-11-11 08:40:04.117177
user_host: root[root] @ localhost []
thread_id: 74
server_id: 1
command_type: Query
argument: SELECT * FROM test.s
*************************** 2. row ***************************
event_time: 2014-11-11 08:40:10.501131
user_host: root[root] @ localhost []
thread_id: 74
server_id: 1
command_type: Query
argument: SELECT * FROM mysql.general_log
...
```
See [Writing logs into tables](../writing-logs-into-tables/index) for more information.
Disabling the General Query Log for a Session
---------------------------------------------
A user with the [SUPER](../grant/index#global-privileges) privilege can disable logging to the general query log for a connection by setting the [SQL\_LOG\_OFF](../server-system-variables/index#sql_log_off) system variable to `1`. For example:
```
SET SESSION SQL_LOG_OFF=1;
```
Disabling the General Query Log for Specific Statements
-------------------------------------------------------
In [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/) and later, it is possible to disable logging to the general query log for specific types of statements by setting the `[log\_disabled\_statements](../server-system-variables/index#log_disabled_statements)` system variable. This option cannot be set dynamically. It can be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
log_output=FILE
general_log
general_log_file=queries.log
log_disabled_statements='slave,sp'
```
Rotating the General Query Log on Unix and Linux
------------------------------------------------
Unix and Linux distributions offer the [logrotate](https://linux.die.net/man/8/logrotate) utility, which makes it very easy to rotate log files. See [Rotating Logs on Unix and Linux](../rotating-logs-on-unix-and-linux/index) for more information on how to use this utility to rotate the general query log.
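As a sketch, a logrotate configuration for a general query log written to /var/log/mysql/mariadb.log might look like the following (the path, schedule, and rotation count are assumptions; the flush step makes the server reopen the renamed log file, and credentials must be supplied via an option file):

```
# /etc/logrotate.d/mariadb-general-log (hypothetical)
/var/log/mysql/mariadb.log {
    weekly
    rotate 4
    missingok
    compress
    postrotate
        # reopen the general query log after it has been renamed
        mysqladmin flush-logs
    endscript
}
```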
See Also
--------
* [MariaDB audit plugin](../server_audit-mariadb-audit-plugin/index)
CURRENT\_ROLE
=============
Syntax
------
```
CURRENT_ROLE, CURRENT_ROLE()
```
Description
-----------
Returns the current [role](../roles/index) name. This determines your access privileges. The return value is a string in the utf8 [character set](../data-types-character-sets-and-collations/index).
If there is no current role, NULL is returned.
The output of `SELECT CURRENT_ROLE` is equivalent to the contents of the [ENABLED\_ROLES](../information-schema-enabled_roles-table/index) Information Schema table.
[USER()](../user/index) returns the combination of user and host used to login. [CURRENT\_USER()](../current_user/index) returns the account used to determine current connection's privileges.
Examples
--------
```
SELECT CURRENT_ROLE;
+--------------+
| CURRENT_ROLE |
+--------------+
| NULL |
+--------------+
SET ROLE staff;
SELECT CURRENT_ROLE;
+--------------+
| CURRENT_ROLE |
+--------------+
| staff |
+--------------+
```
Optimizer Trace Guide
=====================
Optimizer trace uses the JSON format. It is basically a structured log file showing what actions were taken by the query optimizer.
A Basic Example
---------------
Let's take a simple query:
```
MariaDB> explain select * from t1 where a<10;
+------+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
| 1 | SIMPLE | t1 | range | a | a | 5 | NULL | 10 | Using index condition |
+------+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
```
One can see the full trace [here](../basic-optimizer-trace-example/index). Taking only the component names, one gets:
```
MariaDB> select * from information_schema.optimizer_trace limit 1\G
*************************** 1. row ***************************
QUERY: select * from t1 where a<10
TRACE:
{
"steps": [
{
"join_preparation": { ... }
},
{
"join_optimization": {
"select_id": 1,
"steps": [
{ "condition_processing": { ... } },
{ "table_dependencies": [ ... ] },
{ "ref_optimizer_key_uses": [ ... ] },
{ "rows_estimation": [
{
"range_analysis": {
"analyzing_range_alternatives" : { ... },
"chosen_range_access_summary": { ... },
},
"selectivity_for_indexes" : { ... },
"selectivity_for_columns" : { ... }
}
]
},
{ "considered_execution_plans": [ ... ] },
{ "attaching_conditions_to_tables": { ... } }
]
}
},
{
"join_execution": { ... }
}
]
}
```
Trace Structure
---------------
For each SELECT, there are two "Steps":
* `join_preparation`
* `join_optimization`
Join preparation shows early query rewrites. `join_optimization` is where most of the query optimizations are done. They are:
* `condition_processing` - basic rewrites in WHERE/ON conditions.
* `ref_optimizer_key_uses` - Construction of possible ways to do ref and eq\_ref accesses.
* `rows_estimation` - Consideration of range and index\_merge accesses.
* `considered_execution_plans` - Join optimization itself, that is, choice of the join order.
* `attaching_conditions_to_tables` - Once the join order is fixed, parts of the WHERE clause are "attached" to tables to filter out rows as early as possible.
The above steps are for just one SELECT. If the query has subqueries, each SELECT will have these steps, and there will be extra steps/rewrites to handle the subquery construct itself.
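The top-level structure above can be walked with a few lines of any JSON-aware language. Here is a minimal Python sketch; the trace snippet is hard-coded for illustration, whereas in practice the `TRACE` column would be fetched from `information_schema.OPTIMIZER_TRACE` through a connector:

```python
import json

# A trimmed optimizer trace, hard-coded here for illustration.
trace_text = '''
{
  "steps": [
    {"join_preparation": {"select_id": 1, "steps": []}},
    {"join_optimization": {"select_id": 1, "steps": [
      {"condition_processing": {}},
      {"rows_estimation": []},
      {"considered_execution_plans": []},
      {"attaching_conditions_to_tables": {}}
    ]}},
    {"join_execution": {}}
  ]
}
'''

trace = json.loads(trace_text)

# Each top-level step is an object with a single key naming the phase.
for step in trace["steps"]:
    (phase,) = step.keys()
    print(phase)
    # join_optimization contains the per-SELECT sub-steps listed above.
    if phase == "join_optimization":
        for sub in step[phase]["steps"]:
            (name,) = sub.keys()
            print(" ", name)
```

This prints the phase names in order, with the optimization sub-steps indented, which is often all that is needed to orient oneself in a large trace.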
Extracting Trace Components
---------------------------
If you are interested in some particular part of the trace, MariaDB has two functions that come in handy:
* [JSON\_EXTRACT](../json_extract/index) extracts a part of JSON document
* [JSON\_DETAILED](../json_detailed/index) presents it in a user-readable way.
For example, the contents of the `analyzing_range_alternatives` node can be extracted like so:
```
MariaDB> select JSON_DETAILED(JSON_EXTRACT(trace, '$**.analyzing_range_alternatives'))
-> from INFORMATION_SCHEMA.OPTIMIZER_TRACE\G
*************************** 1. row ***************************
JSON_DETAILED(JSON_EXTRACT(trace, '$**.analyzing_range_alternatives')): [
{
"range_scan_alternatives":
[
{
"index": "a_b_c",
"ranges":
[
"(1) <= (a,b) < (4,50)"
],
"rowid_ordered": false,
"using_mrr": false,
"index_only": false,
"rows": 4,
"cost": 6.2509,
"chosen": true
}
],
"analyzing_roworder_intersect":
{
"cause": "too few roworder scans"
},
"analyzing_index_merge_union": []
}
]
```
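Outside the server, the recursive-descent lookup that `JSON_EXTRACT(trace, '$**.key')` performs can be approximated in a few lines of Python. The trace fragment below is a hypothetical, trimmed-down example:

```python
import json

def json_search(node, key):
    """Collect every value stored under `key` at any depth --
    roughly what JSON_EXTRACT(doc, '$**.key') returns."""
    found = []
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                found.append(v)
            found.extend(json_search(v, key))
    elif isinstance(node, list):
        for item in node:
            found.extend(json_search(item, key))
    return found

# A fragment shaped like the trace above (hypothetical values).
trace = json.loads('''
{"steps": [{"join_optimization": {"steps": [
  {"rows_estimation": [{"range_analysis": {
    "analyzing_range_alternatives": {
      "range_scan_alternatives": [{"index": "a_b_c", "chosen": true}]
    }}}]}
]}}]}
''')

for hit in json_search(trace, "analyzing_range_alternatives"):
    print(json.dumps(hit, indent=2))
```

This is convenient when the trace has been saved to a file and the server is no longer available.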
Examples of Various Information in the Trace
--------------------------------------------
### Basic Rewrites
A lot of applications construct database query text on the fly, which sometimes means that the query has constructs that are repetitive or redundant. In most cases, the optimizer will be able to remove them. One can check the trace to be sure:
```
explain select * from t1 where not (col1 >= 3);
```
Optimizer trace will show:
```
"steps": [
{
"join_preparation": {
"select_id": 1,
"steps": [
{
"expanded_query": "select t1.a AS a,t1.b AS b,t1.col1 AS col1 from t1 where t1.col1 < 3"
}
```
Here, one can see that `NOT` was removed.
Similarly, one can also see that `IN(...)` with one element is the same as equality:
```
explain select * from t1 where col1 in (1);
```
will show
```
"join_preparation": {
"select_id": 1,
"steps": [
{
"expanded_query": "select t1.a AS a,t1.b AS b,t1.col1 AS col1 from t1 where t1.col1 = 1"
```
On the other hand, converting a UTF-8 column to UTF-8 is not removed:
```
explain select * from t1 where convert(utf8_col using utf8) = 'hello';
```
will show
```
"join_preparation": {
"select_id": 1,
"steps": [
{
"expanded_query": "select t1.a AS a,t1.b AS b,t1.col1 AS col1,t1.utf8_col AS utf8_col from t1 where convert(t1.utf8_col using utf8) = 'hello'"
}
```
so redundant `CONVERT` calls should be used with caution.
### VIEW Processing
MariaDB has two algorithms to handle VIEWs: merging and materialization. If you run a query that uses a VIEW, the trace will have either
```
"view": {
"table": "view1",
"select_id": 2,
"algorithm": "merged"
}
```
or
```
{
"view": {
"table": "view2",
"select_id": 2,
"algorithm": "materialized"
}
},
```
depending on which algorithm was used.
### Range Optimizer - What Ranges Will Be Scanned
The MySQL/MariaDB optimizer has a complex part called the Range Optimizer. This is a module that examines WHERE (and ON) clauses and constructs index ranges that need to be scanned to answer the query. The rules for constructing the ranges are quite complex.
An example: Consider a table
```
create table some_events (
start_date date,
end_date date,
...
key (start_date, end_date)
);
```
and a query:
```
mysql> explain select * from some_events where start_date >= '2019-09-10' and end_date <= '2019-09-14';
+------+-------------+-------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------------+------+---------------+------+---------+------+------+-------------+
| 1 | SIMPLE | some_events | ALL | start_date | NULL | NULL | NULL | 1000 | Using where |
+------+-------------+-------------+------+---------------+------+---------+------+------+-------------+
```
One might think that the optimizer would be able to use the restrictions on both *start\_date* and *end\_date* to construct a narrow range to be scanned. But this is not so: one of the restrictions creates a left-endpoint range and the other creates a right-endpoint range, so they cannot be combined. The trace confirms it:
```
select
JSON_DETAILED(JSON_EXTRACT(trace, '$**.analyzing_range_alternatives')) as trace
from information_schema.optimizer_trace\G
*************************** 1. row ***************************
trace: [
{
"range_scan_alternatives":
[
{
"index": "start_date",
"ranges":
[
"(2019-09-10,NULL) < (start_date,end_date)"
],
...
```
As the trace shows, the potential range uses only one of the two bounds.
### Ref Access Options
Index-based nested-loop joins are called "ref access" in the MySQL/MariaDB optimizer.
The optimizer analyzes the WHERE/ON conditions and collects all equality conditions that can be used by ref access using some index.
The list of conditions can be found in the `ref_optimizer_key_uses` node. (TODO example)
### Join Optimization
The join optimizer's node is named `considered_execution_plans`.
The optimizer constructs the join orders in a left-to-right fashion. That is, if the query is a join of three tables:
```
select * from t1, t2, t3 where ...
```
then the optimizer will
* Pick the first table (say, it is t1),
* consider adding another table (say, t2), and construct a prefix "t1, t2"
* consider adding the third table (t3), and construct the prefix "t1, t2, t3", which is a complete join plan.

Other join orders will be considered as well.
The basic operation here is: "given a join prefix of tables A,B,C ..., try adding table X to it". In JSON, it looks like this:
```
{
"plan_prefix": ["t1", "t2"],
"table": "t3",
"best_access_path": {
"considered_access_paths": [
{
...
}
]
}
}
```
(search for `plan_prefix` followed by `table`).
If you are interested in how the join order of `t1, t2, t3` was constructed (or not constructed), you need to search for these patterns:
* `"plan_prefix":[], "table":"t1"`
* `"plan_prefix":["t1"], "table":"t2"`
* `"plan_prefix":["t1", "t2"], "table":"t3"`
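These pattern searches can also be automated. The sketch below walks a trace fragment and collects every (plan_prefix, table) pair; the `rest_of_plan` nesting is a simplified, hypothetical rendering of the real trace layout:

```python
import json

def join_order_steps(node):
    """Yield (plan_prefix, table) pairs found anywhere in the trace,
    i.e. every 'try adding table X to prefix A,B,...' decision."""
    if isinstance(node, dict):
        if "plan_prefix" in node and "table" in node:
            yield node["plan_prefix"], node["table"]
        for v in node.values():
            yield from join_order_steps(v)
    elif isinstance(node, list):
        for item in node:
            yield from join_order_steps(item)

# Hypothetical considered_execution_plans fragment for a 3-table join.
trace = json.loads('''
{"considered_execution_plans": [
  {"plan_prefix": [], "table": "t1", "rest_of_plan": [
    {"plan_prefix": ["t1"], "table": "t2", "rest_of_plan": [
      {"plan_prefix": ["t1", "t2"], "table": "t3"}
    ]}
  ]}
]}
''')

for prefix, table in join_order_steps(trace):
    print(prefix, "+", table)
```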
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Merging into MariaDB Merging into MariaDB
=====================
This category explains how we merge various source trees into MariaDB.
| Title | Description |
| --- | --- |
| [Merging with a Merge Tree](../merging-with-a-merge-tree/index) | Applies to XtraDB, InnoDB-5.6, Performance Schema 5.6, SphinxSE, PCRE |
| [Creating a New Merge Tree](../creating-a-new-merge-tree/index) | Obsolete article on creating a new merge tree in bzr |
| [Merging from MySQL (obsolete)](../merging-from-mysql-obsolete/index) | Note: This page is obsolete. The information is old, outdated, or otherwise... |
| [Merging TokuDB (obsolete)](../merging-tokudb-obsolete/index) | Note: This page is obsolete. The information is old, outdated, or otherwise... |
| [Merging New XtraDB Releases (obsolete)](../merging-new-xtradb-releases-obsolete/index) | Note: This page is obsolete. The information is old, outdated, or otherwise... |
mariadb Sysbench Results Sysbench Results
================
Results from various Sysbench runs. The data is in OpenDocument Spreadsheet format (.ods).
For reference, the "perro" and "work" systems were configured as follows:
| | |
| --- | --- |
| perro | Linux openSUSE 11.1 (x86\_64), single socket dual-core Intel 3.2GHz. with 1MB L2 cache, 2GB RAM, data\_dir on 2 disk software RAID 0 |
| work | Linux openSUSE 11.1 (x86\_64), dual socket quad-core Intel 3.0GHz. with 6MB L2 cache, 8 GB RAM, data\_dir on single disk. |
sysbench v0.5 results
---------------------
* Single Five Minutes Runs on T500 Laptop, OO.org spreadsheet: [Sysbench\_five\_minutes\_mariadb\_mysql\_t500.ods](http://askmonty.org/sysbench-results/Sysbench_five_minutes_mariadb_mysql_t500.ods)
* Single Five Minutes Runs on perro, OO.org spreadsheet: [Sysbench\_five\_minutes\_mariadb\_mysql\_perro.ods](http://askmonty.org/sysbench-results/Sysbench_five_minutes_mariadb_mysql_perro.ods)
* Single Five Minutes Runs on work, OO.org spreadsheet: [Sysbench\_five\_minutes\_mariadb\_mysql\_work.ods](http://askmonty.org/sysbench-results/Sysbench_five_minutes_mariadb_mysql_work.ods)
* Three Times Five Minutes Runs on work with 5.1.42, OO.org spreadsheet: [Sysbench\_five\_minutes\_mariadb\_mysql\_work\_5.1.42.ods](http://askmonty.org/sysbench-results/Sysbench_five_minutes_mariadb_mysql_work_5.1.42.ods)
* Three Times Five Minutes Runs on work with 5.2-wl86 key\_cache\_partitions on and off, OO.org spreadsheet: [Sysbench\_five\_minutes\_mariadb-5.2-wl86\_key\_cache\_partitions\_on\_off\_work.ods](http://askmonty.org/sysbench-results/Sysbench_five_minutes_mariadb-5.2-wl86_key_cache_partitions_on_off_work.ods)
* Three Times Five Minutes Runs on work with 5.1 vs. 5.2-wl86 key\_cache\_partitions off, OO.org spreadsheet: [Sysbench\_five\_minutes\_mariadb-5.2-wl86\_key\_cache\_partitions\_on\_off\_work.ods](http://askmonty.org/sysbench-results/Sysbench_five_minutes_mariadb-5.1_5.2-wl86_key_cache_partitions_off_work.ods)
* Three Times Fifteen Minutes Runs on perro with 5.2-wl86 key\_cache\_partitions off, 8, and 32 and key\_buffer\_size 400, OO.org spreadsheet: [Sysbench\_fifteen\_minutes\_mariadb-5.2-wl86\_key\_cache\_partitions\_off\_8\_32\_kbs\_400.ods](http://askmonty.org/sysbench-results/Sysbench_fifteen_minutes_mariadb-5.2-wl86_key_cache_partitions_off_8_32_kbs_400.ods)
* Three Times Fifteen Minutes Runs on perro with 5.2-wl86 key\_cache\_partitions off, 8, and 32 and key\_buffer\_size 75, OO.org spreadsheet: [Sysbench\_fifteen\_minutes\_mariadb-5.2-wl86\_key\_cache\_partitions\_off\_8\_32\_kbs\_75.ods](http://askmonty.org/sysbench-results/Sysbench_fifteen_minutes_mariadb-5.2-wl86_key_cache_partitions_off_8_32_kbs_75.ods)
* select\_random\_ranges and select\_random\_points, OO.org spreadsheet: [Sysbench\_select\_random\_ranges\_points.ods](http://askmonty.org/sysbench-results/Sysbench_select_random_ranges_points.ods)
* select\_100\_random\_points.lua result on perro with key\_cache\_partitions off and 32, OO.org spreadsheet: [Sysbench\_v0.5\_select\_100\_random\_points.ods](http://askmonty.org/sysbench-results/Sysbench_v0.5_select_100_random_points.ods)
* `select_random_points.lua --random-points=50` result on perro with key\_cache\_partitions off and 32, OO.org spreadsheet: [Sysbench\_v0.5\_select\_50\_random\_points.ods](http://askmonty.org/sysbench-results/Sysbench_v0.5_select_50_random_points.ods)
* `select_random_points.lua --random-points=10` result on perro with key\_cache\_partitions off and 32, OO.org spreadsheet: [Sysbench\_v0.5\_select\_10\_random\_points.ods](http://askmonty.org/sysbench-results/Sysbench_v0.5_select_10_random_points.ods)
* `select_random_points.lua --random-points=10, 50, and 100` results on perro with key\_cache\_segments off, 32, and 64 OO.org spreadsheet: [Sysbench\_v0.5\_select\_random\_points\_10\_50\_100\_perro.ods](http://askmonty.org/sysbench-results/Sysbench_v0.5_select_random_points_10_50_100_perro.ods)
* `select_random_points.lua --random-points=10, 50, and 100` results on pitbull with key\_cache\_segments off, 32, and 64 OO.org spreadsheet: [Sysbench\_v0.5\_select\_random\_points\_10\_50\_100\_pitbull.ods](http://askmonty.org/sysbench-results/Sysbench_v0.5_select_random_points_10_50_100_pitbull.ods)
mariadb Feedback Plugin Feedback Plugin
===============
The `feedback` plugin is designed to collect and, optionally, upload configuration and usage information to [MariaDB.org](http://mariadb.org/) or to any other configured URL.
See the [MariaDB User Feedback](http://mariadb.org/feedback_plugin/) page on MariaDB.org to see collected MariaDB usage statistics.
The `feedback` plugin exists in all MariaDB versions.
MariaDB is distributed with this plugin included, but it is not enabled by default. On Windows, this plugin is part of the server and has a special checkbox in the installer window. Either way, you need to explicitly install and enable it in order for feedback data to be sent.
Installing the Plugin
---------------------
Although the plugin's shared library is distributed with MariaDB, the plugin is not installed by default. There are two methods that can be used to install it.
The first method can be used to install the plugin without restarting the server. You can install the plugin dynamically by executing `[INSTALL SONAME](../install-soname/index)` or `[INSTALL PLUGIN](../install-plugin/index)`. For example:
```
INSTALL SONAME 'feedback';
```
The second method can be used to tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the `[--plugin-load](../mysqld-options/index#-plugin-load)` or the `[--plugin-load-add](../mysqld-options/index#-plugin-load-add)` options. This can be specified as a command-line argument to `[mysqld](../mysqld-options/index)` or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index). For example:
```
[mariadb]
...
plugin_load_add = feedback
```
Uninstalling the Plugin
-----------------------
You can uninstall the plugin dynamically by executing `[UNINSTALL SONAME](../uninstall-soname/index)` or `[UNINSTALL PLUGIN](../uninstall-plugin/index)`. For example:
```
UNINSTALL SONAME 'feedback';
```
If you installed the plugin by providing the `[--plugin-load](../mysqld-options/index#-plugin-load)` or the `[--plugin-load-add](../mysqld-options/index#-plugin-load-add)` options in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index), then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
Enabling the Plugin
-------------------
You can enable the plugin by setting the `[feedback](#feedback)` option to `ON` in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index). For example:
```
[mariadb]
...
feedback=ON
```
In Windows, the plugin can also be enabled during a new [MSI](../installing-mariadb-msi-packages-on-windows/index) installation. The MSI GUI installation provides the "Enable feedback plugin" checkbox to enable the plugin. The MSI command-line installation provides the FEEDBACK=1 command-line option to enable the plugin.
See the next section for how to verify that the plugin is installed and active, and (if needed) how to install it.
Verifying the Plugin's Status
-----------------------------
To verify whether the `feedback` plugin is installed and enabled, execute the `[SHOW PLUGINS](../show-plugins/index)` statement or query the `[information\_schema.plugins](../plugins-table-information-schema/index)` table. For example:
```
SELECT plugin_status FROM information_schema.plugins
WHERE plugin_name = 'feedback';
```
If that `SELECT` returns no rows, then you still need to [install the plugin](#installing-the-plugin).
When the plugin is installed and enabled, you will see:
```
SELECT plugin_status FROM information_schema.plugins
WHERE plugin_name = 'feedback';
+---------------+
| plugin_status |
+---------------+
| ACTIVE |
+---------------+
```
If you see `DISABLED` instead of `ACTIVE`, then you still need to [enable the plugin](#enabling-the-plugin).
Collecting Data
---------------
The `feedback` plugin will collect:
* Certain rows from [SHOW STATUS](../show-status/index) and [SHOW VARIABLES](../show-variables/index).
* All installed [plugins](../plugins/index) and their versions.
* System information such as CPU count, memory, architecture, and OS/linux distribution.
* The [feedback\_server\_uid](#feedback_server_uid), which is a SHA1 hash of the MAC address of the first network interface and the TCP port that the server listens on.
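For illustration, the shape of such an identifier can be sketched in Python. The exact bytes the plugin hashes are an internal detail, so the input format below is an assumption; the point is that a MAC address and port reduce to a fixed-length, non-reversible hash:

```python
import hashlib

# Illustrative only: the exact input format the plugin hashes is an
# internal detail. This just shows that a MAC address plus TCP port
# reduces to an anonymous, stable SHA1 identifier.
mac = "00:16:3e:2a:7b:cd"    # hypothetical first network interface
port = 3306                  # port the server listens on

server_uid = hashlib.sha1(f"{mac}:{port}".encode()).hexdigest()
print(server_uid)            # 40 hex characters; no way back to the MAC
```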
The `feedback` plugin creates the [FEEDBACK](../information-schema-feedback-table/index) table in the [INFORMATION\_SCHEMA](../information-schema/index) database. To see the data that has been collected by the plugin, you can execute:
```
SELECT * FROM information_schema.feedback;
```
Only the contents of this table are sent to the [feedback\_url](#feedback_url).
MariaDB stores collation usage statistics. Each collation that has been used by the server will have a record in "SELECT \* FROM information\_schema.feedback" output, for example:
```
+----------------------------------------+---------------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+----------------------------------------+---------------------+
| Collation used utf8_unicode_ci | 10 |
| Collation used latin1_general_ci | 20 |
+----------------------------------------+---------------------+
```
Collations that have not been used will not be included into the result.
Sending Data
------------
The `feedback` plugin sends the data using a `POST` request to any URL or a list of URLs that you specify by setting the [feedback\_url](#feedback_url) system variable. By default, this is set to the following URL:
* <https://mariadb.org/feedback_plugin/post>
Both HTTP and HTTPS protocols are supported.
If HTTP traffic requires a proxy in your environment, then you can specify the proxy by setting the [feedback\_http\_proxy](#feedback_http_proxy) system variable.
If the [feedback\_url](#feedback_url) system variable is not set to an empty string, then the plugin will automatically send a report to all URLs in the list a few minutes after the server starts up and then once a week after that.
If the [feedback\_url](#feedback_url) system variable is set to an empty string, then the plugin will **not** automatically send any data. This may be necessary if outbound HTTP communication from your database server is not permitted. In this case, you can still upload the data manually, if you'd like.
First, generate the report file with the MariaDB command-line [mysql](../mysql-command-line-client/index) client:
```
$ mysql -e 'select * from information_schema.feedback' > report.txt
```
Then you can upload the generated `report.txt` [here](https://mariadb.org/feedback_plugin/post) using your web browser.
Or you can do it from the command line with tools such as [curl](https://curl.haxx.se/docs/manpage.html). For example:
```
$ curl -F [email protected] https://mariadb.org/feedback_plugin/post
```
Manual uploading allows you to be absolutely sure that we receive only the data shown in the [INFORMATION\_SCHEMA.FEEDBACK](../information-schema-feedback-table/index) table and that no private or sensitive information is being sent.
Versions
--------
| Version | Status | Introduced |
| --- | --- | --- |
| 1.1 | Stable | [MariaDB 10.0.10](https://mariadb.com/kb/en/mariadb-10010-release-notes/) |
| 1.1 | Beta | [MariaDB 5.5.20](https://mariadb.com/kb/en/mariadb-5520-release-notes/), [MariaDB 5.3.3](https://mariadb.com/kb/en/mariadb-533-release-notes/) |
System Variables
----------------
### `feedback_http_proxy`
* **Description:** Proxy server to use when HTTP calls cannot be made directly, such as in a firewall environment. The format is `host:port`.
* **Commandline:** `--feedback-http-proxy=value`
* **Read-only:** Yes
* **Data Type:** string
* **Default Value:** `''` (empty)
---
### `feedback_send_retry_wait`
* **Description:** Time in seconds before retrying if the plugin failed to send the data for any reason.
* **Commandline:** `--feedback-send-retry-wait=#`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** numeric
* **Default Value:** `60`
* **Valid Values:** `1` to `86400`
---
### `feedback_send_timeout`
* **Description:** An attempt to send the data times out and fails after this many seconds.
* **Commandline:** `--feedback-send-timeout=#`
* **Scope:** Global
* **Dynamic:** Yes
* **Data Type:** numeric
* **Default Value:** `60`
* **Valid Values:** `1` to `86400`
---
### `feedback_server_uid`
* **Description:** Automatically calculated server unique id hash.
* **Scope:** Global
* **Dynamic:** No
* **Data Type:** string
---
### `feedback_url`
* **Description:** URL to which the data is sent. More than one URL, separated by spaces, can be specified. Set it to an empty string to disable data sending.
* **Commandline:** `--feedback-url=url`
* **Scope:** Global
* **Dynamic:** No
* **Data Type:** string
* **Default Value:** `<https://mariadb.org/feedback_plugin/post>`
---
### `feedback_user_info`
* **Description:** The value of this option is not used by the plugin, but it is included in the feedback data. It can be used to add any user-specified string to the report. This could be used to help to identify it. For example, a support contract number, or a computer name (if you collect reports internally by specifying your own `feedback-url`).
* **Commandline:** `--feedback-user-info=string`
* **Scope:** Global
* **Dynamic:** No
* **Data Type:** string
* **Default Value:** Empty string
---
Options
-------
### `feedback`
* **Description:** Controls how the server should treat the plugin when the server starts up.
+ Valid values are:
- `OFF` - Disables the plugin without removing it from the `[mysql.plugin](../mysqlplugin-table/index)` table.
- `ON` - Enables the plugin. If the plugin cannot be initialized, then the server will still continue starting up, but the plugin will be disabled.
- `FORCE` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error.
- `FORCE_PLUS_PERMANENT` - Enables the plugin. If the plugin cannot be initialized, then the server will fail to start with an error. In addition, the plugin cannot be uninstalled with `[UNINSTALL SONAME](../uninstall-soname/index)` or `[UNINSTALL PLUGIN](../uninstall-plugin/index)` while the server is running.
+ See [Plugin Overview: Configuring Plugin Activation at Server Startup](../plugin-overview/index#configuring-plugin-activation-at-server-startup) for more information.
* **Commandline:** `--feedback=value`
* **Data Type:** `enumerated`
* **Default Value:** `ON`
* **Valid Values:** `OFF`, `ON`, `FORCE`, `FORCE_PLUS_PERMANENT`
---
mariadb ST_SRID ST\_SRID
========
Syntax
------
```
ST_SRID(g)
SRID(g)
```
Description
-----------
Returns an integer indicating the Spatial Reference System ID for the geometry value g.
In MariaDB, the SRID value is just an integer associated with the geometry value. All calculations are done assuming Euclidean (planar) geometry.
`ST_SRID()` and `SRID()` are synonyms.
Examples
--------
```
SELECT SRID(GeomFromText('LineString(1 1,2 2)',101));
+-----------------------------------------------+
| SRID(GeomFromText('LineString(1 1,2 2)',101)) |
+-----------------------------------------------+
| 101 |
+-----------------------------------------------+
```
mariadb LONGTEXT LONGTEXT
========
Syntax
------
```
LONGTEXT [CHARACTER SET charset_name] [COLLATE collation_name]
```
Description
-----------
A [TEXT](../text/index) column with a maximum length of 4,294,967,295 or 4GB (`2^32 - 1`) characters. The effective maximum length is less if the value contains multi-byte characters. The effective maximum length of LONGTEXT columns also depends on the configured maximum packet size in the client/server protocol and available memory. Each LONGTEXT value is stored using a four-byte length prefix that indicates the number of bytes in the value.
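The four-byte length prefix can be sketched as follows; this is an illustration of a length-prefixed layout in general, not the storage engine's actual row format:

```python
import struct

# Sketch of the shape: a LONGTEXT value is conceptually a 4-byte
# little-endian length followed by the bytes of the value itself.
value = "hello".encode("utf8")
stored = struct.pack("<I", len(value)) + value

print(len(stored))                         # 4-byte prefix + 5 bytes of data = 9
print(struct.unpack("<I", stored[:4])[0])  # length read back: 5

# The 4-byte prefix is also what caps the value at 2**32 - 1 bytes:
print(2**32 - 1)                           # 4294967295
```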
From [MariaDB 10.2.7](https://mariadb.com/kb/en/mariadb-1027-release-notes/), JSON is an alias for LONGTEXT. See [JSON Data Type](../json-data-type/index) for details.
Oracle Mode
-----------
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**In [Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#synonyms-for-basic-sql-types), `CLOB` is a synonym for `LONGTEXT`.
See Also
--------
* [TEXT](../text/index)
* [BLOB and TEXT Data Types](../blob-and-text-data-types/index)
* [Data Type Storage Requirements](../data-type-storage-requirements/index)
* [JSON Data Type](../json-data-type/index)
* [Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#synonyms-for-basic-sql-types)
mariadb Using CONNECT - Condition Pushdown Using CONNECT - Condition Pushdown
==================================
The [ODBC](../connect-table-types-odbc-table-type-accessing-tables-from-other-dbms/index), [JDBC](../connect-jdbc-table-type-accessing-tables-from-other-dbms/index), [MYSQL](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index), [TBL](../connect-table-types-tbl-table-type-table-list/index) and WMI table types use engine condition pushdown in order to restrict the number of rows returned by the RDBMS source or the WMI component.
The CONDITION\_PUSHDOWN argument used in old versions of CONNECT is no longer needed because CONNECT uses condition pushdown unconditionally.
mariadb Install Cassandra on Fulltest VMs Install Cassandra on Fulltest VMs
=================================
CassandraSE is no longer actively being developed and has been removed in [MariaDB 10.6](../what-is-mariadb-106/index). See [MDEV-23024](https://jira.mariadb.org/browse/MDEV-23024).
Here are the steps I took to install Cassandra on the Fulltest VMs.
1. backed up the fulltest VMs with:
```
rsync -avP /kvm/vms/*fulltest* host:/destination/path/
```
2. boot the amd64 fulltest VM:
```
vm=vm-precise-amd64-fulltest.qcow2
kvm -m 2048 -hda /kvm/vms/${vm} -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:22666-:22 -nographic
```
3. login to the VM:
```
ssh -t -p 22666 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /kvm/vms/ssh-keys/id_dsa dbart@localhost
```
4. in the VM, install Cassandra:
```
sudo vi /etc/apt/sources.list.d/cassandra.list
# paste in the following two lines:
deb http://www.apache.org/dist/cassandra/debian 11x main
deb-src http://www.apache.org/dist/cassandra/debian 11x main
gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D
gpg --export --armor F758CE318D77295D | sudo apt-key add -
gpg --keyserver pgp.mit.edu --recv-keys 2B5C1B00
gpg --export --armor 2B5C1B00 | sudo apt-key add -
sudo apt-get update
sudo apt-get install cassandra
```
5. in the VM, launch the `cassandra-cli` program and test the Cassandra installation:
```
create keyspace DEMO;
use DEMO;
create column family Users
with key_validation_class = 'UTF8Type'
and comparator = 'UTF8Type'
and default_validation_class = 'UTF8Type';
set Users[1234][name] = scott;
set Users[1234][password] = tiger;
get Users[1234];
quit;
```
* Output of the above:
```
dbart@ubuntu-precise-amd64:~$
cassandra-cli
Connected to: "Test Cluster" on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.1.9
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown]
create keyspace DEMO;
622a672f-dd03-37bf-bf78-f3e99a8f18a6
Waiting for schema agreement...
... schemas agree across the cluster
[default@unknown]
use DEMO;
Authenticated to keyspace: DEMO
[default@DEMO]
create column family Users
...
with key_validation_class = 'UTF8Type'
...
and comparator = 'UTF8Type'
...
and default_validation_class = 'UTF8Type';
605eea14-d3e5-3d1d-ab1d-f4863c814538
Waiting for schema agreement...
... schemas agree across the cluster
[default@DEMO]
set Users[1234][name] = scott;
Value inserted.
Elapsed time: 46 msec(s).
[default@DEMO]
set Users[1234][password] = tiger;
Value inserted.
Elapsed time: 2.77 msec(s).
[default@DEMO]
get Users[1234];
=> (column=name, value=scott, timestamp=1361818884084000)
=> (column=password, value=tiger, timestamp=1361818887944000)
Returned 2 results.
Elapsed time: 53 msec(s).
[default@DEMO]
quit;
```
6. in the VM, shut it down:
```
sudo shutdown -h now
```
7. Do steps 2-6 for `vm-precise-i386-fulltest.qcow2`. The output of the testing step was:
```
dbart@ubuntu-precise-i386:~$
cassandra-cli
Connected to: "Test Cluster" on 127.0.0.1/9160
Welcome to Cassandra CLI version 1.1.9
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown]
create keyspace DEMO;
5eafc25e-71b6-3585-9db1-891b3348790c
Waiting for schema agreement...
... schemas agree across the cluster
[default@unknown]
use DEMO;
Authenticated to keyspace: DEMO
[default@DEMO]
create column family Users
...
with key_validation_class = 'UTF8Type'
...
and comparator = 'UTF8Type'
...
and default_validation_class = 'UTF8Type';
9c2ad7bc-8dc0-35ce-8067-4dc4577319f1
Waiting for schema agreement...
... schemas agree across the cluster
[default@DEMO]
set Users[1234][name] = scott;
Value inserted.
Elapsed time: 51 msec(s).
[default@DEMO]
set Users[1234][password] = tiger;
Value inserted.
Elapsed time: 2.44 msec(s).
[default@DEMO]
get Users[1234];
=> (column=name, value=scott, timestamp=1361819341068000)
=> (column=password, value=tiger, timestamp=1361819345337000)
Returned 2 results.
Elapsed time: 57 msec(s).
[default@DEMO]
quit;
```
8. on the other build hosts, rsync the files over:
```
rsync -avP host::kvm/vms/*fulltest* /kvm/vms/
```
mariadb Server Status Variables Server Status Variables
=======================
The full list of status variables is given in the contents on this page; most are described here, but some are described elsewhere:
* [Aria Status Variables](../aria-server-status-variables/index)
* [Galera Status Variables](../mariadb-galera-cluster-status-variables/index)
* [InnoDB Status Variables](../innodb-status-variables/index)
* [Mroonga Status Variables](../mroonga-status-variables/index)
* [MyRocks Status Variables](../myrocks-status-variables/index)
* [Performance Scheme Status Variables](../performance-schema-status-variables/index)
* [Replication and Binary Log Status Variables](../replication-and-binary-log-server-status-variables/index)
* [S3 Storage Engine Status Variables](../s3-storage-engine-status-variables/index)
* [Server\_Audit Status Variables](../server_audit-status-variables/index)
* [Sphinx Status Variables](../sphinx-status-variables/index)
* [Spider Status Variables](../spider-server-status-variables/index)
* [TokuDB Status Variables](../tokudb-status-variables/index)
See also the [Full list of MariaDB options, system and status variables](../full-list-of-mariadb-options-system-and-status-variables/index).
Use the [SHOW STATUS](../show-status/index) statement to view status variables. This information also can be obtained using the [mysqladmin extended-status](../mysqladmin/index) command, or by querying the [Information Schema GLOBAL\_STATUS and SESSION\_STATUS](../information-schema-global_status-and-session_status-tables/index) tables.
Issuing a [FLUSH STATUS](../flush/index) will reset many status variables to zero.
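As an illustrative sketch (the variable name and output values here are only examples and will vary per server), the three access methods above and a reset look like this:

```
-- View a specific global status variable
SHOW GLOBAL STATUS LIKE 'Aborted_clients';

-- The same information via the Information Schema
SELECT VARIABLE_NAME, VARIABLE_VALUE
  FROM INFORMATION_SCHEMA.GLOBAL_STATUS
 WHERE VARIABLE_NAME = 'Aborted_clients';

-- Reset the resettable status variables to zero
FLUSH STATUS;
```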
List of Server Status Variables
-------------------------------
#### `Aborted_clients`
* **Description:** Number of aborted client connections. This can be due to the client not calling mysql\_close() before exiting, the client sleeping without issuing a request to the server for more seconds than specified by [wait\_timeout](../server-system-variables/index#wait_timeout) or [interactive\_timeout](../server-system-variables/index#interactive_timeout), or by the client program ending in the midst of transferring data. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Aborted_connects`
* **Description:** Number of failed server connection attempts. This can be due to a client using an incorrect password, a client not having privileges to connect to a database, a connection packet not containing the correct information, or if it takes more than [connect\_timeout](../server-system-variables/index#connect_timeout) seconds to get a connect packet. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Aborted_connects_preauth`
* **Description:** Number of connection attempts that were aborted prior to authentication (regardless of whether or not an error occurred).
* **Scope:** Global
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.4.5](https://mariadb.com/kb/en/mariadb-1045-release-notes/)
---
#### `Access_denied_errors`
* **Description:** Number of access denied errors.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_column_grants`
* **Description:** Number of column permissions granted (rows in the [mysql.columns\_priv table](../mysqlcolumns_priv-table/index)).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_database_grants`
* **Description:** Number of database permissions granted (rows in the [mysql.db table](../mysqldb-table/index)).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_function_grants`
* **Description:** Number of function permissions granted (rows in the [mysql.procs\_priv table](../mysqlprocs_priv-table/index) with a routine type of `FUNCTION`).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_package_body_grants`
* **Description:**
* **Scope:** Global
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Acl_package_spec_grants`
* **Description:**
* **Scope:** Global
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Acl_procedure_grants`
* **Description:** Number of procedure permissions granted (rows in the [mysql.procs\_priv table](../mysqlprocs_priv-table/index) with a routine type of `PROCEDURE`).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_proxy_users`
* **Description:** Number of proxy permissions granted (rows in the [mysql.proxies\_priv table](../mysqlproxies_priv-table/index)).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_role_grants`
* **Description:** Number of role permissions granted (rows in the [mysql.roles\_mapping table](../mysqlroles_mapping-table/index)).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_roles`
* **Description:** Number of roles (rows in the [mysql.user table](../mysqluser-table/index) where `is_role='Y'`).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_table_grants`
* **Description:** Number of table permissions granted (rows in the [mysql.tables\_priv table](../mysqltables_priv-table/index)).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Acl_users`
* **Description:** Number of users (rows in the [mysql.user table](../mysqluser-table/index) where `is_role='N'`).
* **Scope:** Global
* **Data Type:** `numeric`
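The `Acl_%` counters above can be inspected together with a single pattern match, and each can be cross-checked against its source table as described in its entry. A minimal sketch (counts shown by your server will differ):

```
-- All ACL-related counters at once
SHOW GLOBAL STATUS LIKE 'Acl%';

-- Cross-check Acl_users against its source table
SELECT COUNT(*) FROM mysql.user WHERE is_role='N';
```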
---
#### `Busy_time`
* **Description:** Cumulative time in seconds of activity on connections.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Bytes_received`
* **Description:** Total bytes received from all clients.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Bytes_sent`
* **Description:** Total bytes sent to all clients.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_admin_commands`
* **Description:** Number of admin commands executed. These include table dumps, change users, binary log dumps, shutdowns, pings and debugs.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_db`
* **Description:** Number of [ALTER DATABASE](../alter-database/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_db_upgrade`
* **Description:** Number of [ALTER DATABASE ... UPGRADE](../alter-database/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_event`
* **Description:** Number of [ALTER EVENT](../alter-event/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_function`
* **Description:** Number of [ALTER FUNCTION](../alter-function/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_procedure`
* **Description:** Number of [ALTER PROCEDURE](../alter-procedure/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_sequence`
* **Description:** Number of [ALTER SEQUENCE](../alter-sequence/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/)
---
#### `Com_alter_server`
* **Description:** Number of [ALTER SERVER](../alter-server/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_table`
* **Description:** Number of [ALTER TABLE](../alter-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_alter_tablespace`
* **Description:** Number of [ALTER TABLESPACE](../alter-tablespace/index) commands executed (unsupported by MariaDB).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/)
---
#### `Com_alter_user`
* **Description:** Number of [ALTER USER](../alter-user/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/)
---
#### `Com_analyze`
* **Description:** Number of [ANALYZE](../analyze-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_assign_to_keycache`
* **Description:** Number of assign to keycache commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_backup`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.4.1](https://mariadb.com/kb/en/mariadb-1041-release-notes/)
---
#### `Com_backup_lock`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.4.2](https://mariadb.com/kb/en/mariadb-1042-release-notes/)
---
#### `Com_backup_table`
* **Description:** Removed in [MariaDB 5.5](../what-is-mariadb-55/index). In older versions, Com\_backup\_table contains the number of [BACKUP TABLE](../backup-table-deprecated/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 5.5](../what-is-mariadb-55/index)
---
#### `Com_begin`
* **Description:** Number of [BEGIN](../begin-end/index) or [START TRANSACTION](../start-transaction/index) statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_binlog`
* **Description:** Number of [BINLOG](../binlog/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_call_procedure`
* **Description:** Number of [CALL](../call/index) procedure\_name statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_change_db`
* **Description:** Number of [USE](../use/index) database\_name commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_check`
* **Description:** Number of [CHECK TABLE](../check-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_checksum`
* **Description:** Number of [CHECKSUM TABLE](../checksum-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_commit`
* **Description:** Number of [COMMIT](../transactions-commit-statement/index) commands executed. Differs from [Handler\_commit](#handler_commit), which counts internal commit statements.
* **Scope:** Global, Session
* **Data Type:** `numeric`
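Because `Com_commit` has session scope, its per-session value can be watched from within the same connection. A minimal sketch (the surrounding `SHOW` statements themselves do not affect this counter):

```
SHOW SESSION STATUS LIKE 'Com_commit';
BEGIN;
COMMIT;
-- The session value of Com_commit is now one higher
SHOW SESSION STATUS LIKE 'Com_commit';
```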
---
#### `Com_compound_sql`
* **Description:** Number of [compound](../programmatic-and-compound-statements/index) SQL statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_db`
* **Description:** Number of [CREATE DATABASE](../create-database/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_event`
* **Description:** Number of [CREATE EVENT](../create-event/index) commands executed. Differs from [Executed\_events](#executed_events) in that it is incremented when the CREATE EVENT is run, and not when the event executes.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_function`
* **Description:** Number of [CREATE FUNCTION](../create-function/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_index`
* **Description:** Number of [CREATE INDEX](../create-index/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_package`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_create_package_body`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_create_procedure`
* **Description:** Number of [CREATE PROCEDURE](../create-procedure/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_role`
* **Description:** Number of [CREATE ROLE](../create-role/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_sequence`
* **Description:** Number of [CREATE SEQUENCE](../create-sequence/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/)
---
#### `Com_create_server`
* **Description:** Number of [CREATE SERVER](../create-server/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_table`
* **Description:** Number of [CREATE TABLE](../create-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_temporary_table`
* **Description:** Number of [CREATE TEMPORARY TABLE](../create-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_trigger`
* **Description:** Number of [CREATE TRIGGER](../create-trigger/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_udf`
* **Description:** Number of [CREATE UDF](../create-function-udf/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_user`
* **Description:** Number of [CREATE USER](../create-user/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_create_view`
* **Description:** Number of [CREATE VIEW](../create-view/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_dealloc_sql`
* **Description:** Number of [DEALLOCATE](../deallocate-drop-prepared-statement/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_delete`
* **Description:** Number of [DELETE](../delete/index) commands executed. Differs from [Handler\_delete](#handler_delete), which counts the number of times rows have been deleted from tables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_delete_multi`
* **Description:** Number of multi-table [DELETE](../delete/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_do`
* **Description:** Number of [DO](../do/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_db`
* **Description:** Number of [DROP DATABASE](../drop-database/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_event`
* **Description:** Number of [DROP EVENT](../drop-event/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_function`
* **Description:** Number of [DROP FUNCTION](../drop-function/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_index`
* **Description:** Number of [DROP INDEX](../drop-index/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_package`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_drop_package_body`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_drop_procedure`
* **Description:** Number of [DROP PROCEDURE](../drop-procedure/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_role`
* **Description:** Number of [DROP ROLE](../drop-role/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_sequence`
* **Description:** Number of [DROP SEQUENCE](../drop-sequence/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/)
---
#### `Com_drop_server`
* **Description:** Number of [DROP SERVER](../drop-server/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_table`
* **Description:** Number of [DROP TABLE](../drop-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_temporary_table`
* **Description:** Number of [DROP TEMPORARY TABLE](../drop-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_trigger`
* **Description:** Number of [DROP TRIGGER](../drop-trigger/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_user`
* **Description:** Number of [DROP USER](../drop-user/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_drop_view`
* **Description:** Number of [DROP VIEW](../drop-view/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_empty_query`
* **Description:** Number of statements sent to the server that contain no SQL query. An SQL query that simply returns no results does not increment `Com_empty_query`; see [Empty\_queries](#empty_queries) instead. An example of an empty query sent to the server is `mysql --comments -e '-- sql comment'`.
* **Scope:** Global, Session
* **Data Type:** `numeric`
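The distinction drawn above can be sketched as follows: a syntactically valid query that merely matches no rows leaves `Com_empty_query` unchanged.

```
-- Valid SQL that returns no rows:
-- increments Empty_queries, not Com_empty_query
SELECT 1 FROM DUAL WHERE 1 = 0;
```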
---
#### `Com_execute_immediate`
* **Description:** Number of [EXECUTE IMMEDIATE](../execute-immediate/index) statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)
---
#### `Com_execute_sql`
* **Description:** Number of [EXECUTE](../execute-statement/index) statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_flush`
* **Description:** Number of [FLUSH](../flush/index) commands executed. This differs from [Flush\_commands](#flush_commands), which also counts internal server flush requests.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_get_diagnostics`
* **Description:** Number of [GET DIAGNOSTICS](../get-diagnostics/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_grant`
* **Description:** Number of [GRANT](../grant/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_grant_role`
* **Description:** Number of [GRANT](../grant/index#roles) role commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_ha_close`
* **Description:** Number of [HANDLER](../handler-commands/index) table\_name CLOSE commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_ha_open`
* **Description:** Number of [HANDLER](../handler-commands/index) table\_name OPEN commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_ha_read`
* **Description:** Number of [HANDLER](../handler-commands/index) table\_name READ commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_help`
* **Description:** Number of [HELP](../help-command/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_insert`
* **Description:** Number of [INSERT](../insert/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_insert_select`
* **Description:** Number of [INSERT ... SELECT](../insert-select/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_install_plugin`
* **Description:** Number of [INSTALL PLUGIN](../install-plugin/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_kill`
* **Description:** Number of [KILL](../kill/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_load`
* **Description:** Number of LOAD commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_load_master_data`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 5.5](../what-is-mariadb-55/index)
---
#### `Com_load_master_table`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 5.5](../what-is-mariadb-55/index)
---
#### `Com_multi`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/)
---
#### `Com_lock_tables`
* **Description:** Number of [LOCK TABLES](../lock-tables/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_optimize`
* **Description:** Number of [OPTIMIZE](../optimize-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_preload_keys`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_prepare_sql`
* **Description:** Number of [PREPARE](../prepare-statement/index) statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_purge`
* **Description:** Number of [PURGE](../sql-commands-purge-logs/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_purge_before_date`
* **Description:** Number of [PURGE BEFORE](../sql-commands-purge-logs/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_release_savepoint`
* **Description:** Number of [RELEASE SAVEPOINT](../savepoint/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_rename_table`
* **Description:** Number of [RENAME TABLE](../rename-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_rename_user`
* **Description:** Number of [RENAME USER](../rename-user/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_repair`
* **Description:** Number of [REPAIR TABLE](../repair-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_replace`
* **Description:** Number of [REPLACE](../replace/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_replace_select`
* **Description:** Number of [REPLACE](../replace/index) ... [SELECT](../select/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_reset`
* **Description:** Number of [RESET](../reset/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_resignal`
* **Description:** Number of [RESIGNAL](../resignal/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_restore_table`
* **Description:** Removed in [MariaDB 5.5](../what-is-mariadb-55/index). In older versions, Com\_restore\_table contains the number of [RESTORE TABLE](../restore-table-removed/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 5.5](../what-is-mariadb-55/index)
---
#### `Com_revoke`
* **Description:** Number of [REVOKE](../revoke/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_revoke_all`
* **Description:** Number of [REVOKE ALL](../revoke/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_revoke_grant`
* **Description:** Number of [REVOKE](../revoke/index#roles) role commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_rollback`
* **Description:** Number of [ROLLBACK](../rollback/index) commands executed. Differs from [Handler\_rollback](#handler_rollback), which is the number of transaction rollback requests given to a storage engine.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_rollback_to_savepoint`
* **Description:** Number of [ROLLBACK ... TO SAVEPOINT](../rollback/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_savepoint`
* **Description:** Number of [SAVEPOINT](../savepoint/index) commands executed. Differs from [Handler\_savepoint](#handler_savepoint), which is the number of transaction savepoint creation requests.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_select`
* **Description:** Number of [SELECT](../select/index) commands executed. Also includes queries that make use of the [query cache](../query-cache/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_set_option`
* **Description:** Number of [SET OPTION](../set/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_signal`
* **Description:** Number of [SIGNAL](../signal/index) statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_authors`
* **Description:** Number of [SHOW AUTHORS](../show-authors/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_binlog_events`
* **Description:** Number of [SHOW BINLOG EVENTS](../show-binlog-events/index) statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_binlogs`
* **Description:** Number of [SHOW BINARY LOGS](../show-binary-logs/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_charsets`
* **Description:** Number of [SHOW CHARACTER SET](../show-character-set/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_client_statistics`
* **Description:** Number of [SHOW CLIENT STATISTICS](../show-client_statistics/index) commands executed. Removed in [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/) when that statement was replaced by the generic [SHOW information\_schema\_table](../information-schema-plugins-show-and-flush-statements/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/)
---
#### `Com_show_collations`
* **Description:** Number of [SHOW COLLATION](../show-collation/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_column_types`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 5.5](../what-is-mariadb-55/index)
---
#### `Com_show_contributors`
* **Description:** Number of [SHOW CONTRIBUTORS](../show-contributors/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_create_db`
* **Description:** Number of [SHOW CREATE DATABASE](../show-create-database/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_create_event`
* **Description:** Number of [SHOW CREATE EVENT](../show-create-event/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_create_func`
* **Description:** Number of [SHOW CREATE FUNCTION](../show-create-function/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_create_package`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_show_create_package_body`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_show_create_proc`
* **Description:** Number of [SHOW CREATE PROCEDURE](../show-create-procedure/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_create_table`
* **Description:** Number of [SHOW CREATE TABLE](../show-create-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_create_trigger`
* **Description:** Number of [SHOW CREATE TRIGGER](../show-create-table/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_create_user`
* **Description:** Number of [SHOW CREATE USER](../show-create-user/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.0](https://mariadb.com/kb/en/mariadb-1020-release-notes/)
---
#### `Com_show_databases`
* **Description:** Number of [SHOW DATABASES](../show-databases/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_engine_logs`
* **Description:** Number of [SHOW ENGINE LOGS](../show-engine/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_engine_mutex`
* **Description:** Number of [SHOW ENGINE MUTEX](../show-engine/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_engine_status`
* **Description:** Number of [SHOW ENGINE STATUS](../show-engine/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_events`
* **Description:** Number of [SHOW EVENTS](../show-events/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_errors`
* **Description:** Number of [SHOW ERRORS](../show-errors/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_explain`
* **Description:** Number of [SHOW EXPLAIN](../show-explain/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_fields`
* **Description:** Number of [SHOW COLUMNS](../show-columns/index) or SHOW FIELDS commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_function_status`
* **Description:** Number of [SHOW FUNCTION STATUS](../show-function-status/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_generic`
* **Description:** Number of generic [SHOW](../show/index) commands executed, such as [SHOW INDEX\_STATISTICS](../show-index_statistics/index) and [SHOW TABLE\_STATISTICS](../show-table_statistics/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_grants`
* **Description:** Number of [SHOW GRANTS](../show-grants/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_keys`
* **Description:** Number of [SHOW INDEX](../show-index/index) or SHOW KEYS commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_index_statistics`
* **Description:** Number of [SHOW INDEX\_STATISTICS](../show-index_statistics/index) commands executed. Removed in [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/) when that statement was replaced by the generic [SHOW information\_schema\_table](../information-schema-plugins-show-and-flush-statements/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/)
---
#### `Com_show_open_tables`
* **Description:** Number of [SHOW OPEN TABLES](../show-open-tables/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_package_status`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_show_package_body_status`
* **Description:**
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.5](https://mariadb.com/kb/en/mariadb-1035-release-notes/)
---
#### `Com_show_plugins`
* **Description:** Number of [SHOW PLUGINS](../show-plugins/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_privileges`
* **Description:** Number of [SHOW PRIVILEGES](../show-privileges/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_procedure_status`
* **Description:** Number of [SHOW PROCEDURE STATUS](../show-procedure-status/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_processlist`
* **Description:** Number of [SHOW PROCESSLIST](../show-processlist/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_profile`
* **Description:** Number of [SHOW PROFILE](../show-profile/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_profiles`
* **Description:** Number of [SHOW PROFILES](../show-profiles/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_relaylog_events`
* **Description:** Number of [SHOW RELAYLOG EVENTS](../show-relaylog-events/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_status`
* **Description:** Number of [SHOW STATUS](../show-status/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_storage_engines`
* **Description:** Number of [SHOW STORAGE ENGINES](../show-engines/index) (or `SHOW ENGINES`) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_table_statistics`
* **Description:** Number of [SHOW TABLE STATISTICS](../show-table_statistics/index) commands executed. Removed in [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/) when that statement was replaced by the generic [SHOW information\_schema\_table](../information-schema-plugins-show-and-flush-statements/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/)
---
#### `Com_show_table_status`
* **Description:** Number of [SHOW TABLE STATUS](../show-table-status/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_tables`
* **Description:** Number of [SHOW TABLES](../show-tables/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_triggers`
* **Description:** Number of [SHOW TRIGGERS](../show-triggers/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_user_statistics`
* **Description:** Number of [SHOW USER STATISTICS](../show-user_statistics/index) commands executed. Removed in [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/) when that statement was replaced by the generic [SHOW information\_schema\_table](../information-schema-plugins-show-and-flush-statements/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/)
---
#### `Com_show_variables`
* **Description:** Number of [SHOW VARIABLES](../show-variables/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_show_warnings`
* **Description:** Number of [SHOW WARNINGS](../show-warnings/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_shutdown`
* **Description:** Number of [SHUTDOWN](../shutdown/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_stmt_close`
* **Description:** Number of [prepared statements](../prepared-statements/index) closed ([deallocated or dropped](../deallocate-drop-prepared-statement/index)).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_stmt_execute`
* **Description:** Number of [prepared statements](../prepared-statements/index) [executed](../execute-statement/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_stmt_fetch`
* **Description:** Number of [prepared statements](../prepared-statements/index) fetched.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_stmt_prepare`
* **Description:** Number of [prepared statements](../prepared-statements/index) [prepared](../prepare-statement/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_stmt_reprepare`
* **Description:** Number of [prepared statements](../prepared-statements/index) reprepared.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_stmt_reset`
* **Description:** Number of [prepared statements](../prepared-statements/index) where parameter data accumulated in chunks (by sending long data) has been reset.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_stmt_send_long_data`
* **Description:** Number of [prepared statements](../prepared-statements/index) where the parameter data has been sent in chunks (long data).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_truncate`
* **Description:** Number of [TRUNCATE](../truncate/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_uninstall_plugin`
* **Description:** Number of [UNINSTALL PLUGIN](../uninstall-plugin/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_unlock_tables`
* **Description:** Number of [UNLOCK TABLES](../transactions-lock/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_update`
* **Description:** Number of [UPDATE](../update/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_update_multi`
* **Description:** Number of multi-table [UPDATE](../update/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_xa_commit`
* **Description:** Number of XA statements committed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_xa_end`
* **Description:** Number of XA statements ended.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_xa_prepare`
* **Description:** Number of XA statements prepared.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_xa_recover`
* **Description:** Number of XA RECOVER statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_xa_rollback`
* **Description:** Number of XA statements rolled back.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Com_xa_start`
* **Description:** Number of XA statements started.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Compression`
* **Description:** Whether client-server traffic is compressed.
* **Scope:** Session
* **Data Type:** `boolean`
---
#### `Connection_errors_accept`
* **Description:** Number of errors that occurred during calls to accept() on the listening port. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Connection_errors_internal`
* **Description:** Number of refused connections due to internal server errors, for example out of memory errors, or failed thread starts. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Connection_errors_max_connections`
* **Description:** Number of refused connections due to the [max\_connections](../server-system-variables/index#max_connections) limit being reached. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Connection_errors_peer_address`
* **Description:** Number of errors while searching for the connecting client IP address. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Connection_errors_select`
* **Description:** Number of errors during calls to select() or poll() on the listening port. The client would not necessarily have been rejected in these cases. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Connection_errors_tcpwrap`
* **Description:** Number of connections the libwrap library refused. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Connections`
* **Description:** Number of connection attempts (both successful and unsuccessful).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Cpu_time`
* **Description:** Total CPU time used.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Created_tmp_disk_tables`
* **Description:** Number of on-disk temporary tables created.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Created_tmp_files`
* **Description:** Number of temporary files created. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Created_tmp_tables`
* **Description:** Number of in-memory temporary tables created.
* **Scope:** Global
* **Data Type:** `numeric`
---
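The two temporary-table counters above are often read together as a "spill to disk" share. A minimal sketch, using this page's definitions (`Created_tmp_disk_tables` counts on-disk tables, `Created_tmp_tables` counts in-memory ones) and hypothetical counter values:

```python
def tmp_disk_share(created_tmp_disk_tables: int, created_tmp_tables: int) -> float:
    """Share of implicit temporary tables that ended up on disk.

    A persistently high share suggests queries whose intermediate
    results do not fit in memory-backed temporary tables.
    """
    total = created_tmp_disk_tables + created_tmp_tables
    return created_tmp_disk_tables / total if total else 0.0

# Hypothetical values, as if read from SHOW GLOBAL STATUS:
print(tmp_disk_share(250, 750))  # → 0.25
```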
#### `Delayed_errors`
* **Description:** Number of errors which occurred while doing [INSERT DELAYED](../insert-delayed/index). The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Delayed_insert_threads`
* **Description:** Number of [INSERT DELAYED](../insert-delayed/index) threads.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Delayed_writes`
* **Description:** Number of [INSERT DELAYED](../insert-delayed/index) rows written. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Delete_scan`
* **Description:** Number of [DELETE](../delete/index)s that required a full table scan.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Empty_queries`
* **Description:** Number of queries returning no results. Note this is not the same as [Com\_empty\_query](#com_empty_query).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Executed_events`
* **Description:** Number of times events created with [CREATE EVENT](../create-event/index) have executed. This differs from [Com\_create\_event](#com_create_event) in that it is only incremented when the event has executed, not when it is created.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Executed_triggers`
* **Description:** Number of times triggers created with [CREATE TRIGGER](../create-trigger/index) have executed. This differs from [Com\_create\_trigger](#com_create_trigger) in that it is only incremented when the trigger has executed, not when it is created.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_application_time_periods`
* **Description:** Number of times a table created with [periods](../create-table/index#periods) has been opened.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.4.5](https://mariadb.com/kb/en/mariadb-1045-release-notes/)
---
#### `Feature_check_constraint`
* **Description:** Number of times [constraints](../constraint/index) were checked. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/)
---
#### `Feature_custom_aggregate_functions`
* **Description:** Number of queries which make use of [custom aggregate functions](../stored-aggregate-function/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.6](https://mariadb.com/kb/en/mariadb-1036-release-notes/)
---
#### `Feature_delay_key_write`
* **Description:** Number of tables opened that are using [delay\_key\_write](../server-system-variables/index#delay_key_write). The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_dynamic_columns`
* **Description:** Number of times the [COLUMN\_CREATE()](../column_create/index) function was used.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_fulltext`
* **Description:** Number of times the [MATCH … AGAINST()](../match-against/index) function was used.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_gis`
* **Description:** Number of times a table with any of the [geometry](../geometry-types/index) columns was opened.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_insert_returning`
* **Description:** Number of `INSERT ... RETURNING` statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.5.0](https://mariadb.com/kb/en/mariadb-1050-release-notes/)
---
#### `Feature_invisible_columns`
* **Description:** Number of invisible columns in all opened tables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Feature_json`
* **Description:** Number of times JSON functionality has been used, such as one of the [JSON functions](../json-functions/index). Does not include the [CONNECT engine JSON type](../connect-json-table-type/index), or [EXPLAIN/ANALYZE FORMAT=JSON](../analyze-statement/index#analyze-formatjson).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Feature_locale`
* **Description:** Number of times the [@@lc\_messages](../server-system-variables/index#lc_messages) variable was assigned a value.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_subquery`
* **Description:** Number of subqueries (excluding subqueries in the FROM clause) used.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_system_versioning`
* **Description:** Number of times [system versioning](../system-versioned-tables/index) functionality has been used (opening a table WITH SYSTEM VERSIONING).
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.7](https://mariadb.com/kb/en/mariadb-1037-release-notes/)
---
#### `Feature_timezone`
* **Description:** Number of times an explicit timezone (excluding [UTC](../coordinated-universal-time/index) and SYSTEM) was specified.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_trigger`
* **Description:** Number of triggers loaded.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Feature_window_functions`
* **Description:** Number of times [window functions](../window-functions/index) were used.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/)
---
#### `Feature_xml`
* **Description:** Number of times XML functions ([EXTRACTVALUE()](../extractvalue/index) and [UPDATEXML()](../updatexml/index)) were used.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Flush_commands`
* **Description:** Number of [FLUSH](../flush/index) statements executed, as well as internal server flush operations. This differs from [Com\_flush](#com_flush), which counts only FLUSH statements, not internal flush operations.
* **Scope:** Global
* **Data Type:** `numeric`
* **Removed:** [MariaDB 10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/)
---
#### `Handler_commit`
* **Description:** Number of internal [COMMIT](../commit/index) requests. Differs from [Com\_commit](#com_commit), which counts the number of [COMMIT](../commit/index) statements executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_delete`
* **Description:** Number of times rows have been deleted from tables. Differs from [Com\_delete](#com_delete), which counts [DELETE](../delete/index) statements.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_discover`
* **Description:** Discovery is when the server asks the NDBCLUSTER storage engine if it knows about a table with a given name. Handler\_discover indicates the number of times that tables have been discovered in this way.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_external_lock`
* **Description:** Incremented for each call to the external\_lock() function, which generally occurs at the beginning and end of access to a table instance.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_icp_attempts`
* **Description:** Number of times a pushed index condition was checked. The smaller the ratio of Handler\_icp\_attempts to [Handler\_icp\_match](#handler_icp_match), the better the filtering. See [Index Condition Pushdown](../index-condition-pushdown/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_icp_match`
* **Description:** Number of times a pushed index condition was matched. The smaller the ratio of [Handler\_icp\_attempts](#handler_icp_attempts) to Handler\_icp\_match, the better the filtering. See [Index Condition Pushdown](../index-condition-pushdown/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
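The two ICP counters above can also be read as a plain selectivity figure: the fraction of checked rows that passed the pushed condition. A sketch with made-up counter values:

```python
def icp_match_fraction(icp_attempts: int, icp_match: int) -> float:
    """Fraction of pushed-index-condition checks that matched.

    icp_attempts is Handler_icp_attempts, icp_match is Handler_icp_match.
    A low fraction means most checked rows were rejected by the pushed
    condition inside the storage engine, before reaching the server layer.
    """
    return icp_match / icp_attempts if icp_attempts else 0.0

# Hypothetical counter values:
print(icp_match_fraction(10_000, 500))  # → 0.05
```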
#### `Handler_mrr_init`
* **Description:** Counts how many MRR (multi-range read) scans were performed. See [Multi Range Read optimization](../multi-range-read-optimization/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_mrr_key_refills`
* **Description:** Number of times key buffer was refilled (not counting the initial fill). A non-zero value indicates there wasn't enough memory to do key sort-and-sweep passes in one go. See [Multi Range Read optimization](../multi-range-read-optimization/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_mrr_rowid_refills`
* **Description:** Number of times rowid buffer was refilled (not counting the initial fill). A non-zero value indicates there wasn't enough memory to do rowid sort-and-sweep passes in one go. See [Multi Range Read optimization](../multi-range-read-optimization/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_prepare`
* **Description:** Number of two-phase commit prepares.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_first`
* **Description:** Number of requests to read the first row from an index. A high value indicates many full index scans, e.g. `SELECT a FROM table_name` where `a` is an indexed column.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_key`
* **Description:** Number of row read requests based on an index value. A high value indicates indexes are regularly being used, which is usually positive.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_last`
* **Description:** Number of requests to read the last row from an index. [ORDER BY DESC](../order-by/index) results in a last-key request followed by several previous-key requests.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_next`
* **Description:** Number of requests to read the next row from an index (in order). Increments when doing an index scan or querying an index column with a range constraint.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_prev`
* **Description:** Number of requests to read the previous row from an index (in order). Mostly used with [ORDER BY DESC](../select/index#order-by).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_retry`
* **Description:** Number of read retries triggered by semi\_consistent\_read (an InnoDB feature).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Handler_read_rnd`
* **Description:** Number of requests to read a row based on its position. If this value is high, you may be using joins that don't use indexes properly, or doing many full table scans.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_rnd_deleted`
* **Description:** Number of requests to delete a row based on its position.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_read_rnd_next`
* **Description:** Number of requests to read the next row. A large number of these may indicate many table scans and improperly used indexes.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_rollback`
* **Description:** Number of transaction rollback requests given to a storage engine. Differs from [Com\_rollback](#com_rollback), which is the number of [ROLLBACK](../rollback/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_savepoint`
* **Description:** Number of transaction savepoint creation requests. Differs from [Com\_savepoint](#com_savepoint) which is the number of [SAVEPOINT](../savepoint/index) commands executed.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_savepoint_rollback`
* **Description:** Number of requests to rollback to a transaction [savepoint](../savepoint/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_tmp_delete`
* **Description:** Number of requests to delete a row in a temporary table.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/)
---
#### `Handler_tmp_update`
* **Description:** Number of requests to update a row in a temporary table.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_tmp_write`
* **Description:** Number of requests to write a row to a temporary table.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_update`
* **Description:** Number of requests to update a row in a table. Since [MariaDB 5.3](../what-is-mariadb-53/index), this no longer counts temporary tables - see [Handler\_tmp\_update](#handler_tmp_update).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Handler_write`
* **Description:** Number of requests to write a row to a table. Since [MariaDB 5.3](../what-is-mariadb-53/index), this no longer counts temporary tables - see [Handler\_tmp\_write](#handler_tmp_write).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Key_blocks_not_flushed`
* **Description:** Number of key cache blocks which have been modified but not flushed to disk.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Key_blocks_unused`
* **Description:** Number of unused key cache blocks.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Key_blocks_used`
* **Description:** Max number of key cache blocks which have been used simultaneously.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Key_blocks_warm`
* **Description:** Number of key cache blocks in the warm list.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Key_read_requests`
* **Description:** Number of key cache block read requests. See [Optimizing key\_buffer\_size](../optimizing-key_buffer_size/index).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Key_reads`
* **Description:** Number of physical index block reads. See [Optimizing key\_buffer\_size](../optimizing-key_buffer_size/index).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Key_write_requests`
* **Description:** Number of requests to write a block to the key cache.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Key_writes`
* **Description:** Number of key cache block write requests.
* **Scope:** Global
* **Data Type:** `numeric`
---
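A standard way to interpret the key cache counters above is the miss rate, the share of read requests that needed a physical read. A minimal sketch with hypothetical counter values:

```python
def key_cache_miss_rate(key_reads: int, key_read_requests: int) -> float:
    """Fraction of key cache read requests served by a physical disk read.

    key_reads is Key_reads, key_read_requests is Key_read_requests.
    A persistently high value suggests key_buffer_size is too small for
    the MyISAM index working set.
    """
    return key_reads / key_read_requests if key_read_requests else 0.0

# Hypothetical counter values:
print(key_cache_miss_rate(1_200, 600_000))  # → 0.002
```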
#### `Last_query_cost`
* **Description:** The most recent query optimizer query cost calculation. Cannot be calculated for complex queries, such as those with subqueries or UNION; for such queries it is set to 0.
* **Scope:** Session
* **Data Type:** `numeric`
---
#### `Maria_*`
* **Description:** When the Maria storage engine was renamed Aria, the Maria variables existing at the time were renamed at the same time. See [Aria Server Status Variables](../aria-server-status-variables/index).
---
#### `Max_statement_time_exceeded`
* **Description:** Number of queries that exceeded the execution time specified by [max\_statement\_time](../server-system-variables/index#max_statement_time). See [Aborting statements that take longer than a certain time to execute](../aborting-statements-that-take-longer-than-a-certain-time-to-execute/index).
* **Data Type:** `numeric`
---
#### `Max_used_connections`
* **Description:** Max number of connections ever open at the same time. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Memory_used`
* **Description:** Global or per-connection memory usage, in bytes. This includes all per-connection memory allocations, but excludes global allocations such as the key\_buffer, innodb\_buffer\_pool etc.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Memory_used_initial`
* **Description:** Amount of memory that was used when the server started to service the user connections.
* **Scope:** Global
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Not_flushed_delayed_rows`
* **Description:** Number of [INSERT DELAYED](../insert-delayed/index) rows waiting to be written.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Open_files`
* **Description:** Number of regular files currently opened by the server. Does not include sockets or pipes, or storage engines using internal functions.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Open_streams`
* **Description:** Number of currently opened streams, usually log files.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Open_table_definitions`
* **Description:** Number of currently cached .frm files.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Open_tables`
* **Description:** Number of currently opened tables, excluding temporary tables.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Opened_files`
* **Description:** Number of files the server has opened.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Opened_plugin_libraries`
* **Description:** Number of shared libraries that the server has opened to load [plugins](../plugins/index).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Opened_table_definitions`
* **Description:** Number of .frm files that have been cached.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Opened_tables`
* **Description:** Number of tables the server has opened.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Opened_views`
* **Description:** Number of views the server has opened.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Prepared_stmt_count`
* **Description:** Current number of prepared statements.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_free_blocks`
* **Description:** Number of free [query cache](../query-cache/index) memory blocks.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_free_memory`
* **Description:** Amount of free [query cache](../query-cache/index) memory.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_hits`
* **Description:** Number of requests served by the [query cache](../query-cache/index). The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_inserts`
* **Description:** Number of queries ever cached in the [query cache](../query-cache/index). The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_lowmem_prunes`
* **Description:** Number of pruning operations performed to remove old results to make space for new results in the [query cache](../query-cache/index). The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_not_cached`
* **Description:** Number of queries that are uncacheable by the [query cache](../query-cache/index), or use SQL\_NO\_CACHE. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_queries_in_cache`
* **Description:** Number of queries currently cached by the [query cache](../query-cache/index).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Qcache_total_blocks`
* **Description:** Number of blocks used by the [query cache](../query-cache/index).
* **Scope:** Global
* **Data Type:** `numeric`
---
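The Qcache\_\* counters above are usually combined into a hit rate. One common approximation (other formulas exist, e.g. using Com\_select) treats every SELECT as either a hit, a newly cached query, or an uncacheable one; the counter values below are hypothetical:

```python
def qcache_hit_rate(qcache_hits: int, qcache_inserts: int,
                    qcache_not_cached: int) -> float:
    """Approximate query cache hit rate.

    Inputs are Qcache_hits, Qcache_inserts and Qcache_not_cached as
    reported by SHOW GLOBAL STATUS.
    """
    total = qcache_hits + qcache_inserts + qcache_not_cached
    return qcache_hits / total if total else 0.0

# Hypothetical counter values:
print(qcache_hit_rate(8_000, 1_500, 500))  # → 0.8
```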
#### `Queries`
* **Description:** Number of statements executed by the server, excluding COM\_PING and COM\_STATISTICS. Differs from [Questions](#questions) in that it also counts statements executed within [stored programs](../stored-programs-and-views/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Questions`
* **Description:** Number of statements executed by the server, excluding COM\_PING, COM\_STATISTICS, COM\_STMT\_PREPARE, COM\_STMT\_CLOSE, and COM\_STMT\_RESET statements. Differs from [Queries](#queries) in that it doesn't count statements executed within [stored programs](../stored-programs-and-views/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Resultset_metadata_skipped`
* **Description:** Number of times sending the metadata has been skipped. Metadata is not resent if it does not change between the prepare and execute of a prepared statement, or between executes.
* **Scope:** Global, Session
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.6.0](https://mariadb.com/kb/en/mariadb-1060-release-notes/)
---
#### `Rows_read`
* **Description:** Number of requests to read a row (excluding temporary tables).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rows_sent`
* **Description:** Number of rows sent to the client.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Rows_tmp_read`
* **Description:** Number of requests to read a row in a temporary table.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Select_full_join`
* **Description:** Number of joins which did not use an index. If not zero, you may need to check table indexes.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Select_full_range_join`
* **Description:** Number of joins which used a range search of the first table.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Select_range`
* **Description:** Number of joins which used a range on the first table.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Select_range_check`
* **Description:** Number of joins without keys that check for key usage after each row. If not zero, you may need to check table indexes.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Select_scan`
* **Description:** Number of joins which used a full scan of the first table.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Slow_launch_threads`
* **Description:** Number of threads which took longer than [slow\_launch\_time](../server-system-variables/index#slow_launch_time) to create. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Slow_queries`
* **Description:** Number of queries which took longer than [long\_query\_time](../server-system-variables/index#long_query_time) to run. The [slow query log](../slow-query-log/index) does not need to be active for this to be recorded.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Sort_merge_passes`
* **Description:** Number of merge passes performed by the sort algorithm. If too high, you may need to look at improving your query indexes, or increasing the [sort\_buffer\_size](../server-system-variables/index#sort_buffer_size).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Sort_priority_queue_sorts`
* **Description:** The number of times that sorting was done through a priority queue. (The total number of times sorting was done is the sum of [Sort\_range](#sort_range) and [Sort\_scan](#sort_scan).) See [filesort with small LIMIT optimization](../filesort-with-small-limit-optimization/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Sort_range`
* **Description:** Number of sorts which used a range.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Sort_rows`
* **Description:** Number of rows sorted.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Sort_scan`
* **Description:** Number of sorts which used a full table scan.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
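The sort counters can be inspected the same way. A minimal sketch (`t1` and `c` are hypothetical):

```
FLUSH STATUS;  -- zero the session counters
SELECT * FROM t1 ORDER BY c LIMIT 10;  -- hypothetical query forcing a sort
SHOW SESSION STATUS LIKE 'Sort%';
```

If `Sort_merge_passes` grows quickly under real load, consider a larger [sort\_buffer\_size](../server-system-variables/index#sort_buffer_size) or better indexing.
---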
#### `Subquery_cache_hit`
* **Description:** Counter for all [subquery cache](../subquery-cache/index) hits. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Subquery_cache_miss`
* **Description:** Counter for all [subquery cache](../subquery-cache/index) misses. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
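The two counters can be combined into a hit ratio. A sketch using the `information_schema.GLOBAL_STATUS` table (the division yields `NULL` until the cache has been used):

```
SELECT hit.VARIABLE_VALUE / (hit.VARIABLE_VALUE + miss.VARIABLE_VALUE)
         AS subquery_cache_hit_ratio
FROM information_schema.GLOBAL_STATUS hit,
     information_schema.GLOBAL_STATUS miss
WHERE hit.VARIABLE_NAME  = 'SUBQUERY_CACHE_HIT'
  AND miss.VARIABLE_NAME = 'SUBQUERY_CACHE_MISS';
```
---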
#### `Syncs`
* **Description:** Number of times my\_sync() has been called, or the number of times the server has had to force data to disk. Covers the [binary log](../binary-log/index), .frm creation (if these operations are configured to sync) and some storage engines ([Archive](../archive/index), [CSV](../csv/index), [Aria](../aria/index)), but not [XtraDB/InnoDB](../innodb/index).
* **Scope:** Global, Session
* **Data Type:** `numeric`
---
#### `Table_locks_immediate`
* **Description:** Number of table locks which were completed immediately. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Table_locks_waited`
* **Description:** Number of table locks which had to wait. Indicates table lock contention. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
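Together with `Table_locks_immediate`, this counter gives a lock-contention ratio. A sketch using the `information_schema.GLOBAL_STATUS` table:

```
SELECT w.VARIABLE_VALUE / (w.VARIABLE_VALUE + i.VARIABLE_VALUE)
         AS table_lock_contention
FROM information_schema.GLOBAL_STATUS w,
     information_schema.GLOBAL_STATUS i
WHERE w.VARIABLE_NAME = 'TABLE_LOCKS_WAITED'
  AND i.VARIABLE_NAME = 'TABLE_LOCKS_IMMEDIATE';
```

A persistently high ratio on table-locking engines such as MyISAM is a hint to consider a row-locking engine or to split hot tables.
---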
#### `Table_open_cache_active_instances`
* **Description:** Number of active instances for open tables cache lookups.
* **Scope:**
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Table_open_cache_hits`
* **Description:** Number of hits for open tables cache lookups.
* **Scope:**
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Table_open_cache_misses`
* **Description:** Number of misses for open tables cache lookups.
* **Scope:**
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Table_open_cache_overflows`
* **Description:** Number of overflows for open tables cache lookups.
* **Scope:**
* **Data Type:** `numeric`
* **Introduced:** [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)
---
#### `Tc_log_max_pages_used`
* **Description:** Max number of pages used by the memory-mapped file-based [transaction coordinator log](../transaction-coordinator-log/index). The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Tc_log_page_size`
* **Description:** Page size of the memory-mapped file-based [transaction coordinator log](../transaction-coordinator-log/index).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Tc_log_page_waits`
* **Description:** Number of times a two-phase commit was forced to wait for a free memory-mapped file-based [transaction coordinator log](../transaction-coordinator-log/index) page. The global value can be flushed by `[FLUSH STATUS](../flush/index)`.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Threads_cached`
* **Description:** Number of threads cached in the thread cache. This value will be zero if the [thread pool](../thread-pool-in-mariadb/index) is in use.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Threads_connected`
* **Description:** Number of clients connected to the server. See [Handling Too Many Connections](../handling-too-many-connections/index). The `Threads_connected` name is inaccurate when the [thread pool](../thread-pool-in-mariadb/index) is in use, since each client connection does not correspond to a dedicated thread in that case.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Threads_created`
* **Description:** Number of threads created to respond to client connections. If too large, look at increasing [thread\_cache\_size](../server-system-variables/index#thread_cache_size).
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Threads_running`
* **Description:** Number of client connections that are actively running a command, and not just sleeping while waiting to receive the next command to execute. Some internal system threads also count towards this status variable if they would show up in the output of the `[SHOW PROCESSLIST](../show-processlist/index)` statement.
+ In [MariaDB 10.3.2](https://mariadb.com/kb/en/mariadb-1032-release-notes/) and before, a global counter was updated each time a client connection dispatched a command. In these versions, the global and session status variables always have the same value.
+ In [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) and later, the global counter has been removed as a performance improvement. Instead, when the global status variable is queried, it is calculated dynamically by essentially adding up all the running client connections as they would appear in `[SHOW PROCESSLIST](../show-processlist/index)` output. A client connection is only considered to be running if its thread `[COMMAND](../thread-command-values/index)` value is not equal to `Sleep`. When the session status variable is queried, it always returns `1`.
* **Scope:** Global
* **Data Type:** `numeric`
---
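The dynamic calculation described for [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) and later can be approximated by hand from the process list; the two figures may differ slightly because some internal system threads are also counted:

```
SHOW GLOBAL STATUS LIKE 'Threads_running';
-- Roughly the same figure, derived from the process list:
SELECT COUNT(*) FROM information_schema.PROCESSLIST
WHERE COMMAND <> 'Sleep';
```
---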
#### `Update_scan`
* **Description:** Number of updates that required a full table scan.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Uptime`
* **Description:** Number of seconds the server has been running.
* **Scope:** Global
* **Data Type:** `numeric`
---
#### `Uptime_since_flush_status`
* **Description:** Number of seconds since the last [FLUSH STATUS](../flush/index).
* **Scope:** Global
* **Data Type:** `numeric`
---
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Information Schema KEY_CACHES Table Information Schema KEY\_CACHES Table
====================================
The [Information Schema](../information_schema/index) `KEY_CACHES` table shows statistics about the [segmented key cache](../segmented-key-cache/index).
It contains the following columns:
| Column Name | Description |
| --- | --- |
| `KEY_CACHE_NAME` | The name of the key cache |
| `SEGMENTS` | Total number of segments (set to `NULL` for regular key caches) |
| `SEGMENT_NUMBER` | Segment number (set to `NULL` for regular key caches and for rows containing aggregated statistics for segmented key caches) |
| `FULL_SIZE` | Memory for cache buffers/auxiliary structures |
| `BLOCK_SIZE` | Size of the blocks |
| `USED_BLOCKS` | Number of currently used blocks |
| `UNUSED_BLOCKS` | Number of currently unused blocks |
| `DIRTY_BLOCKS` | Number of currently dirty blocks |
| `READ_REQUESTS` | Number of read requests |
| `READS` | Number of actual reads from files into buffers |
| `WRITE_REQUESTS` | Number of write requests |
| `WRITES` | Number of actual writes from buffers into files |
Example
-------
```
SELECT * FROM information_schema.KEY_CACHES \G
********************** 1. row **********************
KEY_CACHE_NAME: default
SEGMENTS: NULL
SEGMENT_NUMBER: NULL
FULL_SIZE: 134217728
BLOCK_SIZE: 1024
USED_BLOCKS: 36
UNUSED_BLOCKS: 107146
DIRTY_BLOCKS: 0
READ_REQUESTS: 40305
READS: 21
WRITE_REQUESTS: 19239
WRITES: 358
```
mariadb Geometry Hierarchy Geometry Hierarchy
==================
Description
-----------
Geometry is the base class. It is an abstract class. The instantiable subclasses of Geometry are restricted to zero-, one-, and two-dimensional geometric objects that exist in two-dimensional coordinate space. All instantiable geometry classes are defined so that valid instances of a geometry class are topologically closed (that is, all defined geometries include their boundary).
The base Geometry class has subclasses for Point, Curve, Surface, and GeometryCollection:
* [Point](../point/index) represents zero-dimensional objects.
* Curve represents one-dimensional objects, and has subclass [LineString](../linestring/index), with sub-subclasses Line and LinearRing.
* Surface is designed for two-dimensional objects and has subclass [Polygon](../polygon/index).
* [GeometryCollection](../geometrycollection/index) has specialized zero-, one-, and two-dimensional collection classes named [MultiPoint](../multipoint/index), [MultiLineString](../multilinestring/index), and [MultiPolygon](../multipolygon/index) for modeling geometries corresponding to collections of Points, LineStrings, and Polygons, respectively. MultiCurve and MultiSurface are introduced as abstract superclasses that generalize the collection interfaces to handle Curves and Surfaces.
Geometry, Curve, Surface, MultiCurve, and MultiSurface are defined as non-instantiable classes. They define a common set of methods for their subclasses and are included for extensibility.
[Point](../point-properties/index), [LineString](../linestring-properties/index), [Polygon](../polygon-properties/index), [GeometryCollection](../geometrycollection/index), [MultiPoint](../multipoint/index), [MultiLineString](../multilinestring/index), and [MultiPolygon](../multipolygon/index) are instantiable classes.
mariadb Performance Schema users Table Performance Schema users Table
==============================
Description
-----------
Each user that connects to the server is stored as a row in the `users` table, along with current and total connections.
The table size is determined at startup by the value of the [performance\_schema\_users\_size](../performance-schema-system-variables/index#performance_schema_users_size) system variable. If this is set to `0`, user statistics will be disabled.
| Column | Description |
| --- | --- |
| `USER` | The client user name for the connection, or `NULL` for an internal thread. |
| `CURRENT_CONNECTIONS` | Current connections for the user. |
| `TOTAL_CONNECTIONS` | Total connections for the user. |
Example
-------
```
SELECT * FROM performance_schema.users;
+------------------+---------------------+-------------------+
| USER | CURRENT_CONNECTIONS | TOTAL_CONNECTIONS |
+------------------+---------------------+-------------------+
| debian-sys-maint | 0 | 35 |
| NULL | 20 | 23 |
| root | 1 | 2 |
+------------------+---------------------+-------------------+
```
mariadb Geometry Types Geometry Types
==============
Description
-----------
MariaDB provides a standard way of creating spatial columns for geometry types, for example, with [CREATE TABLE](../create-table/index) or [ALTER TABLE](../alter-table/index). Currently, spatial columns are supported for [MyISAM](../myisam/index), [InnoDB](../innodb/index) and [ARCHIVE](../archive/index) tables. See also [SPATIAL INDEX](../spatial/index).
The basic geometry type is `GEOMETRY`. But the type can be more specific. The following types are supported:
| Geometry Types |
| --- |
| [POINT](../point/index) |
| [LINESTRING](../linestring/index) |
| [POLYGON](../polygon/index) |
| [MULTIPOINT](../multipoint/index) |
| [MULTILINESTRING](../multilinestring/index) |
| [MULTIPOLYGON](../multipolygon/index) |
| [GEOMETRYCOLLECTION](../geometrycollection/index) |
| GEOMETRY |
Examples
--------
**Note:** For clarity, only one type is listed per table in the examples below, but a table row can contain multiple types. For example:
```
CREATE TABLE object (shapeA POLYGON, shapeB LINESTRING);
```
### [POINT](../point/index)
```
CREATE TABLE gis_point (g POINT);
SHOW FIELDS FROM gis_point;
INSERT INTO gis_point VALUES
(PointFromText('POINT(10 10)')),
(PointFromText('POINT(20 10)')),
(PointFromText('POINT(20 20)')),
(PointFromWKB(AsWKB(PointFromText('POINT(10 20)'))));
```
### [LINESTRING](../linestring/index)
```
CREATE TABLE gis_line (g LINESTRING);
SHOW FIELDS FROM gis_line;
INSERT INTO gis_line VALUES
(LineFromText('LINESTRING(0 0,0 10,10 0)')),
(LineStringFromText('LINESTRING(10 10,20 10,20 20,10 20,10 10)')),
(LineStringFromWKB(AsWKB(LineString(Point(10, 10), Point(40, 10)))));
```
### [POLYGON](../polygon/index)
```
CREATE TABLE gis_polygon (g POLYGON);
SHOW FIELDS FROM gis_polygon;
INSERT INTO gis_polygon VALUES
(PolygonFromText('POLYGON((10 10,20 10,20 20,10 20,10 10))')),
(PolyFromText('POLYGON((0 0,50 0,50 50,0 50,0 0), (10 10,20 10,20 20,10 20,10 10))')),
(PolyFromWKB(AsWKB(Polygon(LineString(Point(0, 0), Point(30, 0), Point(30, 30), Point(0, 0))))));
```
### [MULTIPOINT](../multipoint/index)
```
CREATE TABLE gis_multi_point (g MULTIPOINT);
SHOW FIELDS FROM gis_multi_point;
INSERT INTO gis_multi_point VALUES
(MultiPointFromText('MULTIPOINT(0 0,10 10,10 20,20 20)')),
(MPointFromText('MULTIPOINT(1 1,11 11,11 21,21 21)')),
(MPointFromWKB(AsWKB(MultiPoint(Point(3, 6), Point(4, 10)))));
```
### [MULTILINESTRING](../multilinestring/index)
```
CREATE TABLE gis_multi_line (g MULTILINESTRING);
SHOW FIELDS FROM gis_multi_line;
INSERT INTO gis_multi_line VALUES
(MultiLineStringFromText('MULTILINESTRING((10 48,10 21,10 0),(16 0,16 23,16 48))')),
(MLineFromText('MULTILINESTRING((10 48,10 21,10 0))')),
(MLineFromWKB(AsWKB(MultiLineString(LineString(Point(1, 2), Point(3, 5)), LineString(Point(2, 5), Point(5, 8), Point(21, 7))))));
```
### [MULTIPOLYGON](../multipolygon/index)
```
CREATE TABLE gis_multi_polygon (g MULTIPOLYGON);
SHOW FIELDS FROM gis_multi_polygon;
INSERT INTO gis_multi_polygon VALUES
(MultiPolygonFromText('MULTIPOLYGON(((28 26,28 0,84 0,84 42,28 26),(52 18,66 23,73 9,48 6,52 18)),((59 18,67 18,67 13,59 13,59 18)))')),
(MPolyFromText('MULTIPOLYGON(((28 26,28 0,84 0,84 42,28 26),(52 18,66 23,73 9,48 6,52 18)),((59 18,67 18,67 13,59 13,59 18)))')),
(MPolyFromWKB(AsWKB(MultiPolygon(Polygon(LineString(Point(0, 3), Point(3, 3), Point(3, 0), Point(0, 3)))))));
```
### [GEOMETRYCOLLECTION](../geometrycollection/index)
```
CREATE TABLE gis_geometrycollection (g GEOMETRYCOLLECTION);
SHOW FIELDS FROM gis_geometrycollection;
INSERT INTO gis_geometrycollection VALUES
(GeomCollFromText('GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(0 0,10 10))')),
(GeometryFromWKB(AsWKB(GeometryCollection(Point(44, 6), LineString(Point(3, 6), Point(7, 9)))))),
(GeomFromText('GeometryCollection()')),
(GeomFromText('GeometryCollection EMPTY'));
```
### [GEOMETRY](../geometry/index)
```
CREATE TABLE gis_geometry (g GEOMETRY);
SHOW FIELDS FROM gis_geometry;
INSERT into gis_geometry SELECT * FROM gis_point;
INSERT into gis_geometry SELECT * FROM gis_line;
INSERT into gis_geometry SELECT * FROM gis_polygon;
INSERT into gis_geometry SELECT * FROM gis_multi_point;
INSERT into gis_geometry SELECT * FROM gis_multi_line;
INSERT into gis_geometry SELECT * FROM gis_multi_polygon;
INSERT into gis_geometry SELECT * FROM gis_geometrycollection;
```
mariadb Merging New XtraDB Releases (obsolete) Merging New XtraDB Releases (obsolete)
======================================
**Note:** This page is obsolete. The information is old, outdated, or otherwise currently incorrect. We are keeping the page for historical reasons only. **Do not** rely on the information in this article.
### Background
Percona used to maintain XtraDB as a patch series against the InnoDB plugin. This affected how we started merging XtraDB in.
Now Percona maintains a normal source repository on launchpad (`lp:percona-server`). But we continue to merge the old way to preserve the history of our changes.
### Merging
There used to be a `lp:percona-xtradb` tree, that we were merging from as:
```
bzr merge lp:percona-xtradb
```
Now we have to maintain our own XtraDB-5.5 repository to merge from. It is `lp:~maria-captains/maria/xtradb-mergetree-5.5`. Follow the procedures as described in [Merging with a merge tree](../merging-with-a-merge-tree/index) to merge from it.
mariadb Stored Routine Privileges Stored Routine Privileges
=========================
It's important to give careful thought to the privileges associated with [stored functions](../stored-functions/index) and [stored procedures](../stored-procedures/index). The following is an explanation of how they work.
Creating Stored Routines
------------------------
* To create a stored routine, the `[CREATE ROUTINE](../grant/index#database-privileges)` privilege is needed. The `[SUPER](../grant/index#global-privileges)` privilege is required if a `DEFINER` is declared that's not the creator's account (see [DEFINER clause](#definer-clause) below). The `SUPER` privilege is also required if statement-based binary logging is used. See [Binary Logging of Stored Routines](../binary-logging-of-stored-routines/index) for more details.
Altering Stored Routines
------------------------
* To make changes to, or drop, a stored routine, the `[ALTER ROUTINE](../grant/index#function-privileges)` privilege is needed. The creator of a routine is temporarily granted this privilege if they attempt to change or drop a routine they created, unless the [automatic\_sp\_privileges](../server-system-variables/index#automatic_sp_privileges) variable is set to `0` (it defaults to 1).
* The `SUPER` privilege is also required if statement-based binary logging is used. See [Binary Logging of Stored Routines](../binary-logging-of-stored-routines/index) for more details.
Running Stored Routines
-----------------------
* To run a stored routine, the `[EXECUTE](../grant/index#procedure-privileges)` privilege is needed. This is also temporarily granted to the creator if they attempt to run their routine unless the [automatic\_sp\_privileges](../server-system-variables/index#automatic_sp_privileges) variable is set to `0`.
* The `[SQL SECURITY clause](#sql-security-clause)` (by default `DEFINER`) specifies what privileges are used when a routine is called. If `SQL SECURITY` is `INVOKER`, the function body will be evaluated using the privileges of the user calling the function. If `SQL SECURITY` is `DEFINER`, the function body is always evaluated using the privileges of the definer account. `DEFINER` is the default. Thus, by default, users who can access the database associated with the stored routine can also run the routine, and potentially perform operations they wouldn't normally have permissions for.
* The creator of a routine is the account that ran the `[CREATE FUNCTION](../create-function/index)` or `[CREATE PROCEDURE](../create-procedure/index)` statement, regardless of whether a `DEFINER` is provided. The definer is by default the creator unless otherwise specified.
* The server automatically updates the privileges in the [mysql.proc](../mysqlproc-table/index) table as required, but will not detect manual changes to it.
### DEFINER Clause
If left out, the `DEFINER` is treated as the account that created the stored routine or view. If the account creating the routine has the `SUPER` privilege, another account can be specified as the `DEFINER`.
### SQL SECURITY Clause
This clause specifies the context the stored routine or view will run as. It can take two values - `DEFINER` or `INVOKER`. `DEFINER` is the account specified as the `DEFINER` when the stored routine or view was created (see the section above). `INVOKER` is the account invoking the routine or view.
As an example, let's assume a routine, created by a superuser who's specified as the `DEFINER`, deletes all records from a table. If `SQL SECURITY=DEFINER`, anyone running the routine, regardless of whether they have delete privileges, will be able to delete the records. If `SQL SECURITY = INVOKER`, the routine will only delete the records if the account invoking the routine has permission to do so.
`INVOKER` is usually less risky, as a user cannot perform any operations they're normally unable to. However, it's not uncommon for accounts to have relatively limited permissions, but be specifically granted access to routines, which are then invoked in the `DEFINER` context.
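As a sketch of the scenario above (`app.log` and the `reporter` account are hypothetical):

```
-- Created by a superuser, who becomes the implicit DEFINER:
CREATE PROCEDURE app.purge_log()
  SQL SECURITY DEFINER
  DELETE FROM app.log;

GRANT EXECUTE ON PROCEDURE app.purge_log TO 'reporter'@'%';

-- 'reporter' can now empty app.log with CALL app.purge_log(),
-- even without holding the DELETE privilege on app.log.
-- With SQL SECURITY INVOKER instead, the CALL would fail
-- unless 'reporter' itself held DELETE on app.log.
```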
Dropping Stored Routines
------------------------
All privileges that are specific to a stored routine will be dropped when a [DROP FUNCTION](../drop-function/index) or [DROP PROCEDURE](../drop-procedure/index) is run. However, if a [CREATE OR REPLACE FUNCTION](../create-function/index) or [CREATE OR REPLACE PROCEDURE](../create-procedure/index) is used to drop and replace the routine, any privileges specific to that routine will not be dropped.
See Also
--------
* [Changing the DEFINER of MySQL stored routines etc.](https://mariadb.com/blog/changing-definer-mysql-stored-routines-etc) - mariadb.com blog post on what to do after you've dropped a user, and now want to change the DEFINER on all database objects that currently have it set to this dropped user.
mariadb Query Cache Thread States Query Cache Thread States
=========================
This article documents thread states that are related to the [Query Cache](../query-cache/index). These correspond to the `STATE` values listed by the [SHOW PROCESSLIST](../show-processlist/index) statement or in the [Information Schema PROCESSLIST Table](../information-schema-processlist-table/index) as well as the `PROCESSLIST_STATE` value listed in the [Performance Schema threads Table](../performance-schema-threads-table/index).
| Value | Description |
| --- | --- |
| checking privileges on cached query | Checking whether the user has permission to access a result in the query cache. |
| checking query cache for query | Checking whether the current query exists in the query cache. |
| invalidating query cache entries | Marking query cache entries as invalid as the underlying tables have changed. |
| sending cached result to client | A result found in the query cache is being sent to the client. |
| storing result in query cache | Saving the result of a query into the query cache. |
| Waiting for query cache lock | Waiting to take a query cache lock. |
mariadb OpenStreetMap Dataset OpenStreetMap Dataset
=====================
This page describes how to use the OpenStreetMap dataset in testing.
Database Schema
---------------
The database schema is available [here](../osmdb06sql/index). To import:
```
mysqladmin create osm
cat osmdb06.sql | mysql osm
```
By default, this schema uses a mixture of InnoDB and MyISAM tables. To convert all tables to Aria:
```
sed -i -e 's/InnoDB/Aria/gi' osmdb06.sql
sed -i -e 's/MyISAM/Aria/gi' osmdb06.sql
```
30 tables are created.
Data
----
The data is provided in the form of XML files (.OSM files) that require the Java-based [Osmosis](http://wiki.openstreetmap.org/wiki/Osmosis) tool to load into MariaDB. The tool is available from [dev.openstreetmap.org](http://dev.openstreetmap.org/~bretth/osmosis-build/osmosis-latest.tgz). Version 0.36 is known to work.
Various .OSM files are available, including the [entire world](http://wiki.openstreetmap.org/wiki/Planet.osm) (>200Gb unzipped) and [individual countries](http://download.geofabrik.de/osm/).
Data is loaded with the following command-line (in the example, we're using the bulgaria.osm file, replace with the file you choose):
```
chmod +x bin/osmosis
bin/osmosis --read-xml file=bulgaria.osm --write-apidb dbType="mysql" host="localhost:port" validateSchemaVersion=no database="osm" user="root" password="<password-goes-here>"
```
Data is inserted into 19 tables, as follows:
```
MariaDB [(none)]> use information_schema;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [information_schema]> select TABLE_NAME, TABLE_ROWS from TABLES
-> where TABLE_ROWS > 0
-> AND
-> TABLE_SCHEMA='osm'
-> ORDER BY TABLE_ROWS DESC;
+--------------------------+------------+
| TABLE_NAME | TABLE_ROWS |
+--------------------------+------------+
| current_way_nodes | 1559099 |
| way_nodes | 1559099 |
| current_nodes | 1477247 |
| nodes | 1477247 |
| node_tags | 311751 |
| way_tags | 287585 |
| ways | 100007 |
| current_ways | 100007 |
| changeset_tags | 18738 |
| current_relation_members | 14560 |
| relation_members | 14560 |
| changesets | 9369 |
| relation_tags | 3948 |
| current_relations | 937 |
| relations | 937 |
| users | 537 |
+--------------------------+------------+
16 rows in set (0.00 sec)
```
mariadb Graphical and Enhanced Clients Graphical and Enhanced Clients
===============================
This list is incomplete - most MySQL tools will work with MariaDB. See also a list of projects that officially [work with MariaDB](../works-with-mariadb/index).
| Title | Description |
| --- | --- |
| [dbForge Studio for MariaDB](../dbforge-studio-for-mariadb-universal-gui-tool-for-management-administration/index) | Universal GUI Tool for Management & Administration, Development for MariaDB and MySQL |
| [DBeaver](../graphical-and-enhanced-clients-dbeaver/index) | Free convenient cross-platform and cross-database Java GUI client |
| [ERBuilder Data Modeler](../erbuilder-data-modeler/index) | A data modeling tool for multiple databases platforms including MariaDB, MySQL, and more ... |
| [SQLyog: Community Edition](../sqlyog-community-edition/index) | SQLyog Community Edition |
| [HeidiSQL](../heidisql/index) | Windows GUI client for MariaDB and MySQL. |
| [Navicat](../navicat/index) | Graphical front-end for MariaDB |
| [Querious](../querious/index) | Mac OS X tool for database administration |
| [TablePlus](../tableplus/index) | A modern, native GUI client for multiple databases |
| [Database Workbench](../database-workbench/index) | Database development environment for multiple database systems including MySQL and MariaDB |
| [Moon Modeler](../moon-modeler/index) | Moon Modeler is a database design tool for MariaDB. Draw ER diagrams, visua... |
| [SQL Diagnostic Manager & SQLyog](../cost-effective-agentless-mariadb-database-performance-management/index) | Graphical MariaDB manager and monitor |
| [mycli](../mycli/index) | Command line interface with auto-completion and syntax highlighting |
| [ocelotgui](../ocelotgui/index) | Linux client for MySQL and MariaDB |
| [phpMyAdmin](../phpmyadmin/index) | Web-based MariaDB administration tool |
| [Sequel Pro](../graphical-and-enhanced-clients-sequel-pro/index) | Database management tool running on Mac |
| [SQLTool Pro Database Editor](../sqltool-pro-database-editor/index) | Android SQL client |
| [dbForge Data Compare](../dbforge-data-compare/index) | A tool for MariaDB & MySQL data comparison and synchronization of data betw... |
| [dbForge Data Generator](../dbforge-data-generator/index) | A tool for generation of large volumes of meaningful test table data. |
| [dbForge Documenter for MariaDB and MySQL](../dbforge-documenter-for-mariadb-and-mysql/index) | dbForge Documenter is a useful tool for MariaDB and MySQL database for the ... |
| [dbForge Fusion: MySQL & MariaDB Plugin for VS](../dbforge-fusion-mysql-mariadb-plugin-for-vs/index) | Visual Studio plugin designed to simplify database development and management. |
| [dbForge Query Builder for MySQL & MariaDB](../dbforge-query-builder-for-mysql-mariadb/index) | A tool for visual query creation without code typing. |
| [dbForge Schema Compare for MariaDB & MySQL](../dbforge-schema-compare-for-mariadb-mysql/index) | A tool for comparison and synchronization of DDL differences between database objects. |
| [DbSchema](../dbschema/index) | Mariadb Diagram Designer & Admin GUI Tool |
| [Improved SQL Document Parser Performance in Updated dbForge Tools for MySQL and MariaDB](../graphical-and-enhanced-clients-improved-sql-document-parser-performance-in-/index) | Devart has upgraded dbForge Tools for MySQL and MariaDB with improved SQL d... |
| [OmniDB](../graphical-and-enhanced-clients-omnidb/index) | Browser-based IDE for MariaDB Administration |
| [TOAD Edge](../toad-edge/index) | Windows GUI for MySQL. SQL Syntax Check. Freeware (Basic Features) & Payware (Extended Features). |
| [TOAD for MySQL](../toad-for-mysql-80/index) | Windows GUI for MySQL. Compatible with MariaDB. Freeware. SQL syntax check. |
| [SQLPro Studio](../sqlpro-studio/index) | SQLPro Studio is a fully native database client for macOS and iOS. |
| [SB Data Generator](../sb-data-generator/index) | A tool to generate and populate selected tables or entire databases with realistic test data. |
| [Beekeeper Studio](../beekeeper-studio/index) | Open source and free GUI with a focus on usability. Mac, Linux, and Windows |
| [LibreOffice Base](../libreoffice-base/index) | An open source RDBMS front-end tool to create and manage various databases |
| [Valentina Studio](../valentina-studio/index) | Free, advanced MariaDB GUI native on macOS, Windows & Linux, with advanced commercial version |
| [DbVisualizer](../dbvisualizer/index) | Cross-platform universal database tool supporting MariaDB, PostgreSQL, MySQL and more |
mariadb ps_thread_account ps\_thread\_account
===================
Syntax
------
```
sys.ps_thread_account(thread_id)
```
Description
-----------
`ps_thread_account` is a [stored function](../stored-functions/index) available with the [Sys Schema](../sys-schema/index) that returns the account (username@hostname) associated with the given *thread\_id*.
Returns `NULL` if the thread\_id is not found.
Examples
--------
```
SELECT sys.ps_thread_account(sys.ps_thread_id(CONNECTION_ID()));
+----------------------------------------------------------+
| sys.ps_thread_account(sys.ps_thread_id(CONNECTION_ID())) |
+----------------------------------------------------------+
| msandbox@localhost |
+----------------------------------------------------------+
SELECT sys.ps_thread_account(sys.ps_thread_id(2042));
+-----------------------------------------------+
| sys.ps_thread_account(sys.ps_thread_id(2042)) |
+-----------------------------------------------+
| NULL |
+-----------------------------------------------+
SELECT sys.ps_thread_account(sys.ps_thread_id(NULL));
+-----------------------------------------------+
| sys.ps_thread_account(sys.ps_thread_id(NULL)) |
+-----------------------------------------------+
| msandbox@localhost |
+-----------------------------------------------+
```
MariaDB Developer Meeting - Athens - Sunday, 13 Nov 2011
=========================================================
Agenda and notes from day 3 of the MariaDB Developer Meeting in Athens, Greece.
Agenda
------
| Time | Main Track | Side Track |
| --- | --- | --- |
| 09:00‑10:00 | Email & Hacking time (Public) | |
| 10:00‑11:00 | Project Tracking tool discussion (Public) | |
| 11:00‑11:15 | Small break | |
| 11:15‑12:30 | State of SkySQL (Public) | |
| 12:30‑13:30 | Lunch | |
| 13:30‑14:00 | Unconference (Public) | MariaDB Benchmarking (An overview of benchmarking scripts Vlado has developed, future plans for those, etc. Show us degree of automation: we come up with a query over DBT-3 dataset and let's see if we could re-run it and get performance tables/charts.) (Public) |
| 14:00‑15:15 | Unconference (Public) | Website Consolidation and Re‑Design (Public) |
| 15:15‑15:45 | Coffee break | |
| 15:45‑17:30 | Monty Program Company Meeting (Private) | |
[Printed Schedule](http://askmonty.org/blog/wp-content/uploads/2011/11/MariaDB-%CE%B5%CF%86%CE%B7%CE%BC%CE%B5%CF%81%CE%AF%CE%B4%CE%B1-13-Nov-2011.pdf) (pdf)
Notes
-----
| Title | Description |
| --- | --- |
Backup/Restore + Data Export/Import via dbForge Studio
======================================================
Without a doubt, you want your backup/restore and export/import operations to be fast, easy, and automated wherever possible. You can have it all that way with [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/). As the name implies, it is an IDE for MySQL development, management, and administration, yet it works just as perfectly as a [MariaDB GUI client](https://www.devart.com/dbforge/mysql/studio/mariadb-gui-client.html). Now, let's see how it tackles routine database backups.
Create a MariaDB backup
-----------------------
1. On the **Database** menu, go to **Backup and Restore**, and click **Backup Database** to open **Database Backup Wizard**.
2. On the **General** page, specify the required connection and database, the path for the backup file to be saved to, and the output file name in the respective fields. Optionally, you can append a timestamp to the file name, enable the auto-deletion of old files, and compress your backup into an archive. After you set it all up, click **Next**.
3. On the **Backup content** page, select the content for your backup and click **Next**.
4. On the **Options** page, configure your detailed backup options—there are quite a few of those to match your requirements most precisely. Then click **Next**.
5. On the **Errors handling** page, configure the **Errors handling** and **Log settings** options. Afterwards, click **Backup** to run the backup process.
Note that you have two more options here: you can select **Save Project** to save your current backup project with all the settings—or you can select **Save Command Line** to save a backup script that you can execute from the command line whenever you need.
6. After you click **Backup**, wait for the backup process to be completed.
Note that you don't have to go through every wizard page to click **Backup**. You can do it whenever you've finished configuring your settings.
7. Finally, confirm the successful completion by clicking **Finish**.
As you can see, it's very easy. Furthermore, you can schedule to run regular backups using **Action > Create Basic Task** in **Windows Task Scheduler**.
Restore a MariaDB backup
------------------------
This is an even faster task, done in half as many steps.
1. On the **Database** menu, go to **Backup and Restore**, and click **Restore Database** to open the **Database Restore Wizard**.
2. On the **Database Script File** page, specify the required connection and database, as well as the path to the previously saved backup file.
3. After that, click **Restore**, and let the Studio do the rest for you.
And when it's done, click **Finish**, and there you have it.
You can learn more about this functionality on the dedicated [backup/restore page](https://www.devart.com/dbforge/mysql/studio/mysql-backup.html). Please note: while the page focuses on MySQL databases, everything that's described there is just as perfectly applicable to MariaDB from the same Studio with the same workflow.
Export data from MariaDB
------------------------
With dbForge Studio, you can export data to the 14 most popular formats: HTML, TXT, XLS, XLSX, MDB, RTF, PDF, JSON, XML, CSV, ODBC, DBF, SQL, and Google Sheets. You can do it with an easy-to-follow wizard that guides you through the entire process and delivers quite a few customization options.
Let's see how it works. And before we start, note that different formats may have slightly different wizard pages. In our walkthrough, we'll take the HTML format as an example.
1. To open the export wizard, on the **Database** menu, click **Export Data**.
2. On the **Export format** page, pick the required format and click **Next**.
3. On the **Source** page, select the required connection, database, as well as tables and views to be exported. Then click **Next**.
4. On the **Output settings** page, specify the path for the output, select to export data into a single or several separate files, and configure a few other settings, such as timestamps and compression. Then click **Next**.
5. On the **Options** page, configure and preview table grid options for exported data. Click **Next**.
6. On the **Data formats** page, you have two tabs. On the **Columns** tab, you can check the list of columns to be exported.
Then, on the **Formats** tab, you can adjust the default format settings for Date, Time, Date Time, Currency, Float, Integer, Boolean, Null String, as well as select the required binary encoding.
Once you make sure everything is correct, click **Next**.
7. On the **Exported rows** page, select to export all rows or define a certain range of rows, and then click **Next**.
8. On the **Errors handling** page, configure the errors handling behavior and select to keep a log file, if necessary.
But before you click **Export**, note that you can save templates with your settings for recurring export operations. To do that, click **Save** in the lower left corner of the wizard, specify a name and a destination for the template file to be saved to, and then click **Save**.
Also note that you don't have to go through every wizard page to click **Export**. You can do it whenever you've finished configuring your settings.
9. Finally, after you click **Export**, watch the progress and click **Finish** upon completion.
Done! Now, if you want, you can open the folder with the output file right away.
Import data into MariaDB
------------------------
dbForge Studio supports 10 data formats for import: TXT, XLS, XLSX, MDB, XML, JSON, CSV, ODBC, DBF, and Google Sheets. Just like with export, you have a helpful wizard at hand, whose pages may differ depending on the format. And let's pick a different format this time, say, the Microsoft Excel format (XLS).
1. To open the wizard, on the **Database** menu, click **Import Data**.
2. On the **Source file** page, choose the required format, select the file to import data from, and click **Next**.
3. On the **Destination** page, select the target connection and database. Then you can select to import data either to a new table or to an existing table. Click **Next**.
4. On the **Options** page, configure and preview table grid options for imported data. Click **Next**.
5. On the **Data formats** page, you have two tabs. The first tab is called **Common Formats**, where you can specify the required formats for null strings, thousands and decimal separators, boolean values, and date and time.
The second tab is called **Column Settings**, where you can configure format settings for separate columns.
Once you make sure everything is correct, click **Next**.
6. On the **Mapping** page, you can map the source columns to the target ones and preview the results. If you're importing data into a new table, the Studio will automatically create and map all the columns, so you will only have to make adjustments if you wish. Then click **Next**.
7. On the **Modes** page, select one of the 5 available import modes and click **Next**.
8. On the **Output** page, select the preferred output option and click **Next**.
9. On the **Errors handling** page, configure the errors handling behavior and select to keep a log file, if necessary.
Similarly to export, you can save templates with your settings for recurring import operations. To do that, click **Save** in the lower left corner of the wizard, specify a name and a destination for the template file to be saved to, and then click **Save**.
Also note that you don't have to go through every wizard page to click **Import**. You can do it whenever you've finished configuring your settings.
10. After you click **Import**, wait for the process to be completed. Then click **Finish** to confirm the successful completion, and check the results if you wish. That's it!
You can learn more about this functionality on the dedicated [data export/import page](https://www.devart.com/dbforge/mysql/studio/data-export-import.html). Please note: while the page focuses on MySQL databases, everything that's described there is just as perfectly applicable to MariaDB from the same Studio with the same workflow.
There is much more to dbForge Studio when it comes to MariaDB development and management. You can have a brief overview of its features and capabilities on [the Features page](https://www.devart.com/dbforge/mysql/studio/features.html).
That said, if you'd love to have a single IDE that doesn't need any 3rd-party extensions because it can perfectly deal with nearly any task on its own, feel free to [download dbForge Studio for a free 30-day trial](https://www.devart.com/dbforge/mysql/studio/download.html) and give it a go in your daily work.
Mroonga Status Variables
========================
This page documents status variables related to the [Mroonga storage engine](../mroonga/index). See [Server Status Variables](../server-status-variables/index) for a complete list of status variables that can be viewed with [SHOW STATUS](../show-status/index).
#### `Mroonga_count_skip`
* **Description:** Incremented each time the 'fast line count feature' is used. Can be used to check if the feature is working after enabling it.
* **Data Type:** `numeric`
---
#### `Mroonga_fast_order_limit`
* **Description:** Incremented each time the 'fast ORDER BY LIMIT feature' is used. Can be used to check if the feature is working after enabling it.
* **Data Type:** `numeric`
---
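Both counters can be checked with [SHOW STATUS](../show-status/index); for example:

```
SHOW GLOBAL STATUS LIKE 'Mroonga_%';
```

If a counter keeps increasing under your workload, the corresponding feature is being used.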
Performance Schema events\_statements\_history Table
====================================================
The `events_statements_history` table by default contains the ten most recent completed statement events per thread. This number can be adjusted by setting the [performance\_schema\_events\_statements\_history\_size](../performance-schema-system-variables/index#performance_schema_events_statements_history_size) system variable when the server starts up.
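For example, the history size could be set in a server option file (the value shown is illustrative; the Performance Schema must also be enabled for the table to be populated):

```
[mariadb]
performance_schema = ON
performance_schema_events_statements_history_size = 25
```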
The table structure is identical to the [events\_statements\_current](../performance-schema-events_statements_current-table/index) table structure, and contains the following columns:
| Column | Description |
| --- | --- |
| `THREAD_ID` | Thread associated with the event. Together with `EVENT_ID` uniquely identifies the row. |
| `EVENT_ID` | Thread's current event number at the start of the event. Together with `THREAD_ID` uniquely identifies the row. |
| `END_EVENT_ID` | `NULL` when the event starts, set to the thread's current event number at the end of the event. |
| `EVENT_NAME` | Event instrument name; a `NAME` value from the `setup_instruments` table. |
| `SOURCE` | Name and line number of the source file containing the instrumented code that produced the event. |
| `TIMER_START` | Value in picoseconds when the event timing started or `NULL` if timing is not collected. |
| `TIMER_END` | Value in picoseconds when the event timing ended, or `NULL` if timing is not collected. |
| `TIMER_WAIT` | Value in picoseconds of the event's duration or `NULL` if timing is not collected. |
| `LOCK_TIME` | Time in picoseconds spent waiting for locks. The time is calculated in microseconds but stored in picoseconds for compatibility with other timings. |
| `SQL_TEXT` | The SQL statement, or `NULL` if the command is not associated with an SQL statement. |
| `DIGEST` | [Statement digest](../performance-schema-digests/index). |
| `DIGEST_TEXT` | [Statement digest](../performance-schema-digests/index) text. |
| `CURRENT_SCHEMA` | The statement's default database, or `NULL` if there was none. |
| `OBJECT_SCHEMA` | Reserved, currently `NULL` |
| `OBJECT_NAME` | Reserved, currently `NULL` |
| `OBJECT_TYPE` | Reserved, currently `NULL` |
| `OBJECT_INSTANCE_BEGIN` | Address in memory of the statement object. |
| `MYSQL_ERRNO` | Error code. See [MariaDB Error Codes](../mariadb-error-codes/index) for a full list. |
| `RETURNED_SQLSTATE` | The [SQLSTATE](../sqlstate/index) value. |
| `MESSAGE_TEXT` | Statement error message. See [MariaDB Error Codes](../mariadb-error-codes/index). |
| `ERRORS` | `0` if `SQLSTATE` signifies completion (starting with 00) or warning (01), otherwise `1`. |
| `WARNINGS` | Number of warnings from the diagnostics area. |
| `ROWS_AFFECTED` | Number of rows the statement affected. |
| `ROWS_SENT` | Number of rows returned. |
| `ROWS_EXAMINED` | Number of rows read during the statement's execution. |
| `CREATED_TMP_DISK_TABLES` | Number of on-disk temp tables created by the statement. |
| `CREATED_TMP_TABLES` | Number of temp tables created by the statement. |
| `SELECT_FULL_JOIN` | Number of joins performed by the statement which did not use an index. |
| `SELECT_FULL_RANGE_JOIN` | Number of joins performed by the statement which used a range search of the first table. |
| `SELECT_RANGE` | Number of joins performed by the statement which used a range of the first table. |
| `SELECT_RANGE_CHECK` | Number of joins without keys performed by the statement that check for key usage after each row. |
| `SELECT_SCAN` | Number of joins performed by the statement which used a full scan of the first table. |
| `SORT_MERGE_PASSES` | Number of merge passes by the sort algorithm performed by the statement. If too high, you may need to increase the [sort\_buffer\_size](../server-system-variables/index#sort_buffer_size). |
| `SORT_RANGE` | Number of sorts performed by the statement which used a range. |
| `SORT_ROWS` | Number of rows sorted by the statement. |
| `SORT_SCAN` | Number of sorts performed by the statement which used a full table scan. |
| `NO_INDEX_USED` | `0` if the statement performed a table scan with an index, `1` if without an index. |
| `NO_GOOD_INDEX_USED` | `0` if a good index was found for the statement, `1` if no good index was found. See the `Range checked for each record` description in the [EXPLAIN](../explain/index) article. |
| `NESTING_EVENT_ID` | Reserved, currently `NULL`. |
| `NESTING_EVENT_TYPE` | Reserved, currently `NULL`. |
It is possible to empty this table with a `TRUNCATE TABLE` statement.
[events\_statements\_current](../performance-schema-events_statements_current-table/index) and [events\_statements\_history\_long](../performance-schema-events_statements_history_long-table/index) are related tables.
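For example, to list the most recently completed statement events recorded for all threads:

```
SELECT THREAD_ID, EVENT_ID, SQL_TEXT, TIMER_WAIT
FROM performance_schema.events_statements_history
ORDER BY TIMER_START DESC
LIMIT 10;
```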
COS
===
Syntax
------
```
COS(X)
```
Description
-----------
Returns the cosine of X, where X is given in radians.
Examples
--------
```
SELECT COS(PI());
+-----------+
| COS(PI()) |
+-----------+
| -1 |
+-----------+
```
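Since `COS()` expects radians, an angle in degrees must first be converted, for example with the `RADIANS()` function:

```
SELECT COS(RADIANS(60));
```

This returns a value very close to 0.5, subject to floating-point rounding.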
PolygonFromText
===============
A synonym for [ST\_PolyFromText](../st_polyfromtext/index).
Not Equal Operator: !=
======================
Syntax
------
```
<>, !=
```
Description
-----------
Not equal operator. Evaluates both SQL expressions and returns 1 if they are not equal, 0 if they are equal, or `NULL` if either expression is `NULL`. If the expressions return different data types (for instance, a number and a string), MariaDB performs type conversion.
When used in row comparisons these two queries return the same results:
```
SELECT (t1.a, t1.b) != (t2.x, t2.y)
FROM t1 INNER JOIN t2;
SELECT (t1.a != t2.x) OR (t1.b != t2.y)
FROM t1 INNER JOIN t2;
```
Examples
--------
```
SELECT '.01' <> '0.01';
+-----------------+
| '.01' <> '0.01' |
+-----------------+
| 1 |
+-----------------+
SELECT .01 <> '0.01';
+---------------+
| .01 <> '0.01' |
+---------------+
| 0 |
+---------------+
SELECT 'zapp' <> 'zappp';
+-------------------+
| 'zapp' <> 'zappp' |
+-------------------+
| 1 |
+-------------------+
```
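Because `NULL` propagates through the comparison, each of the following expressions evaluates to `NULL` rather than `1` or `0`:

```
SELECT 1 != NULL, NULL != NULL, NULL <> 0;
```

Use the NULL-safe equality operator `<=>` if you need a definite `0`/`1` result when `NULL` may be involved.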
Assignment Operator (=)
=======================
Syntax
------
```
identifier = expr
```
Description
-----------
The equal sign is used as both an assignment operator in certain contexts, and as a [comparison operator](../equal/index). When used as assignment operator, the value on the right is assigned to the variable (or column, in some contexts) on the left.
Since its use can be ambiguous, unlike the [:= assignment operator](../assignment-operator/index), the *`=`* assignment operator cannot be used in all contexts; it is only valid as part of a [SET](../set/index) statement, or the SET clause of an [UPDATE](../update/index) statement.
This operator works with both [user-defined variables](../user-defined-variables/index) and [local variables](../declare-variable/index).
Examples
--------
```
UPDATE table_name SET x = 2 WHERE x > 100;
```
```
SET @x = 1, @y := 2;
```
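A minimal sketch of assigning to a declared local variable inside a stored procedure (the procedure and variable names are illustrative):

```
DELIMITER //
CREATE PROCEDURE add_vat(IN price DECIMAL(10,2))
BEGIN
  DECLARE total DECIMAL(10,2);
  SET total = price * 1.20;  -- '=' assigns within a SET statement
  SELECT total;
END //
DELIMITER ;
```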
Aria Enabling Encryption
========================
In order to enable data-at-rest encryption for tables using the [Aria](../aria/index) storage engine, you first need to configure the server to use an [Encryption Key Management](../encryption-key-management/index) plugin. Once this is done, you can enable encryption by setting the relevant system variables.
Encrypting User-created Tables
------------------------------
With tables that the user creates, you can enable encryption by setting the `[aria\_encrypt\_tables](../aria-system-variables/index#aria_encrypt_tables)` system variable to `ON` and then restarting the server. Once this is set, Aria automatically enables encryption on all tables you subsequently create with the `[ROW\_FORMAT](../create-table/index#row_format)` table option set to `PAGE`.
Currently, Aria does not support encryption on tables where the `[ROW\_FORMAT](../create-table/index#row_format)` table option is set to the `FIXED` or `DYNAMIC` values.
Unlike InnoDB, Aria does not support the `[ENCRYPTED](../create-table/index#encrypted)` table option (see [MDEV-18049](https://jira.mariadb.org/browse/MDEV-18049) about that). Encryption for Aria can only be enabled globally using the `[aria\_encrypt\_tables](../aria-system-variables/index#aria_encrypt_tables)` system variable.
### Encrypting Existing Tables
In cases where you have existing Aria tables that you would like to encrypt, the process is a little more complicated. Unlike InnoDB, Aria does not utilize [background encryption threads](../innodb-background-encryption-threads/index) to automatically perform encryption changes (see [MDEV-18971](https://jira.mariadb.org/browse/MDEV-18971) about that). Therefore, to encrypt existing tables, you need to identify each table that needs to be encrypted, and then you need to manually rebuild each table.
First, set the `[aria\_encrypt\_tables](../aria-system-variables/index#aria_encrypt_tables)` system variable to encrypt new tables.
```
SET GLOBAL aria_encrypt_tables=ON;
```
Identify Aria tables that have the `[ROW\_FORMAT](../create-table/index#row_format)` table option set to `PAGE`.
```
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE='Aria'
AND ROW_FORMAT='PAGE'
AND TABLE_SCHEMA != 'information_schema';
```
For each table in the result-set, issue an `[ALTER TABLE](../alter-table/index)` statement to rebuild the table.
```
ALTER TABLE test.aria_table ENGINE=Aria ROW_FORMAT=PAGE;
```
This statement causes Aria to rebuild the table using the `[ROW\_FORMAT](../create-table/index#row_format)` table option. Because `aria_encrypt_tables` is now enabled, the rebuilt table is encrypted as it is written to disk.
Encrypting Internal On-disk Temporary Tables
--------------------------------------------
During the execution of queries, MariaDB routinely creates internal temporary tables. These internal temporary tables initially use the [MEMORY](../memory-storage-engine/index) storage engine, which is entirely stored in memory. When the table size exceeds the allocation defined by the `[max\_heap\_table\_size](../server-system-variables/index#max_heap_table_size)` system variable, MariaDB writes the data to disk using another storage engine. If the `[aria\_used\_for\_temp\_tables](../aria-system-variables/index#aria_used_for_temp_tables)` system variable is set to `ON`, MariaDB uses Aria when writing internal temporary tables to disk.
Encryption for internal temporary tables is handled separately from encryption for user-created tables. To enable encryption for these tables, set the `[encrypt\_tmp\_disk\_tables](../server-system-variables/index#encrypt_tmp_disk_tables)` system variable to `ON`. Once set, all internal temporary tables that are written to disk using Aria are automatically encrypted.
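For example:

```
SET GLOBAL encrypt_tmp_disk_tables=ON;
```

To keep the setting across restarts, set it in a server option file as well.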
Manually Encrypting Tables
--------------------------
Currently, Aria does not support manually encrypting tables through the `[ENCRYPTED](../create-table/index#encrypted)` and `[ENCRYPTION\_KEY\_ID](../create-table/index#encryption_key_id)` table options. For more information, see [MDEV-18049](https://jira.mariadb.org/browse/MDEV-18049).
In cases where you want to encrypt tables manually or set the specific encryption key, use [InnoDB](../innodb-encryption/index).
Date and Time Literals
======================
Standard syntaxes
-----------------
MariaDB supports the SQL standard and ODBC syntaxes for [DATE](../date/index), [TIME](../time/index) and [TIMESTAMP](../timestamp/index) literals.
SQL standard syntax:
* DATE 'string'
* TIME 'string'
* TIMESTAMP 'string'
ODBC syntax:
* {d 'string'}
* {t 'string'}
* {ts 'string'}
The timestamp literals are treated as [DATETIME](../datetime/index) literals, because in MariaDB the range of `DATETIME` is closer to the `TIMESTAMP` range in the SQL standard.
`string` is a string in a proper format, as explained below.
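For example, the standard and ODBC forms are interchangeable in expressions:

```
SELECT DATE '1994-01-01', {d '1994-01-01'};
SELECT TIMESTAMP '1994-01-01 12:30:03', {ts '1994-01-01 12:30:03'};
```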
`DATE` literals
----------------
A `DATE` string is a string in one of the following formats: `'YYYY-MM-DD'` or `'YY-MM-DD'`. Note that any punctuation character can be used as delimiter. All delimiters must consist of 1 character. Different delimiters can be used in the same string. Delimiters are optional (but if one delimiter is used, all delimiters must be used).
A `DATE` literal can also be an integer, in one of the following formats: `YYYYMMDD` or `YYMMDD`.
All the following `DATE` literals are valid, and they all represent the same value:
```
'19940101'
'940101'
'1994-01-01'
'94/01/01'
'1994-01/01'
'94:01!01'
19940101
940101
```
`DATETIME` literals
--------------------
A `DATETIME` string is a string in one of the following formats: `'YYYY-MM-DD HH:MM:SS'` or `'YY-MM-DD HH:MM:SS'`. Note that any punctuation character can be used as delimiter for the date part and for the time part. All delimiters must consist of 1 character. Different delimiters can be used in the same string. The hours, minutes and seconds parts can consist of one character. For this reason, delimiters are mandatory for `DATETIME` literals.
The delimiter between the date part and the time part can be a `T` or any sequence of space characters (including tabs, new lines and carriage returns).
A `DATETIME` literal can also be a number, in one of the following formats: `YYYYMMDDHHMMSS`, `YYMMDDHHMMSS`, `YYYYMMDD` or `YYMMDD`. In this case, all the time subparts must consist of 2 digits.
All the following `DATETIME` literals are valid, and they all represent the same value:
```
'1994-01-01T12:30:03'
'1994/01/01\n\t 12+30+03'
'1994/01\\01\n\t 12+30-03'
'1994-01-01 12:30:3'
```
`TIME` literals
----------------
A `TIME` string is a string in one of the following formats: `'D HH:MM:SS'`, `'HH:MM:SS'`, `'D HH:MM'`, `'HH:MM'`, `'D HH'`, or `'SS'`. `D` is a value from 0 to 34 which represents days. `:` is the only allowed delimiter for `TIME` literals. Delimiters are mandatory, with one exception: the `'HHMMSS'` format is allowed. When delimiters are used, each part of the literal can consist of a single character.
A `TIME` literal can also be a number in one of the following formats: `HHMMSS`, `MMSS`, or `SS`.
The following literals are equivalent:
```
'09:05:00'
'9:05:0'
'9:5:0'
'090500'
```
2-digit years
-------------
The year part in `DATE` and `DATETIME` literals is determined as follows:
* `70` - `99` = `1970` - `1999`
* `00` - `69` = `2000` - `2069`
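This rule can be observed by casting 2-digit-year strings to `DATE`:

```
SELECT CAST('69-01-01' AS DATE), CAST('70-01-01' AS DATE);
+--------------------------+--------------------------+
| CAST('69-01-01' AS DATE) | CAST('70-01-01' AS DATE) |
+--------------------------+--------------------------+
| 2069-01-01               | 1970-01-01               |
+--------------------------+--------------------------+
```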
Microseconds
------------
`DATETIME` and `TIME` literals can have an optional microseconds part. For both string and numeric forms, it is expressed as a decimal part. Up to 6 decimal digits are allowed. Examples:
```
'12:30:00.123456'
123000.123456
```
See [Microseconds in MariaDB](../microseconds-in-mariadb/index) for details.
Date and time literals and the `SQL_MODE`
-----------------------------------------
Unless the [SQL\_MODE](../sql-mode/index) `NO_ZERO_DATE` flag is set, some special values are allowed: the `'0000-00-00'` `DATE`, the `'00:00:00'` `TIME`, and the `'0000-00-00 00:00:00'` `DATETIME`.
If the `ALLOW_INVALID_DATES` flag is set, invalid dates (for example, the 30th of February) are allowed. If it is not set, an error is produced when `NO_ZERO_DATE` is set; otherwise, a zero date is returned.
Unless the `NO_ZERO_IN_DATE` flag is set, each subpart of a date or time value (years, hours...) can be set to 0.
See also
--------
* [Date and time units](../date-and-time-units/index)
SHOW COLLATION
==============
Syntax
------
```
SHOW COLLATION
[LIKE 'pattern' | WHERE expr]
```
Description
-----------
The output from `SHOW COLLATION` includes all available [collations](../data-types-character-sets-and-collations/index). The `LIKE` clause, if present on its own, indicates which collation names to match. The `WHERE` and `LIKE` clauses can be given to select rows using more general conditions, as discussed in [Extended SHOW](../extended-show/index).
The same information can be queried from the [Information Schema COLLATIONS](../information-schema-collations-table/index) table.
See [Setting Character Sets and Collations](../setting-character-sets-and-collations/index) for details on specifying the collation at the server, database, table and column levels.
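For example, the Information Schema query equivalent to filtering with `LIKE 'latin1%'`:

```
SELECT COLLATION_NAME, CHARACTER_SET_NAME, IS_DEFAULT
FROM information_schema.COLLATIONS
WHERE COLLATION_NAME LIKE 'latin1%';
```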
Examples
--------
```
SHOW COLLATION LIKE 'latin1%';
+-------------------+---------+----+---------+----------+---------+
| Collation | Charset | Id | Default | Compiled | Sortlen |
+-------------------+---------+----+---------+----------+---------+
| latin1_german1_ci | latin1 | 5 | | Yes | 1 |
| latin1_swedish_ci | latin1 | 8 | Yes | Yes | 1 |
| latin1_danish_ci | latin1 | 15 | | Yes | 1 |
| latin1_german2_ci | latin1 | 31 | | Yes | 2 |
| latin1_bin | latin1 | 47 | | Yes | 1 |
| latin1_general_ci | latin1 | 48 | | Yes | 1 |
| latin1_general_cs | latin1 | 49 | | Yes | 1 |
| latin1_spanish_ci | latin1 | 94 | | Yes | 1 |
+-------------------+---------+----+---------+----------+---------+
```
```
SHOW COLLATION WHERE Sortlen LIKE '8' AND Charset LIKE 'utf8';
+--------------------+---------+-----+---------+----------+---------+
| Collation | Charset | Id | Default | Compiled | Sortlen |
+--------------------+---------+-----+---------+----------+---------+
| utf8_unicode_ci | utf8 | 192 | | Yes | 8 |
| utf8_icelandic_ci | utf8 | 193 | | Yes | 8 |
| utf8_latvian_ci | utf8 | 194 | | Yes | 8 |
| utf8_romanian_ci | utf8 | 195 | | Yes | 8 |
| utf8_slovenian_ci | utf8 | 196 | | Yes | 8 |
| utf8_polish_ci | utf8 | 197 | | Yes | 8 |
| utf8_estonian_ci | utf8 | 198 | | Yes | 8 |
| utf8_spanish_ci | utf8 | 199 | | Yes | 8 |
| utf8_swedish_ci | utf8 | 200 | | Yes | 8 |
| utf8_turkish_ci | utf8 | 201 | | Yes | 8 |
| utf8_czech_ci | utf8 | 202 | | Yes | 8 |
| utf8_danish_ci | utf8 | 203 | | Yes | 8 |
| utf8_lithuanian_ci | utf8 | 204 | | Yes | 8 |
| utf8_slovak_ci | utf8 | 205 | | Yes | 8 |
| utf8_spanish2_ci | utf8 | 206 | | Yes | 8 |
| utf8_roman_ci | utf8 | 207 | | Yes | 8 |
| utf8_persian_ci | utf8 | 208 | | Yes | 8 |
| utf8_esperanto_ci | utf8 | 209 | | Yes | 8 |
| utf8_hungarian_ci | utf8 | 210 | | Yes | 8 |
| utf8_sinhala_ci | utf8 | 211 | | Yes | 8 |
| utf8_croatian_ci | utf8 | 213 | | Yes | 8 |
+--------------------+---------+-----+---------+----------+---------+
```
See Also
--------
* [Supported Character Sets and Collations](../supported-character-sets-and-collations/index)
* [Setting Character Sets and Collations](../setting-character-sets-and-collations/index)
* [Information Schema COLLATIONS](../information-schema-collations-table/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Arithmetic Operators Arithmetic Operators
=====================
Arithmetic operators for addition, subtraction, multiplication, division and the modulo operator
| Title | Description |
| --- | --- |
| [Addition Operator (+)](../addition-operator/index) | Addition. |
| [DIV](../div/index) | Integer division. |
| [Division Operator (/)](../division-operator/index) | Division. |
| [MOD](../mod/index) | Modulo operation. Remainder of N divided by M. |
| [Modulo Operator (%)](../modulo-operator/index) | Modulo operator. Returns the remainder of N divided by M. |
| [Multiplication Operator (\*)](../multiplication-operator/index) | Multiplication. |
| [Subtraction Operator (-)](../subtraction-operator-/index) | Subtraction and unary minus. |
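The operators above can be combined freely in a single statement. A quick illustration (note that `/` performs decimal division while `DIV` truncates to an integer, and `MOD` and `%` are equivalent):

```
SELECT 7 + 2, 7 - 2, 7 * 2, 7 / 2, 7 DIV 2, 7 MOD 2, 7 % 2;
```

This returns 9, 5, 14, 3.5000, 3, 1 and 1.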
mariadb ColumnStore Upgrade Guides ColumnStore Upgrade Guides
===========================
| Title | Description |
| --- | --- |
| [MariaDB ColumnStore 1.5 Upgrades](../mariadb-columnstore-15-upgrades/index) | How to upgrade to MariaDB ColumnStore 1.5. |
| [MariaDB ColumnStore 1.4 Upgrades](../mariadb-columnstore-14-upgrades/index) | How to upgrade to MariaDB ColumnStore 1.4. |
| [MariaDB ColumnStore 1.2 Upgrades](../mariadb-columnstore-12-upgrades/index) | |
| [MariaDB ColumnStore 1.1 Upgrades](../mariadb-columnstore-11-upgrades/index) | |
| [MariaDB ColumnStore 1.0 Upgrades](../mariadb-columnstore-10-upgrades/index) | |
| [InfiniDB Migration to ColumnStore](../infinidb-migration-to-columnstore/index) | |
mariadb Building RPM Packages From Source Building RPM Packages From Source
=================================
To generate RPM packages from the build, supply the `-DRPM=yes` flag to CMake.
The value `yes` (or any other unrecognized value) will build generic RPM packages. It is also possible to use one of the following special values, which slightly modify the layout of the resulting RPM packages:
* fedora
* centos7/rhel7
* centos8/rhel8
* sles
What these do is controlled in the following CMake files:
* `cmake/cpack_rpm.cmake`
* `cmake/build_configurations/mysql_release.cmake`
* `cmake/mariadb_connector_c.cmake`
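As a sketch, a build producing CentOS 8-style packages from a configured source tree might look like the following (the only documented switch here is `-DRPM`; the `package` target is the standard CPack packaging target):

```
cmake . -DRPM=centos8
make package
```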
See Also
--------
* [About the MariaDB RPM Files](../about-the-mariadb-rpm-files/index)
* [Building MariaDB on CentOS](../source-building-mariadb-on-centos/index)
* [Installing MariaDB RPM Files](../rpm/index)
* [MariaDB RPM Packages](../mariadb-rpm-packages/index)
mariadb Ignored Indexes Ignored Indexes
===============
**MariaDB starting with [10.6.0](https://mariadb.com/kb/en/mariadb-1060-release-notes/)**Ignored indexes were added in [MariaDB 10.6](../what-is-mariadb-106/index).
Ignored indexes are indexes that are visible and maintained, but which are not used by the optimizer. MySQL 8 has a similar feature which they call "invisible indexes".
Syntax
------
By default, an index is not ignored. One can mark an existing index as ignored (or not ignored) with an [ALTER TABLE](../alter-table/index) statement:
```
ALTER TABLE table_name ALTER {KEY|INDEX} [IF EXISTS] key_name [NOT] IGNORED;
```
It is also possible to specify the IGNORED attribute when creating an index with a [CREATE TABLE](../create-table/index) or [CREATE INDEX](../create-index/index) statement:
```
CREATE TABLE table_name (
...
INDEX index_name ( ...) [NOT] IGNORED
...
```
```
CREATE INDEX index_name ON tbl_name (...) [NOT] IGNORED;
```
A table's primary key cannot be ignored. This applies both to an explicitly defined primary key and to an implicit primary key: if there is no explicit primary key defined but the table has a unique key containing only NOT NULL columns, the first such key becomes the implicitly defined primary key.
Handling for ignored indexes
----------------------------
The optimizer treats ignored indexes as if they did not exist. They are not used in query plans or as a source of statistical information. Also, an attempt to use an ignored index in a `USE INDEX`, `FORCE INDEX`, or `IGNORE INDEX` hint will result in an error - the same error as if the name of a non-existent index had been used.
Information about whether or not indexes are ignored can be viewed in the IGNORED column in the [Information Schema STATISTICS table](../information-schema-statistics-table/index) or the [SHOW INDEX](../show-index/index) statement.
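An attempt to reference an ignored index in an index hint fails in the same way as referencing a non-existent index. For example (the exact error text may differ between versions):

```
CREATE OR REPLACE TABLE t1 (id INT PRIMARY KEY, b INT, KEY k1(b) IGNORED);
SELECT * FROM t1 FORCE INDEX (k1) WHERE b = 5;
ERROR 1176 (42000): Key 'k1' doesn't exist in table 't1'
```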
Intended Usage
--------------
The primary use case is as follows: a DBA sees an index that seems to have little or no usage and considers whether to remove it. Dropping the index is a risk as it may still be needed in a few cases. For example, the optimizer may rely on the estimates provided by the index without using the index in query plans. If dropping an index causes an issue, it will take a while to re-create the index. On the other hand, marking the index as ignored (or not ignored) is instant, so the suggested workflow is:
1. Mark the index as ignored
2. Check if everything continues to work
3. If not, mark the index as not ignored.
4. If everything continues to work, one can safely drop the index.
Examples
--------
```
CREATE TABLE t1 (id INT PRIMARY KEY, b INT, KEY k1(b) IGNORED);
```
```
CREATE OR REPLACE TABLE t1 (id INT PRIMARY KEY, b INT, KEY k1(b));
ALTER TABLE t1 ALTER INDEX k1 IGNORED;
```
```
CREATE OR REPLACE TABLE t1 (id INT PRIMARY KEY, b INT);
CREATE INDEX k1 ON t1(b) IGNORED;
```
```
SELECT * FROM INFORMATION_SCHEMA.STATISTICS WHERE TABLE_NAME = 't1'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: test
TABLE_NAME: t1
NON_UNIQUE: 0
INDEX_SCHEMA: test
INDEX_NAME: PRIMARY
SEQ_IN_INDEX: 1
COLUMN_NAME: id
COLLATION: A
CARDINALITY: 0
SUB_PART: NULL
PACKED: NULL
NULLABLE:
INDEX_TYPE: BTREE
COMMENT:
INDEX_COMMENT:
IGNORED: NO
*************************** 2. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: test
TABLE_NAME: t1
NON_UNIQUE: 1
INDEX_SCHEMA: test
INDEX_NAME: k1
SEQ_IN_INDEX: 1
COLUMN_NAME: b
COLLATION: A
CARDINALITY: 0
SUB_PART: NULL
PACKED: NULL
NULLABLE: YES
INDEX_TYPE: BTREE
COMMENT:
INDEX_COMMENT:
IGNORED: YES
```
```
SHOW INDEXES FROM t1\G
*************************** 1. row ***************************
Table: t1
Non_unique: 0
Key_name: PRIMARY
Seq_in_index: 1
Column_name: id
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
Ignored: NO
*************************** 2. row ***************************
Table: t1
Non_unique: 1
Key_name: k1
Seq_in_index: 1
Column_name: b
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment:
Index_comment:
Ignored: YES
```
The optimizer does not make use of an index when it is ignored, while if the index is not ignored (the default), the optimizer will consider it in the query plan, as shown in the [EXPLAIN](../explain/index) output.
```
CREATE OR REPLACE TABLE t1 (id INT PRIMARY KEY, b INT, KEY k1(b) IGNORED);
EXPLAIN SELECT * FROM t1 ORDER BY b;
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| 1 | SIMPLE | t1 | ALL | NULL | NULL | NULL | NULL | 1 | Using filesort |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
ALTER TABLE t1 ALTER INDEX k1 NOT IGNORED;
EXPLAIN SELECT * FROM t1 ORDER BY b;
+------+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| 1 | SIMPLE | t1 | index | NULL | k1 | 5 | NULL | 1 | Using index |
+------+-------------+-------+-------+---------------+------+---------+------+------+-------------+
```
mariadb Preparing and Installing MariaDB ColumnStore 1.0.X Preparing and Installing MariaDB ColumnStore 1.0.X
===================================================
| Title | Description |
| --- | --- |
| [Preparing for ColumnStore Installation - 1.0.X](../preparing-for-columnstore-installation-10x/index) | Before installing MariaDB ColumnStore, there is some preparation necessary |
| [MariaDB ColumnStore Cluster Test Tool](../mariadb-columnstore-cluster-test-tool/index) | Introduction MariaDB ColumnStore Cluster Test Tool is used to validate tha... |
| [Installing and Configuring a Single Server ColumnStore System](../installing-and-configuring-mariadb-columnstore/index) | How to install ColumnStore on a single server system. |
| [Installing and Configuring a Multi Server ColumnStore System - 1.0.X](../installing-and-configuring-a-multi-server-columnstore-system-10x/index) | How to install ColumnStore on a multi-server system |
mariadb REPAIR VIEW REPAIR VIEW
===========
Syntax
------
```
REPAIR [NO_WRITE_TO_BINLOG | LOCAL] VIEW view_name[, view_name] ... [FROM MYSQL]
```
Description
-----------
The `REPAIR VIEW` statement was introduced to assist with fixing [MDEV-6916](https://jira.mariadb.org/browse/MDEV-6916), an issue introduced in [MariaDB 5.2](../what-is-mariadb-52/index) where the view algorithms were swapped compared to their MySQL on disk representation. It checks whether the view algorithm is correct. It is run as part of [mysql\_upgrade](../mysql_upgrade/index), and should not normally be required in regular use.
By default it corrects the checksum and if necessary adds the mariadb-version field. If the optional `FROM MYSQL` clause is used, and no mariadb-version field is present, the MERGE and TEMPTABLE algorithms are toggled.
By default, `REPAIR VIEW` statements are written to the [binary log](../binary-log/index) and will be [replicated](../replication/index). The `NO_WRITE_TO_BINLOG` keyword (`LOCAL` is an alias) will ensure the statement is not written to the binary log.
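For example, repairing two views imported from MySQL without writing the statement to the binary log (`v1` and `v2` are hypothetical views; the result set shown is indicative and follows the usual REPAIR output layout):

```
REPAIR NO_WRITE_TO_BINLOG VIEW v1, v2 FROM MYSQL;
+---------+--------+----------+----------+
| Table   | Op     | Msg_type | Msg_text |
+---------+--------+----------+----------+
| test.v1 | repair | status   | OK       |
| test.v2 | repair | status   | OK       |
+---------+--------+----------+----------+
```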
See Also
--------
* [CHECK VIEW](../check-view/index)
mariadb Big Query Settings Big Query Settings
==================
[MariaDB 5.3](../what-is-mariadb-53/index) and beyond have a number of features that are targeted at big queries and so are disabled by default.
This page describes recommended settings for IO-bound queries that shovel through lots of records.
First, turn on [Batched Key Access](../block-based-join-algorithms/index):
```
# Turn on disk-ordered reads
optimizer_switch='mrr=on'
optimizer_switch='mrr_cost_based=off'
# Turn on Batched Key Access (BKA)
join_cache_level = 6
```
Give BKA buffer space to operate on. Ideally, it should have enough space to fit all the data examined by the query.
```
# Size limit for the whole join
join_buffer_space_limit = 300M
# Limit for each individual table
join_buffer_size = 100M
```
Turn on [index\_merge/sort-intersection](../index_merge_sort_intersection/index):
```
optimizer_switch='index_merge_sort_intersection=on'
```
If your queries examine a big fraction of the tables (roughly more than 30%), turn on [hash join](../block-based-join-algorithms/index):
```
# Turn on both Hash Join and Batched Key Access
join_cache_level = 8
```
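The settings above go in the server configuration file, but for experimentation they can also be tried per session before being made permanent, for instance:

```
SET SESSION optimizer_switch='mrr=on,mrr_cost_based=off,index_merge_sort_intersection=on';
SET SESSION join_cache_level = 6;
SET SESSION join_buffer_space_limit = 300*1024*1024;
SET SESSION join_buffer_size = 100*1024*1024;
```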
mariadb Relational Databases: Foreign Keys Relational Databases: Foreign Keys
==================================
You already know that a relationship between two tables is created by assigning a common field to the two tables (see [Relational Databases: Table Keys](../relational-databases-table-keys/index)). This common field must be a primary key to one table. Consider a relationship between a *customer* table and a *sale* table. The relationship is not much good if instead of using the primary key, *customer\_code*, in the *sale* table, you use another field that is not unique, such as the customer's first name. You would be unlikely to know for sure which customer made the sale in that case. So, in the table below, *customer\_code* is called the *foreign key* in the *sale* table; in other words, it is the primary key in a foreign table.
Foreign keys allow for something called *referential integrity*. What this means is that if a foreign key contains a value, this value refers to an existing record in the related table. For example, take a look at the tables below:
### Lecturer table
| Code | First Name | Surname |
| --- | --- | --- |
| 1 | Anne | Cohen |
| 2 | Leonard | Clark |
| 3 | Vusi | Cave |
### Course table
| Course Title | Lecturer Code |
| --- | --- |
| Introduction to Programming | 1 |
| Information Systems | 2 |
| Systems Software | 3 |
Referential integrity exists here, as all the lecturers in the *course* table exist in the *lecturer* table. However, let's assume Anne Cohen leaves the institution, and you remove her from the lecturer table. In a situation where referential integrity is not enforced, she would be removed from the lecturer table, but not from the course table, as shown below:
### Lecturer table
| Code | First Name | Surname |
| --- | --- | --- |
| 2 | Leonard | Clark |
| 3 | Vusi | Cave |
### Course table
| Course Title | Lecturer Code |
| --- | --- |
| Introduction to Programming | 1 |
| Information Systems | 2 |
| Systems Software | 3 |
Now, when you look up who lectures *Introduction to Programming*, you are sent to a non-existent record. This is called poor data integrity.
Foreign keys also allow *cascading* deletes and updates. For example, if Anne Cohen leaves, taking the Introduction to Programming Course with her, all trace of her can be removed from both the *lecturer* and *course* table using one statement. The delete *cascades* through the relevant tables, removing all relevant records.
Foreign keys can also contain null values, indicating that no relationship exists.
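The lecturer/course relationship above could be declared with a foreign key so that referential integrity is enforced and deletes cascade. A minimal sketch, using hypothetical table definitions and a storage engine with foreign key support such as InnoDB:

```
CREATE TABLE lecturer (
  code INT PRIMARY KEY,
  first_name VARCHAR(50),
  surname VARCHAR(50)
) ENGINE=InnoDB;

CREATE TABLE course (
  course_title VARCHAR(100) PRIMARY KEY,
  lecturer_code INT,
  FOREIGN KEY (lecturer_code) REFERENCES lecturer (code)
    ON DELETE CASCADE
) ENGINE=InnoDB;

-- Removing Anne Cohen (code 1) now also removes her courses:
DELETE FROM lecturer WHERE code = 1;
```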
mariadb FIELD FIELD
=====
Syntax
------
```
FIELD(pattern, str1[,str2,...])
```
Description
-----------
Returns the index position of the string or number matching the given pattern. Returns `0` in the event that none of the arguments match the pattern. Raises an Error 1582 if not given at least two arguments.
When all arguments given to the `FIELD()` function are strings, they are compared case-insensitively. When all the arguments are numbers, they are compared as numbers. Otherwise, they are compared as doubles.
If the given pattern occurs more than once, the `FIELD()` function only returns the index of the first instance. If the given pattern is `NULL`, the function returns `0`, as a `NULL` pattern always fails to match.
This function is complementary to the `[ELT()](../elt/index)` function.
Examples
--------
```
SELECT FIELD('ej', 'Hej', 'ej', 'Heja', 'hej', 'foo')
AS 'Field Results';
+---------------+
| Field Results |
+---------------+
| 2 |
+---------------+
SELECT FIELD('fo', 'Hej', 'ej', 'Heja', 'hej', 'foo')
AS 'Field Results';
+---------------+
| Field Results |
+---------------+
| 0 |
+---------------+
SELECT FIELD(1, 2, 3, 4, 5, 1) AS 'Field Results';
+---------------+
| Field Results |
+---------------+
| 5 |
+---------------+
SELECT FIELD(NULL, 2, 3) AS 'Field Results';
+---------------+
| Field Results |
+---------------+
| 0 |
+---------------+
SELECT FIELD('fail') AS 'Field Results';
Error 1582 (42000): Incorrect parameter count in call
to native function 'field'
```
See also
--------
* [ELT()](../elt/index) function. Returns the N'th element from a set of strings.
mariadb COT COT
===
Syntax
------
```
COT(X)
```
Description
-----------
Returns the cotangent of X.
Examples
--------
```
SELECT COT(42);
+--------------------+
| COT(42) |
+--------------------+
| 0.4364167060752729 |
+--------------------+
SELECT COT(12);
+---------------------+
| COT(12) |
+---------------------+
| -1.5726734063976893 |
+---------------------+
SELECT COT(0);
ERROR 1690 (22003): DOUBLE value is out of range in 'cot(0)'
```
mariadb SHOW CLIENT_STATISTICS SHOW CLIENT\_STATISTICS
=======================
Syntax
------
```
SHOW CLIENT_STATISTICS
```
Description
-----------
The `SHOW CLIENT_STATISTICS` statement is part of the [User Statistics](../user-statistics/index) feature. It was removed as a separate statement in [MariaDB 10.1.1](https://mariadb.com/kb/en/mariadb-1011-release-notes/), but effectively replaced by the generic [SHOW information\_schema\_table](../information-schema-plugins-show-and-flush-statements/index) statement. The [information\_schema.CLIENT\_STATISTICS](../information-schema-client_statistics-table/index) table holds statistics about client connections.
The [userstat](../server-system-variables/index#userstat) system variable must be set to 1 to activate this feature. See the [User Statistics](../user-statistics/index) and [information\_schema.CLIENT\_STATISTICS](../information-schema-client_statistics-table/index) articles for more information.
Example
-------
```
SHOW CLIENT_STATISTICS\G
*************************** 1. row ***************************
Client: localhost
Total_connections: 35
Concurrent_connections: 0
Connected_time: 708
Busy_time: 2.5557979999999985
Cpu_time: 0.04123740000000002
Bytes_received: 3883
Bytes_sent: 21595
Binlog_bytes_written: 0
Rows_read: 18
Rows_sent: 115
Rows_deleted: 0
Rows_inserted: 0
Rows_updated: 0
Select_commands: 70
Update_commands: 0
Other_commands: 0
Commit_transactions: 1
Rollback_transactions: 0
Denied_connections: 0
Lost_connections: 0
Access_denied: 0
Empty_queries: 35
```
mariadb INSERT Function INSERT Function
===============
Syntax
------
```
INSERT(str,pos,len,newstr)
```
Description
-----------
Returns the string `str`, with the substring beginning at position `pos` and `len` characters long replaced by the string `newstr`. Returns the original string if `pos` is not within the length of the string. Replaces the rest of the string from position `pos` if `len` is not within the length of the rest of the string. Returns NULL if any argument is NULL.
Examples
--------
```
SELECT INSERT('Quadratic', 3, 4, 'What');
+-----------------------------------+
| INSERT('Quadratic', 3, 4, 'What') |
+-----------------------------------+
| QuWhattic |
+-----------------------------------+
SELECT INSERT('Quadratic', -1, 4, 'What');
+------------------------------------+
| INSERT('Quadratic', -1, 4, 'What') |
+------------------------------------+
| Quadratic |
+------------------------------------+
SELECT INSERT('Quadratic', 3, 100, 'What');
+-------------------------------------+
| INSERT('Quadratic', 3, 100, 'What') |
+-------------------------------------+
| QuWhat |
+-------------------------------------+
```
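Any NULL argument makes the result NULL:

```
SELECT INSERT('Quadratic', 3, 4, NULL);
+---------------------------------+
| INSERT('Quadratic', 3, 4, NULL) |
+---------------------------------+
| NULL                            |
+---------------------------------+
```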
mariadb CONNECT CSV and FMT Table Types CONNECT CSV and FMT Table Types
===============================
CSV Type
--------
Many source data files are formatted with variable length fields and records. The simplest format, known as `CSV` (Comma Separated Variables), has column fields separated by a separator character. By default, the separator is a comma but can be specified by the `SEP_CHAR` option as any character, for instance a semi-colon.
If the CSV file first record is the list of column names, specifying the `HEADER=1` option will skip the first record on reading. On writing, if the file is empty, the column names record is automatically written.
For instance, given the following *people.csv* file:
```
Name;birth;children
"Archibald";17/05/01;3
"Nabucho";12/08/03;2
```
You can create the corresponding table by:
```
create table people (
name char(12) not null,
birth date not null date_format='DD/MM/YY',
children smallint(2) not null)
engine=CONNECT table_type=CSV file_name='people.csv'
header=1 sep_char=';' quoted=1;
```
Alternatively the engine can attempt to automatically detect the column names, data types and widths using:
```
create table people
engine=CONNECT table_type=CSV file_name='people.csv'
header=1 sep_char=';' quoted=1;
```
For CSV tables, the *flag* column option is the rank of the column in the file, starting from 1 for the leftmost column. This enables having columns displayed in a different order than in the file and/or defining the table with only some columns of the CSV file. For instance:
```
create table people (
name char(12) not null,
children smallint(2) not null flag=3,
birth date not null flag=2 date_format='DD/MM/YY')
engine=CONNECT table_type=CSV file_name='people.csv'
header=1 sep_char=';' quoted=1;
```
In this case the command:
```
select * from people;
```
will display the table as:
| name | children | birth |
| --- | --- | --- |
| Archibald | 3 | 2001-05-17 |
| Nabucho | 2 | 2003-08-12 |
Many applications produce CSV files having some fields quoted, in particular because the field text contains the separator character. For such files, specify the 'QUOTED=*n*' option to indicate the level of quoting and/or '`QCHAR=c`' to specify which character is used for quoting (`"` by default). Quoting with single quotes must be specified as `QCHAR=''''`. On writing, fields will be quoted depending on the value of the quoting level, which is `–1` by default, meaning no quoting:
| | |
| --- | --- |
| 0 | The fields between quotes are read and the quotes discarded. On writing, fields will be quoted only if they contain the separator character or begin with the quoting character. If they contain the quoting character, it will be doubled. |
| 1 | Only text fields will be written between quotes, except null fields. This includes also the column names of an eventual header. |
| 2 | All fields will be written between quotes, except null fields. |
| 3 | All fields will be written between quotes, including null fields. |
Files written this way are successfully read by most applications including spreadsheets.
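For instance, to create a table whose output file will have all non-null fields written between quotes (quoting level 2), the earlier *people* example could be declared as follows (file name and columns are illustrative):

```
create table people2 (
  name char(12) not null,
  birth date not null date_format='DD/MM/YY',
  children smallint(2) not null)
engine=CONNECT table_type=CSV file_name='people2.csv'
header=1 sep_char=';' quoted=2;
```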
**Note 1:** If only the QCHAR option is specified, the QUOTED option will default to 1.
**Note 2:** For CSV tables whose separator is the tab character, specify `sep_char='\t'`.
**Note 3:** When creating a table on an existing CSV file, you can let CONNECT analyze the file and make the column description. However, this is not an elaborate analysis of the file and, for instance, `DATE` fields will not be recognized as such but will be regarded as string fields.
**Note 4:** The CSV parser only reads and buffers up to 4KB per row by default; rows longer than this will be truncated when read from the file. If rows are expected to be longer than this, use `lrecl` to increase the limit. For example, to set an 8KB maximum row read, use `lrecl=8192`.
### Restrictions on CSV Tables
* If `[secure\_file\_priv](../server-system-variables/index#secure_file_priv)` is set to the path of some directory, then CSV tables can only be created with files in that directory.
FMT Type
--------
FMT tables handle files of various formats that are an extension of the concept of CSV files. CONNECT supports these files provided all lines have the same format and all fields present in all records are recognizable (optional fields must have recognizable delimiters). These files are made by specific applications and CONNECT handles them in read-only mode.
FMT tables must be created as CSV tables, specifying their type as FMT. In addition, each column description must include its format specification.
Column Format Specification of FMT tables
-----------------------------------------
The input format for each column is specified as a FIELD\_FORMAT option. A simple example is:
```
IP Char(15) not null field_format=' %n%s%n',
```
In the above example, the format for this (1st) field is `' %n%s%n'`. Note that the blank character at the beginning of this format **is** significant. No trailing blank should be specified in the column formats.
The syntax and meaning of the column input format is the one of the C **scanf** function.
However, CONNECT uses the input format in a specific way. Instead of using it to directly store the input value in the column buffer, it uses it to delimit the substring of the input record that contains the corresponding column value. Retrieving this value is done later by the column functions, as for standard CSV files.
This is why all column formats are made of five components:
1. An optional description of what is met and ignored before the column value.
2. A marker of the beginning of the column value, written as `%n`.
3. The format specification of the column value itself.
4. A marker of the end of the column value, written as `%n` (or `%m` for optional fields).
5. An optional description of what is met after the column value (not valid if `%m` was used).
For example, taking the file *funny.txt*:
```
12345,'BERTRAND',#200;5009.13
56, 'POIROT-DELMOTTE' ,#4256 ;18009
345 ,'TRUCMUCHE' , #67; 19000.25
```
You can make a table *fmtsample* with 4 columns ID, NAME, DEPNO and SALARY, using the Create Table statement and column formats:
```
create table FMTSAMPLE (
ID Integer(5) not null field_format=' %n%d%n',
NAME Char(16) not null field_format=' , ''%n%[^'']%n''',
DEPNO Integer(4) not null field_format=' , #%n%d%n',
SALARY Double(12,2) not null field_format=' ; %n%f%n')
Engine=CONNECT table_type=FMT file_name='funny.txt';
```
**Field 1** is an integer (`%d`) with eventual leading blanks.
**Field 2** is separated from field 1 by optional blanks, a comma, and other optional blanks and is between single quotes. The leading quote is included in component 1 of the column format, followed by the `%n` marker. The column value is specified as `%[^']` meaning to keep any characters read until a quote is met. The ending marker (`%n`) is followed by the 5th component of the column format, the single quote that follows the column value.
**Field 3,** also separated by a comma, is a number preceded by a pound sign.
**Field 4,** separated by a semicolon optionally surrounded by blanks, is a number with an optional decimal point (`%f`).
This table will be displayed as:
| ID | NAME | DEPNO | SALARY |
| --- | --- | --- | --- |
| 12345 | BERTRAND | 200 | 5009.13 |
| 56 | POIROT-DELMOTTE | 4256 | 18009.00 |
| 345 | TRUCMUCHE | 67 | 19000.25 |
Optional Fields
---------------
To be recognized, a field normally must be at least one character long. For instance, a numeric field must have at least one digit, and a character field cannot be empty. However, many existing files do not follow this format.
Let us suppose for instance that the preceding example file could be:
```
12345,'BERTRAND',#200;5009.13
56, 'POIROT-DELMOTTE' ,# ;18009
345 ,'' , #67; 19000.25
```
This will display an error message such as *“Bad format line x field y of FMTSAMPLE”.* To avoid this and accept these records, the corresponding fields must be specified as "optional". In the above example, fields 2 and 3 can have null values (in lines 3 and 2 respectively). To specify them as optional, their format must be terminated by `%m` (instead of the second `%n`). A statement such as this can do the table creation:
```
create table FMTSAMPLE (
ID Integer(5) not null field_format=' %n%d%n',
NAME Char(16) not null field_format=' , ''%n%[^'']%m',
DEPNO Integer(4) field_format=''' , #%n%d%m',
SALARY Double(12,2) field_format=' ; %n%f%n')
Engine=CONNECT table_type=FMT file_name='funny.txt';
```
Note that, because the format must be terminated by `%m` with no additional characters, skipping the ending quote of field 2 was moved from the end of the second column format to the beginning of the third column format.
The table result is:
| ID | NAME | DEPNO | SALARY |
| --- | --- | --- | --- |
| 12345 | BERTRAND | 200 | 5,009.13 |
| 56 | POIROT-DELMOTTE | NULL | 18,009.00 |
| 345 | NULL | 67 | 19,000.25 |
Missing fields are replaced by null values if the column is nullable, blanks for character strings and 0 for numeric fields if it is not.
**Note 1:** Because the formats are specified between quotes, quotes belonging to the formats must be doubled or escaped to avoid a `CREATE TABLE` statement syntax error.
**Note 2:** Characters separating columns can be included either in component 5 of the preceding column format or in component 1 of the succeeding column format, except for blanks, which should always be included in component 1 of the succeeding column format because trailing blanks on a line can sometimes be lost. This is also mandatory for optional fields.
**Note 3:** Because the format is mainly used to find the sub-string corresponding to a column value, the field specification does not necessarily match the column type. For instance, supposing a table contains two integer columns, NBONE and NBTWO, the two lines describing these columns could be:
```
NBONE integer(5) not null field_format=' %n%d%n',
NBTWO integer(5) field_format=' %n%s%n',
```
The first one specifies a required integer field (`%d`); the second line describes a field that can be an integer, but can be replaced by a "-" (or any other) character. Specifying the format for this column as a character field (`%s`) makes it possible to recognize it without error in all cases. Later on, this field will be converted to an integer by the column read function, and a 0 value will be generated for fields specified in their format as non-numeric.
Bad Record Error Processing
---------------------------
When no match is found for a column field, the process aborts with a message such as:
```
Bad format line 3 field 4 of funny.txt
```
This can mean either that a line of the input file is ill-formed or that the column format for this field has been wrongly specified. When you know that your file contains records that are ill-formatted and should be eliminated from normal processing, set the “maxerr” option of the CREATE TABLE statement, for instance:
```
Option_list='maxerr=100'
```
This indicates that no error message will be raised for the first 100 wrong lines. You can set maxerr to a number greater than the number of wrong lines in your file to ignore them all and get no errors.
Additionally, the “accept” option permits keeping those ill-formatted lines, with the bad field and all succeeding fields of the record nullified. If “accept” is specified without “maxerr”, all ill-formatted lines are accepted.
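For instance, a sketch reusing the two-column layout from the earlier examples (the table name, file name, and option values here are illustrative only):

```sql
create table FMTRELAX (
  ID Integer(5) not null field_format=' %n%d%n',
  NAME Char(16) not null field_format=' , ''%n%[^'']%m')
Engine=CONNECT table_type=FMT file_name='funny.txt'
option_list='maxerr=100,accept=1';
```

With these options, the first 100 ill-formatted lines raise no error, and their unrecognizable fields are simply set to NULL.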
**Note:** This error processing also applies to CSV tables.
Fields Containing a Formatted Date
----------------------------------
A special case is that of columns containing a formatted date. For such a column, the following must be specified:
1. The field recognition format, used to delimit the date in the input record.
2. The date format, used to interpret the date.
3. The field length option, if the size of the date representation differs from the default size of the column type.
For example, let us suppose we have a web log source file containing records such as:
```
165.91.215.31 - - [17/Jul/2001:00:01:13 -0400] - "GET /usnews/home.htm HTTP/1.1" 302
```
The CREATE TABLE statement could be:
```
create table WEBSAMP (
IP char(15) not null field_format='%n%s%n',
DATE datetime not null field_format=' - - [%n%s%n -0400]'
date_format='DD/MMM/YYYY:hh:mm:ss' field_length=20,
FILE char(128) not null field_format=' - "GET %n%s%n',
HTTP double(4,2) not null field_format=' HTTP/%n%f%n"',
NBONE int(5) not null field_format=' %n%d%n')
Engine=CONNECT table_type=FMT lrecl=400
file_name='e:\\data\\token\\Websamp.dat';
```
**Note 1:** Here, `field_length=20` was necessary because the default size for datetime columns is only 19. `lrecl=400` was also specified because the actual file contains more information in each record, making the record size calculated by default too small.
**Note 2:** The file name could have been specified as `'e:/data/token/Websamp.dat'`.
**Note 3:** FMT tables are currently read only.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb SHOW PROFILE SHOW PROFILE
============
Syntax
------
```
SHOW PROFILE [type [, type] ... ]
[FOR QUERY n]
[LIMIT row_count [OFFSET offset]]
type:
ALL
| BLOCK IO
| CONTEXT SWITCHES
| CPU
| IPC
| MEMORY
| PAGE FAULTS
| SOURCE
| SWAPS
```
Description
-----------
The `SHOW PROFILE` and `[SHOW PROFILES](../show-profiles/index)` statements display profiling information that indicates resource usage for statements executed during the course of the current session.
Profiling is controlled by the [profiling](../server-system-variables/index#profiling) session variable, which has a default value of `0` (`OFF`). Profiling is enabled by setting profiling to `1` or `ON`:
```
SET profiling = 1;
```
`SHOW PROFILES` displays a list of the most recent statements sent to the server. The size of the list is controlled by the `[profiling\_history\_size](../server-system-variables/index#profiling_history_size)` session variable, which has a default value of `15`. The maximum value is `100`. Setting the value to `0` has the practical effect of disabling profiling.
All statements are profiled except `SHOW PROFILES` and `SHOW PROFILE`, so you will find neither of those statements in the profile list. Malformed statements are profiled. For example, `SHOW PROFILING` is an illegal statement, and a syntax error occurs if you try to execute it, but it will show up in the profiling list.
`SHOW PROFILE` displays detailed information about a single statement. Without the `FOR QUERY *n*` clause, the output pertains to the most recently executed statement. If `FOR QUERY *n*` is included, `SHOW PROFILE` displays information for statement *n*. The values of *n* correspond to the `Query_ID` values displayed by `SHOW PROFILES`.
The `LIMIT *row\_count*` clause may be given to limit the output to *row\_count* rows. If `LIMIT` is given, `OFFSET *offset*` may be added to begin the output offset rows into the full set of rows.
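For example, to skip the first status row of a profiled statement and display the next three (assuming a statement with `Query_ID` 5 exists in the profile list):

```sql
SHOW PROFILE FOR QUERY 5 LIMIT 3 OFFSET 1;
```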
By default, `SHOW PROFILE` displays Status and Duration columns. The Status values are like the State values displayed by `[SHOW PROCESSLIST](../show-processlist/index)`, although there might be some minor differences in interpretation for the two statements for some status values (see <http://dev.mysql.com/doc/refman/5.6/en/thread-information.html>).
Optional type values may be specified to display specific additional types of information:
* `**ALL**` displays all information
* `**BLOCK IO**` displays counts for block input and output operations
* `**CONTEXT SWITCHES**` displays counts for voluntary and involuntary context switches
* `**CPU**` displays user and system CPU usage times
* `**IPC**` displays counts for messages sent and received
* `**MEMORY**` is not currently implemented
* `**PAGE FAULTS**` displays counts for major and minor page faults
* `**SOURCE**` displays the names of functions from the source code, together with the name and line number of the file in which the function occurs
* `**SWAPS**` displays swap counts
Profiling is enabled per session. When a session ends, its profiling information is lost.
The [information\_schema.PROFILING](../information-schema-profiling-table/index) table contains similar information.
Examples
--------
```
SELECT @@profiling;
+-------------+
| @@profiling |
+-------------+
| 0 |
+-------------+
SET profiling = 1;
USE test;
DROP TABLE IF EXISTS t1;
CREATE TABLE T1 (id INT);
SHOW PROFILES;
+----------+------------+--------------------------+
| Query_ID | Duration | Query |
+----------+------------+--------------------------+
| 1 | 0.00009200 | SELECT DATABASE() |
| 2 | 0.00023800 | show databases |
| 3 | 0.00018900 | show tables |
| 4 | 0.00014700 | DROP TABLE IF EXISTS t1 |
| 5 | 0.24476900 | CREATE TABLE T1 (id INT) |
+----------+------------+--------------------------+
SHOW PROFILE;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000042 |
| checking permissions | 0.000044 |
| creating table | 0.244645 |
| After create | 0.000013 |
| query end | 0.000003 |
| freeing items | 0.000016 |
| logging slow query | 0.000003 |
| cleaning up | 0.000003 |
+----------------------+----------+
SHOW PROFILE FOR QUERY 4;
+--------------------+----------+
| Status | Duration |
+--------------------+----------+
| starting | 0.000126 |
| query end | 0.000004 |
| freeing items | 0.000012 |
| logging slow query | 0.000003 |
| cleaning up | 0.000002 |
+--------------------+----------+
SHOW PROFILE CPU FOR QUERY 5;
+----------------------+----------+----------+------------+
| Status | Duration | CPU_user | CPU_system |
+----------------------+----------+----------+------------+
| starting | 0.000042 | 0.000000 | 0.000000 |
| checking permissions | 0.000044 | 0.000000 | 0.000000 |
| creating table | 0.244645 | 0.000000 | 0.000000 |
| After create | 0.000013 | 0.000000 | 0.000000 |
| query end | 0.000003 | 0.000000 | 0.000000 |
| freeing items | 0.000016 | 0.000000 | 0.000000 |
| logging slow query | 0.000003 | 0.000000 | 0.000000 |
| cleaning up | 0.000003 | 0.000000 | 0.000000 |
+----------------------+----------+----------+------------+
```
mariadb mariadb-slap mariadb-slap
============
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-slap` is a symlink to `mysqlslap`, the tool for load-testing MariaDB.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mysqlslap` is the symlink, and `mariadb-slap` the binary name.
See [mysqlslap](../mysqlslap/index) for details.
mariadb Semi-join Subquery Optimizations Semi-join Subquery Optimizations
================================
MariaDB has a set of optimizations specifically targeted at *semi-join subqueries*.
What is a semi-join subquery
----------------------------
A semi-join subquery has the form:
```
SELECT ... FROM outer_tables WHERE expr IN (SELECT ... FROM inner_tables ...) AND ...
```
that is, the subquery is an IN-subquery located in the WHERE clause. The most important point here is that
*with semi-join subquery, we're only interested in records of outer\_tables that have matches in the subquery*
Let's see why this is important. Consider a semi-join subquery:
```
select * from Country
where
Continent='Europe' and
Country.Code in (select City.country
from City
where City.Population>1*1000*1000);
```
One can execute it "naturally", by starting from countries in Europe and checking whether they have populous cities.
The semi-join property also allows "backwards" execution: we can start from big cities, and check which countries they are in.
To contrast, let's change the subquery to be non-semi-join:
```
select * from Country
where
Country.Continent='Europe' and
(Country.Code in (select City.country
from City where City.Population>1*1000*1000)
or Country.SurfaceArea > 100*1000 -- Added this part
);
```
It is still possible to start from countries, and then check
* if a country has any big cities
* if it has a large surface area
The opposite, city-to-country way is not possible. This is not a semi-join.
### Difference from inner joins
Semi-join operations are similar to regular relational joins. There is a difference though: with semi-joins, you don't care how many matches an inner table has for an outer row. In the above countries-with-big-cities example, Germany will be returned once, even if it has three cities with populations of more than one million each.
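Using the same Country and City tables as above, the difference can be illustrated by comparing the semi-join form with a plain inner join (column names follow the earlier examples):

```sql
-- Semi-join: each qualifying country is returned once
select Name from Country
where Code in (select City.country
               from City where City.Population > 1*1000*1000);

-- Inner join: a country is returned once per big city it contains
select Country.Name
from Country join City on City.country = Country.Code
where City.Population > 1*1000*1000;
```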
Semi-join optimizations in MariaDB
----------------------------------
MariaDB uses semi-join optimizations to run IN subqueries. The optimizations are enabled by default. You can disable them by turning off the relevant [optimizer\_switch](../server-system-variables/index#optimizer_switch) flag like so:
```
SET optimizer_switch='semijoin=off'
```
MariaDB has five different semi-join execution strategies:
* [Table pullout optimization](../table-pullout-optimization/index)
* [FirstMatch execution strategy](../firstmatch-strategy/index)
* [Semi-join Materialization execution strategy](../semi-join-materialization-strategy/index)
* [LooseScan execution strategy](../loosescan-strategy/index)
* [DuplicateWeedout execution strategy](../duplicateweedout-strategy/index)
See Also
--------
* [What is MariaDB 5.3](../what-is-mariadb-53/index)
* [Subquery Optimizations Map](../subquery-optimizations-map/index)
* ["Observations about subquery use cases"](http://s.petrunia.net/blog/?p=35) blog post
* <http://en.wikipedia.org/wiki/Semijoin>
mariadb Transaction Timeouts Transaction Timeouts
====================
MariaDB has always had the [wait\_timeout](../server-system-variables/index#wait_timeout) and [interactive\_timeout](../server-system-variables/index#interactive_timeout) settings, which close connections after a certain period of inactivity.
However, these are by default set to a long wait period. In situations where transactions may be started, but not committed or rolled back, more granular control and a shorter timeout may be desirable so as to avoid locks being held for too long.
[MariaDB 10.3](../what-is-mariadb-103/index) introduced three new variables to handle this situation.
* [idle\_transaction\_timeout](../server-system-variables/index#idle_transaction_timeout) (all transactions)
* [idle\_write\_transaction\_timeout](../server-system-variables/index#idle_write_transaction_timeout) (write transactions - called `idle_readwrite_transaction_timeout` until [MariaDB 10.3.2](https://mariadb.com/kb/en/mariadb-1032-release-notes/))
* [idle\_readonly\_transaction\_timeout](../server-system-variables/index#idle_readonly_transaction_timeout) (read transactions)
These variables accept a time in seconds: transactions that remain idle for longer than this period are timed out by closing the connection. By default all are set to zero, meaning no timeout.
[idle\_transaction\_timeout](../server-system-variables/index#idle_transaction_timeout) affects all transactions, [idle\_write\_transaction\_timeout](../server-system-variables/index#idle_write_transaction_timeout) affects write transactions only and [idle\_readonly\_transaction\_timeout](../server-system-variables/index#idle_readonly_transaction_timeout) affects read transactions only. The latter two variables work independently. However, if either is set along with [idle\_transaction\_timeout](../server-system-variables/index#idle_transaction_timeout), the settings for [idle\_write\_transaction\_timeout](../server-system-variables/index#idle_write_transaction_timeout) or [idle\_readonly\_transaction\_timeout](../server-system-variables/index#idle_readonly_transaction_timeout) will take precedence.
Examples
--------
```
SET SESSION idle_transaction_timeout=2;
BEGIN;
SELECT * FROM t;
Empty set (0.000 sec)
## wait 3 seconds
SELECT * FROM t;
ERROR 2006 (HY000): MySQL server has gone away
```
```
SET SESSION idle_write_transaction_timeout=2;
BEGIN;
SELECT * FROM t;
Empty set (0.000 sec)
## wait 3 seconds
SELECT * FROM t;
Empty set (0.000 sec)
INSERT INTO t VALUES(1);
## wait 3 seconds
SELECT * FROM t;
ERROR 2006 (HY000): MySQL server has gone away
```
```
SET SESSION idle_transaction_timeout=2, SESSION idle_readonly_transaction_timeout=10;
BEGIN;
SELECT * FROM t;
Empty set (0.000 sec)
## wait 3 seconds
SELECT * FROM t;
Empty set (0.000 sec)
## wait 11 seconds
SELECT * FROM t;
ERROR 2006 (HY000): MySQL server has gone away
```
mariadb MLineFromWKB MLineFromWKB
============
Syntax
------
```
MLineFromWKB(wkb[,srid])
MultiLineStringFromWKB(wkb[,srid])
```
Description
-----------
Constructs a [MULTILINESTRING](../multilinestring/index) value using its [WKB](../well-known-binary-wkb-format/index) representation and [SRID](../srid/index).
`MLineFromWKB()` and `MultiLineStringFromWKB()` are synonyms.
Examples
--------
```
SET @g = ST_AsBinary(MLineFromText('MULTILINESTRING((10 48,10 21,10 0),(16 0,16 23,16 48))'));
SELECT ST_AsText(MLineFromWKB(@g));
+--------------------------------------------------------+
| ST_AsText(MLineFromWKB(@g)) |
+--------------------------------------------------------+
| MULTILINESTRING((10 48,10 21,10 0),(16 0,16 23,16 48)) |
+--------------------------------------------------------+
```
mariadb Date & Time Functions Date & Time Functions
======================
Functions for handling date and time, e.g. TIME, DATE, DAYNAME etc.
| Title | Description |
| --- | --- |
| [Microseconds in MariaDB](../microseconds-in-mariadb/index) | Microseconds have been supported since MariaDB 5.3. |
| [Date and Time Units](../date-and-time-units/index) | Date or time units |
| [ADD\_MONTHS](../add_months/index) | Adds a number of months to a date. |
| [ADDDATE](../adddate/index) | Add days or another interval to a date. |
| [ADDTIME](../addtime/index) | Adds a time to a time or datetime. |
| [CONVERT\_TZ](../convert_tz/index) | Converts a datetime from one time zone to another. |
| [CURDATE](../curdate/index) | Returns the current date. |
| [CURRENT\_DATE](../current_date/index) | Synonym for CURDATE(). |
| [CURRENT\_TIME](../current_time/index) | Synonym for CURTIME(). |
| [CURRENT\_TIMESTAMP](../current_timestamp/index) | Synonym for NOW(). |
| [CURTIME](../curtime/index) | Returns the current time. |
| [DATE FUNCTION](../date-function/index) | Extracts the date portion of a datetime. |
| [DATEDIFF](../datediff/index) | Difference in days between two date/time values. |
| [DATE\_ADD](../date_add/index) | Date arithmetic - addition. |
| [DATE\_FORMAT](../date_format/index) | Formats the date value according to the format string. |
| [DATE\_SUB](../date_sub/index) | Date arithmetic - subtraction. |
| [DAY](../day/index) | Synonym for DAYOFMONTH(). |
| [DAYNAME](../dayname/index) | Return the name of the weekday. |
| [DAYOFMONTH](../dayofmonth/index) | Returns the day of the month. |
| [DAYOFWEEK](../dayofweek/index) | Returns the day of the week index. |
| [DAYOFYEAR](../dayofyear/index) | Returns the day of the year. |
| [EXTRACT](../extract/index) | Extracts a portion of the date. |
| [FROM\_DAYS](../from_days/index) | Returns a date given a day. |
| [FROM\_UNIXTIME](../from_unixtime/index) | Returns a datetime from a Unix timestamp. |
| [GET\_FORMAT](../get_format/index) | Returns a format string. |
| [HOUR](../hour/index) | Returns the hour. |
| [LAST\_DAY](../last_day/index) | Returns the last day of the month. |
| [LOCALTIME](../localtime/index) | Synonym for NOW(). |
| [LOCALTIMESTAMP](../localtimestamp/index) | Synonym for NOW(). |
| [MAKEDATE](../makedate/index) | Returns a date given a year and day. |
| [MAKETIME](../maketime/index) | Returns a time. |
| [MICROSECOND](../microsecond/index) | Returns microseconds from a date or datetime. |
| [MINUTE](../minute/index) | Returns a minute from 0 to 59. |
| [MONTH](../month/index) | Returns a month from 1 to 12. |
| [MONTHNAME](../monthname/index) | Returns the full name of the month. |
| [NOW](../now/index) | Returns the current date and time. |
| [PERIOD\_ADD](../period_add/index) | Add months to a period. |
| [PERIOD\_DIFF](../period_diff/index) | Number of months between two periods. |
| [QUARTER](../quarter/index) | Returns year quarter from 1 to 4. |
| [SECOND](../second/index) | Returns the second of a time. |
| [SEC\_TO\_TIME](../sec_to_time/index) | Converts a second to a time. |
| [STR\_TO\_DATE](../str_to_date/index) | Converts a string to date. |
| [SUBDATE](../subdate/index) | Subtract a date unit or number of days. |
| [SUBTIME](../subtime/index) | Subtracts a time from a date/time. |
| [SYSDATE](../sysdate/index) | Returns the current date and time. |
| [TIME Function](../time-function/index) | Extracts the time. |
| [TIMEDIFF](../timediff/index) | Returns the difference between two date/times. |
| [TIMESTAMP FUNCTION](../timestamp-function/index) | Return the datetime, or add a time to a date/time. |
| [TIMESTAMPADD](../timestampadd/index) | Add interval to a date or datetime. |
| [TIMESTAMPDIFF](../timestampdiff/index) | Difference between two datetimes. |
| [TIME\_FORMAT](../time_format/index) | Formats the time value according to the format string. |
| [TIME\_TO\_SEC](../time_to_sec/index) | Returns the time argument, converted to seconds. |
| [TO\_DAYS](../to_days/index) | Number of days since year 0. |
| [TO\_SECONDS](../to_seconds/index) | Number of seconds since year 0. |
| [UNIX\_TIMESTAMP](../unix_timestamp/index) | Returns a Unix timestamp. |
| [UTC\_DATE](../utc_date/index) | Returns the current UTC date. |
| [UTC\_TIME](../utc_time/index) | Returns the current UTC time. |
| [UTC\_TIMESTAMP](../utc_timestamp/index) | Returns the current UTC date and time. |
| [WEEK](../week/index) | Returns the week number. |
| [WEEKDAY](../weekday/index) | Returns the weekday index. |
| [WEEKOFYEAR](../weekofyear/index) | Returns the calendar week of the date as a number in the range from 1 to 53. |
| [YEAR](../year/index) | Returns the year for the given date. |
| [YEARWEEK](../yearweek/index) | Returns year and week for a date. |
mariadb Information Schema TABLE_CONSTRAINTS Table Information Schema TABLE\_CONSTRAINTS Table
===========================================
The [Information Schema](../information_schema/index) `TABLE_CONSTRAINTS` table contains information about tables that have [constraints](../constraint/index).
It has the following columns:
| Column | Description |
| --- | --- |
| `CONSTRAINT_CATALOG` | Always `def`. |
| `CONSTRAINT_SCHEMA` | Database name containing the constraint. |
| `CONSTRAINT_NAME` | Constraint name. |
| `TABLE_SCHEMA` | Database name. |
| `TABLE_NAME` | Table name. |
| `CONSTRAINT_TYPE` | Type of constraint; one of `UNIQUE`, `PRIMARY KEY`, `FOREIGN KEY` or `CHECK`. |
The [REFERENTIAL\_CONSTRAINTS](../information-schema-referential_constraints-table/index) table has more information about foreign keys.
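For example, to list all constraints defined on tables in a given database (the database name `test` is illustrative):

```sql
SELECT CONSTRAINT_NAME, TABLE_NAME, CONSTRAINT_TYPE
FROM information_schema.TABLE_CONSTRAINTS
WHERE TABLE_SCHEMA = 'test';
```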
mariadb Buildbot Setup for Ubuntu-Debian Buildbot Setup for Ubuntu-Debian
================================
Setting up a Buildbot slave on Ubuntu and Debian
------------------------------------------------
One great way to contribute to MariaDB development is to run a buildbot builder. These builders are used for running automated builds and tests of MariaDB. The instructions on this page should help you get a builder set up on Ubuntu and Debian.
### Setting up your MariaDB build environment
For Ubuntu and Debian, a quick way to install much of what you need is:
```
sudo apt-get build-dep mariadb-server
```
If you're running a version of Debian or Ubuntu that doesn't have MariaDB, then do the following:
```
sudo apt-get build-dep mysql-server
```
After running one (or both) of the above, run the following to catch things that they may have missed:
```
sudo apt-get install devscripts fakeroot doxygen texlive-latex-base ghostscript libevent-dev libssl-dev zlib1g-dev libpam0g-dev libreadline-gplv2-dev autoconf automake automake1.11 dpatch ghostscript-x libfontenc1 libjpeg62 libltdl-dev libltdl7 libmail-sendmail-perl libxfont1 lmodern texlive-latex-base-doc ttf-dejavu ttf-dejavu-extra libaio-dev xfonts-encodings xfonts-utils libxml2-dev unixodbc-dev bzr scons check libboost-all-dev openssl epm libjudy-dev libjemalloc-dev libcrack2-dev git libkrb5-dev libcurl4-openssl-dev thrift-compiler libsystemd-dev dh-systemd libssl1.0.2 openjdk-8-jdk uuid-dev libnuma-dev gdb libarchive-dev libasio-dev dh-exec
```
After setting up the build environment do a test build to confirm that things are working. First get the source code using the **git** instructions on the [Getting the MariaDB Source Code](../getting-the-mariadb-source-code/index) page, then follow the steps on the [Generic Build Instructions](../generic-build-instructions/index) page for building MariaDB using **cmake**. If your build succeeds, you're ready to move on to the next step of installing and configuring buildbot.
Do not hesitate to ask for help on the [maria-developers](https://launchpad.net/~maria-developers) mailing list or on [IRC](../irc/index).
### Buildbot installation and setup
#### Using APT
The easiest way to install buildbot on Ubuntu and Debian is to install the buildbot-slave package, like so:
```
sudo apt-get install buildbot-slave
```
#### Using Pip
Another way to install buildbot is using the Python **pip** package manager. Pip can be installed with:
```
sudo apt-get install python-pip
```
Next install twisted and the buildbot-slave package using pip:
```
sudo pip install twisted==11.0.0
sudo pip install buildbot-slave==0.8.9
```
#### Creating the Buildbot builder
After the buildbot-slave package is installed (either via apt or pip), you need to create the builder using the `buildslave create-slave` command. As part of this command you will need to specify a name for your buildslave and a password. Both need to be given to the MariaDB Buildbot maintainers so that they can add your builder to the build pool. Ask on the [maria-developers](https://launchpad.net/~maria-developers) mailing list or on [IRC](../irc/index) for who these people are.
An example command for creating the slave is:
```
sudo buildslave create-slave /var/lib/buildbot/slaves/maria buildbot.askmonty.org slavename password
```
If you installed buildbot using pip, the convention is to create a buildbot user and then, as that user, create the buildslave in the home directory like so:
```
sudo buildslave create-slave ~/maria-slave buildbot.askmonty.org slavename password
```
Put appropriate information in the info/admin and info/host files that are created; this will be displayed on the information screen about your builder. See here for an example: [bb01](http://buildbot.askmonty.org/buildbot/buildslaves/bb01)
Submit your builder information to the MariaDB Buildbot admins. Also let them know if your machine can run multiple builds at the same time (and how many). After your builder's information is added to the main buildbot configuration, all that's left for you to do is start your builder.
### Starting and stopping your builder
If you installed your builder using apt, then you can start and stop it with:
```
sudo /etc/init.d/buildslave start
sudo /etc/init.d/buildslave stop
```
If you installed your buildslave using pip, then do the following as the buildbot user in their home directory:
```
buildslave start maria-slave
buildslave stop maria-slave
```
mariadb RESET MASTER RESET MASTER
============
```
RESET MASTER [TO #]
```
Deletes all [binary log](../binary-log/index) files listed in the index file, resets the binary log index file to be empty, and creates a new binary log file with a suffix of .000001.
If `TO #` is given, then the first new binary log file will start from number #.
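For example, to restart binary logging with the first new file numbered 100 (the file basename depends on the server's log configuration):

```sql
RESET MASTER TO 100;
SHOW BINARY LOGS; -- the first listed file now ends in .000100
```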
This statement is for use only when the master is started for the first time, and should never be used if any slaves are actively [replicating](../replication/index) from the binary log.
See Also
--------
* The [PURGE BINARY LOGS](../sql-commands-purge-logs/index) statement is intended for use in active replication.
mariadb Stored Aggregate Functions Stored Aggregate Functions
==========================
**MariaDB starting with [10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/)**The ability to create stored aggregate functions was added in [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/).
[Aggregate functions](../aggregate-functions/index) are functions that are computed over a sequence of rows and return one result for the sequence of rows.
Creating a custom aggregate function is done using the [CREATE FUNCTION](../create-function/index) statement with two main differences:
* The addition of the AGGREGATE keyword, so `CREATE AGGREGATE FUNCTION`
* The `FETCH GROUP NEXT ROW` instruction inside the loop

Oracle PL/SQL compatibility using SQL/PL is also provided.
Standard Syntax
---------------
```
CREATE AGGREGATE FUNCTION function_name (parameters) RETURNS return_type
BEGIN
All types of declarations
DECLARE CONTINUE HANDLER FOR NOT FOUND RETURN return_val;
LOOP
FETCH GROUP NEXT ROW; -- fetches next row from table
other instructions
END LOOP;
END
```
Stored aggregate functions were a [2016 Google Summer of Code](../google-summer-of-code-2016/index) project by Varun Gupta.
### Using SQL/PL
```
SET sql_mode=Oracle;
DELIMITER //
CREATE AGGREGATE FUNCTION function_name (parameters) RETURN return_type
declarations
BEGIN
LOOP
FETCH GROUP NEXT ROW; -- fetches next row from table
-- other instructions
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN return_val;
END //
DELIMITER ;
```
Examples
--------
First a simplified example:
```
CREATE TABLE marks(stud_id INT, grade_count INT);
INSERT INTO marks VALUES (1,6), (2,4), (3,7), (4,5), (5,8);
SELECT * FROM marks;
+---------+-------------+
| stud_id | grade_count |
+---------+-------------+
| 1 | 6 |
| 2 | 4 |
| 3 | 7 |
| 4 | 5 |
| 5 | 8 |
+---------+-------------+
DELIMITER //
CREATE AGGREGATE FUNCTION IF NOT EXISTS aggregate_count(x INT) RETURNS INT
BEGIN
DECLARE count_students INT DEFAULT 0;
DECLARE CONTINUE HANDLER FOR NOT FOUND
RETURN count_students;
LOOP
FETCH GROUP NEXT ROW;
IF x THEN
SET count_students = count_students+1;
END IF;
END LOOP;
END //
DELIMITER ;
```
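Once created, the function can be used like any built-in aggregate. On the sample data above it counts the rows with a non-zero `stud_id`:

```sql
SELECT aggregate_count(stud_id) FROM marks;
```

It can also be combined with `GROUP BY`, just like the built-in aggregate functions.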
A non-trivial example that cannot easily be rewritten using existing functions:
```
DELIMITER //
CREATE AGGREGATE FUNCTION medi_int(x INT) RETURNS DOUBLE
BEGIN
DECLARE CONTINUE HANDLER FOR NOT FOUND
BEGIN
DECLARE res DOUBLE;
DECLARE cnt INT DEFAULT (SELECT COUNT(*) FROM tt);
DECLARE lim INT DEFAULT (cnt-1) DIV 2;
IF cnt % 2 = 0 THEN
SET res = (SELECT AVG(a) FROM (SELECT a FROM tt ORDER BY a LIMIT lim,2) ttt);
ELSE
SET res = (SELECT a FROM tt ORDER BY a LIMIT lim,1);
END IF;
DROP TEMPORARY TABLE tt;
RETURN res;
END;
CREATE TEMPORARY TABLE tt (a INT);
LOOP
FETCH GROUP NEXT ROW;
INSERT INTO tt VALUES (x);
END LOOP;
END //
DELIMITER ;
```
### SQL/PL Example
This uses the same marks table as created above.
```
SET sql_mode=Oracle;
DELIMITER //
CREATE AGGREGATE FUNCTION aggregate_count(x INT) RETURN INT AS count_students INT DEFAULT 0;
BEGIN
LOOP
FETCH GROUP NEXT ROW;
IF x THEN
SET count_students := count_students+1;
END IF;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN count_students;
END aggregate_count //
DELIMITER ;
SELECT aggregate_count(stud_id) FROM marks;
```
See Also
--------
* [Stored Function Overview](../stored-function-overview/index)
* [CREATE FUNCTION](../create-function/index)
* [SHOW CREATE FUNCTION](../show-create-function/index)
* [DROP FUNCTION](../drop-function/index)
* [Stored Routine Privileges](../stored-routine-privileges/index)
* [SHOW FUNCTION STATUS](../show-function-status/index)
* [Information Schema ROUTINES Table](../information-schema-routines-table/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb ST_PolyFromWKB ST\_PolyFromWKB
===============
Syntax
------
```
ST_PolyFromWKB(wkb[,srid])
ST_PolygonFromWKB(wkb[,srid])
PolyFromWKB(wkb[,srid])
PolygonFromWKB(wkb[,srid])
```
Description
-----------
Constructs a [POLYGON](../polygon/index) value using its [WKB](../well-known-binary-wkb-format/index) representation and [SRID](../srid/index).
`ST_PolyFromWKB()`, `ST_PolygonFromWKB()`, `PolyFromWKB()` and `PolygonFromWKB()` are synonyms.
Examples
--------
```
SET @g = ST_AsBinary(ST_PolyFromText('POLYGON((1 1,1 5,4 9,6 9,9 3,7 2,1 1))'));
SELECT ST_AsText(ST_PolyFromWKB(@g)) AS p;
+----------------------------------------+
| p |
+----------------------------------------+
| POLYGON((1 1,1 5,4 9,6 9,9 3,7 2,1 1)) |
+----------------------------------------+
```
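The optional second argument sets the SRID of the resulting geometry. Continuing the example above (`ST_SRID` is used here only to read the value back):

```
SELECT ST_SRID(ST_PolyFromWKB(@g, 4326)) AS srid;
+------+
| srid |
+------+
| 4326 |
+------+
```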
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Managing ColumnStore System Managing ColumnStore System
============================
Documentation for the latest release of ColumnStore is not available on the Knowledge Base. Instead, see:
* [Release Notes](https://mariadb.com/docs/release-notes/mariadb-columnstore-1-5-2-release-notes/)
* [Deployment Instructions](https://mariadb.com/docs/deploy/community-single-columnstore/)
| Title | Description |
| --- | --- |
| [ColumnStore Administrative Console](../columnstore-administrative-console/index) | Configure, monitor, and manage ColumnStore system and servers |
| [ColumnStore System Operations](../columnstore-system-operations/index) | MariaDB ColumnStore System Operations, Status and Configuration |
| [ColumnStore System Monitoring Configuration](../columnstore-system-monitoring-configuration/index) | Configuring various monitoring parameters |
| [Managing ColumnStore Module Configurations](../managing-columnstore-module-configurations/index) | Managing module configurations |
| [MariaDB ColumnStore Backup and Restore](../mariadb-columnstore-backup-and-restore/index) | |
| [ColumnStore Audit Plugin](../columnstore-audit-plugin/index) | Introduction MariaDB server includes an optional Audit Plugin that enables... |
| [ColumnStore Configuration File Update and Distribution](../columnstore-configuration-file-update-and-distribution/index) | In the case where an entry in the MariaDB ColumnStore's configuration needs... |
| [ColumnStore Multiple User Module Guide](../columnstore-multiple-user-module-guide/index) | Introduction This Document describes the setup and the functionality of t... |
| [ColumnStore Partition Management](../columnstore-partition-management/index) | Partition Management SQL Commands to view, drop, disable, and enable partitions |
| [ColumnStore Redistribute Data](../columnstore-redistribute-data/index) | Introduction When new PM nodes are added to a running instance it may be d... |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb MariaDB Audit Plugin MariaDB Audit Plugin
=====================
MariaDB and MySQL are used in a broad range of environments, but if you needed to record user access to be in compliance with auditing regulations for your organization, you would previously have had to use other database solutions. To meet this need, though, MariaDB has developed the MariaDB Audit Plugin. Although the MariaDB Audit Plugin has some unique features available only for MariaDB, it can be used also with MySQL.
Basically, the purpose of the MariaDB Audit Plugin is to log the server's activity. For each client session, it records who connected to the server (i.e., user name and host), what queries were executed, which tables were accessed, and which server variables were changed. This information is stored in a rotating log file, or it may be sent to the local `syslogd`.
The MariaDB Audit Plugin works with MariaDB, MySQL (as of versions 5.5.34 and 10.0.7) and Percona Server. MariaDB has included the Audit Plugin by default since versions 10.0.10 and 5.5.37, and it can be installed in any version from [MariaDB 5.5.20](https://mariadb.com/kb/en/mariadb-5520-release-notes/).
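As a sketch of the typical first steps (covered in detail in the installation and configuration pages linked below), the plugin can be loaded and logging switched on from SQL:

```
INSTALL SONAME 'server_audit';       -- load the plugin library
SET GLOBAL server_audit_logging=ON;  -- start writing the audit log
SHOW GLOBAL VARIABLES LIKE 'server_audit%';
```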
#### Additional documentation
Below are links to additional documentation on the MariaDB Audit Plugin. They explain in detail how to install, configure and use the Audit Plugin.
* [Installation](../mariadb-audit-plugin-installation/index)
* [Configuration](../mariadb-audit-plugin-configuration/index)
* [Log Settings](../mariadb-audit-plugin-log-settings/index)
* [Log Location & Rotation](../mariadb-audit-plugin-location-and-rotation-of-logs/index)
* [Log Format](../mariadb-audit-plugin-log-format/index)
* [Status Variables](../mariadb-audit-plugin-status-variables/index)
* [System Variables](../mariadb-audit-plugin-system-variables/index)
* [Release Notes](../release-notes-mariadb-audit-plugin/index)
#### Tutorials
Below are links to some tutorials on MariaDB's site and other sites. They may help you to get more out of the MariaDB Audit Plugin.
* [Introducing the MariaDB Audit Plugin](https://mariadb.com/resources/blog/introducing-mariadb-audit-plugin)
by Anatoliy Dimitrov, September 2, 2014
* [Activating MariaDB Audit Log](https://tunnelix.com/activating-mariadb-audit-log/) by Jaykishan Mutkawoa, May 30, 2016
* [Installing MariaDB Audit Plugin on Amazon RDS](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MySQL.Options.AuditPlugin.html)
Amazon RDS supports using the MariaDB Audit Plugin on MySQL and MariaDB database instances.
#### Web Log Articles
Below are links to web log articles on the MariaDB Audit Plugin. You may find them useful in understanding better how to use the Audit Plugin. Since some of these articles are older, they won't include changes and improvements in newer versions. You can rely on the documentation pages listed above for the most current information.
* [Activating Auditing for MariaDB in 5 Minutes](https://mariadb.com/resources/blog/activating-auditing-mariadb-and-mysql-5-minutes)
by Ralf Gebhardt, September 29, 2013
* [Query and Password Filtering with the MariaDB Audit Plugin](https://mariadb.com/resources/blog/query-and-password-filtering-mariadb-audit-plugin)
by Ralf Gebhardt, May 4, 2015
* [Set Up a Remote Log File using rsyslog](https://mariadb.com/resources/blog/mariadb-audit-plugin-set-remote-log-file-using-rsyslog)
by Ralf Gebhardt, December 16, 2013
* [MySQL Auditing with MariaDB Auditing Plugin](https://planet.mysql.com/entry/?id=5994184) by Peter Zaitsev, February 15, 2016
#### Sub-Documents
| Title | Description |
| --- | --- |
| [MariaDB Audit Plugin - Installation](../mariadb-audit-plugin-installation/index) | Installing the MariaDB Audit Plugin. |
| [MariaDB Audit Plugin - Configuration](../mariadb-audit-plugin-configuration/index) | Audit Plugin global variables within MariaDB |
| [MariaDB Audit Plugin - Log Settings](../mariadb-audit-plugin-log-settings/index) | Log audit events to a file or syslog. |
| [MariaDB Audit Plugin - Location and Rotation of Logs](../mariadb-audit-plugin-location-and-rotation-of-logs/index) | Logs can be written to a separate file or to the system logs |
| [MariaDB Audit Plugin - Log Format](../mariadb-audit-plugin-log-format/index) | The audit log is a set of records written as a list of fields to a file in plain‐text format. |
| [MariaDB Audit Plugin - Versions](../mariadb-audit-plugin-versions/index) | Releases of the MariaDB Audit Plugin, and in which versions of MariaDB each... |
| [MariaDB Audit Plugin Options and System Variables](../mariadb-audit-plugin-options-and-system-variables/index) | Description of Server\_Audit plugin options and system variables. |
| [MariaDB Audit Plugin - Status Variables](../mariadb-audit-plugin-status-variables/index) | Server Audit plugin status variables |
| [Release Notes - MariaDB Audit Plugin](../release-notes-mariadb-audit-plugin/index) | MariaDB Audit Plugin release notes |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb CRC32 CRC32
=====
Syntax
------
<= [MariaDB 10.7](../what-is-mariadb-107/index)
```
CRC32(expr)
```
From [MariaDB 10.8](../what-is-mariadb-108/index)
```
CRC32([par,]expr)
```
Description
-----------
Computes a cyclic redundancy check (CRC) value and returns a 32-bit unsigned value. The result is NULL if the argument is NULL. The argument is expected to be a string and (if possible) is treated as one if it is not.
Uses the ISO 3309 polynomial, as used by zlib and many others. [MariaDB 10.8](../what-is-mariadb-108/index) introduced the [CRC32C()](../crc32c/index) function, which uses the alternative Castagnoli polynomial.
**MariaDB starting with [10.8](../what-is-mariadb-108/index)**Often, CRC is computed in pieces. To facilitate this, [MariaDB 10.8.0](https://mariadb.com/kb/en/mariadb-1080-release-notes/) introduced an optional parameter: CRC32('MariaDB')=CRC32(CRC32('Maria'),'DB').
Examples
--------
```
SELECT CRC32('MariaDB');
+------------------+
| CRC32('MariaDB') |
+------------------+
| 4227209140 |
+------------------+
SELECT CRC32('mariadb');
+------------------+
| CRC32('mariadb') |
+------------------+
| 2594253378 |
+------------------+
```
From [MariaDB 10.8.0](https://mariadb.com/kb/en/mariadb-1080-release-notes/)
```
SELECT CRC32(CRC32('Maria'),'DB');
+----------------------------+
| CRC32(CRC32('Maria'),'DB') |
+----------------------------+
| 4227209140 |
+----------------------------+
```
See Also
--------
* [CRC32C()](../crc32c/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb JSON_VALID JSON\_VALID
===========
**MariaDB starting with [10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)**JSON functions were added in [MariaDB 10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/).
Syntax
------
```
JSON_VALID(value)
```
Description
-----------
Indicates whether the given value is a valid JSON document or not. Returns `1` if valid, `0` if not, and NULL if the argument is NULL.
From [MariaDB 10.4.3](https://mariadb.com/kb/en/mariadb-1043-release-notes/), the JSON\_VALID function is automatically used as a [CHECK constraint](../constraint/index#check-constraints) for the [JSON data type alias](../json-data-type/index) in order to ensure that a valid JSON document is inserted.
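For example (a minimal sketch; the table and error text below are illustrative, with the error message abbreviated), an invalid document is rejected by the implicit constraint:

```
CREATE TABLE logs (doc JSON);
INSERT INTO logs VALUES ('{"id": 1}');        -- succeeds
INSERT INTO logs VALUES ('{"id": 1, "x"}');   -- fails: CONSTRAINT ... failed
```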
Examples
--------
```
SELECT JSON_VALID('{"id": 1, "name": "Monty"}');
+------------------------------------------+
| JSON_VALID('{"id": 1, "name": "Monty"}') |
+------------------------------------------+
| 1 |
+------------------------------------------+
SELECT JSON_VALID('{"id": 1, "name": "Monty", "oddfield"}');
+------------------------------------------------------+
| JSON_VALID('{"id": 1, "name": "Monty", "oddfield"}') |
+------------------------------------------------------+
| 0 |
+------------------------------------------------------+
```
See Also
--------
* [JSON video tutorial](https://www.youtube.com/watch?v=sLE7jPETp8g) covering JSON\_VALID.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb InnoDB Encryption Keys InnoDB Encryption Keys
======================
InnoDB uses [encryption key management](../encryption-key-management/index) plugins to support the use of multiple [encryption keys](../encryption-key-management/index#using-multiple-encryption-keys).
Encryption Keys
---------------
Each encryption key has a 32-bit integer that serves as a key identifier.
The default key is set using the [innodb\_default\_encryption\_key\_id](../innodb-system-variables/index#innodb_default_encryption_key_id) system variable.
Encryption keys can also be specified with the [ENCRYPTION\_KEY\_ID](../create-table/index#encryption_key_id) table option for tables that use [file-per-table](../innodb-file-per-table-tablespaces/index) tablespaces.
InnoDB encrypts the [temporary tablespace](../innodb-temporary-tablespaces/index) using the encryption key with the ID `1`.
InnoDB encrypts the [Redo Log](../innodb-redo-log/index) using the encryption key with the ID `1`.
### Keys with Manually Encrypted Tablespaces
With tables that use [manually](../innodb-enabling-encryption/index#enabling-encryption-for-manually-encrypted-tablespaces) enabled encryption, one way to set the specific encryption key for the table is to use the [ENCRYPTION\_KEY\_ID](../create-table/index#encryption_key_id) table option. For example:
```
CREATE TABLE tab1 (
id int PRIMARY KEY,
str varchar(50)
) ENCRYPTED=YES ENCRYPTION_KEY_ID=100;
SELECT NAME, ENCRYPTION_SCHEME, CURRENT_KEY_ID
FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE NAME='db1/tab1';
+----------+-------------------+----------------+
| NAME | ENCRYPTION_SCHEME | CURRENT_KEY_ID |
+----------+-------------------+----------------+
| db1/tab1 | 1 | 100 |
+----------+-------------------+----------------+
```
If the [ENCRYPTION\_KEY\_ID](../create-table/index#encryption_key_id) table option is not set for a table that uses [manually](../innodb-enabling-encryption/index#enabling-encryption-for-manually-encrypted-tablespaces) enabled encryption, then it will inherit the value from the [innodb\_default\_encryption\_key\_id](../innodb-system-variables/index#innodb_default_encryption_key_id) system variable. For example:
```
SET SESSION innodb_default_encryption_key_id=100;
CREATE TABLE tab1 (
id int PRIMARY KEY,
str varchar(50)
) ENCRYPTED=YES;
SELECT NAME, ENCRYPTION_SCHEME, CURRENT_KEY_ID
FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE NAME='db1/tab1';
+----------+-------------------+----------------+
| NAME | ENCRYPTION_SCHEME | CURRENT_KEY_ID |
+----------+-------------------+----------------+
| db1/tab1 | 1 | 100 |
+----------+-------------------+----------------+
```
### Keys with Automatically Encrypted Tablespaces
With tables that use [automatically](../innodb-enabling-encryption/index#enabling-encryption-for-automatically-encrypted-tablespaces) enabled encryption, one way to set the specific encryption key for the table is to use the [innodb\_default\_encryption\_key\_id](../innodb-system-variables/index#innodb_default_encryption_key_id) system variable. For example:
```
SET GLOBAL innodb_encryption_threads=4;
SET GLOBAL innodb_encrypt_tables=ON;
SET SESSION innodb_default_encryption_key_id=100;
CREATE TABLE tab1 (
id int PRIMARY KEY,
str varchar(50)
);
SELECT NAME, ENCRYPTION_SCHEME, CURRENT_KEY_ID
FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE NAME='db1/tab1';
+----------+-------------------+----------------+
| NAME | ENCRYPTION_SCHEME | CURRENT_KEY_ID |
+----------+-------------------+----------------+
| db1/tab1 | 1 | 100 |
+----------+-------------------+----------------+
```
InnoDB tables that are part of the [system](../innodb-system-tablespaces/index) tablespace can only be encrypted using the encryption key set by the [innodb\_default\_encryption\_key\_id](../innodb-system-variables/index#innodb_default_encryption_key_id) system variable.
If the table is in a [file-per-table](../innodb-file-per-table-tablespaces/index) tablespace, and if [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) is set to `ON` or `FORCE`, and if [innodb\_encryption\_threads](../innodb-system-variables/index#innodb_encryption_threads) is set to a value greater than `0`, then you can also set the specific encryption key for the table by using the [ENCRYPTION\_KEY\_ID](../create-table/index#encryption_key_id) table option. For example:
```
SET GLOBAL innodb_encryption_threads=4;
SET GLOBAL innodb_encrypt_tables=ON;
CREATE TABLE tab1 (
id int PRIMARY KEY,
str varchar(50)
) ENCRYPTION_KEY_ID=100;
SELECT NAME, ENCRYPTION_SCHEME, CURRENT_KEY_ID
FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE NAME='db1/tab1';
+----------+-------------------+----------------+
| NAME | ENCRYPTION_SCHEME | CURRENT_KEY_ID |
+----------+-------------------+----------------+
| db1/tab1 | 1 | 100 |
+----------+-------------------+----------------+
```
However, if [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) is set to `OFF` or if [innodb\_encryption\_threads](../innodb-system-variables/index#innodb_encryption_threads) is set to `0`, then this will not work. See [InnoDB Encryption Troubleshooting: Setting Encryption Key ID For an Unencrypted Table](../innodb-encryption-troubleshooting/index#setting-encryption-key-id-for-an-unencrypted-table) for more information.
Key Rotation
------------
Some [key management and encryption plugins](../encryption-key-management/index) allow you to automatically rotate and version your encryption keys. If a plugin supports key rotation, and if it rotates the encryption keys, then InnoDB's [background encryption threads](../innodb-background-encryption-threads/index) can re-encrypt InnoDB pages that use the old key version with the new key version.
You can set the maximum age for an encryption key using the [innodb\_encryption\_rotate\_key\_age](../innodb-system-variables/index#innodb_encryption_rotate_key_age) system variable. When this variable is set to a non-zero value, background encryption threads constantly check pages to determine if any page is encrypted with a key version that's too old. When the key version is too old, any page encrypted with the older version of the key is automatically re-encrypted in the background to use a more current version of the key. Bear in mind, this constant checking can sometimes result in high CPU usage.
Key rotation for the InnoDB [Redo Log](../innodb-redo-log/index) is only supported in [MariaDB 10.4.0](https://mariadb.com/kb/en/mariadb-1040-release-notes/) and later. For more information, see [MDEV-12041](https://jira.mariadb.org/browse/MDEV-12041).
In order for key rotation to work, both the backend key management service (KMS) and the corresponding [key management and encryption plugin](../encryption-key-management/index) have to support key rotation. See [Encryption Key Management: Support for Key Rotation in Encryption Plugins](../encryption-key-management/index#support-for-key-rotation-in-encryption-plugins) to determine which plugins currently support key rotation.
### Disabling Background Key Rotation Operations
In the event that you encounter issues with background key encryption, you can disable it by setting the [innodb\_encryption\_rotate\_key\_age](../innodb-system-variables/index#innodb_encryption_rotate_key_age) system variable to `0`. You may find this useful when the constant key version checks lead to excessive CPU usage. It's also useful in cases where your encryption key management plugin does not support key rotation, (such as with the [file\_key\_management](../encryption-key-management/index#file-key-management-encryption-plugin) plugin). For more information, see [MDEV-14180](https://jira.mariadb.org/browse/MDEV-14180).
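For example, when using the file\_key\_management plugin (which does not support key rotation), background rotation checks can be disabled persistently by adding the variable to a relevant server option group in an option file; a minimal sketch:

```
[mariadb]
...
innodb_encryption_rotate_key_age = 0
```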
There are, however, issues that can arise when the background key rotation is disabled.
#### Pending Encryption Operations
Prior to [MariaDB 10.2.24](https://mariadb.com/kb/en/mariadb-10224-release-notes/), [MariaDB 10.3.15](https://mariadb.com/kb/en/mariadb-10315-release-notes/), and [MariaDB 10.4.5](https://mariadb.com/kb/en/mariadb-1045-release-notes/), when you updated the value of the [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) system variable, InnoDB internally treated the subsequent [background operations](../innodb-background-encryption-threads/index#background-operations) to encrypt and decrypt tablespaces as background key rotations. See [MDEV-14398](https://jira.mariadb.org/browse/MDEV-14398) for more information.
In older versions of MariaDB, if you have recently changed the value of the [innodb\_encrypt\_tables](../innodb-system-variables/index#innodb_encrypt_tables) system variable, then you must ensure that any pending background encryption or decryption operations are complete before disabling key rotation. You can check the status of background encryption operations by querying the [INNODB\_TABLESPACES\_ENCRYPTION](../information-schema-innodb_tablespaces_encryption-table/index) table in the [information\_schema](../information-schema/index) database.
See [InnoDB Background Encryption Threads: Checking the Status of Background Operations](../innodb-background-encryption-threads/index#checking-the-status-of-background-operations) for some example queries.
Otherwise, in older versions of MariaDB, if you disable key rotation while there are background encryption threads at work, it may result in unencrypted tables that you want encrypted or vice versa.
For more information, see [MDEV-14398](https://jira.mariadb.org/browse/MDEV-14398).
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Galera Load Balancer Galera Load Balancer
====================
Galera Load Balancer is a simple load balancer specifically designed for [Galera Cluster](../galera/index). Like Galera, it only runs on Linux. Galera Load Balancer is developed and maintained by Codership. Documentation is available [on fromdual.com](http://www.fromdual.com/galera-load-balancer-documentation).
Galera Load Balancer is inspired by pen, a generic TCP load balancer. However, because pen balances generic TCP connections, the techniques it uses are not well suited to the particular use case of database servers. Galera Load Balancer is optimized for this type of workload.
Several balancing policies are supported. Each node can be assigned a different weight. Nodes with a higher weight are preferred. Depending on the selected policy, other nodes can even be ignored, until the preferred nodes crash.
A lightweight daemon called glbd receives connections from clients and redirects them to nodes. No dedicated client exists for this daemon: a generic TCP client, like nc, can be used to send administrative commands and read the usage statistics.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Optimizer Statistics in MyRocks Optimizer Statistics in MyRocks
===============================
This article describes how the MyRocks storage engine provides statistics to the query optimizer.
There are three kinds of statistics:
* Table statistics (number of rows in the table, average row size)
* Index cardinality (how many distinct values are in the index)
* Records-in-range estimates (how many rows are in a certain range, e.g. "const1 < tbl.key < const2")
How MyRocks computes statistics
-------------------------------
MyRocks (actually RocksDB) uses LSM files which are written once and never updated. When an LSM file is written, MyRocks will compute index cardinalities and number-of-rows for the data in the file. (The file generally has rows, index records and/or tombstones for multiple tables/indexes).
For performance reasons, statistics are computed based on a fraction of rows in the LSM file. The percentage of rows used is controlled by [rocksdb\_table\_stats\_sampling\_pct](../myrocks-system-variables/index#rocksdb_table_stats_sampling_pct); the default value is 10%.
Before the data is dumped into LSM file, it is stored in the MemTable. MemTable doesn't allow computing index cardinalities, but it can provide an approximate number of rows in the table. Use of MemTable data for statistics is controlled by [rocksdb\_force\_compute\_memtable\_stats](../myrocks-system-variables/index#rocksdb_force_compute_memtable_stats); the default value is `ON`.
### Are index statistics predictable?
Those who create or run MTR tests need to know whether EXPLAIN output is deterministic. For MyRocks tables, the answer is NO (just as for InnoDB).
Statistics are computed using sampling and GetApproximateMemTableStats(), which means that the #rows column in the EXPLAIN output may vary slightly.
### Records-in-range estimates
MyRocks uses RocksDB's GetApproximateSizes() call to produce an estimate for the number of rows in the certain range. The data in MemTable is also taken into account by issuing a GetApproximateMemTableStats call.
ANALYZE TABLE
-------------
ANALYZE TABLE will possibly flush the MemTable (depending on the [rocksdb\_flush\_memtable\_on\_analyze](../myrocks-system-variables/index#rocksdb_flush_memtable_on_analyze) and [rocksdb\_pause\_background\_work](../myrocks-system-variables/index#rocksdb_pause_background_work) settings).
After that, it will re-read statistics from the SST files and re-compute the summary numbers (TODO: and if the data was already on disk, the result should not be different from the one we had before ANALYZE?)
Debugging helper variables
--------------------------
There are a few variables that will cause MyRocks to report certain pre-defined estimate numbers to the optimizer:
* @@rocksdb\_records\_in\_range - if not 0, report that any range has this many rows.
* @@rocksdb\_force\_index\_records\_in\_range - if not 0 and a FORCE INDEX hint is used, report that any range has this many rows.
* @@rocksdb\_debug\_optimizer\_n\_rows - if not 0, report that any MyRocks table has this many rows.
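For instance, a test can pin the range estimate so that EXPLAIN output becomes deterministic (the table `t1`, the column `key1`, and the value 100 here are illustrative):

```
SET GLOBAL rocksdb_records_in_range=100;
EXPLAIN SELECT * FROM t1 WHERE key1 BETWEEN 1 AND 10;
-- the rows column now reports 100 for any range on a MyRocks table
SET GLOBAL rocksdb_records_in_range=0;  -- restore normal estimation
```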
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb mroonga_escape mroonga\_escape
===============
Syntax
------
```
mroonga_escape (string [,special_characters])
```
* `string` - required parameter specifying the text you want to escape
* `special_characters` - optional parameter specifying the characters to escape
Description
-----------
`mroonga_escape` is a [user-defined function](../user-defined-functions/index) (UDF) included with the [Mroonga storage engine](../mroonga/index), used for escaping a string. See [Creating Mroonga User-Defined Functions](../creating-mroonga-user-defined-functions/index) for details on creating this UDF if required.
If no `special_characters` parameter is provided, by default `+-<>*()":` are escaped.
Returns the escaped string.
Example
-------
```
SELECT mroonga_escape("+-<>~*()\"\:");
\\+\\-\\<\\>\\~\\*\\(\\)\\"\\:
```
See Also
--------
* [Creating Mroonga User-Defined Functions](../creating-mroonga-user-defined-functions/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Securing Communications in Galera Cluster Securing Communications in Galera Cluster
=========================================
By default, Galera Cluster replicates data between each node without encrypting it. This is generally acceptable when the cluster nodes run on the same host or in networks where security is guaranteed through other means. However, in cases where the cluster nodes exist on separate networks or in a high-risk network, the lack of encryption does introduce security concerns, as a malicious actor could potentially eavesdrop on the traffic or get a complete copy of the data by triggering an SST.
To mitigate this concern, Galera Cluster allows you to encrypt data in transit as it is replicated between each cluster node using the Transport Layer Security (TLS) protocol. TLS was formerly known as Secure Socket Layer (SSL), but strictly speaking, the SSL protocol is a predecessor to TLS, and that version of the protocol is now considered insecure. The documentation still often uses the term SSL, and for compatibility reasons TLS-related server system and status variables still use the prefix `ssl_`, but internally, MariaDB only supports its secure successors.
In order to secure connections between the cluster nodes, you need to ensure that all servers were compiled with TLS support. See [Secure Connections Overview](../secure-connections-overview/index) to determine how to check whether a server was compiled with TLS support.
For each cluster node, you also need a certificate, private key, and the Certificate Authority (CA) chain to verify the certificate. If you want to use self-signed certificates that are created with OpenSSL, then see [Certificate Creation with OpenSSL](../certificate-creation-with-openssl/index) for information on how to create those.
Securing Galera Cluster Replication Traffic
-------------------------------------------
In order to enable TLS for Galera Cluster's replication traffic, there are a number of [wsrep\_provider\_options](../wsrep_provider_options/index) that you need to set, such as:
* You need to set the path to the server's certificate by setting the `[socket.ssl\_cert](../wsrep_provider_options/index#socketssl_cert)` wsrep\_provider\_option.
* You need to set the path to the server's private key by setting the `[socket.ssl\_key](../wsrep_provider_options/index#socketssl_key)` wsrep\_provider\_option.
* You need to set the path to the certificate authority (CA) chain that can verify the server's certificate by setting the `[socket.ssl\_ca](../wsrep_provider_options/index#socketssl_ca)` wsrep\_provider\_option.
* If you want to restrict the server to certain ciphers, then you also need to set the `[socket.ssl\_cipher](../wsrep_provider_options/index#socketssl_cipher)` wsrep\_provider\_option.
It is also a good idea to set MariaDB Server's regular TLS-related system variables, so that TLS will be enabled for regular client connections as well. See [Securing Connections for Client and Server](../securing-connections-for-client-and-server/index) for information on how to do that.
For example, to set these variables for the server, add the system variables to a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index):
```
[mariadb]
...
ssl_cert = /etc/my.cnf.d/certificates/server-cert.pem
ssl_key = /etc/my.cnf.d/certificates/server-key.pem
ssl_ca = /etc/my.cnf.d/certificates/ca.pem
wsrep_provider_options="socket.ssl_cert=/etc/my.cnf.d/certificates/server-cert.pem;socket.ssl_key=/etc/my.cnf.d/certificates/server-key.pem;socket.ssl_ca=/etc/my.cnf.d/certificates/ca.pem"
```
Then [restart the server](../starting-and-stopping-mariadb-starting-and-stopping-mariadb/index) for the changes to take effect.
By setting both MariaDB Server's TLS-related system variables and Galera Cluster's TLS-related wsrep\_provider\_options, the server can secure both external client connections and Galera Cluster's replication traffic.
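After the restart, it is worth verifying that TLS is actually active on both layers. A quick sanity check from a client session (these are standard server variables, but the exact output will vary by version and configuration):

```
SHOW GLOBAL VARIABLES LIKE 'have_ssl';
SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';
```

`have_ssl` should report `YES`, and the `socket.ssl_*` settings configured above should appear in the reported `wsrep_provider_options` value.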
Securing State Snapshot Transfers
---------------------------------
The method that you would use to enable TLS for [State Snapshot Transfers (SSTs)](../introduction-to-state-snapshot-transfers-ssts/index) would depend on the value of `[wsrep\_sst\_method](../galera-cluster-system-variables/index#wsrep_sst_method)`.
### mariabackup
See [mariabackup SST Method: TLS](../mariabackup-sst-method/index#tls) for more information.
### xtrabackup-v2
See [xtrabackup-v2 SST Method: TLS](../xtrabackup-v2-sst-method/index#tls) for more information.
### mysqldump
This SST method simply uses the `[mysqldump](../mysqldump/index)` utility, so TLS would be enabled by following the guide at [Securing Connections for Client and Server: Enabling TLS for MariaDB Clients](../securing-connections-for-client-and-server/index#enabling-tls-for-mariadb-clients).
### rsync
This SST method supports encryption in transit via `[stunnel](https://www.stunnel.org/)`. See [Introduction to State Snapshot Transfers (SSTs): rsync](../introduction-to-state-snapshot-transfers-ssts/index#rsync) for more information.
InnoDB Undo Log
===============
Overview
--------
When a [transaction](../transactions/index) writes data, it inserts the changes directly into the table indexes or data (in the buffer pool or in physical files); no private copies are created. The old versions of the data being modified by active [XtraDB/InnoDB](../innodb/index) transactions are stored in the undo log. The original data can then be restored, or viewed by a consistent read.
Implementation Details
----------------------
Before a row is modified, it is copied into the undo log. Each normal row contains a pointer to the most recent version of the same row in the undo log. Each row in the undo log contains a pointer to the previous version, if any. So, each modified row has a history chain.
Rows are never physically deleted until a transaction ends. If they were deleted, the restore would be impossible. Thus, rows are simply marked for deletion.
Each transaction uses a *view* of the records. The [isolation level](../set-transaction/index#isolation-levels) determines how this view is created. For example, READ UNCOMMITTED usually uses the current version of rows, even if they are not committed (*dirty reads*). The other isolation levels require that the most recent committed version of rows is searched for in the undo log. READ COMMITTED uses a new view for each statement, while REPEATABLE READ and SERIALIZABLE use the same view for the whole transaction.
There is also a global history list of the data. When a transaction is committed, its history is added to this history list. The order of the list is the chronological order of the commits.
The purge thread deletes the rows in the undo log that are not needed by any existing view: rows for which a more recent version exists are deleted, as are the delete-marked rows.
If InnoDB needs to restore an old version, it will simply replace the newer version with the older one. When a transaction inserts a new row, there is no older version. However, in that case, the restore can be done by deleting the inserted rows.
Effects of Long-Running Transactions
------------------------------------
Understanding how the undo log works helps with understanding the negative effects of long transactions.
* Long transactions generate several old versions of the rows in the undo log. Those rows will probably be needed for a longer time, because other long transactions will need them. Since those transactions will generate more modified rows, a sort of combinatorial explosion can be observed. Thus, the undo log requires more space.
* Transactions may need to read very old versions of the rows in the history list, so their performance will degrade.
Of course read-only transactions do not write more entries in the undo log; however, they delay the purging of existing entries.
Also, long transactions can more likely result in deadlocks, but this problem is not related to the undo log.
Configuration
-------------
The undo log is not a log file that can be viewed on disk in the usual sense, such as the [error log](../error-log/index) or [slow query log](../slow-query-log/index), but rather an area of storage.
The undo log is usually part of the physical system tablespace, but from [MariaDB 10.0](../what-is-mariadb-100/index), the [innodb\_undo\_directory](../xtradbinnodb-server-system-variables/index#innodb_undo_directory) and [innodb\_undo\_tablespaces](../xtradbinnodb-server-system-variables/index#innodb_undo_tablespaces) system variables can be used to split it into separate tablespaces and store it in a different location (perhaps on a different storage device).
Each insert or update portion of the undo log is known as a rollback segment. The [innodb\_undo\_logs](../xtradbinnodb-server-system-variables/index#innodb_undo_logs) system variable specifies the number of rollback segments to be used per transaction.
The related [innodb\_available\_undo\_logs](../xtradbinnodb-server-status-variables/index#innodb_available_undo_logs) status variable stores the total number of available InnoDB undo logs.
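For example, to place four undo tablespaces in a dedicated directory, the variables above could be combined in an option file as follows (the path is illustrative, and note that `innodb_undo_tablespaces` can generally only be set when the InnoDB data files are first created):

```
[mariadb]
...
innodb_undo_directory = /var/lib/mysql-undo
innodb_undo_tablespaces = 4
innodb_undo_logs = 128
```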
JSON Sample Files
=================
### Expense.json
```
[
{
"WHO": "Joe",
"WEEK": [
{
"NUMBER": 3,
"EXPENSE": [
{
"WHAT": "Beer",
"AMOUNT": 18.00
},
{
"WHAT": "Food",
"AMOUNT": 12.00
},
{
"WHAT": "Food",
"AMOUNT": 19.00
},
{
"WHAT": "Car",
"AMOUNT": 20.00
}
]
},
{
"NUMBER": 4,
"EXPENSE": [
{
"WHAT": "Beer",
"AMOUNT": 19.00
},
{
"WHAT": "Beer",
"AMOUNT": 16.00
},
{
"WHAT": "Food",
"AMOUNT": 17.00
},
{
"WHAT": "Food",
"AMOUNT": 17.00
},
{
"WHAT": "Beer",
"AMOUNT": 14.00
}
]
},
{
"NUMBER": 5,
"EXPENSE": [
{
"WHAT": "Beer",
"AMOUNT": 14.00
},
{
"WHAT": "Food",
"AMOUNT": 12.00
}
]
}
]
},
{
"WHO": "Beth",
"WEEK": [
{
"NUMBER": 3,
"EXPENSE": [
{
"WHAT": "Beer",
"AMOUNT": 16.00
}
]
},
{
"NUMBER": 4,
"EXPENSE": [
{
"WHAT": "Food",
"AMOUNT": 17.00
},
{
"WHAT": "Beer",
"AMOUNT": 15.00
}
]
},
{
"NUMBER": 5,
"EXPENSE": [
{
"WHAT": "Food",
"AMOUNT": 12.00
},
{
"WHAT": "Beer",
"AMOUNT": 20.00
}
]
}
]
},
{
"WHO": "Janet",
"WEEK": [
{
"NUMBER": 3,
"EXPENSE": [
{
"WHAT": "Car",
"AMOUNT": 19.00
},
{
"WHAT": "Food",
"AMOUNT": 18.00
},
{
"WHAT": "Beer",
"AMOUNT": 18.00
}
]
},
{
"NUMBER": 4,
"EXPENSE": [
{
"WHAT": "Car",
"AMOUNT": 17.00
}
]
},
{
"NUMBER": 5,
"EXPENSE": [
{
"WHAT": "Beer",
"AMOUNT": 14.00
},
{
"WHAT": "Car",
"AMOUNT": 12.00
},
{
"WHAT": "Beer",
"AMOUNT": 19.00
},
{
"WHAT": "Food",
"AMOUNT": 12.00
}
]
}
]
}
]
```
### OEM example
This is an example showing how an OEM table can be implemented. Explaining in detail how it works, or providing a full guide to writing OEM tables for CONNECT, is beyond the scope of this document.
#### tabfic.h
The header File tabfic.h:
```
// TABFIC.H Olivier Bertrand 2008-2010
// External table type to read FIC files
#define TYPE_AM_FIC (AMT)129
typedef class FICDEF *PFICDEF;
typedef class TDBFIC *PTDBFIC;
typedef class FICCOL *PFICCOL;
/* ------------------------- FIC classes ------------------------- */
/*******************************************************************/
/* FIC: OEM table to read FIC files. */
/*******************************************************************/
/*******************************************************************/
/* This function is exported from the Tabfic.dll */
/*******************************************************************/
extern "C" PTABDEF __stdcall GetFIC(PGLOBAL g, void *memp);
/*******************************************************************/
/* FIC table definition class. */
/*******************************************************************/
class FICDEF : public DOSDEF { /* Logical table description */
friend class TDBFIC;
public:
// Constructor
FICDEF(void) {Pseudo = 3;}
// Implementation
virtual const char *GetType(void) {return "FIC";}
// Methods
virtual BOOL DefineAM(PGLOBAL g, LPCSTR am, int poff);
virtual PTDB GetTable(PGLOBAL g, MODE m);
protected:
// No Members
}; // end of class FICDEF
/*******************************************************************/
/* This is the class declaration for the FIC table. */
/*******************************************************************/
class TDBFIC : public TDBFIX {
friend class FICCOL;
public:
// Constructor
TDBFIC(PFICDEF tdp);
// Implementation
virtual AMT GetAmType(void) {return TYPE_AM_FIC;}
// Methods
virtual void ResetDB(void);
virtual int RowNumber(PGLOBAL g, BOOL b = FALSE);
// Database routines
virtual PCOL MakeCol(PGLOBAL g, PCOLDEF cdp, PCOL cprec, int n);
virtual BOOL OpenDB(PGLOBAL g, PSQL sqlp);
virtual int ReadDB(PGLOBAL g);
virtual int WriteDB(PGLOBAL g);
virtual int DeleteDB(PGLOBAL g, int irc);
protected:
// Members
int ReadMode; // To read soft deleted lines
int Rows; // Used for RowID
}; // end of class TDBFIC
/*******************************************************************/
/* Class FICCOL: for Monetary columns. */
/*******************************************************************/
class FICCOL : public DOSCOL {
public:
// Constructors
FICCOL(PGLOBAL g, PCOLDEF cdp, PTDB tdbp,
PCOL cprec, int i, PSZ am = "FIC");
// Implementation
virtual int GetAmType(void) {return TYPE_AM_FIC;}
// Methods
virtual void ReadColumn(PGLOBAL g);
protected:
// Members
char Fmt; // The column format
}; // end of class FICCOL
```
#### tabfic.cpp
The source File tabfic.cpp:
```
/*******************************************************************/
/* FIC: OEM table to read FIC files. */
/*******************************************************************/
#if defined(WIN32)
#define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff
#include <windows.h>
#endif // WIN32
#include "global.h"
#include "plgdbsem.h"
#include "reldef.h"
#include "filamfix.h"
#include "tabfix.h"
#include "tabfic.h"
int TDB::Tnum;
int DTVAL::Shift;
/*******************************************************************/
/* Initialize the CSORT static members. */
/*******************************************************************/
int CSORT::Limit = 0;
double CSORT::Lg2 = log(2.0);
size_t CSORT::Cpn[1000] = {0}; /* Precalculated cmpnum values */
/* ------------- Implementation of the FIC subtype --------------- */
/*******************************************************************/
/* This function is exported from the DLL. */
/*******************************************************************/
PTABDEF __stdcall GetFIC(PGLOBAL g, void *memp)
{
return new(g, memp) FICDEF;
} // end of GetFIC
/* -------------- Implementation of the FIC classes -------------- */
/*******************************************************************/
/* DefineAM: define specific AM block values from FIC file. */
/*******************************************************************/
BOOL FICDEF::DefineAM(PGLOBAL g, LPCSTR am, int poff)
{
ReadMode = GetIntCatInfo("Readmode", 0);
// Indicate that we are a BIN format
return DOSDEF::DefineAM(g, "BIN", poff);
} // end of DefineAM
/*******************************************************************/
/* GetTable: makes a new TDB of the proper type. */
/*******************************************************************/
PTDB FICDEF::GetTable(PGLOBAL g, MODE m)
{
return new(g) TDBFIC(this);
} // end of GetTable
/* --------------------------------------------------------------- */
/*******************************************************************/
/* Implementation of the TDBFIC class. */
/*******************************************************************/
TDBFIC::TDBFIC(PFICDEF tdp) : TDBFIX(tdp, NULL)
{
ReadMode = tdp->ReadMode;
Rows = 0;
} // end of TDBFIC constructor
/*******************************************************************/
/* Allocate FIC column description block. */
/*******************************************************************/
PCOL TDBFIC::MakeCol(PGLOBAL g, PCOLDEF cdp, PCOL cprec, int n)
{
PCOL colp;
// BINCOL is alright except for the Monetary format
if (cdp->GetFmt() && toupper(*cdp->GetFmt()) == 'M')
colp = new(g) FICCOL(g, cdp, this, cprec, n);
else
colp = new(g) BINCOL(g, cdp, this, cprec, n);
return colp;
} // end of MakeCol
/*******************************************************************/
/* RowNumber: return the ordinal number of the current row. */
/*******************************************************************/
int TDBFIC::RowNumber(PGLOBAL g, BOOL b)
{
return (b) ? Txfp->GetRowID() : Rows;
} // end of RowNumber
/*******************************************************************/
/* FIC Access Method reset table for re-opening. */
/*******************************************************************/
void TDBFIC::ResetDB(void)
{
Rows = 0;
TDBFIX::ResetDB();
} // end of ResetDB
/*******************************************************************/
/* FIC Access Method opening routine. */
/*******************************************************************/
BOOL TDBFIC::OpenDB(PGLOBAL g, PSQL sqlp)
{
if (Use == USE_OPEN) {
// Table already open, just replace it at its beginning.
return TDBFIX::OpenDB(g);
} // endif use
if (Mode != MODE_READ) {
// Currently FIC tables cannot be modified.
strcpy(g->Message, "FIC tables are read only");
return TRUE;
} // endif Mode
/*****************************************************************/
/* Physically open the FIC file. */
/*****************************************************************/
if (TDBFIX::OpenDB(g))
return TRUE;
Use = USE_OPEN;
return FALSE;
} // end of OpenDB
/*******************************************************************/
/* ReadDB: Data Base read routine for FIC access method. */
/*******************************************************************/
int TDBFIC::ReadDB(PGLOBAL g)
{
int rc;
/*****************************************************************/
/* Now start the reading process. */
/*****************************************************************/
do {
rc = TDBFIX::ReadDB(g);
} while (rc == RC_OK && ((ReadMode == 0 && *To_Line == '*') ||
(ReadMode == 2 && *To_Line != '*')));
Rows++;
return rc;
} // end of ReadDB
/*******************************************************************/
/* WriteDB: Data Base write routine for FIC access methods. */
/*******************************************************************/
int TDBFIC::WriteDB(PGLOBAL g)
{
strcpy(g->Message, "FIC tables are read only");
return RC_FX;
} // end of WriteDB
/*******************************************************************/
/* Data Base delete line routine for FIC access methods. */
/*******************************************************************/
int TDBFIC::DeleteDB(PGLOBAL g, int irc)
{
strcpy(g->Message, "Delete not enabled for FIC tables");
return RC_FX;
} // end of DeleteDB
// ---------------------- FICCOL functions --------------------------
/*******************************************************************/
/* FICCOL public constructor. */
/*******************************************************************/
FICCOL::FICCOL(PGLOBAL g, PCOLDEF cdp, PTDB tdbp, PCOL cprec, int i,
PSZ am) : DOSCOL(g, cdp, tdbp, cprec, i, am)
{
// Set additional FIC access method information for column.
Fmt = toupper(*cdp->GetFmt()); // Column format
} // end of FICCOL constructor
/*******************************************************************/
/* Handle the monetary value of this column. It is a big integer */
/* that represents the value multiplied by 1000. */
/* In this function we translate it to a double float value. */
/*******************************************************************/
void FICCOL::ReadColumn(PGLOBAL g)
{
char *p;
int rc;
uint n;
double fmon;
PTDBFIC tdbp = (PTDBFIC)To_Tdb;
/*****************************************************************/
/* If physical reading of the line was deferred, do it now. */
/*****************************************************************/
if (!tdbp->IsRead())
if ((rc = tdbp->ReadBuffer(g)) != RC_OK) {
if (rc == RC_EF)
sprintf(g->Message, MSG(INV_DEF_READ), rc);
longjmp(g->jumper[g->jump_level], 11);
} // endif
p = tdbp->To_Line + Deplac;
/*****************************************************************/
/* Set Value from the line field. */
/*****************************************************************/
if (*(SHORT*)(p + 8) < 0) {
n = ~*(SHORT*)(p + 8);
fmon = (double)n;
fmon *= 4294967296.0;
n = ~*(int*)(p + 4);
fmon += (double)n;
fmon *= 4294967296.0;
n = ~*(int*)p;
fmon += (double)n;
fmon++;
fmon /= 1000000.0;
fmon = -fmon;
} else {
fmon = ((double)*(USHORT*)(p + 8));
fmon *= 4294967296.0;
fmon += ((double)*(ULONG*)(p + 4));
fmon *= 4294967296.0;
fmon += ((double)*(ULONG*)p);
fmon /= 1000000.0;
} // endif neg
Value->SetValue(fmon);
} // end of ReadColumn
```
#### tabfic.def
The file tabfic.def: (required only on Windows)
```
LIBRARY TABFIC
DESCRIPTION 'FIC files'
EXPORTS
GetFIC @1
```
### JSON UDFs in a separate library
Although the JSON UDFs can be nicely included in the CONNECT library module, there are cases when you may need to have them in a separate library: when CONNECT is compiled embedded, or when you want to test or use these UDFs with other MariaDB versions that do not include them.
To build it, you need access to the latest MariaDB source code. Then, make a project containing these files:
1. jsonudf.cpp
2. json.cpp
3. value.cpp
4. osutil.c
5. plugutil.c
6. maputil.cpp
7. jsonutil.cpp
jsonutil.cpp is not distributed with the source code; you will have to create it from the following:
```
#include "my_global.h"
#include "mysqld.h"
#include "plugin.h"
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
#include "global.h"
extern "C" int GetTraceValue(void) { return 0; }
uint GetJsonGrpSize(void) { return 100; }
/***********************************************************************/
/* These replace missing function of the (not used) DTVAL class. */
/***********************************************************************/
typedef struct _datpar *PDTP;
PDTP MakeDateFormat(PGLOBAL, PSZ, bool, bool, int) { return NULL; }
int ExtractDate(char*, PDTP, int, int val[6]) { return 0; }
#ifdef __WIN__
my_bool CloseFileHandle(HANDLE h)
{
return !CloseHandle(h);
} /* end of CloseFileHandle */
#else /* UNIX */
my_bool CloseFileHandle(HANDLE h)
{
return (close(h)) ? TRUE : FALSE;
} /* end of CloseFileHandle */
int GetLastError()
{
return errno;
} /* end of GetLastError */
#endif // UNIX
/***********************************************************************/
/* Program for sub-allocating one item in a storage area. */
/* Note: This function is equivalent to PlugSubAlloc except that in */
/* case of insufficient memory, it returns NULL instead of doing a */
/* long jump. The caller must test the return value for error. */
/***********************************************************************/
void *PlgDBSubAlloc(PGLOBAL g, void *memp, size_t size)
{
PPOOLHEADER pph; // Points on area header.
if (!memp) // Allocation is to be done in the Sarea
memp = g->Sarea;
size = ((size + 7) / 8) * 8; /* Round up size to multiple of 8 */
pph = (PPOOLHEADER)memp;
if ((uint)size > pph->FreeBlk) { /* Not enough memory left in pool */
sprintf(g->Message,
"Not enough memory in Work area for request of %d (used=%d free=%d)",
(int)size, pph->To_Free, pph->FreeBlk);
return NULL;
} // endif size
// Do the suballocation the simplest way
memp = MakePtr(memp, pph->To_Free); // Points to sub_allocated block
pph->To_Free += size; // New offset of pool free block
pph->FreeBlk -= size; // New size of pool free block
return (memp);
} // end of PlgDBSubAlloc
```
You can create the file by copy/paste from the above.
Set all the additional include directories to the MariaDB include directories used when compiling plugins, plus a reference to the storage/connect directory, and compile it like any other UDF, giving any name to the resulting library module (jsonudf.dll was used on Windows).
Then you can create the functions using this name as the soname parameter.
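For example, assuming the module was built as `jsonudf.so` (or `jsonudf.dll` on Windows) and placed in the server's plugin directory, a function could then be declared like this (`jsonvalue` is just one of the CONNECT JSON UDFs; repeat for each function you built):

```
CREATE FUNCTION jsonvalue RETURNS STRING SONAME 'jsonudf.so';
```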
There are some restrictions when using the UDFs this way:
* The connect\_json\_grp\_size variable cannot be accessed. The group size is set to 100.
* In case of error, warnings are replaced by messages sent to stderr.
* No trace.
Table Elimination External Resources
====================================
* [an example of how to do this in MariaDB](http://www.anchormodeling.com/?page_id=303)
DROP ROLE
=========
Syntax
------
```
DROP ROLE [IF EXISTS] role_name [,role_name ...]
```
Description
-----------
The `DROP ROLE` statement removes one or more MariaDB [roles](../roles/index). To use this statement, you must have the global [CREATE USER](../grant/index#create-user) privilege or the [DELETE](../grant/index#table-privileges) privilege for the mysql database.
`DROP ROLE` does not disable roles for connections which selected them with [SET ROLE](../set-role/index). If a role has previously been set as a [default role](../set-default-role/index), `DROP ROLE` does not remove the record of the default role from the [mysql.user](../mysqluser-table/index) table. If the role is subsequently recreated and granted, it will again be the user's default. Use [SET DEFAULT ROLE NONE](../set-default-role/index) to explicitly remove this.
If any of the specified roles do not exist, `ERROR 1396 (HY000)` results. If an error occurs, `DROP ROLE` will still drop the roles that do not produce an error. Only one error is produced for all roles which were not dropped:
```
ERROR 1396 (HY000): Operation DROP ROLE failed for 'a','b','c'
```
Failed `CREATE` or `DROP` operations, for both users and roles, produce the same error code.
#### IF EXISTS
If the `IF EXISTS` clause is used, MariaDB will return a warning instead of an error if the role does not exist.
Examples
--------
```
DROP ROLE journalist;
```
The same thing using the optional `IF EXISTS` clause:
```
DROP ROLE journalist;
ERROR 1396 (HY000): Operation DROP ROLE failed for 'journalist'
DROP ROLE IF EXISTS journalist;
Query OK, 0 rows affected, 1 warning (0.00 sec)
Note (Code 1975): Can't drop role 'journalist'; it doesn't exist
```
See Also
--------
* [Roles Overview](../roles-overview/index)
* [CREATE ROLE](../create-role/index)
MBR (Minimum Bounding Rectangle)
================================
| Title | Description |
| --- | --- |
| [MBR Definition](../mbr-definition/index) | Minimum Bounding Rectangle. |
| [MBRContains](../mbrcontains/index) | Indicates one Minimum Bounding Rectangle contains another. |
| [MBRDisjoint](../mbrdisjoint/index) | Indicates whether the Minimum Bounding Rectangles of two geometries are disjoint. |
| [MBREqual](../mbrequal/index) | Whether the Minimum Bounding Rectangles of two geometries are the same. |
| [MBRIntersects](../mbrintersects/index) | Indicates whether the Minimum Bounding Rectangles of the two geometries intersect. |
| [MBROverlaps](../mbroverlaps/index) | Whether the Minimum Bounding Rectangles of two geometries overlap. |
| [MBRTouches](../mbrtouches/index) | Whether the Minimum Bounding Rectangles of two geometries touch. |
| [MBRWithin](../mbrwithin/index) | Indicates whether one Minimum Bounding Rectangle is within another |
MyRocks and CHECK TABLE
=======================
MyRocks supports the [CHECK TABLE](../check-table/index) command.
The command will do a number of checks to verify that the table data is self-consistent.
The details about the errors are printed into the [error log](../error-log/index). If [log\_warnings](../server-system-variables/index#log_warnings) > 2, the error log will also have some informational messages which can help with troubleshooting.
Besides this, RocksDB has its own (low-level) log in `#rocksdb/LOG` file.
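For example (the table name is illustrative, and this is the standard `CHECK TABLE` result set):

```
CHECK TABLE mytable;
+-------------+-------+----------+----------+
| Table       | Op    | Msg_type | Msg_text |
+-------------+-------+----------+----------+
| db1.mytable | check | status   | OK       |
+-------------+-------+----------+----------+
```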
Performance Schema events\_transactions\_summary\_by\_user\_by\_event\_name Table
=================================================================================
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**The events\_transactions\_summary\_by\_user\_by\_event\_name table was introduced in [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/).
The `events_transactions_summary_by_user_by_event_name` table contains information on transaction events aggregated by user and event name.
The table contains the following columns:
```
+----------------------+---------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+---------------------+------+-----+---------+-------+
| USER | char(32) | YES | | NULL | |
| EVENT_NAME | varchar(128) | NO | | NULL | |
| COUNT_STAR | bigint(20) unsigned | NO | | NULL | |
| SUM_TIMER_WAIT | bigint(20) unsigned | NO | | NULL | |
| MIN_TIMER_WAIT | bigint(20) unsigned | NO | | NULL | |
| AVG_TIMER_WAIT | bigint(20) unsigned | NO | | NULL | |
| MAX_TIMER_WAIT | bigint(20) unsigned | NO | | NULL | |
| COUNT_READ_WRITE | bigint(20) unsigned | NO | | NULL | |
| SUM_TIMER_READ_WRITE | bigint(20) unsigned | NO | | NULL | |
| MIN_TIMER_READ_WRITE | bigint(20) unsigned | NO | | NULL | |
| AVG_TIMER_READ_WRITE | bigint(20) unsigned | NO | | NULL | |
| MAX_TIMER_READ_WRITE | bigint(20) unsigned | NO | | NULL | |
| COUNT_READ_ONLY | bigint(20) unsigned | NO | | NULL | |
| SUM_TIMER_READ_ONLY | bigint(20) unsigned | NO | | NULL | |
| MIN_TIMER_READ_ONLY | bigint(20) unsigned | NO | | NULL | |
| AVG_TIMER_READ_ONLY | bigint(20) unsigned | NO | | NULL | |
| MAX_TIMER_READ_ONLY | bigint(20) unsigned | NO | | NULL | |
+----------------------+---------------------+------+-----+---------+-------+
```
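For example, to see which transaction events each user has executed most often (the timer columns are in picoseconds, as elsewhere in the Performance Schema; the query below is only a sketch):

```
SELECT user, event_name, count_star, avg_timer_wait
FROM performance_schema.events_transactions_summary_by_user_by_event_name
WHERE count_star > 0
ORDER BY count_star DESC;
```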
Installing MariaDB Galera on IBM Cloud
======================================
Get MariaDB Galera on IBM Cloud
You should have an IBM Cloud account; otherwise, you can [register here](https://cloud.ibm.com/registration). At the end of the tutorial you will have a cluster with MariaDB up and running. IBM Cloud uses Bitnami charts to deploy MariaDB Galera with Helm.
1. We will provision a new Kubernetes cluster; if you already have one, skip to step **2**
2. We will deploy the IBM Cloud Block Storage plug-in; if you already have it, skip to step **3**
3. MariaDB Galera deployment
Step 1 provision Kubernetes Cluster
-----------------------------------
* Click the **Catalog** button on the top
* Select **Service** from the catalog
* Search for **Kubernetes Service** and click on it
* You are now at the Kubernetes deployment page, you need to specify some details about the cluster
* Choose a plan, **standard** or **free**. The free plan only has one worker node and no subnet; to provision a standard cluster, you will need to upgrade your account to Pay-As-You-Go
* To upgrade to a Pay-As-You-Go account, complete the following steps:
* In the console, go to Manage > Account.
* Select Account settings, and click Add credit card.
* Enter your payment information, click Next, and submit your information.
* Choose **classic** or **VPC**, read the [docs](https://cloud.ibm.com/docs/containers?topic=containers-infrastructure_providers) and choose the most suitable type for yourself
* Now choose your location settings, for more information please visit [Locations](https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones#zones)
* Choose **Geography** (continent)
* Choose **Single** or **Multizone**. In a single zone your data is only kept in one datacenter, while with Multizone it is distributed to multiple zones, and thus safer in an unforeseen zone failure
* Choose a **Worker Zone** if using Single zones or **Metro** if Multizone
* If you wish to use Multizone please set up your account with [VRF](https://cloud.ibm.com/docs/dl?topic=dl-overview-of-virtual-routing-and-forwarding-vrf-on-ibm-cloud) or [enable Vlan spanning](https://cloud.ibm.com/docs/vlans?topic=vlans-vlan-spanning#vlan-spanning)
* If at your current location selection, there is no available Virtual LAN, a new Vlan will be created for you
* Choose a **Worker node setup** or use the preselected one, set **Worker node amount per zone**
* Choose **Master Service Endpoint**, In VRF-enabled accounts, you can choose private-only to make your master accessible on the private network or via VPN tunnel. Choose public-only to make your master publicly accessible. When you have a VRF-enabled account, your cluster is set up by default to use both private and public endpoints. For more information visit [endpoints](https://cloud.ibm.com/docs/account?topic=account-service-endpoints-overview).
* Give cluster a **name**
* Give desired **tags** to your cluster, for more information visit [tags](https://cloud.ibm.com/docs/account?topic=account-tag)
* Click **create**
* Wait for your cluster to be provisioned
* Your cluster is ready for usage
Step 2 deploy IBM Cloud Block Storage plug-in
---------------------------------------------
The Block Storage plug-in is a persistent, high-performance iSCSI storage that you can add to your apps by using Kubernetes Persistent Volumes (PVs).
* Click the **Catalog** button on the top
* Select **Software** from the catalog
* Search for **IBM Cloud Block Storage plug-in** and click on it
* On the application page Click in the dot next to the cluster, you wish to use
* Click on **Enter or Select Namespace** and choose the default Namespace or use a custom one (if you get an error, please wait 30 minutes for the cluster to finalize)
* Give a **name** to this workspace
* Click **install** and wait for the deployment
Step 3: Deploy MariaDB Galera
----------------------------
We will now deploy MariaDB Galera on our cluster.
* Click the **Catalog** button on the top
* Select **Software** from the catalog
* Search for **MariaDB** and click on it
* On the application page, click the radio button next to the cluster you wish to use
* Click on **Enter or Select Namespace** and choose the default Namespace or use a custom one
* Give the workspace a unique **name** which you can easily recognize
* Select which resource group you want to use; this is for access control and billing purposes. For more information please visit [resource groups](https://cloud.ibm.com/docs/account?topic=account-account_setup#bp_resourcegroups)
* Give **tags** to your MariaDB Galera; for more information visit [tags](https://cloud.ibm.com/docs/account?topic=account-tag)
* Click on **Parameters with default values**; you can set deployment values or use the default ones
* Be sure to set the MariaDB Galera root password in the parameters
* After finishing everything, **tick** the box next to the agreements and click **install**
* The MariaDB Galera workspace will start installing, wait a couple of minutes
* Your MariaDB Galera workspace has been successfully deployed
Verify MariaDB Galera installation
----------------------------------
* Go to [Resources](http://cloud.ibm.com/resources) in your browser
* Click on **Clusters**
* Click on your Cluster
* Now you are at your cluster's overview; click on **Actions** and select **Web terminal** from the dropdown menu
* Click **Install** and wait a couple of minutes
* Click on **Actions**
* Click **Web terminal** --> a terminal will open up
* **Type** the following in the terminal, replacing NAMESPACE with the namespace you chose during the deployment setup:
```
$ kubectl get ns
```
```
$ kubectl get pod -n NAMESPACE -o wide
```
```
$ kubectl get service -n NAMESPACE
```
* Enter your pod with bash, replacing PODNAME with your MariaDB pod's name
```
$ kubectl exec --stdin --tty PODNAME -n NAMESPACE -- /bin/bash
```
* Once you are inside your pod, verify that MariaDB Galera is running in the pod's cluster. Enter the root password after the prompt
```
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
```
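If the cluster formed correctly, this reports the number of nodes currently joined. For example, a deployment with three replicas would show output like the following (the exact value depends on the replica count you chose):
```
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
```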
You have successfully deployed MariaDB Galera on IBM Cloud!
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb IsSimple IsSimple
========
A synonym for [ST\_IsSimple](../st_issimple/index).
mariadb DESCRIBE DESCRIBE
========
Syntax
------
```
{DESCRIBE | DESC} tbl_name [col_name | wild]
```
Description
-----------
`DESCRIBE` provides information about the columns in a table. It is a shortcut for `[SHOW COLUMNS FROM](../show-columns/index)`. These statements also display information for [views](../views/index).
`col_name` can be a column name, or a string containing the SQL "`%`" and "`_`" wildcard characters to obtain output only for the columns with names matching the string. There is no need to enclose the string within quotes unless it contains spaces or other special characters.
```
DESCRIBE city;
+------------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+----------+------+-----+---------+----------------+
| Id | int(11) | NO | PRI | NULL | auto_increment |
| Name | char(35) | YES | | NULL | |
| Country | char(3) | NO | UNI | | |
| District | char(20) | YES | MUL | | |
| Population | int(11) | YES | | NULL | |
+------------+----------+------+-----+---------+----------------+
```
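The wildcard form can be sketched as follows, reusing the hypothetical `city` table from above; only columns whose names match the pattern are shown:
```
DESCRIBE city 'C%';
+---------+----------+------+-----+---------+-------+
| Field   | Type     | Null | Key | Default | Extra |
+---------+----------+------+-----+---------+-------+
| Country | char(3)  | NO   | UNI |         |       |
+---------+----------+------+-----+---------+-------+
```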
The description for `[SHOW COLUMNS](../show-columns/index)` provides more information about the output columns.
See Also
--------
* [SHOW COLUMNS](../show-columns/index)
* [INFORMATION\_SCHEMA.COLUMNS Table](../information-schema-columns-table/index)
* [mysqlshow](../mysqlshow/index)
mariadb InnoDB Page Flushing InnoDB Page Flushing
====================
Page Flushing with InnoDB Page Cleaner Threads
----------------------------------------------
InnoDB page cleaner threads flush dirty pages from the [InnoDB buffer pool](../innodb-buffer-pool/index). These dirty pages are flushed using a least-recently used (LRU) algorithm.
### innodb\_max\_dirty\_pages\_pct
The [innodb\_max\_dirty\_pages\_pct](../innodb-system-variables/index#innodb_max_dirty_pages_pct) variable specifies the maximum percentage of unwritten (dirty) pages in the [buffer pool](../innodb-buffer-pool/index). If this percentage is exceeded, flushing will take place.
### innodb\_max\_dirty\_pages\_pct\_lwm
The [innodb\_max\_dirty\_pages\_pct\_lwm](../innodb-system-variables/index#innodb_max_dirty_pages_pct_lwm) variable determines the low-water mark percentage of dirty pages that will enable preflushing to lower the dirty page ratio. The value 0 (the default) means that there will be no separate background flushing so long as:
* the share of dirty pages does not exceed [innodb\_max\_dirty\_pages\_pct](../innodb-system-variables/index#innodb_max_dirty_pages_pct)
* the last checkpoint age (LSN difference since the latest checkpoint) does not exceed [innodb\_log\_file\_size](../innodb-system-variables/index#innodb_log_file_size) (minus some safety margin)
* the [buffer pool](../innodb-buffer-pool/index) is not running out of space, which could trigger eviction flushing
Note that in [MariaDB 10.5.7](https://mariadb.com/kb/en/mariadb-1057-release-notes/) and [MariaDB 10.5.8](https://mariadb.com/kb/en/mariadb-1058-release-notes/) only, flushing was more aggressive, and the page cleaner thread would always run in the background as long as dirty pages existed in the buffer pool. To make flushing more eager, set the variable to a value higher than 0, for example `SET GLOBAL innodb_max_dirty_pages_pct_lwm=0.001;` (the default until [MariaDB 10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/)).
### Page Flushing with Multiple InnoDB Page Cleaner Threads
**MariaDB [10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/) - [10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/)**The [innodb\_page\_cleaners](../innodb-system-variables/index#innodb_page_cleaners) system variable was added in [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/), and makes it possible to use multiple InnoDB page cleaner threads. It is deprecated and ignored from [MariaDB 10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/), as the original reasons for splitting the buffer pool have mostly gone away.
The number of InnoDB page cleaner threads can be configured by setting the [innodb\_page\_cleaners](../innodb-system-variables/index#innodb_page_cleaners) system variable. This system variable can be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
innodb_page_cleaners=8
```
In [MariaDB 10.3.3](https://mariadb.com/kb/en/mariadb-1033-release-notes/) and later, this system variable can also be changed dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL innodb_page_cleaners=8;
```
This system variable's default value is either `4` or the configured value of the [innodb\_buffer\_pool\_instances](../innodb-system-variables/index#innodb_buffer_pool_instances) system variable, whichever is lower.
### Page Flushing with a Single InnoDB Page Cleaner Thread
In [MariaDB 10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/) and before, and again from [MariaDB 10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/) (by which point the original reasons for splitting the buffer pool had mostly gone away), only a single InnoDB page cleaner thread is supported.
Page Flushing with Multi-threaded Flush Threads
-----------------------------------------------
**MariaDB [10.1.0](https://mariadb.com/kb/en/mariadb-1010-release-notes/) - [10.3.2](https://mariadb.com/kb/en/mariadb-1032-release-notes/)**InnoDB's multi-thread flush feature was first added in [MariaDB 10.1.0](https://mariadb.com/kb/en/mariadb-1010-release-notes/). It was deprecated in [MariaDB 10.2.9](https://mariadb.com/kb/en/mariadb-1029-release-notes/) and removed in [MariaDB 10.3.2](https://mariadb.com/kb/en/mariadb-1032-release-notes/).
In [MariaDB 10.3.1](https://mariadb.com/kb/en/mariadb-1031-release-notes/) and before, InnoDB's multi-thread flush feature can be used. This is especially useful in [MariaDB 10.1](../what-is-mariadb-101/index), which only supports a single page cleaner thread.
InnoDB's multi-thread flush feature can be enabled by setting the [innodb\_use\_mtflush](../innodb-system-variables/index#innodb_use_mtflush) system variable. The number of threads can be configured by setting the [innodb\_mtflush\_threads](../innodb-system-variables/index#innodb_mtflush_threads) system variable. This system variable can be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
innodb_use_mtflush = ON
innodb_mtflush_threads = 8
```
The [innodb\_mtflush\_threads](../innodb-system-variables/index#innodb_mtflush_threads) system variable's default value is `8`. The maximum value is `64`. In multi-core systems, it is recommended to set its value close to the configured value of the [innodb\_buffer\_pool\_instances](../innodb-system-variables/index#innodb_buffer_pool_instances) system variable. However, it is also recommended to use your own benchmarks to find a suitable value for your particular application.
InnoDB's multi-thread flush feature was deprecated in [MariaDB 10.2.9](https://mariadb.com/kb/en/mariadb-1029-release-notes/) and removed from [MariaDB 10.3.2](https://mariadb.com/kb/en/mariadb-1032-release-notes/). In later versions of MariaDB, use multiple InnoDB page cleaner threads instead.
Configuring the InnoDB I/O Capacity
-----------------------------------
Increasing the amount of I/O capacity available to InnoDB can also help increase the performance of page flushing.
The amount of I/O capacity available to InnoDB can be configured by setting the [innodb\_io\_capacity](../innodb-system-variables/index#innodb_io_capacity) system variable. This system variable can be changed dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL innodb_io_capacity=20000;
```
This system variable can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
innodb_io_capacity=20000
```
The maximum amount of I/O capacity available to InnoDB in an emergency defaults to either `2000` or twice [innodb\_io\_capacity](../innodb-system-variables/index#innodb_io_capacity), whichever is higher, or can be directly configured by setting the [innodb\_io\_capacity\_max](../innodb-system-variables/index#innodb_io_capacity_max) system variable. This system variable can be changed dynamically with [SET GLOBAL](../set/index#global-session). For example:
```
SET GLOBAL innodb_io_capacity_max=20000;
```
This system variable can also be set in a server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index) prior to starting up the server. For example:
```
[mariadb]
...
innodb_io_capacity_max=20000
```
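After applying either setting, a quick way to confirm the values currently in effect is to query both variables at once:
```
SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity%';
```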
See Also
--------
* [Significant performance boost with new MariaDB page compression on FusionIO](https://blog.mariadb.org/significant-performance-boost-with-new-mariadb-page-compression-on-fusionio/)
mariadb Query Cache Query Cache
===========
The query cache stores results of SELECT queries so that if the identical query is received in future, the results can be quickly returned.
This is extremely useful in high-read, low-write environments (such as most websites). It does not scale well in environments with high throughput on multi-core machines, so it is disabled by default.
Note that the query cache cannot be enabled in certain environments. See [Limitations](#limitations).
Setting Up the Query Cache
--------------------------
Unless MariaDB has been specifically built without the query cache, the query cache will always be available, although inactive. The [have\_query\_cache](../server-system-variables/index#have_query_cache) server variable will show whether the query cache is available.
```
SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| have_query_cache | YES |
+------------------+-------+
```
If this is set to `NO`, you cannot enable the query cache unless you rebuild or reinstall a version of MariaDB with the cache available.
To see if the cache is enabled, view the [query\_cache\_type](../server-system-variables/index#query_cache_type) server variable. It is enabled by default in MariaDB versions up to 10.1.6, but disabled starting with [MariaDB 10.1.7](https://mariadb.com/kb/en/mariadb-1017-release-notes/) - if needed enable it by setting `query_cache_type` to `1`.
Although enabled in versions prior to [MariaDB 10.1.7](https://mariadb.com/kb/en/mariadb-1017-release-notes/), the [query\_cache\_size](../server-system-variables/index#query_cache_size) is 0KB by default there, which effectively disables the query cache. From 10.1.7 on the cache size defaults to 1MB. If needed, set the cache to a sufficiently large size, for example:
```
SET GLOBAL query_cache_size = 1000000;
```
Starting from [MariaDB 10.1.7](https://mariadb.com/kb/en/mariadb-1017-release-notes/), `query_cache_type` is automatically set to ON if the server is started with the `query_cache_size` set to a non-zero (and non-default) value.
See [Limiting the size of the Query Cache](#limiting-the-size-of-the-query-cache) below for details.
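Putting these steps together, a minimal setup might look like the following (16MB is an arbitrary example size; tune it for your workload):
```
SET GLOBAL query_cache_type = ON;
SET GLOBAL query_cache_size = 16777216; -- a multiple of 1024
```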
How the Query Cache Works
-------------------------
When the query cache is enabled and a new SELECT query is processed, the query cache is examined to see if the query appears in the cache.
Queries are considered identical if they use the same database, same protocol version and same default character set. Prepared statements are always considered as different to non-prepared statements, see [Query cache internal structure](#query-cache-internal-structure) for more info.
If the identical query is not found in the cache, the query will be processed normally and then stored, along with its result set, in the query cache. If the query is found in the cache, the results will be pulled from the cache, which is much quicker than processing it normally.
Queries are examined in a case-sensitive manner, so:
```
SELECT * FROM t
```
is different from:
```
select * from t
```
Comments are also considered and can make the queries differ, so:
```
/* retry */SELECT * FROM t
```
is different from:
```
/* retry2 */SELECT * FROM t
```
See the [query\_cache\_strip\_comments](../server-system-variables/index#query_cache_strip_comments) server variable for an option to strip comments before searching.
Each time changes are made to the data in a table, all affected results in the query cache are cleared. It is not possible to retrieve stale data from the query cache.
When the space allocated to query cache is exhausted, the oldest results will be dropped from the cache.
When using `query_cache_type=ON`, and the query specifies `SQL_NO_CACHE` (case-insensitive), the server will not cache the query and will not fetch results from the query cache.
When using `query_cache_type=DEMAND` (per the [MDEV-6631](https://jira.mariadb.org/browse/MDEV-6631) feature request) and the query specifies `SQL_CACHE`, the server will cache the query.
One important point of [MDEV-6631](https://jira.mariadb.org/browse/MDEV-6631): switching between `query_cache_type=ON` and `query_cache_type=DEMAND` can effectively "turn off" the query cache for old queries that lack the `SQL_CACHE` string. It has not yet been decided whether another `query_cache_type` value (DEMAND\_NO\_PRUNE) should be added to allow the use of such old queries.
Queries Stored in the Query Cache
---------------------------------
If the [query\_cache\_type](../server-system-variables/index#query_cache_type) system variable is set to `1`, or `ON`, all queries fitting the size constraints will be stored in the cache unless they contain a `SQL_NO_CACHE` clause, or are of a nature that caching makes no sense, for example making use of a function that returns the current time. Note that `SQL_NO_CACHE` also stops the server from acquiring query cache locks.
If any of the following functions are present in a query, it will not be cached. Queries with these functions are sometimes called 'non-deterministic'; this usage should not be confused with the term's meaning in other contexts.
| | |
| --- | --- |
| [BENCHMARK()](../benchmark/index) | [CONNECTION\_ID()](../connection_id/index) |
| [CONVERT\_TZ()](../convert_tz/index) | [CURDATE()](../curdate/index) |
| [CURRENT\_DATE()](../current_date/index) | [CURRENT\_TIME()](../current_time/index) |
| [CURRENT\_TIMESTAMP()](../current_timestamp/index) | [CURTIME()](../curtime/index) |
| [DATABASE()](../database/index) | [ENCRYPT()](../encrypt/index) (one parameter) |
| [FOUND\_ROWS()](../found_rows/index) | [GET\_LOCK()](../get_lock/index) |
| [LAST\_INSERT\_ID()](../last_insert_id/index) | [LOAD\_FILE()](../load_file/index) |
| [MASTER\_POS\_WAIT()](../master_pos_wait/index) | [NOW()](../now/index) |
| [RAND()](../rand/index) | [RELEASE\_LOCK()](../release_lock/index) |
| [SLEEP()](../sleep/index) | [SYSDATE()](../sysdate/index) |
| [UNIX\_TIMESTAMP()](../unix_timestamp/index) (no parameters) | [USER()](../user/index) |
| [UUID()](../uuid/index) | [UUID\_SHORT()](../uuid_short/index) |
A query will also not be added to the cache if:
* It is of the form:
+ SELECT SQL\_NO\_CACHE ...
+ SELECT ... INTO OUTFILE ...
+ SELECT ... INTO DUMPFILE ...
+ SELECT ... FOR UPDATE
+ SELECT \* FROM ... WHERE autoincrement\_column IS NULL
+ SELECT ... LOCK IN SHARE MODE
* It uses a TEMPORARY table
* It uses no tables at all
* It generates a warning
* The user has a column-level privilege on any table in the query
* It accesses a table from INFORMATION\_SCHEMA, mysql or the performance\_schema database
* It makes use of user or local variables
* It makes use of stored functions
* It makes use of user-defined functions
* It is inside a transaction with the SERIALIZABLE isolation level
* It queries a table inside a transaction after the same table was invalidated in the query cache by an INSERT, UPDATE or DELETE in that transaction
The query itself can also specify that it is not to be stored in the cache by using the `SQL_NO_CACHE` attribute. Query-level control is an effective way to use the cache more optimally.
It is also possible to specify that *no* queries must be stored in the cache unless the query requires it. To do this, the [query\_cache\_type](../server-system-variables/index#query_cache_type) server variable must be set to `2`, or `DEMAND`. Then, only queries with the `SQL_CACHE` attribute are cached.
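For example, with the cache in `DEMAND` mode only explicitly marked queries enter the cache (the table name `t1` here is hypothetical):
```
SET GLOBAL query_cache_type = DEMAND;
SELECT SQL_CACHE * FROM t1 WHERE id = 1; -- stored in the cache
SELECT * FROM t1 WHERE id = 2;           -- not cached
```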
Limiting the Size of the Query Cache
------------------------------------
There are two main ways to limit the size of the query cache. First, the overall size in bytes is determined by the [query\_cache\_size](../server-system-variables/index#query_cache_size) server variable. About 40KB is needed for various query cache structures.
The query cache size is allocated in 1024 byte-blocks, thus it should be set to a multiple of 1024.
The query result is stored using a minimum block size of [query\_cache\_min\_res\_unit](../server-system-variables/index#query_cache_min_res_unit). Two trade-offs determine a good value for this variable: each new result block insertion locks the query cache, so a small value increases locking and fragmentation while wasting less memory on small results, whereas a large value reduces locking but wastes more memory on small results. Test with your workload to fine-tune this variable.
If the [strict mode](../sql-mode/index) is enabled, setting the query cache size to an invalid value will cause an error. Otherwise, it will be set to the nearest permitted value, and a warning will be triggered.
```
SHOW VARIABLES LIKE 'query_cache_size';
+------------------+----------+
| Variable_name | Value |
+------------------+----------+
| query_cache_size | 67108864 |
+------------------+----------+
SET GLOBAL query_cache_size = 8000000;
Query OK, 0 rows affected, 1 warning (0.03 sec)
SHOW VARIABLES LIKE 'query_cache_size';
+------------------+---------+
| Variable_name | Value |
+------------------+---------+
| query_cache_size | 7999488 |
+------------------+---------+
```
The ideal size of the query cache is very dependent on the specific needs of each system. Setting a value too small will result in query results being dropped from the cache when they could potentially be re-used later. Setting a value too high could result in reduced performance due to lock contention, as the query cache is locked during updates.
The second way to limit the cache is to have a maximum size for each set of query results. This prevents a single query with a huge result set taking up most of the available memory and knocking a large number of smaller queries out of the cache. This is determined by the [query\_cache\_limit](../server-system-variables/index#query_cache_limit) server variable.
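For example, to cap each individual cached result set at 1MB (an arbitrary value; tune it for your workload):
```
SET GLOBAL query_cache_limit = 1048576;
```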
If you attempt to set a query cache size that is too small (the exact threshold depends on the architecture), the resizing will fail and the query cache will be set to zero, for example:
```
SET GLOBAL query_cache_size=40000;
Query OK, 0 rows affected, 2 warnings (0.03 sec)
SHOW WARNINGS;
+---------+------+-----------------------------------------------------------------+
| Level | Code | Message |
+---------+------+-----------------------------------------------------------------+
| Warning | 1292 | Truncated incorrect query_cache_size value: '40000' |
| Warning | 1282 | Query cache failed to set size 39936; new query cache size is 0 |
+---------+------+-----------------------------------------------------------------+
```
Examining the Query Cache
-------------------------
A number of status variables provide information about the query cache.
```
SHOW STATUS LIKE 'Qcache%';
+-------------------------+----------+
| Variable_name | Value |
+-------------------------+----------+
| Qcache_free_blocks | 1158 |
| Qcache_free_memory | 3760784 |
| Qcache_hits | 31943398 |
| Qcache_inserts | 42998029 |
| Qcache_lowmem_prunes | 34695322 |
| Qcache_not_cached | 652482 |
| Qcache_queries_in_cache | 4628 |
| Qcache_total_blocks | 11123 |
+-------------------------+----------+
```
`Qcache_inserts` contains the number of queries added to the query cache, `Qcache_hits` contains the number of queries that have made use of the query cache, while `Qcache_lowmem_prunes` contains the number of queries that were dropped from the cache due to lack of memory.
The above example could indicate a poorly performing cache. More queries have been added, and more queries have been dropped, than have actually been used.
Note that before [MariaDB 5.5](../what-is-mariadb-55/index), queries returned from the query cache did not increment the [Com\_select](../server-status-variables/index#com_select) status variable, so to find the total number of valid queries run on the server, add [Com\_select](../server-status-variables/index#com_select) to [Qcache\_hits](../server-status-variables/index#qcache_hits). Starting from [MariaDB 5.5](../what-is-mariadb-55/index), results returned by the query cache count towards `Com_select` (see [MDEV-4981](https://jira.mariadb.org/browse/MDEV-4981)).
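These counters can be combined into a rough hit rate; one possible formula is hits / (hits + inserts + not cached). A sketch using the `information_schema.GLOBAL_STATUS` table:
```
SELECT ROUND(hits * 100 / (hits + inserts + not_cached), 1) AS hit_rate_pct
FROM (
  SELECT
    (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Qcache_hits') AS hits,
    (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Qcache_inserts') AS inserts,
    (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Qcache_not_cached') AS not_cached
) AS qc;
```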
The [QUERY\_CACHE\_INFO plugin](../query_cache_info-plugin/index) creates the [QUERY\_CACHE\_INFO](../information-schema-query_cache_info-table/index) table in the [INFORMATION\_SCHEMA](../information_schema/index), allowing you to examine the contents of the query cache.
Query Cache Fragmentation
-------------------------
The Query Cache uses blocks of variable length, and over time may become fragmented. A high `Qcache_free_blocks` relative to `Qcache_total_blocks` may indicate fragmentation. [FLUSH QUERY CACHE](../flush-query-cache/index) will defragment the query cache without dropping any queries:
```
FLUSH QUERY CACHE;
```
After this, there will only be one free block:
```
SHOW STATUS LIKE 'Qcache%';
+-------------------------+----------+
| Variable_name | Value |
+-------------------------+----------+
| Qcache_free_blocks | 1 |
| Qcache_free_memory | 6101576 |
| Qcache_hits | 31981126 |
| Qcache_inserts | 43002404 |
| Qcache_lowmem_prunes | 34696486 |
| Qcache_not_cached | 655607 |
| Qcache_queries_in_cache | 4197 |
| Qcache_total_blocks | 8833 |
+-------------------------+----------+
```
Emptying and disabling the Query Cache
--------------------------------------
To empty or clear all results from the query cache, use [RESET QUERY CACHE](../reset/index). [FLUSH TABLES](../flush/index) will have the same effect.
Setting either [query\_cache\_type](../server-system-variables/index#query_cache_type) or [query\_cache\_size](../server-system-variables/index#query_cache_size) to `0` will disable the query cache, but to free up the most resources, set both to `0` when you wish to disable caching.
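For example, to fully disable the cache and release its memory:
```
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = OFF;
```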
Limitations
-----------
* The query cache needs to be disabled in order to use [OQGRAPH](../oqgraph/index).
* The query cache is not used by the [Spider](../spider/index) storage engine (amongst others).
* The query cache also needs to be disabled for MariaDB [Galera](../galera/index) cluster versions prior to "5.5.40-galera", "10.0.14-galera" and "10.1.2".
LOCK TABLES and the Query Cache
-------------------------------
The query cache can be used when tables have a write lock (which may seem confusing since write locks should avoid table reads). This behaviour can be changed by setting the [query\_cache\_wlock\_invalidate](../server-system-variables/index#query_cache_wlock_invalidate) system variable to `ON`, in which case each write lock will invalidate the table query cache. Setting to `OFF`, the default, means that cached queries can be returned even when a table lock is being held. For example:
```
1> SELECT * FROM T1
+---+
| a |
+---+
| 1 |
+---+
-- Here the query is cached
-- From another connection execute:
2> LOCK TABLES T1 WRITE;
-- Expected result with: query_cache_wlock_invalidate = OFF
1> SELECT * FROM T1
+---+
| a |
+---+
| 1 |
+---+
-- read from query cache
-- Expected result with: query_cache_wlock_invalidate = ON
1> SELECT * FROM T1
-- Waiting Table Write Lock
```
Transactions and the Query Cache
--------------------------------
The query cache handles transactions. Internally a flag (FLAGS\_IN\_TRANS) is set to 0 when a query is executed outside a transaction, and to 1 when the query is inside a transaction ([BEGIN](../begin/index) / [COMMIT](../commit/index) / [ROLLBACK](../rollback/index)). This flag is part of the "query cache hash"; in other words, a query inside a transaction is different from the same query outside a transaction.
Queries that change rows ([INSERT](../insert/index) / [UPDATE](../update/index) / [DELETE](../delete/index) / [TRUNCATE](../truncate/index)) inside a transaction invalidate all cached queries for the table and turn off the query cache for the changed table. The query cache stays turned off for that table even before the transaction ends with COMMIT / ROLLBACK, in order to preserve row-level locking and the transaction's consistency level.
Examples:
```
SELECT * FROM T1 <first insert to query cache, using FLAGS_IN_TRANS=0>
+---+
| a |
+---+
| 1 |
+---+
```
```
BEGIN;
SELECT * FROM T1 <first insert to query cache, using FLAGS_IN_TRANS=1>
+---+
| a |
+---+
| 1 |
+---+
```
```
SELECT * FROM T1 <result from query cache, using FLAGS_IN_TRANS=1>
+---+
| a |
+---+
| 1 |
+---+
```
```
INSERT INTO T1 VALUES(2); <invalidate queries from table T1 and disable query cache to table T1>
```
```
SELECT * FROM T1 <don't use query cache, a normal query from innodb table>
+---+
| a |
+---+
| 1 |
| 2 |
+---+
```
```
SELECT * FROM T1 <don't use query cache, a normal query from innodb table>
+---+
| a |
+---+
| 1 |
| 2 |
+---+
```
```
COMMIT; <query cache is now turned on to T1 table>
```
```
SELECT * FROM T1 <first insert to query cache, using FLAGS_IN_TRANS=0>
+---+
| a |
+---+
| 1 |
+---+
```
```
SELECT * FROM T1 <result from query cache, using FLAGS_IN_TRANS=0>
+---+
| a |
+---+
| 1 |
+---+
```
Query Cache Internal Structure
------------------------------
Internally, each flag that can change a result using the same query is a different query. For example, using the latin1 charset and using the utf8 charset with the same query are treated as different queries by the query cache.
Some fields that differentiate queries are (from "Query\_cache\_query\_flags" internal structure) :
* query (string)
* current database schema name (string)
* client long flag (0/1)
* client protocol 4.1 (0/1)
* protocol type (internal value)
* more results exists (protocol flag)
* in trans (inside transaction or not)
* autocommit ([autocommit](../server-system-variables/index#autocommit) session variable)
* pkt\_nr (protocol flag)
* character set client ([character\_set\_client](../server-system-variables/index#character_set_client) session variable)
* character set results ([character\_set\_results](../server-system-variables/index#character_set_results) session variable)
* collation connection ([collation\_connection](../server-system-variables/index#collation_connection) session variable)
* limit ([sql\_select\_limit](../server-system-variables/index#sql_select_limit) session variable)
* time zone ([time\_zone](../server-system-variables/index#time_zone) session variable)
* sql\_mode ([sql\_mode](../server-system-variables/index#sql_mode) session variable)
* max\_sort\_length ([max\_sort\_length](../server-system-variables/index#max_sort_length) session variable)
* group\_concat\_max\_len ([group\_concat\_max\_len](../server-system-variables/index#group_concat_max_len) session variable)
* default\_week\_format ([default\_week\_format](../server-system-variables/index#default_week_format) session variable)
* div\_precision\_increment ([div\_precision\_increment](../server-system-variables/index#div_precision_increment) session variable)
* lc\_time\_names ([lc\_time\_names](../server-system-variables/index#lc_time_names) session variable)
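For example, because the client character set is part of this structure, the same query text issued under two different character sets produces two distinct cache entries (table `t` is hypothetical):
```
SET NAMES latin1;
SELECT * FROM t; -- one cache entry
SET NAMES utf8;
SELECT * FROM t; -- a second, distinct cache entry
```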
More information can be found by viewing the source code ([MariaDB 10.1](../what-is-mariadb-101/index)) :
* <https://github.com/MariaDB/server/blob/10.1/sql/sql_cache.cc>
* <https://github.com/MariaDB/server/blob/10.1/sql/sql_cache.h>
Timeout and Mutex Contention
----------------------------
When searching for a query inside the query cache, a try\_lock function waits with a timeout of 50ms. If the lock fails, the query isn't executed via the query cache. This timeout is hard coded ([MDEV-6766](https://jira.mariadb.org/browse/MDEV-6766) proposes two variables to tune this timeout).
From sql\_cache.cc, in the try\_lock function, when called with TIMEOUT:
```
struct timespec waittime;
set_timespec_nsec(waittime,(ulong)(50000000L)); /* Wait for 50 msec */
int res= mysql_cond_timedwait(&COND_cache_status_changed,
&structure_guard_mutex, &waittime);
if (res == ETIMEDOUT)
break;
```
When inserting a query into the query cache, or aborting a query cache insert (using the [KILL](../kill/index) command, for example), try\_lock waits until the lock is granted; no timeout is used in this case.
When two processes execute the same query, only the last process stores the query result. All other processes increase the [Qcache\_not\_cached](../server-status-variables/index#qcache_not_cached) status variable.
SQL\_NO\_CACHE and SQL\_CACHE
-----------------------------
There are two aspects to the query cache: placing a query in the cache, and retrieving it from the cache.
1. Adding a query to the query cache. This is done automatically for cacheable queries (see [Queries Stored in the Query Cache](#queries-stored-in-the-query-cache)) when the [query\_cache\_type](../server-system-variables/index#query_cache_type) system variable is set to `1` or `ON` and the query contains no SQL\_NO\_CACHE clause, or when it is set to `2` or `DEMAND` and the query contains the SQL\_CACHE clause.
2. Retrieving a query from the cache. This is done after the server receives the query and before the query is parsed. One point should be considered here:
When using SQL\_NO\_CACHE, the hint must come directly after the first SELECT, for example:
```
SELECT SQL_NO_CACHE .... FROM (SELECT SQL_CACHE ...) AS temp_table
```
instead of
```
SELECT SQL_CACHE .... FROM (SELECT SQL_NO_CACHE ...) AS temp_table
```
In the second query, the SQL\_NO\_CACHE in the subquery is ignored and the query is still checked against the cache: the query cache only looks for SQL\_NO\_CACHE/SQL\_CACHE directly after the first SELECT. (More info at [MDEV-6631](https://jira.mariadb.org/browse/MDEV-6631))
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Dynamic Columns API Dynamic Columns API
===================
This page describes the client-side API, available from [MariaDB 10.0.1](https://mariadb.com/kb/en/mariadb-1001-release-notes/) and MariaDB Connector/C 2.0, for reading and writing [Dynamic Columns](../dynamic-columns/index) blobs.
Normally, you should use [Dynamic column functions](../dynamic-columns/index#dynamic-columns-functions) which are run inside the MariaDB server and allow one to access Dynamic Columns content without any client-side libraries.
If you need to read/write dynamic column blobs **on the client** for some reason, this API enables that.
Where to get it
---------------
The API is a part of `libmysql` C client library. In order to use it, one needs to include this header file
```
#include <mysql/ma_dyncol.h>
```
and link against `libmysql`.
Data structures
---------------
### DYNAMIC\_COLUMN
`DYNAMIC_COLUMN` represents a packed dynamic column blob. It is essentially a string-with-length and is defined as follows:
```
/* A generic-purpose arbitrary-length string defined in MySQL Client API */
typedef struct st_dynamic_string
{
char *str;
size_t length,max_length,alloc_increment;
} DYNAMIC_STRING;
...
typedef DYNAMIC_STRING DYNAMIC_COLUMN;
```
### DYNAMIC\_COLUMN\_VALUE
A dynamic columns blob stores {name, value} pairs. The `DYNAMIC_COLUMN_VALUE` structure represents a value in accessible form.
```
struct st_dynamic_column_value
{
DYNAMIC_COLUMN_TYPE type;
union
{
long long long_value;
unsigned long long ulong_value;
double double_value;
struct {
MYSQL_LEX_STRING value;
CHARSET_INFO *charset;
} string;
struct {
decimal_digit_t buffer[DECIMAL_BUFF_LENGTH];
decimal_t value;
} decimal;
MYSQL_TIME time_value;
} x;
};
typedef struct st_dynamic_column_value DYNAMIC_COLUMN_VALUE;
```
Every value has a type, which is determined by the `type` member.
| type | structure field |
| --- | --- |
| `DYN_COL_NULL` | - |
| `DYN_COL_INT` | `value.x.long_value` |
| `DYN_COL_UINT` | `value.x.ulong_value` |
| `DYN_COL_DOUBLE` | `value.x.double_value` |
| `DYN_COL_STRING` | `value.x.string.value`, `value.x.string.charset` |
| `DYN_COL_DECIMAL` | `value.x.decimal.value` |
| `DYN_COL_DATETIME` | `value.x.time_value` |
| `DYN_COL_DATE` | `value.x.time_value` |
| `DYN_COL_TIME` | `value.x.time_value` |
| `DYN_COL_DYNCOL` | `value.x.string.value` |
Notes
* Values with type `DYN_COL_NULL` do not ever occur in dynamic columns blobs.
* Type `DYN_COL_DYNCOL` means that the value is a packed dynamic blob. This is how nested dynamic columns are done.
* Before storing a value to `value.x.decimal.value`, one must call `mariadb_dyncol_prepare_decimal()` to initialize the space for storage.
### enum\_dyncol\_func\_result
`enum enum_dyncol_func_result` is used as the return value.
| value | name | meaning |
| --- | --- | --- |
| 0 | `ER_DYNCOL_OK` | OK |
| 0 | `ER_DYNCOL_NO` | NO response (the same value as ER\_DYNCOL\_OK, but for functions which return a YES/NO) |
| 1 | `ER_DYNCOL_YES` | YES response or success |
| 2 | `ER_DYNCOL_TRUNCATED` | Operation succeeded but the data was truncated |
| -1 | `ER_DYNCOL_FORMAT` | Wrong format of the encoded string |
| -2 | `ER_DYNCOL_LIMIT` | A limit of implementation reached |
| -3 | `ER_DYNCOL_RESOURCE` | Out of resources |
| -4 | `ER_DYNCOL_DATA` | Incorrect input data |
| -5 | `ER_DYNCOL_UNKNOWN_CHARSET` | Unknown character set |
Result codes that are less than zero represent error conditions.
Function reference
------------------
Functions come in pairs:
* `xxx_num()` operates on the old (pre-MariaDB-10.0.1) dynamic column blob format where columns were identified by numbers.
* `xxx_named()` can operate on both the old and new data formats. If it modifies the blob, it will convert it to the new data format.
You should use `xxx_named()` functions, unless you need to keep the data compatible with MariaDB versions before 10.0.1.
### mariadb\_dyncol\_init
```
#define mariadb_dyncol_init(A) memset((A), 0, sizeof(*(A)))
```
This is the correct initialization for an empty packed dynamic blob.
### mariadb\_dyncol\_free
```
void mariadb_dyncol_free(DYNAMIC_COLUMN *str);
```
where
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic blob whose memory should be freed. |
### mariadb\_dyncol\_create\_many (num|named)
Create a packed dynamic blob from arrays of values and names.
```
enum enum_dyncol_func_result
mariadb_dyncol_create_many_num(DYNAMIC_COLUMN *str,
uint column_count,
uint *column_numbers,
DYNAMIC_COLUMN_VALUE *values,
my_bool new_string);
enum enum_dyncol_func_result
mariadb_dyncol_create_many_named(DYNAMIC_COLUMN *str,
uint column_count,
MYSQL_LEX_STRING *column_keys,
DYNAMIC_COLUMN_VALUE *values,
my_bool new_string);
```
where
| | | |
| --- | --- | --- |
| `str` | `OUT` | Packed dynamic blob will be put here |
| `column_count` | `IN` | Number of columns |
| `column_numbers` | `IN` | Column numbers array (old format) |
| `column_keys` | `IN` | Column names array (new format) |
| `values` | `IN` | Column values array |
| `new_string` | `IN` | If TRUE then the `str` will be reinitialized (not freed) before usage |
### mariadb\_dyncol\_update\_many (num|named)
Add or update columns in a dynamic columns blob. To delete a column, update its value to a "non-value" of type `DYN_COL_NULL`.
```
enum enum_dyncol_func_result
mariadb_dyncol_update_many_num(DYNAMIC_COLUMN *str,
uint column_count,
uint *column_numbers,
DYNAMIC_COLUMN_VALUE *values);
enum enum_dyncol_func_result
mariadb_dyncol_update_many_named(DYNAMIC_COLUMN *str,
uint column_count,
MYSQL_LEX_STRING *column_keys,
DYNAMIC_COLUMN_VALUE *values);
```
| | | |
| --- | --- | --- |
| `str` | `IN/OUT` | Dynamic columns blob to be modified. |
| `column_count` | `IN` | Number of columns in following arrays |
| `column_numbers` | `IN` | Column numbers array (old format) |
| `column_keys` | `IN` | Column names array (new format) |
| `values` | `IN` | Column values array |
### mariadb\_dyncol\_exists (num|named)
Check if a column with the given number or name exists in the blob.
```
enum enum_dyncol_func_result
mariadb_dyncol_exists_num(DYNAMIC_COLUMN *str, uint column_number);
enum enum_dyncol_func_result
mariadb_dyncol_exists_named(DYNAMIC_COLUMN *str, MYSQL_LEX_STRING *column_key);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string. |
| `column_number` | `IN` | Column number (old format) |
| `column_key` | `IN` | Column name (new format) |
The function returns YES/NO, or an error code.
### mariadb\_dyncol\_column\_count
Get number of columns in a dynamic column blob
```
enum enum_dyncol_func_result
mariadb_dyncol_column_count(DYNAMIC_COLUMN *str, uint *column_count);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string. |
| `column_count` | `OUT` | Number of non-NULL columns in the dynamic columns string |
### mariadb\_dyncol\_list (num|named)
List columns in a dynamic column blob.
```
enum enum_dyncol_func_result
mariadb_dyncol_list_num(DYNAMIC_COLUMN *str, uint *column_count, uint **column_numbers);
enum enum_dyncol_func_result
mariadb_dyncol_list_named(DYNAMIC_COLUMN *str, uint *column_count,
MYSQL_LEX_STRING **column_keys);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string. |
| `column_count` | `OUT` | Number of columns in following arrays |
| `column_numbers` | `OUT` | Column numbers array (old format). Caller should free this array. |
| `column_keys` | `OUT` | Column names array (new format). Caller should free this array. |
### mariadb\_dyncol\_get (num|named)
Get a value of one column
```
enum enum_dyncol_func_result
mariadb_dyncol_get_num(DYNAMIC_COLUMN *org, uint column_number,
DYNAMIC_COLUMN_VALUE *value);
enum enum_dyncol_func_result
mariadb_dyncol_get_named(DYNAMIC_COLUMN *str, MYSQL_LEX_STRING *column_key,
DYNAMIC_COLUMN_VALUE *value);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string. |
| `column_number` | `IN` | Column number (old format) |
| `column_key` | `IN` | Column name (new format) |
| `value` | `OUT` | Value of the column |
If the column is not found, NULL is returned as the value of the column.
### mariadb\_dyncol\_unpack
Get value of all columns
```
enum enum_dyncol_func_result
mariadb_dyncol_unpack(DYNAMIC_COLUMN *str,
uint *column_count,
MYSQL_LEX_STRING **column_keys,
DYNAMIC_COLUMN_VALUE **values);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string to unpack. |
| `column_count` | `OUT` | Number of columns in following arrays |
| `column_keys` | `OUT` | Column names array (should be freed by the caller) |
| `values` | `OUT` | Values of the columns array (should be freed by the caller) |
### mariadb\_dyncol\_has\_names
Check whether the dynamic columns blob uses new data format (the one where columns are identified by names)
```
my_bool mariadb_dyncol_has_names(DYNAMIC_COLUMN *str);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string. |
### mariadb\_dyncol\_check
Check whether dynamic column blob has correct data format.
```
enum enum_dyncol_func_result
mariadb_dyncol_check(DYNAMIC_COLUMN *str);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string. |
### mariadb\_dyncol\_json
Get the contents of a dynamic columns blob in JSON form.
```
enum enum_dyncol_func_result
mariadb_dyncol_json(DYNAMIC_COLUMN *str, DYNAMIC_STRING *json);
```
| | | |
| --- | --- | --- |
| `str` | `IN` | Packed dynamic columns string. |
| `json` | `OUT` | JSON representation |
mariadb\_dyncol\_json() allocates memory for the `json` parameter, which must be explicitly freed with the mariadb\_dyncol\_free() function to prevent memory leaks.
### mariadb\_dyncol\_val\_TYPE
Get dynamic column value as one of the base types
```
enum enum_dyncol_func_result
mariadb_dyncol_val_str(DYNAMIC_STRING *str, DYNAMIC_COLUMN_VALUE *val,
CHARSET_INFO *cs, my_bool quote);
enum enum_dyncol_func_result
mariadb_dyncol_val_long(longlong *ll, DYNAMIC_COLUMN_VALUE *val);
enum enum_dyncol_func_result
mariadb_dyncol_val_double(double *dbl, DYNAMIC_COLUMN_VALUE *val);
```
| | | |
| --- | --- | --- |
| `str` or `ll` or `dbl` | `OUT` | value of the column |
| `val` | `IN` | Value |
### mariadb\_dyncol\_prepare\_decimal
Initialize a `DYNAMIC_COLUMN_VALUE` before `value.x.decimal.value` can be set.
```
void mariadb_dyncol_prepare_decimal(DYNAMIC_COLUMN_VALUE *value);
```
| | | |
| --- | --- | --- |
| `value` | `OUT` | Value of the column |
This function links `value.x.decimal.value` to `value.x.decimal.buffer`.
### mariadb\_dyncol\_value\_init
Initialize a `DYNAMIC_COLUMN_VALUE` structure to a safe default.
```
#define mariadb_dyncol_value_init(V) (V)->type= DYN_COL_NULL
```
### mariadb\_dyncol\_column\_cmp\_named
Compare two column names for equality
```
int mariadb_dyncol_column_cmp_named(const MYSQL_LEX_STRING *s1,
const MYSQL_LEX_STRING *s2);
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Segmented Key Cache Performance Segmented Key Cache Performance
===============================
Testing method for segmented key cache performance
--------------------------------------------------
We used [SysBench v0.5](https://launchpad.net/sysbench) from Launchpad to test the [segmented key cache](../segmented-key-cache/index) performance for the MyISAM storage engine of [MariaDB 5.2.2](https://mariadb.com/kb/en/mariadb-522-release-notes/)-gamma.
As wrapper scripts for automated running of SysBench we used the `sysbench/` directory from [MariaDB Tools](https://launchpad.net/mariadb-tools).
To test that splitting the key cache's global mutex into several mutexes helps under multi-user load, we wrote a new SysBench test called `select_random_points.lua`. We used one big table and selected random points with an increasing number of concurrent users.
Main testing outcomes
---------------------
We see up to a 250% performance gain, depending on the number of concurrent users.
Detailed testing outcomes
-------------------------
### On our machine pitbull
#### On pitbull with --random-points=10
In relative numbers:
```
Threads 1 4 8 16 32 64 128
(32/off) -3% 53% 122% 155% 226% 269% 237%
(64/off) -6% 55% 130% 162% 234% 270% 253%
select_random_points.lua --random-points=10
```
#### On pitbull with --random-points=50
In relative numbers:
```
Threads 1 4 8 16 32 64 128
(32/off) -3% 53% 113% 154% 232% 254% 231%
(64/off) -1% 55% 121% 161% 235% 268% 244%
select_random_points.lua --random-points=50
```
#### On pitbull with --random-points=100
In relative numbers:
```
Threads 1 4 8 16 32 64 128
(32/off) -3% 54% 121% 160% 209% 246% 219%
(64/off) -6% 56% 129% 167% 219% 260% 241%
select_random_points.lua --random-points=100
```
#### Detailed numbers of all runs on pitbull
You can find the absolute and relative numbers in our OpenOffice.org spread sheet here: [SysBench v0.5 select\_random\_points on pitbull](http://askmonty.org/w/images/4/47/Sysbench_v0.5_select_random_points_10_50_100_pitbull.ods)
### On our machine perro
#### On perro with --random-points=10
In relative numbers:
```
Threads 1 4 8 16 32 64 128
(32/off) 1% 2% 17% 45% 73% 70% 71%
(64/off) -0.3% 6% 19% 46% 72% 74% 80%
select_random_points.lua --random-points=10
```
#### On perro with --random-points=50
In relative numbers:
```
Threads 1 4 8 16 32 64 128
(32/off) 1% 10% 26% 69% 105% 122% 114%
(64/off) -1% 8% 27% 75% 111% 120% 131%
select_random_points.lua --random-points=50
```
#### On perro with --random-points=100
In relative numbers:
```
Threads 1 4 8 16 32 64 128
(32/off) -0.2% 1% 22% 73% 114% 114% 126%
(64/off) -0.1% 4% 22% 75% 112% 125% 135%
select_random_points.lua --random-points=100
```
#### Detailed numbers of all runs on perro
You can find the absolute and relative numbers in our OpenOffice.org spread sheet here: [SysBench v0.5 select\_random\_points on perro](http://askmonty.org/w/images/f/fb/Sysbench_v0.5_select_random_points_10_50_100_perro.ods)
Table and query used
--------------------
Table definition:
```
CREATE TABLE sbtest (
id int unsigned NOT NULL AUTO_INCREMENT,
k int unsigned NOT NULL DEFAULT '0',
c char(120) NOT NULL DEFAULT '',
pad char(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=MyISAM
```
Query used:
```
SELECT id, k, c, pad
FROM sbtest
WHERE k IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
```
The `?` parameters were replaced with random numbers when running the SysBench test. We used 10, 50, and 100 random points in our tests.
We inserted 20 million rows using random data, which gave us a data and index file size of:
```
3.6G sbtest.MYD
313M sbtest.MYI
```
We chose our key buffer size to be big enough to hold the index file.
Testing environment
-------------------
### MariaDB sources
We used [MariaDB 5.2.2](https://mariadb.com/kb/en/mariadb-522-release-notes/)-gamma with following revision from our launchpad repository [Revision #2878](http://bazaar.launchpad.net/%7Emaria-captains/maria/5.2/revision/2878)
```
revno: 2878
committer: Sergei Golubchik <[email protected]>
branch nick: 5.2
timestamp: Tue 2010-10-26 07:37:44 +0200
message:
fixes for windows
```
### Compiling MariaDB
We compiled MariaDB using this line:
```
BUILD/compile-amd64-max
```
### MariaDB runtime options
We used the following configuration for running MariaDB:
```
MYSQLD_OPTIONS="--no-defaults \
--datadir=$DATA_DIR \
--language=./sql/share/english \
--log-error \
--key_buffer_size=512M \
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--key_cache_segments=0 \ # 0 | 32 | 64
--tmpdir=$TEMP_DIR"
```
### SysBench v0.5 select\_random\_points.lua options
We ran the SysBench v0.5 select\_random\_points.lua test with the following options:
```
# 20 million rows.
TABLE_SIZE=20000000
SYSBENCH_OPTIONS="--oltp-table-size=$TABLE_SIZE \
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000 \
--rand-seed=303"
```
We tested with an increasing number of concurrent users, with a warm-up time of 8 minutes and a run time of 20 minutes:
```
NUM_THREADS="1 4 8 16 32 64 128"
...
--num-threads=$THREADS
```
We also tested an increasing number of random points:
```
# Default option is --random-points=10.
SYSBENCH_TESTS[0]="select_random_points.lua"
SYSBENCH_TESTS[1]="select_random_points.lua --random-points=50"
SYSBENCH_TESTS[2]="select_random_points.lua --random-points=100"
```
### Kernel parameters
#### IO scheduler
For optimal IO performance when running a database, we used the noop scheduler. You can check your scheduler setting with:
```
cat /sys/block/${DEVICE}/queue/scheduler
```
For instance, the output should look like this:
```
cat /sys/block/sda/queue/scheduler
[noop] deadline cfq
```
You can find detailed notes about Linux schedulers here: [Linux schedulers in TPCC like benchmark](http://www.mysqlperformanceblog.com/2009/01/30/linux-schedulers-in-tpcc-like-benchmark/).
#### Open file limits
Having a lot of concurrent connections can hit the open file limit on your system. On most Linux systems the open file limit is 1024, which may not be enough. Raise your open file limit by editing
```
$EDITOR /etc/security/limits.conf
```
and adding a line like
```
#ftp hard nproc 0
#@student - maxlogins 4
* - nofile 16384
# End of file
```
Your `ulimit -a` output should look like this afterwards:
```
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15975
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 1744200
open files (-n) 16384
```
### Machines used for testing
#### perro
```
# OS: openSUSE 11.1 (x86_64)
# Platform: x86_64
# CPU: Quad-core Intel @ 3.20GHz: 4 CPUs
# RAM: 2GB
# Disk(s): 2 x ST31000528AS S-ATA as software RAID 0
```
#### pitbull
```
# OS: Ubuntu 10.10
# Platform: x86_64
# CPU: Two-socket x hexa-core Intel Xeon X5660 @ 2.80GHz. With hyperthreading: 24CPUs
# RAM: 28GB
# Disk(s): 1 x ST3500320NS S-ATA
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb Binary Log Group Commit and InnoDB Flushing Performance Binary Log Group Commit and InnoDB Flushing Performance
=======================================================
[MariaDB 10.0](../what-is-mariadb-100/index) introduced a performance improvement related to [group commit](../group-commit-for-the-binary-log/index) that affects the performance of flushing [InnoDB](../xtradb-and-innodb/index) transactions when the [binary log](../binary-log/index) is enabled.
Overview
--------
In [MariaDB 10.0](../what-is-mariadb-100/index) and above, when both [innodb\_flush\_log\_at\_trx\_commit=1](../innodb-system-variables/index#innodb_flush_log_at_trx_commit) (the default) is set and the [binary log](../binary-log/index) is enabled, there is now one less sync to disk inside InnoDB during commit (2 syncs shared between a group of transactions instead of 3).
Durability of commits is not decreased — this is because even if the server crashes before the commit is written to disk by InnoDB, it will be recovered from the binary log at next server startup (and it is guaranteed that sufficient information is synced to disk so that such a recovery is always possible).
Switching to Old Flushing Behavior
----------------------------------
The old behavior, with 3 syncs to disk per (group) commit (and consequently lower performance), can be selected with the new [innodb\_flush\_log\_at\_trx\_commit=3](../innodb-system-variables/index#innodb_flush_log_at_trx_commit) option. There is normally no benefit to doing this, however there are a couple of edge cases to be aware of.
### Non-durable Binary Log Settings
If [innodb\_flush\_log\_at\_trx\_commit=1](../innodb-system-variables/index#innodb_flush_log_at_trx_commit) is set and the [binary log](../binary-log/index) is enabled, but [sync\_binlog=0](../replication-and-binary-log-server-system-variables/index#sync_binlog) is set, then commits are not guaranteed durable inside InnoDB after commit. This is because if [sync\_binlog=0](../replication-and-binary-log-server-system-variables/index#sync_binlog) is set and if the server crashes, then transactions that were not flushed to the binary log prior to the crash will be missing from the binary log.
In this specific scenario, [innodb\_flush\_log\_at\_trx\_commit=3](../innodb-system-variables/index#innodb_flush_log_at_trx_commit) can be set to ensure that transactions will be durable in InnoDB, even if they are not necessarily durable from the perspective of the binary log.
One should be aware that if [sync\_binlog=0](../replication-and-binary-log-server-system-variables/index#sync_binlog) is set, then a crash is nevertheless likely to cause transactions to be missing from the binary log. This will cause the binary log and InnoDB to be inconsistent with each other. This is also likely to cause any [replication slaves](../high-availability-performance-tuning-mariadb-replication/index) to become inconsistent, since transactions are replicated through the [binary log](../binary-log/index). Thus it is recommended to set [sync\_binlog=1](../replication-and-binary-log-server-system-variables/index#sync_binlog). With the [group commit](../group-commit-for-the-binary-log/index) improvements introduced in [MariaDB 5.3](../what-is-mariadb-53/index), this setting has much less penalty in recent versions compared to older versions of MariaDB and MySQL.
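For reference, the recommended durable combination discussed above corresponds to a configuration fragment like:

```
[mysqld]
# Durable group commit in both InnoDB and the binary log
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
```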
### Recent Transactions Missing from Backups
[Mariabackup](../mariabackup/index) and [Percona XtraBackup](../backup-restore-and-import-xtrabackup/index) only see transactions that have been flushed to the [redo log](../xtradbinnodb-redo-log/index). With the [group commit](../group-commit-for-the-binary-log/index) improvements, there may be a small delay (defined by the [binlog\_commit\_wait\_usec](../replication-and-binary-log-system-variables/index#binlog_commit_wait_usec) system variable) between when a commit happens and when the commit will be included in a backup.
Note that the backup will still be fully consistent with itself and the [binary log](../binary-log/index). This problem is normally not an issue in practice. A backup usually takes a long time to complete (relative to the 1 second or so that [binlog\_commit\_wait\_usec](../replication-and-binary-log-system-variables/index#binlog_commit_wait_usec) is normally set to), and a backup usually includes a lot of transactions that were committed during the backup. With this in mind, it is not generally noticeable if the backup does not include transactions that were committed during the last 1 second or so of the backup process. It is just mentioned here for completeness.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb DBT3 Benchmark Results MyISAM DBT3 Benchmark Results MyISAM
=============================
Introduction
------------
This page shows the results of benchmarking the following configurations:
* [MariaDB 5.3.2](https://mariadb.com/kb/en/mariadb-532-release-notes/) + MyISAM
* [MariaDB 5.5.18](https://mariadb.com/kb/en/mariadb-5518-release-notes/) + MyISAM
* MySQL 5.5.19 + MyISAM
* MySQL 5.6.4 + MyISAM
The test is performed using the automation script `/mariadb-tools/dbt3_benchmark/launcher.pl`.
Details about this automation script can be found on the [DBT3 automation scripts](../dbt3-automation-scripts/index) page.
Hardware
--------
The tests were performed on our `facebook-maria1` machine. It has the following parameters:
* **CPU:** 16 Intel® Xeon® CPU L5520 @ 2.27GHz
* **Memory:** Limited to 16 GB out of 72 by adding 'mem=16G' parameter to /boot/grub/menu.lst
* **Logical disk:** HDD 2 TB
* **Operating system:**
Scale factor 30
---------------
This test was performed with the following parameters:
* **Scale factor:** 30
* **Query timeout:** 2 hours
* **Number of tests per query:** 1
* **Total DB size on disk:** about 50GB
* **Available memory:** 16 GB
**NOTE:** The available memory is controlled by a parameter `mem=16G` added to the file `/boot/grub/menu.lst`
### Steps to reproduce
Follow the instructions in [DBT3 automation scripts](../dbt3-automation-scripts/index) to prepare the environment for the test.
Before you run the test, ensure that the settings in the test configuration files match your prepared environment. For more details on the test configuration, please, refer to the [Test configuration parameters](../dbt3-automation-scripts/index#test-configuration).
After the environment is prepared, the following command should be executed in the shell:
```
perl launcher.pl \
--results-output-dir=/home/mariadb/benchmark/dbt3/results/myisam_test \
--project-home=/home/mariadb/benchmark/dbt3/ \
--datadir=/home/mariadb/benchmark/dbt3/db_data/ \
--test=./tests/myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf \
--queries-home=/home/mariadb/benchmark/dbt3/gen_query/ --scale-factor=30 \
--TIMEOUT=7200
```
### Compared configurations
The following configurations have been compared in this test:
#### Case 1: [MariaDB 5.3.2](https://mariadb.com/kb/en/mariadb-532-release-notes/) + MyISAM
Here are the common options that the mysqld server was started with:
```
net_read_timeout = 300
net_write_timeout = 600
key_buffer_size = 3G
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
table_open_cache = 1024
thread_cache = 512
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
max_connections = 256
query_cache_size = 0
query_cache_type = 0
sql-mode = NO_ENGINE_SUBSTITUTION
#Per-test optimizations
optimizer_switch='index_merge=on'
optimizer_switch='index_merge_union=on'
optimizer_switch='index_merge_sort_union=on'
optimizer_switch='index_merge_intersection=on'
optimizer_switch='index_merge_sort_intersection=off'
optimizer_switch='index_condition_pushdown=on'
optimizer_switch='derived_merge=on'
optimizer_switch='derived_with_keys=on'
optimizer_switch='firstmatch=off'
optimizer_switch='loosescan=off'
optimizer_switch='materialization=on'
optimizer_switch='in_to_exists=on'
optimizer_switch='semijoin=on'
optimizer_switch='partial_match_rowid_merge=on'
optimizer_switch='partial_match_table_scan=on'
optimizer_switch='subquery_cache=off'
optimizer_switch='mrr=on'
optimizer_switch='mrr_cost_based=off'
optimizer_switch='mrr_sort_keys=on'
optimizer_switch='outer_join_with_cache=on'
optimizer_switch='semijoin_with_cache=off'
optimizer_switch='join_cache_incremental=on'
optimizer_switch='join_cache_hashed=on'
optimizer_switch='join_cache_bka=on'
optimizer_switch='optimize_join_buffer_size=on'
optimizer_switch='table_elimination=on'
join_buffer_space_limit = 3072M
join_buffer_size = 1536M
join_cache_level = 6
mrr_buffer_size = 96M
tmp_table_size = 96M
max_heap_table_size = 96M
```
#### Case 2: [MariaDB 5.5.18](https://mariadb.com/kb/en/mariadb-5518-release-notes/) + MyISAM
Uses the same configuration file as [MariaDB 5.3.2](https://mariadb.com/kb/en/mariadb-532-release-notes/) in Case 1.
#### Case 3: MySQL 5.5.19 + MyISAM
Here are the common options that the mysqld server was started with:
```
net_read_timeout = 300
net_write_timeout = 600
key_buffer_size = 3G
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
table_open_cache = 1024
thread_cache = 512
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
myisam_sort_buffer_size = 8M
max_connections = 256
query_cache_size = 0
query_cache_type = 0
sql-mode = NO_ENGINE_SUBSTITUTION
join_buffer_size = 1536M
tmp_table_size = 96M
max_heap_table_size = 96M
read_rnd_buffer_size = 96M
```
#### Case 4: MySQL 5.6.4 + MyISAM
Here are the common options that the mysqld server was started with:
```
net_read_timeout = 300
net_write_timeout = 600
key_buffer_size = 3G
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
table_open_cache = 1024
thread_cache = 512
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
myisam_sort_buffer_size = 8M
max_connections = 256
query_cache_size = 0
query_cache_type = 0
sql-mode = NO_ENGINE_SUBSTITUTION
optimizer_switch='mrr=on'
optimizer_switch='mrr_cost_based=off'
optimizer_switch='batched_key_access=on'
optimizer_switch='index_condition_pushdown=on'
join_buffer_size = 1536M
tmp_table_size = 96M
max_heap_table_size = 96M
read_rnd_buffer_size = 96M
```
The server was restarted and the caches were cleared between each query run.
### Results (without q20)
Here is a graph of the results:
(Smaller bars are better)

**NOTE:** Queries that are cut off in the graph exceeded the 2-hour timeout.
Here are the actual results in seconds (smaller is better):
| | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Configuration | [MariaDB 5.3.2](https://mariadb.com/kb/en/mariadb-532-release-notes/) + MyISAM | *Ratio* | [MariaDB 5.5.18](https://mariadb.com/kb/en/mariadb-5518-release-notes/) + MyISAM | *Ratio* | MySQL 5.5.19 + MyISAM | *Ratio* | MySQL 5.6.4 + MyISAM | *Ratio* |
| 1.sql | 261 | *1.00* | 308 | *1.18* | 259 | *0.99* | 277 | *1.06* |
| 2.sql | 47 | *1.00* | 48 | *1.02* | 499 | *10.62* | 49 | *1.04* |
| 2-opt.sql | 46 | *1.00* | 48 | *1.04* | - | - | - | - |
| 3.sql | 243 | *1.00* | 246 | *1.01* | >7200 | - | 1360 | *5.60* |
| 4.sql | 137 | *1.00* | 135 | *0.99* | 4117 | *30.05* | 137 | *1.00* |
| 5.sql | 181 | *1.00* | 187 | *1.03* | 6164 | *34.06* | 1254 | *6.93* |
| 6.sql | 198 | *1.00* | 205 | *1.04* | >7200 | - | 194 | *0.98* |
| 7.sql | 779 | *1.00* | 896 | *1.15* | 814 | *1.04* | 777 | *1.00* |
| 8.sql | 270 | *1.00* | 287 | *1.06* | 749 | *2.77* | 1512 | *5.60* |
| 9.sql | 252 | *1.00* | 254 | *1.01* | >7200 | - | 298 | *1.18* |
| 10.sql | 782 | *1.00* | 854 | *1.09* | >7200 | - | 1881 | *2.41* |
| 11.sql | 45 | *1.00* | 36 | *0.80* | 357 | *7.93* | 49 | *1.09* |
| 12.sql | 211 | *1.00* | 217 | *1.03* | >7200 | - | 213 | *1.01* |
| 13.sql | 251 | *1.00* | 236 | *0.94* | 1590 | *6.33* | 244 | *0.97* |
| 14.sql | 88 | *1.00* | 91 | *1.03* | 1590 | *18.07* | 94 | *1.07* |
| 15.sql | 162 | *1.00* | 164 | *1.01* | 4580 | *28.27* | 165 | *1.02* |
| 16.sql | 154 | *1.00* | 152 | *0.99* | 174 | *1.13* | 173 | *1.12* |
| 17.sql | 1493 | *1.00* | 1495 | *1.00* | 865 | *0.58* | 794 | *0.53* |
| 17-opt1.sql | 795 | *1.00* | 794 | *1.00* | 862 | *1.08* | 794 | *1.00* |
| 17-opt2.sql | 1482 | *1.00* | 1458 | *0.98* | 2167 | *1.46* | 1937 | *1.31* |
| 18.sql | 971 | *1.00* | 931 | *0.96* | >7200 | - | >7200 | - |
| 18-opt.sql | 121 | *1.00* | 125 | *1.03* | - | - | - | - |
| 19.sql | 212 | *1.00* | 212 | *1.00* | 2004 | *9.45* | 61 | *0.29* |
| 19-opt1.sql | 59 | *1.00* | 59 | *1.00* | 1999 | *33.88* | 61 | *1.03* |
| 19-opt2.sql | 260 | *1.00* | 216 | *0.83* | 443 | *1.70* | 236 | *0.91* |
| 20.sql | - | - | - | - | - | - | - | - |
| 21.sql | 173 | *1.00* | 179 | *1.03* | >7200 | - | 183 | *1.06* |
| 22.sql | 13 | *1.00* | 14 | *1.08* | 10 | *0.77* | 13 | *1.00* |
| Version | 5.3.2-MariaDB-beta | | 5.5.18-MariaDB | | 5.5.19 | | 5.6.4-m7 | |
| Query and explain details | [Explain details](http://askmonty.org/w/images/5/56/DBT3_MyISAM_HDD_s30_mariadb_5_3_2.txt) | | [Explain details](http://askmonty.org/w/images/9/9e/DBT3_MyISAM_HDD_s30_mariadb_5_5_18.txt) | | [Explain details](http://askmonty.org/w/images/c/cc/Explain_DBT3_MyISAM_HDD_s30_mysql_5_5_19.txt) | | [Explain details](http://askmonty.org/w/images/d/d1/Explain_MyISAM_HDD_s30_mysql_5_6_4_.txt) | |
**NOTE:** The columns named "*Ratio*" give the ratio of each result to the result for the first test configuration: `(current_value/value_for_first_configuration)`. For example, if [MariaDB 5.3.2](https://mariadb.com/kb/en/mariadb-532-release-notes/) (the first configuration) handles a query in 100 seconds and MySQL 5.6.4 (the last configuration) handles the same query in 120 seconds, the ratio is `120/100 = 1.20`. This means that MySQL 5.6.4 takes 20% more time to handle the same query.
The archived folder with all the results and details for that benchmark can be downloaded from here: [MyISAM s30 on facebook-maria1](http://askmonty.org/w/images/e/e1/Myisam_test_2011-12-01_203150.tar.bz2)
### Notes
Queries 2-opt.sql and 18-opt.sql were tested only on [MariaDB 5.3.2](https://mariadb.com/kb/en/mariadb-532-release-notes/) and [MariaDB 5.5.18](https://mariadb.com/kb/en/mariadb-5518-release-notes/).
* Additional startup parameters for 2\_opt:
```
--optimizer_switch='mrr_sort_keys=off'
```
* Additional startup parameters for 18\_opt:
```
--optimizer_switch='semijoin=off' --optimizer_switch='index_condition_pushdown=on'
```
* Additional modifications for 17-opt1:
```
select
sum(l_extendedprice) / 7.0 as avg_yearly
from
part straight_join lineitem
where
p_partkey = l_partkey
...
```
* Additional modifications for 17-opt2:
```
select
sum(l_extendedprice) / 7.0 as avg_yearly
from
lineitem straight_join part
where
p_partkey = l_partkey
...
```
* Additional modifications for 19-opt1:
```
select
sum(l_extendedprice* (1 - l_discount)) as revenue
from
part straight_join lineitem
where
(
p_partkey = l_partkey
...
```
* Additional modifications for 19-opt2:
```
select
sum(l_extendedprice* (1 - l_discount)) as revenue
from
lineitem straight_join part
where
(
p_partkey = l_partkey
...
```
### Benchmark for q20
This benchmark ran only q20, with the same settings as described above for the other queries. The only difference is the timeout, which was 30000 seconds (8 hours and 20 minutes).
#### Compared cases
The benchmark for q20 compares the following cases:
* q20.sql - the original query is run with the IN-TO-EXISTS strategy for all servers. The following optimizer switches were used for MariaDB:
```
--optimizer_switch='in_to_exists=on,materialization=off,semijoin=off';
```
* q20-opt0.sql - the original query is changed so that the same join order is chosen as for the two subsequent variants that test materialization where this order is optimal. The join order is:
```
select s_name, s_address
from supplier, nation
where s_suppkey in (select distinct (ps_suppkey)
from part straight_join partsupp
where ps_partkey = p_partkey ...
```
* Since the IN-TO-EXISTS strategy is essentially the same for both MariaDB and MySQL, this query was tested for MySQL only.
* q20-opt1.sql - modifies the original query in two ways:
+ enforces the MATERIALIZATION strategy, and
+ enforces an optimal JOIN order via straight\_join as follows:
```
select s_name, s_address
from supplier, nation
where s_suppkey in (select distinct (ps_suppkey)
from part straight_join partsupp
where ps_partkey = p_partkey ...
```
q20-opt1.sql uses the following optimizer switches for MariaDB:
```
--optimizer_switch='in_to_exists=off,materialization=on,semijoin=off';
```
* q20-opt2.sql - the same as q20-opt1.sql but allows the optimizer to choose the subquery strategy via the following switch:
```
--optimizer_switch='in_to_exists=on,materialization=on,semijoin=on';
```
* This switch results in the choice of SJ-MATERIALIZATION.
**NOTE:** For MySQL there are no such *optimizer-switch* parameters, and the tests were started without any additional startup parameters. The default algorithm in MySQL is *in\_to\_exists*.
#### Results for q20
Here is a graph of the results for q20 (smaller bars are better):
**NOTE:** Queries that are cut off in the graph timed out after the 30000-second limit.
Here are the actual results in seconds (smaller is better):
| Configuration | 20.sql | 20-opt0.sql | 20-opt1.sql | 20-opt2.sql | Version | Query and explain details |
| --- | --- | --- | --- | --- | --- | --- |
| [MariaDB 5.3.2](https://mariadb.com/kb/en/mariadb-532-release-notes/) + MyISAM | 20070 | - | 5560 | 5615 | 5.3.2-MariaDB-beta | [Explain details](http://askmonty.org/w/images/9/9a/DBT3_MyISAM_HDD_s30_q20_mariadb_5_3_2.txt) |
| [MariaDB 5.5.18](https://mariadb.com/kb/en/mariadb-5518-release-notes/) + MyISAM | 19922 | - | 5529 | 5572 | 5.5.18-MariaDB | [Explain details](http://askmonty.org/w/images/e/e8/DBT3_MyISAM_HDD_s30_q20_mariadb_5_5_18.txt) |
| MySQL 5.5.19 + MyISAM | 17832 | >30000 | - | - | 5.5.19 | [Explain details](http://askmonty.org/w/images/a/ae/DBT3_MyISAM_HDD_s30_q20_mysql_5_5_19.txt) |
| MYSQL 5.6.4 + MyISAM | 19845 | >30000 | - | - | 5.6.4-m7 | [Explain details](http://askmonty.org/w/images/c/cc/DBT3_MyISAM_HDD_s30_q20_mysql_5_6_4.txt) |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb ST_AsGeoJSON ST\_AsGeoJSON
=============
Syntax
------
```
ST_AsGeoJSON(g[, max_decimals[, options]])
```
Description
-----------
Returns the given geometry *g* as a GeoJSON element. The optional *max\_decimals* limits the maximum number of decimals displayed.
The optional *options* flag can be set to `1` to add a bounding box to the output.
Examples
--------
```
SELECT ST_AsGeoJSON(ST_GeomFromText('POINT(5.3 7.2)'));
+-------------------------------------------------+
| ST_AsGeoJSON(ST_GeomFromText('POINT(5.3 7.2)')) |
+-------------------------------------------------+
| {"type": "Point", "coordinates": [5.3, 7.2]} |
+-------------------------------------------------+
```
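The optional arguments can be combined; the following (illustrative) call limits the coordinates to 2 decimals and, with *options* set to `1`, requests a bounding box in the output:

```
-- max_decimals = 2, options = 1 (include a bounding box)
SELECT ST_AsGeoJSON(ST_GeomFromText('POINT(5.33333 7.22222)'), 2, 1);
```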
See also
--------
* [ST\_GeomFromGeoJSON](../st_geomfromgeojson/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Cursor Overview Cursor Overview
===============
Description
-----------
A cursor is a structure that allows you to go over records sequentially, and perform processing based on the result.
MariaDB permits cursors inside [stored programs](../stored-programs-and-views/index), and MariaDB cursors are non-scrollable, read-only and asensitive.
* Non-scrollable means that the rows can only be fetched in the order specified by the SELECT statement. Rows cannot be skipped, you cannot jump to a specific row, and you cannot fetch rows in reverse order.
* Read-only means that data cannot be updated through the cursor.
* Asensitive means that the cursor points to the actual underlying data. This kind of cursor is quicker than the alternative, an insensitive cursor, as no data is copied to a temporary table. However, changes to the data being used by the cursor will affect the cursor data.
Cursors are created with a [DECLARE CURSOR](../declare-cursor/index) statement and opened with an [OPEN](../open/index) statement. Rows are read with a [FETCH](../fetch/index) statement before the cursor is finally closed with a [CLOSE](../close/index) statement.
When FETCH is issued and there are no more rows to extract, the following error is produced:
```
ERROR 1329 (02000): No data - zero rows fetched, selected, or processed
```
To avoid problems, a [DECLARE HANDLER](../declare-handler/index) statement is generally used. The handler should handle the 1329 error, the '02000' [SQLSTATE](../sqlstate/index), or the NOT FOUND error class.
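Any one of the following declarations (placed in the declaration section of the stored program) catches that condition; they are three ways of naming the same event, so only one should be declared:

```
-- Three equivalent ways to catch "no more rows"; declare only one.
DECLARE CONTINUE HANDLER FOR 1329 SET done = TRUE;
DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = TRUE;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
```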
Only [SELECT](../select/index) statements are allowed for cursors, and they cannot be contained in a variable - so, they cannot be composed dynamically. However, it is possible to SELECT from a view. Since the [CREATE VIEW](../create-view/index) statement can be executed as a prepared statement, it is possible to dynamically create the view that is queried by the cursor.
From [MariaDB 10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/), cursors can have parameters. Cursor parameters can appear in any part of the [DECLARE CURSOR](../declare-cursor/index) select\_statement where a stored procedure variable is allowed (select list, WHERE, HAVING, LIMIT etc). See [DECLARE CURSOR](../declare-cursor/index) and [OPEN](../open/index) for syntax, and below for an example:
Examples
--------
```
CREATE TABLE c1(i INT);
CREATE TABLE c2(i INT);
CREATE TABLE c3(i INT);
DELIMITER //
CREATE PROCEDURE p1()
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE x, y INT;
DECLARE cur1 CURSOR FOR SELECT i FROM test.c1;
DECLARE cur2 CURSOR FOR SELECT i FROM test.c2;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN cur1;
OPEN cur2;
read_loop: LOOP
FETCH cur1 INTO x;
FETCH cur2 INTO y;
IF done THEN
LEAVE read_loop;
END IF;
IF x < y THEN
INSERT INTO test.c3 VALUES (x);
ELSE
INSERT INTO test.c3 VALUES (y);
END IF;
END LOOP;
CLOSE cur1;
CLOSE cur2;
END; //
DELIMITER ;
INSERT INTO c1 VALUES(5),(50),(500);
INSERT INTO c2 VALUES(10),(20),(30);
CALL p1;
SELECT * FROM c3;
+------+
| i |
+------+
| 5 |
| 20 |
| 30 |
+------+
```
From [MariaDB 10.3.0](https://mariadb.com/kb/en/mariadb-1030-release-notes/)
```
DROP PROCEDURE IF EXISTS p1;
DROP TABLE IF EXISTS t1;
CREATE TABLE t1 (a INT, b VARCHAR(10));
INSERT INTO t1 VALUES (1,'old'),(2,'old'),(3,'old'),(4,'old'),(5,'old');
DELIMITER //
CREATE PROCEDURE p1(min INT,max INT)
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE va INT;
DECLARE cur CURSOR(pmin INT, pmax INT) FOR SELECT a FROM t1 WHERE a BETWEEN pmin AND pmax;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=TRUE;
OPEN cur(min,max);
read_loop: LOOP
FETCH cur INTO va;
IF done THEN
LEAVE read_loop;
END IF;
INSERT INTO t1 VALUES (va,'new');
END LOOP;
CLOSE cur;
END;
//
DELIMITER ;
CALL p1(2,4);
SELECT * FROM t1;
+------+------+
| a | b |
+------+------+
| 1 | old |
| 2 | old |
| 3 | old |
| 4 | old |
| 5 | old |
| 2 | new |
| 3 | new |
| 4 | new |
+------+------+
```
See Also
--------
* [DECLARE CURSOR](../declare-cursor/index)
* [OPEN cursor\_name](../open/index)
* [FETCH cursor\_name](../fetch/index)
* [CLOSE cursor\_name](../close/index)
* [Cursors in Oracle mode](../sql_modeoracle-from-mariadb-103/index#cursors)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb CONNECT TBL Table Type: Table List CONNECT TBL Table Type: Table List
==================================
This type allows defining a table as a list of tables of any engine and type. This is more flexible than multiple tables that must all be of the same file type. This type does what the [MERGE](../merge/index) engine does, but is more powerful.
The list of columns of the TBL table need not include all the columns of the tables in the list. If a column has a different name in some sub-tables, the column to use can be specified by its position, given by the `FLAG` option of the column. If the `ACCEPT` option is set to true (Y or 1), columns that do not exist in some of the sub-tables are accepted, and their value will be null or pseudo-null (depending on the nullability of the column) for the tables not having this column. The column types can also differ, and an automatic conversion will be done if necessary.
**Note:** If not specified, the column definitions are retrieved from the first table of the table list.
The default database of the sub-tables is the current database or if not, can be specified in the DBNAME option. For the tables that are not in the default database, this can be specified in the table list. For instance, to create a table based on the French table *employe* in the current database and on the English table *employee* of the *db2* database, the syntax of the create statement can be:
```
CREATE TABLE allemp (
SERIALNO char(5) NOT NULL flag=1,
NAME varchar(12) NOT NULL flag=2,
SEX smallint(1),
TITLE varchar(15) NOT NULL flag=3,
MANAGER char(5) DEFAULT NULL flag=4,
DEPARTMENT char(4) NOT NULL flag=5,
SECRETARY char(5) DEFAULT NULL flag=6,
SALARY double(8,2) NOT NULL flag=7)
ENGINE=CONNECT table_type=TBL
table_list='employe,db2.employee' option_list='Accept=1';
```
The search for columns in sub-tables is done by name and, if they exist under a different name, by the position given by a non-null `FLAG` option. The column *sex* exists only in the English table (its `FLAG` is `0`). Its values will be null for the French table.
For instance, the query:
```
select name, sex, title, salary from allemp where department = 318;
```
It might return:
| NAME | SEX | TITLE | SALARY |
| --- | --- | --- | --- |
| BARBOUD | NULL | VENDEUR | 9700.00 |
| MARCHANT | NULL | VENDEUR | 8800.00 |
| MINIARD | NULL | ADMINISTRATIF | 7500.00 |
| POUPIN | NULL | INGENIEUR | 7450.00 |
| ANTERPE | NULL | INGENIEUR | 6850.00 |
| LOULOUTE | NULL | SECRETAIRE | 4900.00 |
| TARTINE | NULL | OPERATRICE | 2800.00 |
| WERTHER | NULL | DIRECTEUR | 14500.00 |
| VOITURIN | NULL | VENDEUR | 10130.00 |
| BANCROFT | 2 | SALESMAN | 9600.00 |
| MERCHANT | 1 | SALESMAN | 8700.00 |
| SHRINKY | 2 | ADMINISTRATOR | 7500.00 |
| WALTER | 1 | ENGINEER | 7400.00 |
| TONGHO | 1 | ENGINEER | 6800.00 |
| HONEY | 2 | SECRETARY | 4900.00 |
| PLUMHEAD | 2 | TYPIST | 2800.00 |
| WERTHER | 1 | DIRECTOR | 14500.00 |
| WHEELFOR | 1 | SALESMAN | 10030.00 |
The first 9 rows, coming from the French table, have a null for the *sex* value. They would have 0 if the sex column had been created NOT NULL.
### Sub-tables of non-CONNECT engines
Sub-tables are accessed as `[PROXY](../connect-table-types-proxy-table-type/index)` tables. For non-CONNECT sub-tables that are accessed via the MySQL API, it is possible, as with `PROXY`, to change the MySQL default options. Of course, this will apply to all non-CONNECT tables of the list.
### Using the TABID special column
The TABID special column can be used to see which table each row comes from and to restrict access to only some of the sub-tables.
Let us see the following example where t1 and t2 are MyISAM tables similar to the ones given in the `MERGE` description:
```
create table xt1 (
a int(11) not null,
message char(20))
engine=CONNECT table_type=MYSQL tabname='t1'
option_list='database=test,user=root';
create table xt2 (
a int(11) not null,
message char(20))
engine=CONNECT table_type=MYSQL tabname='t2'
option_list='database=test,user=root';
create table total (
tabname char(8) not null special='TABID',
a int(11) not null,
message char(20))
engine=CONNECT table_type=TBL table_list='xt1,xt2';
select * from total;
```
The result returned by the SELECT statement is:
| tabname | a | message |
| --- | --- | --- |
| xt1 | 1 | Testing |
| xt1 | 2 | table |
| xt1 | 3 | t1 |
| xt2 | 1 | Testing |
| xt2 | 2 | table |
| xt2 | 3 | t2 |
Now if you send the query:
```
select * from total where tabname = 'xt2';
```
CONNECT will analyze the where clause and only read the *xt2* table. This can save time if you want to retrieve only a few sub-tables from a TBL table containing many sub-tables.
### Parallel Execution
Parallel Execution is currently unavailable until some bugs are fixed.
When the sub-tables are located on different servers, it is possible to execute the remote queries simultaneously instead of sequentially. To enable this, set the thread option to yes.
Additional options available for this table type:
| Option | Description |
| --- | --- |
| Maxerr | The max number of missing tables in the table list before an error is raised. Defaults to 0. |
| Accept | If true, missing columns are accepted and return null values. Defaults to false. |
| Thread | If true, enables parallel execution of remote sub-tables. |
These options can be specified in the `OPTION_LIST`.
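For instance, the *allemp* table above could be declared to tolerate one missing sub-table, accept missing columns, and query remote sub-tables in parallel. A sketch (the option values here are illustrative):

```
CREATE TABLE allemp_par (
  NAME varchar(12) NOT NULL flag=2,
  SALARY double(8,2) NOT NULL flag=7)
ENGINE=CONNECT table_type=TBL
table_list='employe,db2.employee'
option_list='Maxerr=1,Accept=1,Thread=1';
```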
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Buildbot Setup for Virtual Machines - Ubuntu 13.10 "saucy" Buildbot Setup for Virtual Machines - Ubuntu 13.10 "saucy"
==========================================================
Base install
------------
```
qemu-img create -f qcow2 /kvm/vms/vm-saucy-amd64-serial.qcow2 20G
qemu-img create -f qcow2 /kvm/vms/vm-saucy-i386-serial.qcow2 20G
```
Start each VM booting from the server install iso one at a time and perform the following install steps:
```
kvm -m 2048 -hda /kvm/vms/vm-saucy-amd64-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-13.10-server-amd64.iso -boot d -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2287-:22
kvm -m 2048 -hda /kvm/vms/vm-saucy-i386-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-13.10-server-i386.iso -boot d -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2288-:22
```
Once running you can connect to the VNC server from your local host with:
```
vncviewer -via ${remote_host} localhost
```
Replace ${remote_host} with the host the VM is running on.
**Note:** When you activate the install, vncviewer may disconnect with a complaint about the rect being too large. This is fine. Ubuntu has just resized the vnc screen. Simply reconnect.
Install, picking default options mostly, with the following notes:
* Set the hostname to ubuntu-saucy-amd64 or ubuntu-saucy-i386
* **do not** encrypt the home directory
* When partitioning disks, choose "Guided - use entire disk" (we do not want LVM)
* No automatic updates
* Choose software to install: OpenSSH server
Now that the VM is installed, it's time to configure it. If you have the memory you can do the following simultaneously:
```
kvm -m 2048 -hda /kvm/vms/vm-saucy-amd64-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-13.10-server-amd64.iso -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2287-:22 -nographic
kvm -m 2048 -hda /kvm/vms/vm-saucy-i386-serial.qcow2 -cdrom /kvm/iso/ubuntu/ubuntu-13.10-server-i386.iso -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:2288-:22 -nographic
ssh -p 2287 localhost
sudo update-alternatives --config editor
# edit /boot/grub/menu.lst and visudo, see below
ssh -p 2288 localhost
sudo update-alternatives --config editor
# edit /boot/grub/menu.lst and visudo, see below
ssh -t -p 2287 localhost "mkdir -v .ssh; sudo addgroup $USER sudo"
ssh -t -p 2288 localhost "mkdir -v .ssh; sudo addgroup $USER sudo"
scp -P 2287 /kvm/vms/authorized_keys localhost:.ssh/
scp -P 2288 /kvm/vms/authorized_keys localhost:.ssh/
echo $'Buildbot\n\n\n\n\ny' | ssh -p 2287 localhost 'chmod -vR go-rwx .ssh; sudo adduser --disabled-password buildbot; sudo addgroup buildbot sudo; sudo mkdir -v ~buildbot/.ssh; sudo cp -vi .ssh/authorized_keys ~buildbot/.ssh/; sudo chown -vR buildbot:buildbot ~buildbot/.ssh; sudo chmod -vR go-rwx ~buildbot/.ssh'
echo $'Buildbot\n\n\n\n\ny' | ssh -p 2288 localhost 'chmod -vR go-rwx .ssh; sudo adduser --disabled-password buildbot; sudo addgroup buildbot sudo; sudo mkdir -v ~buildbot/.ssh; sudo cp -vi .ssh/authorized_keys ~buildbot/.ssh/; sudo chown -vR buildbot:buildbot ~buildbot/.ssh; sudo chmod -vR go-rwx ~buildbot/.ssh'
scp -P 2287 /kvm/vms/ttyS0.conf buildbot@localhost:
scp -P 2288 /kvm/vms/ttyS0.conf buildbot@localhost:
ssh -p 2287 buildbot@localhost 'sudo apt-get update && sudo apt-get -y dist-upgrade;'
ssh -p 2288 buildbot@localhost 'sudo apt-get update && sudo apt-get -y dist-upgrade;'
ssh -p 2287 buildbot@localhost 'sudo cp -vi ttyS0.conf /etc/init/; rm -v ttyS0.conf; sudo shutdown -h now'
ssh -p 2288 buildbot@localhost 'sudo cp -vi ttyS0.conf /etc/init/; rm -v ttyS0.conf; sudo shutdown -h now'
```
Enabling passwordless sudo:
```
sudo VISUAL=vi visudo
# Add line at end: `%sudo ALL=NOPASSWD: ALL'
```
Editing /boot/grub/menu.lst:
```
sudo vi /etc/default/grub
# Add/edit these entries:
GRUB_HIDDEN_TIMEOUT_QUIET=false
GRUB_TIMEOUT=0
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
sudo update-grub
# exit back to the host server
```
VMs for building .debs
----------------------
```
for i in '/kvm/vms/vm-saucy-amd64-serial.qcow2 2287 qemu64' '/kvm/vms/vm-saucy-i386-serial.qcow2 2288 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/build/')" \
"= scp -P $2 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no /kvm/thrift-0.9.0.tar.gz buildbot@localhost:/dev/shm/" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get -y build-dep mysql-server-5.5" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y devscripts hardening-wrapper fakeroot doxygen texlive-latex-base ghostscript libevent-dev libssl-dev zlib1g-dev libpam0g-dev libreadline-gplv2-dev autoconf automake automake1.9 dpatch ghostscript-x libfontenc1 libjpeg62 libltdl-dev libltdl7 libmail-sendmail-perl libxfont1 lmodern texlive-latex-base-doc ttf-dejavu ttf-dejavu-extra libaio-dev xfonts-encodings xfonts-utils libxml2-dev unixodbc-dev bzr scons check libboost-all-dev openssl epm" \
"bzr co --lightweight lp:mariadb-native-client" \
"cd /usr/local/src;sudo tar zxf /dev/shm/thrift-0.9.0.tar.gz;pwd;ls" \
"cd /usr/local/src/thrift-0.9.0;echo;pwd;sudo ./configure --prefix=/usr --enable-shared=no --enable-static=yes CXXFLAGS=-fPIC CFLAGS=-fPIC && echo && echo 'now making' && echo && sleep 3 && sudo make && echo && echo 'now installing' && echo && sleep 3 && sudo make install" ; \
done
```
VMs for install testing.
------------------------
See [Buildbot Setup for Virtual Machines - General Principles](../buildbot-setup-for-virtual-machines-general-principles/index) for how to obtain `my.seed` and `sources.append`.
```
for i in '/kvm/vms/vm-saucy-amd64-serial.qcow2 2287 qemu64' '/kvm/vms/vm-saucy-i386-serial.qcow2 2288 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/install/')" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y patch libaio1 debconf-utils unixodbc libxml2" \
"= scp -P $2 /kvm/vms/my55.seed /kvm/vms/sources.append buildbot@localhost:/tmp/" \
"sudo debconf-set-selections /tmp/my55.seed" \
"sudo sh -c 'cat /tmp/sources.append >> /etc/apt/sources.list'"; \
done
```
VMs for MySQL upgrade testing
-----------------------------
```
for i in '/kvm/vms/vm-saucy-amd64-serial.qcow2 2287 qemu64' '/kvm/vms/vm-saucy-i386-serial.qcow2 2288 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/upgrade/')" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
"sudo DEBIAN_FRONTEND=noninteractive apt-get install -y patch libaio1 debconf-utils" \
"= scp -P $2 /kvm/vms/my55.seed /kvm/vms/sources.append buildbot@localhost:/tmp/" \
"sudo debconf-set-selections /tmp/my55.seed" \
"sudo sh -c 'cat /tmp/sources.append >> /etc/apt/sources.list'" \
'sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server-5.5' \
'mysql -uroot -prootpass -e "create database mytest; use mytest; create table t(a int primary key); insert into t values (1); select * from t"' ;\
done
```
VMs for MariaDB upgrade testing
-------------------------------
```
for i in '/kvm/vms/vm-saucy-amd64-serial.qcow2 2287 qemu64' '/kvm/vms/vm-saucy-i386-serial.qcow2 2288 qemu64' ; do \
set $i; \
runvm --user=buildbot --logfile=kernel_$2.log --base-image=$1 --port=$2 --cpu=$3 "$(echo $1 | sed -e 's/serial/upgrade2/')" \
"= scp -P $2 /kvm/vms/my55.seed /kvm/vms/sources.append buildbot@localhost:/tmp/" \
"= scp -P $2 /kvm/vms/mariadb-saucy.list buildbot@localhost:/tmp/tmp.list" \
"sudo debconf-set-selections /tmp/my55.seed" \
'sudo mv -vi /tmp/tmp.list /etc/apt/sources.list.d/' \
'sudo apt-key adv --recv-keys --keyserver pgp.mit.edu 0xcbcb082a1bb943db' \
"sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
'sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mariadb-server' \
'mysql -uroot -prootpass -e "create database mytest; use mytest; create table t(a int primary key); insert into t values (1); select * from t"' \
'sudo rm -v /etc/apt/sources.list.d/tmp.list' \
'sudo DEBIAN_FRONTEND=noninteractive apt-get update' \
"sudo sh -c 'cat /tmp/sources.append >> /etc/apt/sources.list'" \
'sudo DEBIAN_FRONTEND=noninteractive apt-get install -y patch libaio1 debconf-utils' \
'sudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y'; \
done
```
Add Key to known\_hosts
-----------------------
Do the following on each kvm host server (terrier, terrier2, i7, etc...) to add the VMs to known\_hosts.
```
# saucy-amd64
cp -avi /kvm/vms/vm-saucy-amd64-install.qcow2 /kvm/vms/vm-saucy-amd64-test.qcow2
kvm -m 1024 -hda /kvm/vms/vm-saucy-amd64-test.qcow2 -redir tcp:2287::22 -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user -nographic
sudo su - buildbot
ssh -p 2287 buildbot@localhost sudo shutdown -h now
# answer "yes" when prompted
exit # the buildbot user
rm -v /kvm/vms/vm-saucy-amd64-test.qcow2
# saucy-i386
cp -avi /kvm/vms/vm-saucy-i386-install.qcow2 /kvm/vms/vm-saucy-i386-test.qcow2
kvm -m 1024 -hda /kvm/vms/vm-saucy-i386-test.qcow2 -redir tcp:2288::22 -boot c -smp 2 -cpu qemu64 -net nic,model=virtio -net user -nographic
sudo su - buildbot
ssh -p 2288 buildbot@localhost sudo shutdown -h now
# answer "yes" when prompted
exit # the buildbot user
rm -v /kvm/vms/vm-saucy-i386-test.qcow2
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb REVERSE REVERSE
=======
Syntax
------
```
REVERSE(str)
```
Description
-----------
Returns the string `str` with the order of the characters reversed.
Examples
--------
```
SELECT REVERSE('desserts');
+---------------------+
| REVERSE('desserts') |
+---------------------+
| stressed |
+---------------------+
```
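Since the result is an ordinary string, `REVERSE` can be used directly in expressions, for example as a simple palindrome check:

```
SELECT REVERSE('racecar') = 'racecar' AS is_palindrome;
+---------------+
| is_palindrome |
+---------------+
|             1 |
+---------------+
```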
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Dynamic Columns from MariaDB 10 Dynamic Columns from MariaDB 10
===============================
**MariaDB starting with [10.0.1](https://mariadb.com/kb/en/mariadb-1001-release-notes/)**[MariaDB 10.0.1](https://mariadb.com/kb/en/mariadb-1001-release-notes/) introduced the following improvements to the [dynamic columns](../dynamic-columns/index) feature.
Column Name Support
-------------------
It is possible to refer to columns by name. Names can be used everywhere that, in [MariaDB 5.3](../what-is-mariadb-53/index), only numbers could be used:
* Create a dynamic column blob:
```
COLUMN_CREATE('int_col', 123 as int, 'double_col', 3.14 as double, 'string_col', 'text-data' as char);
```
* Set a column value:
```
COLUMN_ADD(dyncol_blob, 'intcol', 1234);
```
* Get a column value:
```
COLUMN_GET(dynstr, 'column1' as char(10));
```
* Check whether a column exists
```
COLUMN_EXISTS(dyncol_blob, 'column_name');
```
Changes in Behavior
-------------------
* Column list output now includes quoting:
```
select column_list(column_create(1, 22, 2, 23));
+------------------------------------------+
| column_list(column_create(1, 22, 2, 23)) |
+------------------------------------------+
| `1`,`2` |
+------------------------------------------+
select column_list(column_create('column1', 22, 'column2', 23));
+----------------------------------------------------------+
| column_list(column_create('column1', 22, 'column2', 23)) |
+----------------------------------------------------------+
| `column1`,`column2` |
+----------------------------------------------------------+
```
* Column name interpretation has changed so that the string is no longer converted to a number. Some "magic" tricks will therefore no longer work; for example, "1test" and "1" now become different column names:
```
select column_list(column_add(column_create('1a', 22), '1b', 23));
+------------------------------------------------------------+
| column_list(column_add(column_create('1a', 22), '1b', 23)) |
+------------------------------------------------------------+
| `1a`,`1b` |
+------------------------------------------------------------+
```
* Old behavior:
```
select column_list(column_add(column_create('1a', 22), '1b', 23));
+------------------------------------------------------------+
| column_list(column_add(column_create('1a', 22), '1b', 23)) |
+------------------------------------------------------------+
| 1 |
+------------------------------------------------------------+
```
New Functions
-------------
The following new functions have been added to dynamic columns in MariaDB 10
### COLUMN\_CHECK
[COLUMN\_CHECK](../column_check/index) is used to check a column's integrity. When it encounters an error it does not return illegal format errors but returns false instead. It also checks integrity more thoroughly and finds errors in the dynamic column internal structures which might not be found by other functions.
```
select column_check(column_create('column1', 22));
+--------------------------------------------+
| column_check(column_create('column1', 22)) |
+--------------------------------------------+
| 1 |
+--------------------------------------------+
select column_check('abracadabra');
+-----------------------------+
| column_check('abracadabra') |
+-----------------------------+
| 0 |
+-----------------------------+
```
### COLUMN\_JSON
[COLUMN\_JSON](../column_json/index) converts all dynamic column record content to a JSON object.
```
select column_json(column_create('column1', 1, 'column2', "two"));
+------------------------------------------------------------+
| column_json(column_create('column1', 1, 'column2', "two")) |
+------------------------------------------------------------+
| {"column1":1,"column2":"two"} |
+------------------------------------------------------------+
```
Other Changes
-------------
* All API functions have the prefix mariadb\_dyncol\_ (the old prefix dynamic\_column\_ is deprecated).
* The API was changed to work with the new format (the \*\_named functions).
* The 'delete' function was removed, because a column can be deleted by adding a NULL value.
* In the new format, 'time' and 'datetime' values are stored without microseconds if the microseconds are 0.
* New functions were added to the API (besides the two that represent the SQL-level functions):
+ A function to 'unpack' dynamic column content into arrays of values and names.
+ Three functions to get any column value as a string, an integer (long long), or a floating point number (double).
* A new "dynamic column" value type was added at the API level, which allows dynamic columns to be nested inside dynamic columns. At the SQL level such a value appears as a string, but if you use the dynamic column functions to construct an object, it is added as a dynamic column value. The JSON function represents such recursive constructions correctly, but limits the depth of the representation to the current implementation limit (internally, the nesting depth of dynamic columns is not limited).
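This nesting can be illustrated with the SQL-level functions (a sketch, assuming MariaDB 10.0 or later; the column names are hypothetical): a dynamic column value created with COLUMN\_CREATE can be stored inside another one, and COLUMN\_JSON renders the embedded value as a nested JSON object rather than a raw string.

```
select column_json(column_create('parent', column_create('child', 1)));
```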
Interface with Cassandra
------------------------
CassandraSE is no longer actively being developed and has been removed in [MariaDB 10.6](../what-is-mariadb-106/index). See [MDEV-23024](https://jira.mariadb.org/browse/MDEV-23024).
Some internal changes were added to dynamic columns to allow them to serve as an interface to Apache Cassandra dynamic columns. The [Cassandra engine](../cassandra-storage-engine/index) may pack all columns that were not mentioned in the MariaDB interface table definition into a dynamic column, and even bring changes in the dynamic column contents back to the Cassandra column family (the table analog in Cassandra).
See Also
--------
* [Dynamic Columns](../dynamic-columns/index)
* [Cassandra Storage Engine](../cassandra-storage-engine/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Installing MariaDB ColumnStore 1.4 Installing MariaDB ColumnStore 1.4
==================================
MariaDB ColumnStore 1.4 is included with MariaDB Enterprise Server 10.4.
MariaDB ColumnStore 5 and later have significant enhancements that are not available in MariaDB ColumnStore 1.4. Therefore, MariaDB recommends installing MariaDB ColumnStore 5 or later.
Resources
---------
* [Installing MariaDB ColumnStore 5](../installing-mariadb-columnstore-5/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb mysqlreport mysqlreport
===========
**MariaDB starting with [10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/)**From [MariaDB 10.4.6](https://mariadb.com/kb/en/mariadb-1046-release-notes/), `mariadb-report` is a symlink to `mysqlreport`.
**MariaDB starting with [10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/)**From [MariaDB 10.5.2](https://mariadb.com/kb/en/mariadb-1052-release-notes/), `mysqlreport` is the symlink, and `mariadb-report` the binary name.
`mysqlreport` makes a friendly report of important MariaDB status values. Actually, it makes a friendly report of nearly every status value from SHOW STATUS. Unlike SHOW STATUS which simply dumps over 100 values to the screen in one long list, mysqlreport interprets and formats the values and presents the basic values and many more inferred values in a human-readable format. Numerous example reports are available at the mysqlreport web page at <http://hackmysql.com/mysqlreport>.
The benefit of mysqlreport is that it allows you to very quickly see a wide array of performance indicators for your MariaDB server which would otherwise need to be calculated by hand from all the various SHOW STATUS values. For example, the Index Read Ratio is an important value but it's not present in SHOW STATUS; it's an inferred value (the ratio of Key\_reads to Key\_read\_requests).
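For example, the Index Read Ratio can be computed by hand from two [SHOW STATUS](../show-status/index) values (a sketch; the exact report layout is up to mysqlreport):

```
SHOW GLOBAL STATUS LIKE 'Key_read%';
-- Key_read_requests: requests to read a key block from the MyISAM key cache
-- Key_reads: requests that had to read the block from disk instead
-- Index Read Ratio = Key_reads / Key_read_requests (lower is better)
```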
This documentation outlines all the command line options in mysqlreport, most of which control which reports are printed. This document does not address how to interpret these reports; that topic is covered in the document Guide To Understanding mysqlreport at <http://hackmysql.com/mysqlreportguide>.
Usage
-----
```
mysqlreport [options]
```
mysqlreport options
-------------------
Technically, command line options are in the form `--option`, but `-option` works too. All options can be abbreviated if the abbreviation is unique. For example, option `--host` can be abbreviated to `--ho` but not `--h` because `--h` is ambiguous: it could mean `--host` or `--help`.
| Option | Description |
| --- | --- |
| `--all` | Equivalent to `--dtq --dms --com 3 --sas --qcache`. (Notice `--tab` is not invoked by `--all`.) |
| `--com N` | Print top N number of non-DMS Com\_ [status values](../server-status-variables/index) in descending order (after DMS in Questions report). If N is not given, default is 3. Such non-DMS Com\_ values include [Com\_change\_db](../server-status-variables/index#com_change_db), [Com\_show\_tables](../server-status-variables/index#com_show_tables), [Com\_rollback](../server-status-variables/index#com_rollback), etc. |
| `--dms` | Print Data Manipulation Statements (DMS) report (under DMS in Questions report). DMS are those from the [Data Manipulation](../data-manipulation/index) section. Currently, mysqlreport considers only [SELECT](../select/index), [INSERT](../insert/index), [REPLACE](../replace/index), [UPDATE](../update/index), and [DELETE](../delete/index). Each DMS is listed in descending order by count. |
| `--dtq` | Print Distribution of Total Queries (DTQ) report (under Total in Questions report). Queries (or Questions) can be divided into four main areas: DMS (see `--dms`), Com\_ (see `--com`), COM\_QUIT (see COM\_QUIT and Questions at <http://hackmysql.com/com_quit>), and Unknown. `--dtq` lists the number of queries in each of these areas in descending order. |
| `--email ADDRESS` | After printing the report to screen, email the report to ADDRESS. This option requires sendmail in /usr/sbin/, therefore it does not work on Windows. /usr/sbin/sendmail can be a sym link to qmail, for example, or any MTA that emulates sendmail's -t command line option and operation. The FROM: field is "mysqlreport", SUBJECT: is "MySQL status report". |
| `--flush-status` | Execute a [FLUSH STATUS](../flush/index) after generating the reports. If you do not have permissions in MariaDB to do this an error from DBD::mysql::st will be printed after the reports. |
| `--help` | Output help information and exit. |
| `--host ADDRESS` | Host address. |
| `--infile FILE` | Instead of getting [SHOW STATUS](../show-status/index) values from MariaDB, read values from FILE. FILE is often a copy of the output of SHOW STATUS including formatting characters (+, -). *mysqlreport* expects FILE to have the format " value number " where value is only alpha and underscore characters (A-Z and \_) and number is a positive integer. Anything before, between, or after value and number is ignored. *mysqlreport* also needs the following MariaDB server variables: [version](../server-system-variables/index#version), [table\_cache](../server-system-variables/index#table_open_cache), [max\_connections](../server-system-variables/index#max_connections), [key\_buffer\_size](../myisam-system-variables/index#key_buffer_size), [query\_cache\_size](../server-system-variables/index#query_cache_size). These values can be specified in INFILE in the format "name = value" where name is one of the aforementioned server variables and value is a positive integer with or without a trailing M and possible periods (for version). For example, to specify an 18M key\_buffer\_size: key\_buffer\_size = 18M. Or, a 256 table\_cache: table\_cache = 256. The M implies Megabytes not million, so 18M means 18,874,368 not 18,000,000. If these server variables are not specified the following defaults are used (respectively) which may cause strange values to be reported: 0.0.0, 64, 100, 8M, 0. |
| `--no-mycnf` | Makes mysqlreport not read `~/.my.cnf`, which it does by default otherwise. `--user` and `--password` always override values from `~/.my.cnf`. |
| `--outfile FILE` | After printing the report to screen, print the report to FILE too. Internally, mysqlreport always writes the report to a temp file first: `/tmp/mysqlreport.PID` on \*nix, `c:\mysqlreport.PID` on Windows (PID is the script's process ID). Then it prints the temp file to screen. Then, if `--outfile` is specified, the temp file is copied to OUTFILE. After `--email` (above), the temp file is deleted. |
| `--password` | As of version 2.3 `--password` can take the password on the command line like `--password FOO`. Using `--password` alone without giving a password on the command line causes mysqlreport to prompt for a password. |
| `--port PORT` | Port number. |
| `--qcache` | Print [Query Cache](../query-cache/index) report. |
| `--sas` | Print report for Select\_ and Sort\_ [status values](../server-status-variables/index) (after Questions report). See MySQL Select and Sort Status Variables at <http://hackmysql.com/selectandsort>. |
| `--socket SOCKET` | For connections to localhost, the Unix socket file to use, or, on Windows, the name of the named pipe to use. |
| `--tab` | Print Threads, Aborted, and Bytes status reports (after Created temp report). As of mysqlreport v2.3 the Threads report reports on all Threads\_ status values. |
| `--user USERNAME` | Username. |
Examples
--------
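The following invocations are sketches (the user name and file name are hypothetical):

```
# Print the full set of reports, prompting for the password:
mysqlreport --user root --password --all --tab

# Generate a report offline from a saved copy of SHOW STATUS output:
mysqlreport --infile status.txt
```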
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Information Schema KEYWORDS Table Information Schema KEYWORDS Table
=================================
**MariaDB starting with [10.6.3](https://mariadb.com/kb/en/mariadb-1063-release-notes/)**The `KEYWORDS` table was added in [MariaDB 10.6.3](https://mariadb.com/kb/en/mariadb-1063-release-notes/).
Description
-----------
The [Information Schema](../information_schema/index) `KEYWORDS` table contains the list of MariaDB keywords.
It contains a single column:
| Column | Description |
| --- | --- |
| `WORD` | Keyword |
The table is not a standard Information Schema table, and is a MariaDB extension.
Example
-------
```
SELECT * FROM INFORMATION_SCHEMA.KEYWORDS;
+-------------------------------+
| WORD |
+-------------------------------+
| && |
| <= |
| <> |
| != |
| >= |
| << |
| >> |
| <=> |
| ACCESSIBLE |
| ACCOUNT |
| ACTION |
| ADD |
| ADMIN |
| AFTER |
| AGAINST |
| AGGREGATE |
| ALL |
| ALGORITHM |
| ALTER |
| ALWAYS |
| ANALYZE |
| AND |
| ANY |
| AS |
| ASC |
| ASCII |
| ASENSITIVE |
| AT |
| ATOMIC |
| AUTHORS |
| AUTO_INCREMENT |
| AUTOEXTEND_SIZE |
| AUTO |
| AVG |
| AVG_ROW_LENGTH |
| BACKUP |
| BEFORE |
| BEGIN |
| BETWEEN |
| BIGINT |
| BINARY |
| BINLOG |
| BIT |
| BLOB |
| BLOCK |
| BODY |
| BOOL |
| BOOLEAN |
| BOTH |
| BTREE |
| BY |
| BYTE |
| CACHE |
| CALL |
| CASCADE |
| CASCADED |
| CASE |
| CATALOG_NAME |
| CHAIN |
| CHANGE |
| CHANGED |
| CHAR |
| CHARACTER |
| CHARSET |
| CHECK |
| CHECKPOINT |
| CHECKSUM |
| CIPHER |
| CLASS_ORIGIN |
| CLIENT |
| CLOB |
| CLOSE |
| COALESCE |
| CODE |
| COLLATE |
| COLLATION |
| COLUMN |
| COLUMN_NAME |
| COLUMNS |
| COLUMN_ADD |
| COLUMN_CHECK |
| COLUMN_CREATE |
| COLUMN_DELETE |
| COLUMN_GET |
| COMMENT |
| COMMIT |
| COMMITTED |
| COMPACT |
| COMPLETION |
| COMPRESSED |
| CONCURRENT |
| CONDITION |
| CONNECTION |
| CONSISTENT |
| CONSTRAINT |
| CONSTRAINT_CATALOG |
| CONSTRAINT_NAME |
| CONSTRAINT_SCHEMA |
| CONTAINS |
| CONTEXT |
| CONTINUE |
| CONTRIBUTORS |
| CONVERT |
| CPU |
| CREATE |
| CROSS |
| CUBE |
| CURRENT |
| CURRENT_DATE |
| CURRENT_POS |
| CURRENT_ROLE |
| CURRENT_TIME |
| CURRENT_TIMESTAMP |
| CURRENT_USER |
| CURSOR |
| CURSOR_NAME |
| CYCLE |
| DATA |
| DATABASE |
| DATABASES |
| DATAFILE |
| DATE |
| DATETIME |
| DAY |
| DAY_HOUR |
| DAY_MICROSECOND |
| DAY_MINUTE |
| DAY_SECOND |
| DEALLOCATE |
| DEC |
| DECIMAL |
| DECLARE |
| DEFAULT |
| DEFINER |
| DELAYED |
| DELAY_KEY_WRITE |
| DELETE |
| DELETE_DOMAIN_ID |
| DESC |
| DESCRIBE |
| DES_KEY_FILE |
| DETERMINISTIC |
| DIAGNOSTICS |
| DIRECTORY |
| DISABLE |
| DISCARD |
| DISK |
| DISTINCT |
| DISTINCTROW |
| DIV |
| DO |
| DOUBLE |
| DO_DOMAIN_IDS |
| DROP |
| DUAL |
| DUMPFILE |
| DUPLICATE |
| DYNAMIC |
| EACH |
| ELSE |
| ELSEIF |
| ELSIF |
| EMPTY |
| ENABLE |
| ENCLOSED |
| END |
| ENDS |
| ENGINE |
| ENGINES |
| ENUM |
| ERROR |
| ERRORS |
| ESCAPE |
| ESCAPED |
| EVENT |
| EVENTS |
| EVERY |
| EXAMINED |
| EXCEPT |
| EXCHANGE |
| EXCLUDE |
| EXECUTE |
| EXCEPTION |
| EXISTS |
| EXIT |
| EXPANSION |
| EXPIRE |
| EXPORT |
| EXPLAIN |
| EXTENDED |
| EXTENT_SIZE |
| FALSE |
| FAST |
| FAULTS |
| FEDERATED |
| FETCH |
| FIELDS |
| FILE |
| FIRST |
| FIXED |
| FLOAT |
| FLOAT4 |
| FLOAT8 |
| FLUSH |
| FOLLOWING |
| FOLLOWS |
| FOR |
| FORCE |
| FOREIGN |
| FORMAT |
| FOUND |
| FROM |
| FULL |
| FULLTEXT |
| FUNCTION |
| GENERAL |
| GENERATED |
| GET_FORMAT |
| GET |
| GLOBAL |
| GOTO |
| GRANT |
| GRANTS |
| GROUP |
| HANDLER |
| HARD |
| HASH |
| HAVING |
| HELP |
| HIGH_PRIORITY |
| HISTORY |
| HOST |
| HOSTS |
| HOUR |
| HOUR_MICROSECOND |
| HOUR_MINUTE |
| HOUR_SECOND |
| ID |
| IDENTIFIED |
| IF |
| IGNORE |
| IGNORED |
| IGNORE_DOMAIN_IDS |
| IGNORE_SERVER_IDS |
| IMMEDIATE |
| IMPORT |
| INTERSECT |
| IN |
| INCREMENT |
| INDEX |
| INDEXES |
| INFILE |
| INITIAL_SIZE |
| INNER |
| INOUT |
| INSENSITIVE |
| INSERT |
| INSERT_METHOD |
| INSTALL |
| INT |
| INT1 |
| INT2 |
| INT3 |
| INT4 |
| INT8 |
| INTEGER |
| INTERVAL |
| INVISIBLE |
| INTO |
| IO |
| IO_THREAD |
| IPC |
| IS |
| ISOLATION |
| ISOPEN |
| ISSUER |
| ITERATE |
| INVOKER |
| JOIN |
| JSON |
| JSON_TABLE |
| KEY |
| KEYS |
| KEY_BLOCK_SIZE |
| KILL |
| LANGUAGE |
| LAST |
| LAST_VALUE |
| LASTVAL |
| LEADING |
| LEAVE |
| LEAVES |
| LEFT |
| LESS |
| LEVEL |
| LIKE |
| LIMIT |
| LINEAR |
| LINES |
| LIST |
| LOAD |
| LOCAL |
| LOCALTIME |
| LOCALTIMESTAMP |
| LOCK |
| LOCKED |
| LOCKS |
| LOGFILE |
| LOGS |
| LONG |
| LONGBLOB |
| LONGTEXT |
| LOOP |
| LOW_PRIORITY |
| MASTER |
| MASTER_CONNECT_RETRY |
| MASTER_DELAY |
| MASTER_GTID_POS |
| MASTER_HOST |
| MASTER_LOG_FILE |
| MASTER_LOG_POS |
| MASTER_PASSWORD |
| MASTER_PORT |
| MASTER_SERVER_ID |
| MASTER_SSL |
| MASTER_SSL_CA |
| MASTER_SSL_CAPATH |
| MASTER_SSL_CERT |
| MASTER_SSL_CIPHER |
| MASTER_SSL_CRL |
| MASTER_SSL_CRLPATH |
| MASTER_SSL_KEY |
| MASTER_SSL_VERIFY_SERVER_CERT |
| MASTER_USER |
| MASTER_USE_GTID |
| MASTER_HEARTBEAT_PERIOD |
| MATCH |
| MAX_CONNECTIONS_PER_HOUR |
| MAX_QUERIES_PER_HOUR |
| MAX_ROWS |
| MAX_SIZE |
| MAX_STATEMENT_TIME |
| MAX_UPDATES_PER_HOUR |
| MAX_USER_CONNECTIONS |
| MAXVALUE |
| MEDIUM |
| MEDIUMBLOB |
| MEDIUMINT |
| MEDIUMTEXT |
| MEMORY |
| MERGE |
| MESSAGE_TEXT |
| MICROSECOND |
| MIDDLEINT |
| MIGRATE |
| MINUS |
| MINUTE |
| MINUTE_MICROSECOND |
| MINUTE_SECOND |
| MINVALUE |
| MIN_ROWS |
| MOD |
| MODE |
| MODIFIES |
| MODIFY |
| MONITOR |
| MONTH |
| MUTEX |
| MYSQL |
| MYSQL_ERRNO |
| NAME |
| NAMES |
| NATIONAL |
| NATURAL |
| NCHAR |
| NESTED |
| NEVER |
| NEW |
| NEXT |
| NEXTVAL |
| NO |
| NOMAXVALUE |
| NOMINVALUE |
| NOCACHE |
| NOCYCLE |
| NO_WAIT |
| NOWAIT |
| NODEGROUP |
| NONE |
| NOT |
| NOTFOUND |
| NO_WRITE_TO_BINLOG |
| NULL |
| NUMBER |
| NUMERIC |
| NVARCHAR |
| OF |
| OFFSET |
| OLD_PASSWORD |
| ON |
| ONE |
| ONLINE |
| ONLY |
| OPEN |
| OPTIMIZE |
| OPTIONS |
| OPTION |
| OPTIONALLY |
| OR |
| ORDER |
| ORDINALITY |
| OTHERS |
| OUT |
| OUTER |
| OUTFILE |
| OVER |
| OVERLAPS |
| OWNER |
| PACKAGE |
| PACK_KEYS |
| PAGE |
| PAGE_CHECKSUM |
| PARSER |
| PARSE_VCOL_EXPR |
| PATH |
| PERIOD |
| PARTIAL |
| PARTITION |
| PARTITIONING |
| PARTITIONS |
| PASSWORD |
| PERSISTENT |
| PHASE |
| PLUGIN |
| PLUGINS |
| PORT |
| PORTION |
| PRECEDES |
| PRECEDING |
| PRECISION |
| PREPARE |
| PRESERVE |
| PREV |
| PREVIOUS |
| PRIMARY |
| PRIVILEGES |
| PROCEDURE |
| PROCESS |
| PROCESSLIST |
| PROFILE |
| PROFILES |
| PROXY |
| PURGE |
| QUARTER |
| QUERY |
| QUICK |
| RAISE |
| RANGE |
| RAW |
| READ |
| READ_ONLY |
| READ_WRITE |
| READS |
| REAL |
| REBUILD |
| RECOVER |
| RECURSIVE |
| REDO_BUFFER_SIZE |
| REDOFILE |
| REDUNDANT |
| REFERENCES |
| REGEXP |
| RELAY |
| RELAYLOG |
| RELAY_LOG_FILE |
| RELAY_LOG_POS |
| RELAY_THREAD |
| RELEASE |
| RELOAD |
| REMOVE |
| RENAME |
| REORGANIZE |
| REPAIR |
| REPEATABLE |
| REPLACE |
| REPLAY |
| REPLICA |
| REPLICAS |
| REPLICA_POS |
| REPLICATION |
| REPEAT |
| REQUIRE |
| RESET |
| RESIGNAL |
| RESTART |
| RESTORE |
| RESTRICT |
| RESUME |
| RETURNED_SQLSTATE |
| RETURN |
| RETURNING |
| RETURNS |
| REUSE |
| REVERSE |
| REVOKE |
| RIGHT |
| RLIKE |
| ROLE |
| ROLLBACK |
| ROLLUP |
| ROUTINE |
| ROW |
| ROWCOUNT |
| ROWNUM |
| ROWS |
| ROWTYPE |
| ROW_COUNT |
| ROW_FORMAT |
| RTREE |
| SAVEPOINT |
| SCHEDULE |
| SCHEMA |
| SCHEMA_NAME |
| SCHEMAS |
| SECOND |
| SECOND_MICROSECOND |
| SECURITY |
| SELECT |
| SENSITIVE |
| SEPARATOR |
| SEQUENCE |
| SERIAL |
| SERIALIZABLE |
| SESSION |
| SERVER |
| SET |
| SETVAL |
| SHARE |
| SHOW |
| SHUTDOWN |
| SIGNAL |
| SIGNED |
| SIMPLE |
| SKIP |
| SLAVE |
| SLAVES |
| SLAVE_POS |
| SLOW |
| SNAPSHOT |
| SMALLINT |
| SOCKET |
| SOFT |
| SOME |
| SONAME |
| SOUNDS |
| SOURCE |
| STAGE |
| STORED |
| SPATIAL |
| SPECIFIC |
| REF_SYSTEM_ID |
| SQL |
| SQLEXCEPTION |
| SQLSTATE |
| SQLWARNING |
| SQL_BIG_RESULT |
| SQL_BUFFER_RESULT |
| SQL_CACHE |
| SQL_CALC_FOUND_ROWS |
| SQL_NO_CACHE |
| SQL_SMALL_RESULT |
| SQL_THREAD |
| SQL_TSI_SECOND |
| SQL_TSI_MINUTE |
| SQL_TSI_HOUR |
| SQL_TSI_DAY |
| SQL_TSI_WEEK |
| SQL_TSI_MONTH |
| SQL_TSI_QUARTER |
| SQL_TSI_YEAR |
| SSL |
| START |
| STARTING |
| STARTS |
| STATEMENT |
| STATS_AUTO_RECALC |
| STATS_PERSISTENT |
| STATS_SAMPLE_PAGES |
| STATUS |
| STOP |
| STORAGE |
| STRAIGHT_JOIN |
| STRING |
| SUBCLASS_ORIGIN |
| SUBJECT |
| SUBPARTITION |
| SUBPARTITIONS |
| SUPER |
| SUSPEND |
| SWAPS |
| SWITCHES |
| SYSDATE |
| SYSTEM |
| SYSTEM_TIME |
| TABLE |
| TABLE_NAME |
| TABLES |
| TABLESPACE |
| TABLE_CHECKSUM |
| TEMPORARY |
| TEMPTABLE |
| TERMINATED |
| TEXT |
| THAN |
| THEN |
| TIES |
| TIME |
| TIMESTAMP |
| TIMESTAMPADD |
| TIMESTAMPDIFF |
| TINYBLOB |
| TINYINT |
| TINYTEXT |
| TO |
| TRAILING |
| TRANSACTION |
| TRANSACTIONAL |
| THREADS |
| TRIGGER |
| TRIGGERS |
| TRUE |
| TRUNCATE |
| TYPE |
| TYPES |
| UNBOUNDED |
| UNCOMMITTED |
| UNDEFINED |
| UNDO_BUFFER_SIZE |
| UNDOFILE |
| UNDO |
| UNICODE |
| UNION |
| UNIQUE |
| UNKNOWN |
| UNLOCK |
| UNINSTALL |
| UNSIGNED |
| UNTIL |
| UPDATE |
| UPGRADE |
| USAGE |
| USE |
| USER |
| USER_RESOURCES |
| USE_FRM |
| USING |
| UTC_DATE |
| UTC_TIME |
| UTC_TIMESTAMP |
| VALUE |
| VALUES |
| VARBINARY |
| VARCHAR |
| VARCHARACTER |
| VARCHAR2 |
| VARIABLES |
| VARYING |
| VIA |
| VIEW |
| VIRTUAL |
| VISIBLE |
| VERSIONING |
| WAIT |
| WARNINGS |
| WEEK |
| WEIGHT_STRING |
| WHEN |
| WHERE |
| WHILE |
| WINDOW |
| WITH |
| WITHIN |
| WITHOUT |
| WORK |
| WRAPPER |
| WRITE |
| X509 |
| XOR |
| XA |
| XML |
| YEAR |
| YEAR_MONTH |
| ZEROFILL |
| || |
+-------------------------------+
694 rows in set (0.000 sec)
```
See Also
--------
* [Reserved Words](../reserved-words/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb SYSTEM_USER SYSTEM\_USER
============
Syntax
------
```
SYSTEM_USER()
```
Description
-----------
SYSTEM\_USER() is a synonym for [USER()](../user/index).
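For example (the result depends on the account you are connected as):

```
SELECT SYSTEM_USER();
+----------------+
| SYSTEM_USER()  |
+----------------+
| root@localhost |
+----------------+
```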
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb aria_read_log aria\_read\_log
===============
**aria\_read\_log** is a tool for displaying and applying log records from an [Aria](../aria/index) transaction log.
Note: Aria is compiled without -DIDENTICAL\_PAGES\_AFTER\_RECOVERY which means that the table files are not byte-to-byte identical to files created during normal execution. This should be ok, except for test scripts that try to compare files before and after recovery.
Usage:
```
aria_read_log OPTIONS
```
You need to use one of `-d` or `-a`.
Options
-------
The following options can be passed on the command line to aria\_read\_log, or set in the `[aria_read_log]` section of your [my.cnf](../configuring-mariadb-with-mycnf/index) file.
| Option | Description |
| --- | --- |
| -a, --apply | Apply the log to tables. This modifies tables, so you should make a backup first! Displays a lot of information if not run with --silent. |
| --character-sets-dir=name | Directory where character sets are. |
| -c, --check | With --display-only, check whether each record is fully readable (for debugging). |
| -?, --help | Display help and exit. |
| -d, --display-only | Display brief info read from records' header. |
| -e, --end-lsn=# | Stop applying at this LSN. If end-lsn is used, UNDOs will not be applied. |
| -h, --aria-log-dir-path=name | Path to the directory where the transaction log is stored. |
| -P, --page-buffer-size=# | The size of the buffer used for index blocks for Aria tables. |
| -l, --print-log-control-file | Print the content of the aria\_log\_control\_file. From [MariaDB 10.4.1](https://mariadb.com/kb/en/mariadb-1041-release-notes/). |
| -o, --start-from-lsn=# | Start reading log from this lsn. |
| -C, --start-from-checkpoint | Start applying from last checkpoint. |
| -s, --silent | Print less information during apply/undo phase. |
| -T, --tables-to-redo=name | List of comma-separated tables that we should apply REDO on. Use this if you only want to recover some tables. |
| -t, --tmpdir=name | Path for temporary files. Multiple paths can be specified, separated by colon (:) |
| --translog-buffer-size=# | The size of the buffer used for transaction log for Aria tables. |
| -u, --undo | Apply UNDO records to tables. (disable with --disable-undo) (Defaults to on; use --skip-undo to disable.) |
| -v, --verbose | Print more information during apply/undo phase. |
| -V, --version | Print version and exit. |
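For example (a sketch; the table name is hypothetical), you can first inspect the log and then apply it to selected tables only:

```
aria_read_log --display-only
aria_read_log --apply --tables-to-redo=test.t1 --silent
```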
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb create_synonym_db create\_synonym\_db
===================
Syntax
------
```
create_synonym_db(db_name,synonym)
# db_name (VARCHAR(64))
# synonym (VARCHAR(64))
```
Description
-----------
`create_synonym_db` is a [stored procedure](../stored-procedures/index) available with the [Sys Schema](../sys-schema/index).
Takes a source database name *db\_name* and *synonym* name and creates a synonym database with views that point to all of the tables within the source database. Useful for example for creating a synonym for the [performance\_schema](../performance-schema/index) or [information\_schema](../information-schema/index) databases.
Returns an error if the source database doesn't exist, or the synonym already exists.
Example
-------
```
SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
CALL sys.create_synonym_db('performance_schema', 'perf');
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Created 81 views in the `perf` database |
+-----------------------------------------+
SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| perf |
| performance_schema |
| sys |
| test |
+--------------------+
SHOW FULL TABLES FROM perf;
+------------------------------------------------------+------------+
| Tables_in_perf | Table_type |
+------------------------------------------------------+------------+
| accounts | VIEW |
| cond_instances | VIEW |
| events_stages_current | VIEW |
| events_stages_history | VIEW |
| events_stages_history_long | VIEW |
...
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Information Schema XTRADB_INTERNAL_HASH_TABLES Table Information Schema XTRADB\_INTERNAL\_HASH\_TABLES Table
=======================================================
**MariaDB starting with [10.0.9](https://mariadb.com/kb/en/mariadb-1009-release-notes/)**The `XTRADB_INTERNAL_HASH_TABLES` table was added in [MariaDB 10.0.9](https://mariadb.com/kb/en/mariadb-1009-release-notes/).
The [Information Schema](../information_schema/index) `XTRADB_INTERNAL_HASH_TABLES` table contains InnoDB/XtraDB hash table memory usage information.
The `PROCESS` [privilege](../grant/index) is required to view the table.
It has the following columns:
| Column | Description |
| --- | --- |
| `INTERNAL_HASH_TABLE_NAME` | Hash table name |
| `TOTAL_MEMORY` | Total memory |
| `CONSTANT_MEMORY` | Constant memory |
| `VARIABLE_MEMORY` | Variable memory |
Example
-------
```
SELECT * FROM information_schema.XTRADB_INTERNAL_HASH_TABLES;
+--------------------------------+--------------+-----------------+-----------------+
| INTERNAL_HASH_TABLE_NAME | TOTAL_MEMORY | CONSTANT_MEMORY | VARIABLE_MEMORY |
+--------------------------------+--------------+-----------------+-----------------+
| Adaptive hash index | 2217568 | 2213368 | 4200 |
| Page hash (buffer pool 0 only) | 139112 | 139112 | 0 |
| Dictionary Cache | 613423 | 554768 | 58655 |
| File system | 816368 | 812272 | 4096 |
| Lock System | 332872 | 332872 | 0 |
| Recovery System | 0 | 0 | 0 |
+--------------------------------+--------------+-----------------+-----------------+
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Performance Schema events_stages_summary_global_by_event_name Table Performance Schema events\_stages\_summary\_global\_by\_event\_name Table
=========================================================================
The table lists stage events, summarized globally by event name.
It contains the following columns:
| Column | Description |
| --- | --- |
| `EVENT_NAME` | Event name. |
| `COUNT_STAR` | Number of summarized events, which includes all timed and untimed events. |
| `SUM_TIMER_WAIT` | Total wait time of the timed summarized events. |
| `MIN_TIMER_WAIT` | Minimum wait time of the timed summarized events. |
| `AVG_TIMER_WAIT` | Average wait time of the timed summarized events. |
| `MAX_TIMER_WAIT` | Maximum wait time of the timed summarized events. |
Example
-------
```
SELECT * FROM events_stages_summary_global_by_event_name\G
...
*************************** 106. row ***************************
EVENT_NAME: stage/sql/Waiting for trigger metadata lock
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
*************************** 107. row ***************************
EVENT_NAME: stage/sql/Waiting for event metadata lock
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
*************************** 108. row ***************************
EVENT_NAME: stage/sql/Waiting for commit lock
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
*************************** 109. row ***************************
EVENT_NAME: stage/aria/Waiting for a resource
COUNT_STAR: 0
SUM_TIMER_WAIT: 0
MIN_TIMER_WAIT: 0
AVG_TIMER_WAIT: 0
MAX_TIMER_WAIT: 0
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Partitioning Types Overview Partitioning Types Overview
===========================
A partitioning type determines how a partitioned table's rows are distributed across partitions. Some partition types require the user to specify a partitioning expression that determines in which partition a row will be stored.
The size of individual partitions depends on the partitioning type. Read and write performance are affected by the partitioning expression. Therefore, these choices should be made carefully.
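For example (a sketch), a RANGE-partitioned table distributes rows according to a partitioning expression on one of its columns:

```
CREATE TABLE orders (
  id INT NOT NULL,
  created DATE NOT NULL
)
PARTITION BY RANGE (YEAR(created)) (
  PARTITION p2022 VALUES LESS THAN (2023),
  PARTITION p2023 VALUES LESS THAN (2024),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);
```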
MariaDB supports the following partitioning types:
* [RANGE](../range-partitioning-type/index)
* [LIST](../list-partitioning/index)
* [RANGE COLUMNS and LIST COLUMNS](../range-columns-and-list-columns-partitioning-types/index), HASH COLUMNS
* [HASH](../hash-partitioning-type/index)
* KEY
* LINEAR HASH, LINEAR KEY
* [SYSTEM\_TIME](../system-versioned-tables/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Information Schema GLOBAL_STATUS and SESSION_STATUS Tables Information Schema GLOBAL\_STATUS and SESSION\_STATUS Tables
============================================================
The [Information Schema](../information_schema/index) `GLOBAL_STATUS` and `SESSION_STATUS` tables store a record of all [status variables](../server-status-variables/index) and their global and session values respectively. This is the same information as displayed by the `[SHOW STATUS](../show-status/index)` commands `SHOW GLOBAL STATUS` and `SHOW SESSION STATUS`.
They contain the following columns:
| Column | Description |
| --- | --- |
| `VARIABLE_NAME` | Status variable name. |
| `VARIABLE_VALUE` | Global or session value. |
Example
-------
```
SELECT * FROM information_schema.GLOBAL_STATUS;
+-----------------------------------------------+--------------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+-----------------------------------------------+--------------------+
...
| BINLOG_SNAPSHOT_FILE | mariadb-bin.000208 |
| BINLOG_SNAPSHOT_POSITION | 369 |
...
| THREADS_CONNECTED | 1 |
| THREADS_CREATED | 1 |
| THREADS_RUNNING | 1 |
| UPTIME | 57358 |
| UPTIME_SINCE_FLUSH_STATUS | 57358 |
+-----------------------------------------------+--------------------+
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Performance Schema events_statements_summary_by_digest Table Performance Schema events\_statements\_summary\_by\_digest Table
================================================================
The [Performance Schema digest](../performance-schema-digests/index) is a hashed, normalized form of a statement with the specific data values removed. It allows statistics to be gathered for similar kinds of statements.
The [Performance Schema](../performance-schema/index) `events_statements_summary_by_digest` table records statement events summarized by schema and digest. It contains the following columns:
| Column | Description |
| --- | --- |
| `SCHEMA_NAME` | Database name. Records are summarized together with `DIGEST`. |
| `DIGEST` | [Performance Schema digest](../performance-schema-digests/index). Records are summarized together with `SCHEMA_NAME`. |
| `DIGEST_TEXT` | The unhashed form of the digest. |
| `COUNT_STAR` | Number of summarized events. |
| `SUM_TIMER_WAIT` | Total wait time of the summarized events that are timed. |
| `MIN_TIMER_WAIT` | Minimum wait time of the summarized events that are timed. |
| `AVG_TIMER_WAIT` | Average wait time of the summarized events that are timed. |
| `MAX_TIMER_WAIT` | Maximum wait time of the summarized events that are timed. |
| `SUM_LOCK_TIME` | Sum of the `LOCK_TIME` column in the `events_statements_current` table. |
| `SUM_ERRORS` | Sum of the `ERRORS` column in the `events_statements_current` table. |
| `SUM_WARNINGS` | Sum of the `WARNINGS` column in the `events_statements_current` table. |
| `SUM_ROWS_AFFECTED` | Sum of the `ROWS_AFFECTED` column in the `events_statements_current` table. |
| `SUM_ROWS_SENT` | Sum of the `ROWS_SENT` column in the `events_statements_current` table. |
| `SUM_ROWS_EXAMINED` | Sum of the `ROWS_EXAMINED` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_DISK_TABLES` | Sum of the `CREATED_TMP_DISK_TABLES` column in the `events_statements_current` table. |
| `SUM_CREATED_TMP_TABLES` | Sum of the `CREATED_TMP_TABLES` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_JOIN` | Sum of the `SELECT_FULL_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_FULL_RANGE_JOIN` | Sum of the `SELECT_FULL_RANGE_JOIN` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE` | Sum of the `SELECT_RANGE` column in the `events_statements_current` table. |
| `SUM_SELECT_RANGE_CHECK` | Sum of the `SELECT_RANGE_CHECK` column in the `events_statements_current` table. |
| `SUM_SELECT_SCAN` | Sum of the `SELECT_SCAN` column in the `events_statements_current` table. |
| `SUM_SORT_MERGE_PASSES` | Sum of the `SORT_MERGE_PASSES` column in the `events_statements_current` table. |
| `SUM_SORT_RANGE` | Sum of the `SORT_RANGE` column in the `events_statements_current` table. |
| `SUM_SORT_ROWS` | Sum of the `SORT_ROWS` column in the `events_statements_current` table. |
| `SUM_SORT_SCAN` | Sum of the `SORT_SCAN` column in the `events_statements_current` table. |
| `SUM_NO_INDEX_USED` | Sum of the `NO_INDEX_USED` column in the `events_statements_current` table. |
| `SUM_NO_GOOD_INDEX_USED` | Sum of the `NO_GOOD_INDEX_USED` column in the `events_statements_current` table. |
| `FIRST_SEEN` | Time at which the digest was first seen. |
| `LAST_SEEN` | Time at which the digest was most recently seen. |
The `*_TIMER_WAIT` columns only calculate results for timed events, as non-timed events have a `NULL` wait time.
The `events_statements_summary_by_digest` table is limited in size by the [performance\_schema\_digests\_size](../performance-schema-system-variables/index#performance_schema_digests_size) system variable. Once the limit has been reached and the table is full, all entries are aggregated in a row with a `NULL` digest. The `COUNT_STAR` value of this `NULL` row indicates how many digests are recorded in the row and therefore gives an indication of whether `performance_schema_digests_size` should be increased to provide more accurate statistics.
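For example, the most expensive digests, and the size of the catch-all `NULL` row, can be inspected with queries like the following sketch (timer values are in picoseconds):

```
-- Top statements by total wait time
SELECT SCHEMA_NAME, LEFT(DIGEST_TEXT, 50) AS digest_text,
       COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 5;

-- How many statements have been aggregated into the catch-all NULL row
SELECT COUNT_STAR
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST IS NULL;
```

A large `COUNT_STAR` in the second query suggests increasing `performance_schema_digests_size`.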
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb mysql_zap mysql\_zap
==========
**MariaDB until [10.1](../what-is-mariadb-101/index)**mysql\_zap was removed in [MariaDB 10.2](../what-is-mariadb-102/index). pkill can be used [as an alternative](#pkill-as-an-alternative).
*mysql\_zap* kills processes that match a pattern. It uses the *ps* command and Unix signals, so it runs on Unix and Unix-like systems.
Invoke mysql\_zap like this:
```
shell> mysql_zap [-signal] [-?Ift] pattern
```
A process matches if its output line from the *ps* command contains the pattern. By default, mysql\_zap asks for confirmation for each process. Respond *y* to kill the process, or *q* to exit mysql\_zap. For any other response, mysql\_zap does not attempt to kill the process.
If the *-signal* option is given, it specifies the name or number of the signal to send to each process. Otherwise, mysql\_zap tries first with TERM (signal 15) and then with KILL (signal 9).
mysql\_zap supports the following additional options:
| Option | Description |
| --- | --- |
| `--help`, `-?`, `-I` | Display a help message and exit. |
| `-f` | Force mode. *mysql\_zap* attempts to kill each process without confirmation. |
| `-t` | Test mode. Display information about each process but do not kill it. |
Example
-------
```
localhost:~# mysql_zap -t mysql
stty: standard input: unable to perform all requested operations
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 4073 0.0 0.2 3804 1308 ? S 08:51 0:00 /bin/bash /usr/bin/mysqld_safe
mysql 4258 3.3 15.7 939740 81236 ? Sl 08:51 30:18 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
```
pkill as an Alternative
-----------------------
*pkill* can be used as an alternative to *mysql\_zap*. An important distinction is that mysql\_zap kills the server 'gently' first (with signal 15, TERM), and only tries signal 9 (KILL) if the server does not die within a limited time.
To use pkill in the same way, one must run it twice: `pkill --signal 15 mysqld; sleep 10; pkill --signal 9 mysqld`
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb WEEKDAY WEEKDAY
=======
Syntax
------
```
WEEKDAY(date)
```
Description
-----------
Returns the weekday index for `date` (`0` = Monday, `1` = Tuesday, ... `6` = Sunday).
This contrasts with `[DAYOFWEEK()](../dayofweek/index)` which follows the ODBC standard (`1` = Sunday, `2` = Monday, ..., `7` = Saturday).
Examples
--------
```
SELECT WEEKDAY('2008-02-03 22:23:00');
+--------------------------------+
| WEEKDAY('2008-02-03 22:23:00') |
+--------------------------------+
| 6 |
+--------------------------------+
SELECT WEEKDAY('2007-11-06');
+-----------------------+
| WEEKDAY('2007-11-06') |
+-----------------------+
| 1 |
+-----------------------+
```
```
CREATE TABLE t1 (d DATETIME);
INSERT INTO t1 VALUES
("2007-01-30 21:31:07"),
("1983-10-15 06:42:51"),
("2011-04-21 12:34:56"),
("2011-10-30 06:31:41"),
("2011-01-30 14:03:25"),
("2004-10-07 11:19:34");
```
```
SELECT d FROM t1 WHERE WEEKDAY(d) = 6;
+---------------------+
| d |
+---------------------+
| 2011-10-30 06:31:41 |
| 2011-01-30 14:03:25 |
+---------------------+
```
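The offset between the two numbering schemes can be checked side by side. 2008-02-03 is a Sunday, so `WEEKDAY()` returns 6 while `DAYOFWEEK()` returns 1:

```
SELECT WEEKDAY('2008-02-03') AS weekday, DAYOFWEEK('2008-02-03') AS dayofweek;
+---------+-----------+
| weekday | dayofweek |
+---------+-----------+
|       6 |         1 |
+---------+-----------+
```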
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
| programming_docs |
mariadb MariaDB ColumnStore 1.5 Upgrades MariaDB ColumnStore 1.5 Upgrades
================================
Choose an option below to see the corresponding upgrade procedure:
* [Upgrade a Single-Node MariaDB ColumnStore 1.5 Deployment with MariaDB Community Server 10.5](https://mariadb.com/docs/deploy/columnstore-cs105/#deploy-upgrade-community-single-columnstore-col15-cs105)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb ColumnStore Drop Table ColumnStore Drop Table
======================
The [DROP TABLE](../drop-table/index) statement deletes a table from ColumnStore.
Syntax
------
```
DROP TABLE [IF EXISTS]
tbl_name
 [RESTRICT]
```
The RESTRICT clause causes the table to be dropped in the front end only. This can be useful when the table has been dropped on one user module and needs to be synced to the others.
The following statement drops the *orders* table on the front end only:
```
DROP TABLE orders RESTRICT;
```
See also
--------
* [DROP TABLE](../drop-table/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb DROP SEQUENCE DROP SEQUENCE
=============
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**DROP SEQUENCE was introduced in [MariaDB 10.3](../what-is-mariadb-103/index).
Syntax
------
```
DROP [TEMPORARY] SEQUENCE [IF EXISTS] [/*COMMENT TO SAVE*/]
sequence_name [, sequence_name] ...
```
Description
-----------
`DROP SEQUENCE` removes one or more sequences created with [CREATE SEQUENCE](../create-sequence/index). You must have the `DROP` privilege for each sequence. MariaDB returns an error indicating by name which non-existing sequences it was unable to drop, but it also drops all of the sequences in the list that do exist.
Important: When a sequence is dropped, user privileges on it are not automatically dropped. See [GRANT](../grant/index).
If another connection is using the sequence, a metadata lock is active, and this statement will wait until the lock is released. This is also true for non-transactional tables.
For each referenced sequence, DROP SEQUENCE drops a temporary sequence with that name, if it exists. If it does not exist, and the `TEMPORARY` keyword is not used, it drops a non-temporary sequence with the same name, if it exists. The `TEMPORARY` keyword ensures that a non-temporary sequence will not accidentally be dropped.
Use `IF EXISTS` to prevent an error from occurring for sequences that do not exist. A NOTE is generated for each non-existent sequence when using `IF EXISTS`. See [SHOW WARNINGS](../show-warnings/index).
DROP SEQUENCE requires the [DROP privilege](../grant/index).
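A minimal sketch of the `IF EXISTS` behavior (the sequence name is illustrative):

```
CREATE SEQUENCE s1;
DROP SEQUENCE s1;            -- succeeds

DROP SEQUENCE IF EXISTS s1;  -- no error: a NOTE is generated instead
SHOW WARNINGS;
```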
Notes
-----
DROP SEQUENCE only removes sequences, not tables. However, [DROP TABLE](../drop-table/index) can remove both sequences and tables.
See Also
--------
* [Sequence Overview](../sequence-overview/index)
* [CREATE SEQUENCE](../create-sequence/index)
* [ALTER SEQUENCE](../alter-sequence/index)
* [DROP TABLE](../drop-table/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb MariaDB ColumnStore 1.4 Upgrades MariaDB ColumnStore 1.4 Upgrades
================================
Choose an option below to see the corresponding upgrade procedure:
* [Upgrade a Single-Node MariaDB ColumnStore 1.4 Deployment with MariaDB Enterprise Server 10.4](https://mariadb.com/docs/deploy/columnstore-es104/#deploy-upgrade-enterprise-single-columnstore-col14-es104)
* [Upgrade a Multi-Node MariaDB ColumnStore 1.4 Deployment with MariaDB Enterprise Server 10.4](https://mariadb.com/docs/deploy/columnstore-es104/#deploy-upgrade-enterprise-multi-columnstore-col14-es104)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb ST_PointFromText ST\_PointFromText
=================
Syntax
------
```
ST_PointFromText(wkt[,srid])
PointFromText(wkt[,srid])
```
Description
-----------
Constructs a [POINT](../point/index) value using its [WKT](../wkt-definition/index) representation and [SRID](../srid/index).
`ST_PointFromText()` and `PointFromText()` are synonyms.
Examples
--------
```
CREATE TABLE gis_point (g POINT);
SHOW FIELDS FROM gis_point;
INSERT INTO gis_point VALUES
(PointFromText('POINT(10 10)')),
(PointFromText('POINT(20 10)')),
(PointFromText('POINT(20 20)')),
(PointFromWKB(AsWKB(PointFromText('POINT(10 20)'))));
```
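The optional `srid` argument attaches a spatial reference identifier to the point, which can be read back with `SRID()`. A sketch:

```
SELECT ST_AsText(ST_PointFromText('POINT(10 20)', 4326)) AS wkt,
       SRID(ST_PointFromText('POINT(10 20)', 4326)) AS srid;
```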
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb Geometry Relations Geometry Relations
===================
| Title | Description |
| --- | --- |
| [CONTAINS](../contains/index) | Whether one geometry contains another. |
| [CROSSES](../crosses/index) | Whether two geometries spatially cross |
| [DISJOINT](../disjoint/index) | Whether the two elements do not intersect. |
| [EQUALS](../equals/index) | Indicates whether two geometries are spatially equal. |
| [INTERSECTS](../intersects/index) | Indicates whether two geometries spatially intersect. |
| [OVERLAPS](../overlaps/index) | Indicates whether two elements spatially overlap. |
| [ST\_CONTAINS](../st-contains/index) | Whether one geometry is contained by another. |
| [ST\_CROSSES](../st-crosses/index) | Whether two geometries spatially cross. |
| [ST\_DIFFERENCE](../st_difference/index) | Point set difference. |
| [ST\_DISJOINT](../st_disjoint/index) | Whether one geometry is spatially disjoint from another. |
| [ST\_DISTANCE](../st_distance/index) | The distance between two geometries. |
| [ST\_DISTANCE\_SPHERE](../st_distance_sphere/index) | Spherical distance between two geometries (point or multipoint) on a sphere. |
| [ST\_EQUALS](../st-equals/index) | Whether two geometries are spatially equal. |
| [ST\_INTERSECTS](../st-intersects/index) | Whether two geometries spatially intersect. |
| [ST\_LENGTH](../st_length/index) | Length of a LineString value. |
| [ST\_OVERLAPS](../st-overlaps/index) | Whether two geometries overlap. |
| [ST\_TOUCHES](../st-touches/index) | Whether one geometry g1 spatially touches another. |
| [ST\_WITHIN](../st-within/index) | Whether one geometry is within another. |
| [TOUCHES](../touches/index) | Whether two geometries spatially touch. |
| [WITHIN](../within/index) | Indicates whether a geographic element is spatially within another. |
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb SHOW ERRORS SHOW ERRORS
===========
Syntax
------
```
SHOW ERRORS [LIMIT [offset,] row_count]
SHOW ERRORS [LIMIT row_count OFFSET offset]
SHOW COUNT(*) ERRORS
```
Description
-----------
This statement is similar to [SHOW WARNINGS](../show-warnings/index), except that instead of displaying errors, warnings, and notes, it displays only errors.
The `LIMIT` clause has the same syntax as for the [SELECT](../select/index) statement.
The `SHOW COUNT(*) ERRORS` statement displays the number of errors. You can also retrieve this number from the [error\_count](../server-system-variables/index#error_count) variable.
```
SHOW COUNT(*) ERRORS;
SELECT @@error_count;
```
The value of [error\_count](../server-system-variables/index#error_count) might be greater than the number of messages displayed by [SHOW WARNINGS](../show-warnings/index) if the [max\_error\_count](../server-system-variables/index#max_error_count) system variable is set so low that not all messages are stored.
For a list of MariaDB error codes, see [MariaDB Error Codes](../mariadb-error-codes/index).
Examples
--------
```
SELECT f();
ERROR 1305 (42000): FUNCTION f does not exist
SHOW COUNT(*) ERRORS;
+-----------------------+
| @@session.error_count |
+-----------------------+
| 1 |
+-----------------------+
SHOW ERRORS;
+-------+------+---------------------------+
| Level | Code | Message |
+-------+------+---------------------------+
| Error | 1305 | FUNCTION f does not exist |
+-------+------+---------------------------+
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb GEOMETRYCOLLECTION GEOMETRYCOLLECTION
==================
Syntax
------
```
GeometryCollection(g1,g2,...)
```
Description
-----------
Constructs a [WKB](../wkb/index) GeometryCollection. If any argument is not a well-formed WKB representation of a geometry, the return value is `NULL`.
Examples
--------
```
CREATE TABLE gis_geometrycollection (g GEOMETRYCOLLECTION);
SHOW FIELDS FROM gis_geometrycollection;
INSERT INTO gis_geometrycollection VALUES
(GeomCollFromText('GEOMETRYCOLLECTION(POINT(0 0), LINESTRING(0 0,10 10))')),
(GeometryFromWKB(AsWKB(GeometryCollection(Point(44, 6), LineString(Point(3, 6), Point(7, 9)))))),
(GeomFromText('GeometryCollection()')),
(GeomFromText('GeometryCollection EMPTY'));
```
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb CONNECT Data Types CONNECT Data Types
==================
Many data types make little or no sense when applied to plain files. This is why [CONNECT](../connect/index) supports only a restricted set of data types. However, ODBC, JDBC or MYSQL source tables may contain data types not supported by CONNECT. In this case, CONNECT automatically converts them to a similar supported type when possible.
The data types currently supported by CONNECT are:
| Type name | Description | Used for |
| --- | --- | --- |
| `TYPE_STRING` | Zero ended string | [char](../char/index), [varchar](../varchar/index), [text](../text/index) |
| `TYPE_INT` | 4 bytes integer | [int](../int/index), [mediumint](../mediumint/index), [integer](../integer/index) |
| `TYPE_SHORT` | 2 bytes integer | [smallint](../smallint/index) |
| `TYPE_TINY` | 1 byte integer | [tinyint](../tinyint/index) |
| `TYPE_BIGINT` | 8 bytes integer | [bigint](../bigint/index), longlong |
| `TYPE_DOUBLE` | 8 bytes floating point | [double](../double/index), [float](../float/index), real |
| `TYPE_DECIM` | Numeric value | [decimal](../decimal/index), numeric, number |
| `TYPE_DATE` | 4 bytes integer | [date](../date/index), [datetime](../datetime/index), [time](../time/index), [timestamp](../timestamp/index), [year](../year/index) |
TYPE\_STRING
------------
This type corresponds to what is generally known as [CHAR](../char/index) or [VARCHAR](../varchar/index) by database users, or as strings by programmers. Columns containing characters have a maximum length but the character string is of fixed or variable length depending on the file format.
The DATA\_CHARSET option must be used to specify the character set used in the data source or file. Note that, unlike usually with MariaDB, when a multi-byte character set is used, the column size represents the number of bytes the column value can contain, not the number of characters.
TYPE\_INT
---------
The [INTEGER](../integer/index) type contains signed integer numeric 4-byte values (the *int* of the C language) ranging from `–2,147,483,648` to `2,147,483,647` for signed type and `0` to `4,294,967,295` for unsigned type.
TYPE\_SHORT
-----------
The SHORT data type contains signed [integer numeric 2-byte](../smallint/index) values (the *short integer* of the C language) ranging from `–32,768` to `32,767` for signed type and `0` to `65,535` for unsigned type.
TYPE\_TINY
----------
The TINY data type contains [integer numeric 1-byte](../tinyint/index) values (the *char* of the C language) ranging from `–128` to `127` for signed type and `0` to `255` for unsigned type. For some table types, TYPE\_TINY is used to represent Boolean values (0 is false, anything else is true).
TYPE\_BIGINT
------------
The [BIGINT](../bigint/index) data type contains signed integer 8-byte values (the *long long* of the C language) ranging from `-9,223,372,036,854,775,808` to `9,223,372,036,854,775,807` for signed type and from `0` to `18,446,744,073,709,551,615` for unsigned type.
Inside tables, the coding of all integer values depends on the table type. In tables represented by text files, the number is written in characters, while in tables represented by binary files (`BIN` or `VEC`) the number is directly stored in the binary representation corresponding to the platform.
The *length* (or *precision*) specification corresponds to the length of the table field in which the value is stored for text files only. It is used to set the output field length for all table types.
TYPE\_DOUBLE
------------
The DOUBLE data type corresponds to the C language [double](../double/index) type, a floating-point double precision value coded with 8 bytes. Like for integers, the internal coding in tables depends on the table type, characters for text files, and platform binary representation for binary files.
The *length* specification corresponds to the length of the table field in which the value is stored for text files only. The *scale* (was *precision*) is the number of decimal digits written into text files. For binary table types (BIN and VEC) this does not apply. The *length* and *scale* specifications are used to set the output field length and number of decimals for all types of tables.
TYPE\_DECIM
-----------
The DECIMAL data type corresponds to what MariaDB or ODBC data sources call NUMBER, NUMERIC, or [DECIMAL](../decimal/index): a numeric value with a maximum number of digits (the precision) some of them eventually being decimal digits (the scale). The internal coding in CONNECT is a character representation of the number. For instance:
```
colname decimal(14,6)
```
This defines a column *colname* as a number having a *precision* of 14 and a *scale* of 6. Supposing it is populated by:
```
insert into xxx values (-2658.74);
```
The internal representation of it will be the character string `-2658.740000`. The way it is stored in a file table depends on the table type. The *length* field specification corresponds to the length of the table field in which the value is stored and is calculated by CONNECT from the *precision* and the *scale* values. This length is *precision* plus 1 if *scale* is not 0 (for the decimal point) plus 1 if this column is not unsigned (for the eventual minus sign). In fix formatted tables the number is right justified in the field of width *length*, for variable formatted tables, such as CSV, the field is the representing character string.
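For the `decimal(14,6)` column above, the stored field length works out as:

```
length = precision
       + 1   (decimal point, since scale > 0)
       + 1   (minus sign, since the column is signed)
       = 14 + 1 + 1 = 16
```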
Because this type is mainly used by CONNECT to handle numeric or decimal fields of ODBC, JDBC and MySQL table types, CONNECT does not provide decimal calculations or comparison by itself. This is why decimal columns of CONNECT tables cannot be indexed.
DATE Data type
--------------
Internally, date/time values are stored by CONNECT as a signed 4-byte integer. The value 0 corresponds to 01 January 1970 12:00:00 am coordinated universal time ([UTC](../coordinated-universal-time/index)). All other date/time values are represented by the number of seconds elapsed since or before midnight (00:00:00), 1 January 1970, to that date/time value. Date/time values before midnight 1 January 1970 are represented by a negative number of seconds.
CONNECT handles dates from **13 December 1901, 20:45:52** to **18 January 2038, 19:14:07**.
Although date and time information can be represented in both CHAR and INTEGER data types, the DATE data type has special associated properties. For each DATE value, CONNECT can store all or only some of the following information: century, year, month, day, hour, minute, and second.
### Date Format in Text Tables
Internally, date/time values are handled as a signed 4-byte integer. But in text tables (type DOS, FIX, CSV, FMT, and DBF) dates are most of the time stored as a formatted character string (although they also can be stored as a numeric string representing their internal value). Because there are infinite ways to format a date, the format to use for decoding dates, as well as the field length in the file, must be associated to date columns (except when they are stored as the internal numeric value).
Note that this associated format is used only to describe the way the temporal value is stored internally. This format is used both for output to decode the date in a SELECT statement as well as for input to encode the date in INSERT or UPDATE statements. However, what is kept in this value depends on the data type used in the column definition (all the MariaDB temporal values can be specified). When creating a table, the format is associated to a date column using the DATE\_FORMAT option in the column definition, for instance:
```
create table birthday (
Name varchar(17),
Bday date field_length=10 date_format='MM/DD/YYYY',
Btime time field_length=8 date_format='hh:mm tt')
engine=CONNECT table_type=CSV;
insert into birthday values ('Charlie','2012-11-12','15:30:00');
select * from birthday;
```
The SELECT query returns:
| Name | Bday | Btime |
| --- | --- | --- |
| Charlie | 2012-11-12 | 15:30:00 |
The values of the INSERT statement must be specified using the standard MariaDB syntax and these values are displayed as MariaDB temporal values. Sure enough, the column formats apply only to the way these values are represented inside the CSV files. Here, the inserted record will be:
```
Charlie,11/12/2012,03:30 PM
```
**Note:** The field\_length option exists because the MariaDB syntax does not allow specifying the field length between parentheses for temporal column types. If not specified, the field length is calculated from the date format (sometimes as a max value) or made equal to the default length value if there is no date format. In the above example it could have been removed as the calculated values are the ones specified. However, if the table type would have been DOS or FIX, these values could be adjusted to fit the actual field length within the file.
A CONNECT format string consists of a series of elements that represent a particular piece of information and define its format. The elements will be recognized in the order they appear in the format string. Date and time format elements will be replaced by the actual date and time as they appear in the source string. They are defined by the following groups of characters:
| Element | Description |
| --- | --- |
| YY | The last two digits of the year (that is, 1996 would be coded as "96"). |
| YYYY | The full year (that is, 1996 could be entered as "96" but displayed as “1996”). |
| MM | The one or two-digit month number. |
| MMM | The three-character month abbreviation. |
| MMMM | The full month name. |
| DD | The one or two-digit month day. |
| DDD | The three-character weekday abbreviation. |
| DDDD | The full weekday name. |
| hh | The one or two-digit hour in 12-hour or 24-hour format. |
| mm | The one or two-digit minute. |
| ss | The one or two-digit second. |
| t | The one-letter AM/PM abbreviation (that is, AM is entered as "A"). |
| tt | The two-letter AM/PM abbreviation (that is, AM is entered as "AM"). |
### Usage Notes
* To match the source string, you can add body text to the format string, enclosing it in single quotes or double quotes if it would be ambiguous. Punctuation marks do not need to be quoted.
* The hour information is regarded as 12-hour format if a “t” or “tt” element follows the “hh” element in the format or as 24-hour format otherwise.
* The "MM", "DD", "hh", "mm", "ss" elements can be specified with one or two letters (e.g. "MM" or "M") making no difference on input, but placing a leading zero to one-digit values on output[[1](#_note-0)] for two-letter elements.
* If the format contains elements DDD or DDDD, the day of week name is skipped on input and ignored to calculate the internal date value. On output, the correct day of week name is generated and displayed.
* Temporal values are always stored as numeric in [BIN](../connect-table-types-data-files/index#bin-table-type) and [VEC](../connect-table-types-data-files/index#vec-table-type-vecto) tables.
### Handling dates that are out of the range of supported CONNECT dates
If you want to make a table containing, for instance, historical dates not being convertible into CONNECT dates, make your column CHAR or VARCHAR and store the dates in the MariaDB format. All date functions applied to these strings will convert them to MariaDB dates and will work as if they were real dates. Of course they must be inserted and will be displayed using the MariaDB format.
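A sketch of that workaround, storing historical dates as strings in a CSV table (the table and column names are illustrative):

```
create table history (
  event varchar(40),
  d char(10))        -- out-of-range dates kept as 'YYYY-MM-DD' strings
engine=CONNECT table_type=CSV;

insert into history values ('Battle of Hastings','1066-10-14');
-- date functions convert the string to a MariaDB date on the fly
select event, year(d) from history;
```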
NULL handling
-------------
CONNECT handles [null values](../null-values-in-mariadb/index) for data sources able to produce nulls. Currently this concerns mainly the [ODBC](../connect-table-types-odbc-table-type-accessing-tables-from-other-dbms/index), [JDBC](../connect-jdbc-table-type-accessing-tables-from-other-dbms/index), MONGO, [MYSQL](../connect-table-types-mysql-table-type-accessing-mysqlmariadb-tables/index), [XML](../connect-table-types-data-files/index#xml-table-type), [JSON](../connect-json-table-type/index) and [INI](../connect-table-types-data-files/index#ini-table-type) table types. For INI, [JSON](../connect-json-table-type/index), MONGO or XML types, null values are returned when the key is missing in the section (INI) or when the corresponding node does not exist in a row (XML, JSON, MONGO).
For other file tables, the issue is to define what a null value is. In a numeric column, 0 can sometimes be a valid value but, in some other cases, it can make no sense. The same for character columns; is a blank field a valid value or not?
A special case is DATE columns with a DATE\_FORMAT specified. Any value not matching the format can be regarded as NULL.
CONNECT leaves the decision to you. When declaring a column in the [CREATE TABLE](../create-table/index) statement, if it is declared NOT NULL, blank or zero values will be considered as valid values. Otherwise they will be considered as NULL values. In all cases, nulls are replaced on insert or update by pseudo null values, a zero-length character string for text types or a zero value for numeric types. Once converted to pseudo null values, they will be recognized as NULL only for columns declared as nullable.
For instance:
```
create table t1 (a int, b char(10)) engine=connect;
insert into t1 values (0,'zero'),(1,'one'),(2,'two'),(null,'???');
select * from t1 where a is null;
```
The select query replies:
| a | b |
| --- | --- |
| NULL | zero |
| NULL | ??? |
Sure enough, the value 0 entered on the first row is regarded as NULL for a nullable column. However, if we execute the query:
```
select * from t1 where a = 0;
```
This will return no line because a NULL is not equal to 0 in an SQL where clause.
Now let us see what happens with not null columns:
```
create table t1 (a int not null, b char(10) not null) engine=connect;
insert into t1 values (0,'zero'),(1,'one'),(2,'two'),(null,'???');
```
The insert statement will produce a warning saying:
| Level | Code | Message |
| --- | --- | --- |
| Warning | 1048 | Column 'a' cannot be null |
It is replaced by a pseudo null `0` on the fourth row. Let us see the result:
```
select * from t1 where a is null;
select * from t1 where a = 0;
```
The first query returns no rows, since 0 is now a valid value and not NULL. The second query replies:
| a | b |
| --- | --- |
| 0 | zero |
| 0 | ??? |
It shows that the NULL inserted value was replaced by a valid 0 value.
Unsigned numeric types
----------------------
These are supported by CONNECT since version 1.01.0010 for fixed numeric types (TINY, SHORT, INTEGER, and BIGINT).
Data type conversion
--------------------
CONNECT is able to convert data from one type to another in most cases. These conversions are done without warning even when this leads to truncation or loss of precision. This is true, in particular, for tables of type ODBC, JDBC, MYSQL and PROXY (via MySQL) because the source table may contain some data types not supported by CONNECT. They are converted when possible to CONNECT types.
MariaDB data types are converted as follows:
| MariaDB Types | CONNECT Type | Remark |
| --- | --- | --- |
| [integer](../integer/index), [medium integer](../mediumint/index) | TYPE\_INT | 4 byte integer |
| [small integer](../smallint/index) | TYPE\_SHORT | 2 byte integer |
| [tiny integer](../tinyint/index) | TYPE\_TINY | 1 byte integer |
| [char](../char/index), [varchar](../varchar/index) | TYPE\_STRING | Same length |
| [double](../double/index), [float](../float/index), real | TYPE\_DOUBLE | 8 byte floating point |
| [decimal](../decimal/index), numeric | TYPE\_DECIM | Length depends on precision and scale |
| all [date](../date/index) related types | TYPE\_DATE | Date format can be set accordingly |
| [bigint](../bigint/index), longlong | TYPE\_BIGINT | 8 byte integer |
| [enum](../enum/index), [set](../set-data-type/index) | TYPE\_STRING | Numeric value not accessible |
| All text types | TYPE\_STRING or TYPE\_ERROR | Depending on the value of the [connect\_type\_conv](../connect-system-variables/index#connect_type_conv) system variable. |
| Other types | TYPE\_ERROR | Not supported, no conversion provided. |
For [ENUM](../enum/index), the length of the column is the length of the longest value of the enumeration. For [SET](../set-data-type/index) the length is enough to contain all the set values concatenated with comma separator.
In the case of [TEXT](../text/index) columns, the handling depends on the values given to the [connect\_type\_conv](../connect-system-variables/index#connect_type_conv) and [connect\_conv\_size](../connect-system-variables/index#connect_conv_size) system variables.
Note: [BLOB](../blob/index) is currently not converted by default until a TYPE\_BIN type is added to CONNECT. However, the FORCE option (from Connect 1.06.006) can be specified for blob columns containing text and the SKIP option also applies to ODBC BLOB columns.
ODBC SQL types are converted as:
| SQL Types | Connect Type | Remark |
| --- | --- | --- |
| SQL\_CHAR, SQL\_VARCHAR | TYPE\_STRING | |
| SQL\_LONGVARCHAR | TYPE\_STRING | `len = min(abs(len), connect_conv_size)` If the column is generated by discovery (columns not specified) its length is [connect\_conv\_size](../connect-system-variables/index#connect_conv_size). |
| SQL\_NUMERIC, SQL\_DECIMAL | TYPE\_DECIM | |
| SQL\_INTEGER | TYPE\_INT | |
| SQL\_SMALLINT | TYPE\_SHORT | |
| SQL\_TINYINT, SQL\_BIT | TYPE\_TINY | |
| SQL\_FLOAT, SQL\_REAL, SQL\_DOUBLE | TYPE\_DOUBLE | |
| SQL\_DATETIME | TYPE\_DATE | `len = 10` |
| SQL\_INTERVAL | TYPE\_STRING | `len = 8 + ((scale) ? (scale+1) : 0)` |
| SQL\_TIMESTAMP | TYPE\_DATE | `len = 19 + ((scale) ? (scale +1) : 0)` |
| SQL\_BIGINT | TYPE\_BIGINT | |
| SQL\_GUID | TYPE\_STRING | `len=36` |
| SQL\_BINARY, SQL\_VARBINARY, SQL\_LONGVARBINARY | TYPE\_STRING | `len = min(abs(len), connect_conv_size)` Only if the value of [connect\_type\_conv](../connect-system-variables/index#connect_type_conv) is `force`. The column should use the binary charset. |
| Other types | TYPE\_ERROR | *Not supported.* |
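The length formulas quoted in the Remark column above can be sketched in Python. This is an illustration only, not CONNECT source code, and the `connect_conv_size` value used here (1024) is a hypothetical example:

```python
def long_varchar_len(odbc_len, connect_conv_size):
    # SQL_LONGVARCHAR and binary types: len = min(abs(len), connect_conv_size)
    return min(abs(odbc_len), connect_conv_size)

def interval_len(scale):
    # SQL_INTERVAL: len = 8 + ((scale) ? (scale + 1) : 0)
    return 8 + (scale + 1 if scale else 0)

def timestamp_len(scale):
    # SQL_TIMESTAMP: len = 19 + ((scale) ? (scale + 1) : 0)
    return 19 + (scale + 1 if scale else 0)

print(long_varchar_len(2_000_000, 1024))  # 1024: an oversized column is capped
print(timestamp_len(0))                   # 19: 'YYYY-MM-DD HH:MM:SS'
print(timestamp_len(6))                   # 26: '.' plus 6 fractional digits
```

In other words, very long character and binary columns are capped at `connect_conv_size`, while date/time lengths grow only when a fractional-seconds scale is present.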
JDBC SQL types are converted as:
| JDBC Types | Connect Type | Remark |
| --- | --- | --- |
| (N)CHAR, (N)VARCHAR | TYPE\_STRING | |
| LONG(N)VARCHAR | TYPE\_STRING | `len = min(abs(len), connect_conv_size)` If the column is generated by discovery (columns not specified), its length is [connect\_conv\_size](../connect-system-variables/index#connect_conv_size) |
| NUMERIC, DECIMAL, VARBINARY | TYPE\_DECIM | |
| INTEGER | TYPE\_INT | |
| SMALLINT | TYPE\_SHORT | |
| TINYINT, BIT | TYPE\_TINY | |
| FLOAT, REAL, DOUBLE | TYPE\_DOUBLE | |
| DATE | TYPE\_DATE | `len = 10` |
| TIME | TYPE\_DATE | `len = 8 + ((scale) ? (scale+1) : 0)` |
| TIMESTAMP | TYPE\_DATE | `len = 19 + ((scale) ? (scale +1) : 0)` |
| BIGINT | TYPE\_BIGINT | |
| UUID (specific to PostgreSQL) | TYPE\_STRING or TYPE\_ERROR | `len=36`; TYPE\_ERROR if [connect\_type\_conv=NO](../connect-system-variables/index#connect_type_conv) |
| Other types | TYPE\_ERROR | Not supported. |
Note: The [connect\_type\_conv](../connect-system-variables/index#connect_type_conv) SKIP option also applies to ODBC and JDBC tables.
---
1. [↑](#_ref-0) Here input and output are used to specify respectively decoding the date to get its numeric value from the data file and encoding a date to write it in the table file. Input is performed within [SELECT](../select/index) queries; output is performed in [UPDATE](../update/index) or [INSERT](../insert/index) queries.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
mariadb CONTAINS CONTAINS
========
Syntax
------
```
Contains(g1,g2)
```
Description
-----------
Returns `1` or `0` to indicate whether a geometry `g1` completely contains geometry `g2`. CONTAINS() is based on the original MySQL implementation and uses object bounding rectangles, while [ST\_CONTAINS()](../st_contains/index) uses object shapes.
This tests the opposite relationship to [Within()](../within/index).
mariadb Vagrant and MariaDB Vagrant and MariaDB
====================
Vagrant is an open source tool to quickly set up machines that can be used for development and testing. These machines can be local virtual machines, Docker containers, AWS EC2 instances, and so on. Vagrant allows one to easily and quickly set up test MariaDB servers.
| Title | Description |
| --- | --- |
| [Vagrant Overview for MariaDB Users](../vagrant-overview-for-mariadb-users/index) | Vagrant architecture, general concepts and basic usage. |
| [Creating a Vagrantfile](../creating-a-vagrantfile/index) | How to create a new Vagrant box running MariaDB. |
| [Vagrant Security Concerns](../vagrant-security-concerns/index) | Security matters related to Vagrant machines. |
| [Running MariaDB ColumnStore Docker containers on Linux, Windows and MacOS](../running-mariadb-columnstore-docker-containers-on-linux-windows-and-macos/index) | Docker allows for a simple setup of a ColumnStore single server instance for evaluation purposes |
mariadb MariaDB 5.5.33 Debian and Ubuntu Installation Issues MariaDB 5.5.33 Debian and Ubuntu Installation Issues
====================================================
Shortly after the [MariaDB 5.5.33](https://mariadb.com/kb/en/mariadb-5533-release-notes/) release we became aware of some installation issues with the Debian and Ubuntu repositories. These issues were fixed in [MariaDB 5.5.33a](https://mariadb.com/kb/en/mariadb-5533a-release-notes/), but due to how apt works, steps need to be taken to solve the broken dependencies before upgrading.
We know of three scenarios where dependencies were broken. The steps to fix each of them are pretty much the same; only the list of broken dependencies, and hence the list of packages to take care of them, differs. The basic idea is to downgrade certain packages to 5.5.32 temporarily before upgrading them to 5.5.33a.
If you ran into issues when moving from [MariaDB 5.5.32](https://mariadb.com/kb/en/mariadb-5532-release-notes/) to [MariaDB 5.5.33](https://mariadb.com/kb/en/mariadb-5533-release-notes/), look through each of the three scenarios to see which one applies to you and then follow the steps to apply that fix.
Applying the fix
----------------
To get your system ready to apply the fix, do the following:
* Comment out the standard [MariaDB 5.5](../what-is-mariadb-55/index) repo in the `/etc/apt/sources.list` or `/etc/apt/sources.list.d/mariadb.list` file (or wherever you have the repositories configured).
* Add a [MariaDB 5.5.32](https://mariadb.com/kb/en/mariadb-5532-release-notes/) repository to the `sources.list`. The easiest way is to add the following. Just replace '`{os}`' and '`{dist}`' with the appropriate values.
```
deb http://ftp.osuosl.org/pub/mariadb/mariadb-5.5.32/repo/{os} {dist} main
```
For example, on Debian Wheezy the line would be:
```
deb http://ftp.osuosl.org/pub/mariadb/mariadb-5.5.32/repo/debian wheezy main
```
And on Ubuntu Raring the line would be:
```
deb http://ftp.osuosl.org/pub/mariadb/mariadb-5.5.32/repo/ubuntu raring main
```
* Then run '`sudo apt-get update`'
* Then '`sudo apt-get install`' the list of packages to downgrade as given in the applicable section below.
* Next, modify your sources.list to remove the 5.5.32 repo and switch back to the normal 5.5 repo
* Then '`sudo apt-get update`' to get things back to normal
* As a final optional step, once your normal mirror has at least [MariaDB 5.5.33a](https://mariadb.com/kb/en/mariadb-5533a-release-notes/) you can '`sudo apt-get upgrade`' to upgrade. To check what version of MariaDB our mirror has, run the following command (after running '`sudo apt-get update`'):
```
apt-cache show mariadb-server | grep Version
```
5.5.32 server + 5.5.32 client upgraded to the initial (17 Sep 2013) release of 5.5.33
-------------------------------------------------------------------------------------
In this first scenario, both client and server were partially upgraded to 5.5.33 before the process aborted. The problem looks like this:
```
You might want to run 'apt-get -f install' to correct these.
The following packages have unmet dependencies:
libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.32+maria-1~wheezy) but 5.5.33+maria-1~wheezy is installed
libmysqlclient18 : Depends: libmariadbclient18 (= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
mariadb-client-5.5 : Depends: libmariadbclient18 (>= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
mariadb-client-core-5.5 : Depends: libmariadbclient18 (>= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
mariadb-server : Depends: mariadb-server-5.5 (= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
mariadb-server-core-5.5 : Depends: libmariadbclient18 (>= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
```
To fix it, the following server and client packages need to be temporarily downgraded to 5.5.32 (replace '`wheezy`' with the name of whatever distribution you are using):
```
sudo apt-get install \
libmysqlclient18=5.5.32+maria-1~wheezy \
mariadb-client-5.5=5.5.32+maria-1~wheezy \
mariadb-client-core-5.5=5.5.32+maria-1~wheezy \
mariadb-server=5.5.32+maria-1~wheezy \
mariadb-server-core-5.5=5.5.32+maria-1~wheezy
```
5.5.32 Galera server and 5.5.32 MariaDB client upgraded to 5.5.33
-----------------------------------------------------------------
In this scenario, the client upgraded, but Galera-server did not. The problem looks like this:
```
The following packages have unmet dependencies:
libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.32+maria-1~wheezy) but 5.5.33+maria-1~wheezy is installed
libmysqlclient18 : Depends: libmariadbclient18 (= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
mariadb-client-5.5 : Depends: libmariadbclient18 (>= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
mariadb-client-core-5.5 : Depends: libmariadbclient18 (>= 5.5.33+maria-1~wheezy) but 5.5.32+maria-1~wheezy is installed
```
To fix it, only the client packages need to be temporarily downgraded to 5.5.32 (replace wheezy with whatever your distribution is):
```
sudo apt-get install \
libmysqlclient18=5.5.32+maria-1~wheezy \
mariadb-client-5.5=5.5.32+maria-1~wheezy \
mariadb-client-core-5.5=5.5.32+maria-1~wheezy
```
5.3.12 server + 5.3.12 client + 5.5.32 libmariadbclient18 upgraded to 5.5.33
----------------------------------------------------------------------------
In this scenario, only the library upgraded. The problem looks like this:
```
The following packages have unmet dependencies:
libmariadbclient18: Depends: libmysqlclient18 (= 5.5.32+maria-1~lucid) but 5.5.33+maria-1~lucid is installed
libmysqlclient18: Depends: libmariadbclient18 (= 5.5.33+maria-1~lucid) but 5.5.32+maria-1~lucid is installed
```
To fix it, the library needs to be downgraded to 5.5.32 (replace wheezy with your distribution):
```
sudo apt-get install \
 libmysqlclient18=5.5.32+maria-1~wheezy \
 libmariadbclient18=5.5.32+maria-1~wheezy
```
After switching back to the 5.5 repo, the libraries won't get upgraded; they will stay at 5.5.32 until you upgrade the server to 5.5.33a.
mariadb Data Sampling: Techniques for Efficiently Finding a Random Row Data Sampling: Techniques for Efficiently Finding a Random Row
==============================================================
Fetching random rows from a table (beyond ORDER BY RAND())
----------------------------------------------------------
### The problem
One would like to do "SELECT ... ORDER BY RAND() LIMIT 10" to get 10 rows at random. But this is slow. The optimizer does
* Fetch all the rows -- this is costly
* Append [RAND()](../rand/index) to the rows
* Sort the rows -- also costly
* Pick the first 10.
All the algorithms given below are "fast", but most introduce flaws:
* Bias -- some rows are more likely to be fetched than others.
* Repetitions -- If two random sets contain the same row, they are likely to contain other duplicates.
* Sometimes failing to fetch the desired number of rows.
"Fast" means avoiding reading all the rows. There are many techniques that require a full table scan, or at least an index scan. They are not acceptable for this list. There is even a technique that averages half a scan; it is relegated to a footnote.
### Metrics
Here's a way to measure performance without having a big table.
```
FLUSH STATUS;
SELECT ...;
SHOW SESSION STATUS LIKE 'Handler%';
```
If some of the "Handler" numbers look like the number of rows in the table, then there was a table scan.
None of the queries presented here need a full table (or index) scan. Each has a time proportional to the number of rows returned.
Virtually all published algorithms involve a table scan. The previously published version of this blog had, embarrassingly, several algorithms that had table scans.
Sometimes the scan can be avoided via a subquery. For example, the first of these will do a table scan; the second will not.
```
SELECT * FROM RandTest AS a
WHERE id = FLOOR(@min + (@max - @min + 1) * RAND()); -- BAD: table scan
SELECT *
FROM RandTest AS a
JOIN (
SELECT FLOOR(@min + (@max - @min + 1) * RAND()) AS id -- Good; single eval.
) b USING (id);
```
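The single-evaluation pattern relies on `FLOOR(@min + (@max - @min + 1) * RAND())` landing on every id with equal probability. A quick Python sketch of that arithmetic (illustrative only, not part of the SQL):

```python
import math
import random

def pick_id(min_id, max_id, r=None):
    # Mirrors FLOOR(@min + (@max - @min + 1) * RAND()): r is uniform in
    # [0, 1), so each id in [min_id, max_id] has probability
    # 1 / (max_id - min_id + 1) -- provided the ids have no gaps.
    if r is None:
        r = random.random()
    return math.floor(min_id + (max_id - min_id + 1) * r)

# The extremes of RAND()'s [0, 1) range map to the extremes of the id range:
print(pick_id(1, 100, r=0.0))       # 1
print(pick_id(1, 100, r=0.999999))  # 100
```

Because RAND() never returns exactly 1, the `+ 1` inside the FLOOR is what makes the maximum id reachable without ever overshooting it.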
### Case: Consecutive AUTO\_INCREMENT without gaps, 1 row returned
* Requirement: [AUTO\_INCREMENT](../auto_increment/index) id
* Requirement: No gaps in id
```
SELECT r.*
FROM (
SELECT FLOOR(mm.min_id + (mm.max_id - mm.min_id + 1) * RAND()) AS id
FROM (
SELECT MIN(id) AS min_id,
MAX(id) AS max_id
FROM RandTest
) AS mm
) AS init
JOIN RandTest AS r ON r.id = init.id;
```
(Of course, you might be able to simplify this. For example, min\_id is likely to be 1. Or precalculate limits into @min and @max.)
### Case: Consecutive AUTO\_INCREMENT without gaps, 10 rows
* Requirement: AUTO\_INCREMENT id
* Requirement: No gaps in id
* Flaw: Sometimes delivers fewer than 10 rows
```
-- First select is one-time:
SELECT @min := MIN(id),
@max := MAX(id)
FROM RandTest;
SELECT DISTINCT *
FROM RandTest AS a
JOIN (
SELECT FLOOR(@min + (@max - @min + 1) * RAND()) AS id
FROM RandTest
LIMIT 11 -- more than 10 (to compensate for dups)
) b USING (id)
LIMIT 10; -- the desired number of rows
```
The FLOOR expression could lead to duplicates, hence the inflated inner LIMIT. There could (rarely) be so many duplicates that the inflated LIMIT leads to fewer than the desired 10 different rows. One approach to that Flaw is to rerun the query if it delivers too few rows.
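How often does the inflated LIMIT fall short? Drawing k ids uniformly with replacement from N rows yields N·(1 − (1 − 1/N)^k) distinct ids on average. A Python sketch of that expectation (illustrative, not part of the SQL):

```python
def expected_distinct(n_rows, k_draws):
    # Expected number of distinct ids when drawing k_draws ids,
    # uniformly with replacement, from n_rows consecutive ids.
    return n_rows * (1 - (1 - 1 / n_rows) ** k_draws)

# Drawing 11 ids from a 1000-row table yields ~10.95 distinct ids on
# average, so asking for 11 to keep 10 is usually -- not always -- enough.
print(round(expected_distinct(1000, 11), 2))
```

The larger the table, the rarer duplicates become, which is why a single extra row in the inner LIMIT is usually sufficient compensation.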
A variant:
```
SELECT r.*
FROM (
SELECT FLOOR(mm.min_id + (mm.max_id - mm.min_id + 1) * RAND()) AS id
FROM (
SELECT MIN(id) AS min_id,
MAX(id) AS max_id
FROM RandTest
) AS mm
JOIN ( SELECT id dummy FROM RandTest LIMIT 11 ) z
) AS init
JOIN RandTest AS r ON r.id = init.id
LIMIT 10;
```
Again, ugly but fast, regardless of table size.
### Case: AUTO\_INCREMENT with gaps, 1 or more rows returned
* Requirement: AUTO\_INCREMENT, possibly with gaps due to DELETEs, etc
* Flaw: Only semi-random (rows do not have an equal chance of being picked), but it does partially compensate for the gaps
* Flaw: The first and last few rows of the table are less likely to be delivered.
This gets 50 "consecutive" ids (possibly with gaps), then delivers a random 10 of them.
```
-- First select is one-time:
SELECT @min := MIN(id),
@max := MAX(id)
FROM RandTest;
SELECT a.*
FROM RandTest a
JOIN ( SELECT id FROM
( SELECT id
FROM ( SELECT @min + (@max - @min + 1 - 50) * RAND()
AS start FROM DUAL ) AS init
JOIN RandTest y
WHERE y.id > init.start
ORDER BY y.id
LIMIT 50 -- Inflated to deal with gaps
) z ORDER BY RAND()
LIMIT 10 -- number of rows desired (change to 1 if looking for a single row)
) r ON a.id = r.id;
```
Yes, it is complex, but yes, it is fast, regardless of the table size.
### Case: Extra FLOAT column for randomizing
(Unfinished: need to check these.)
Assuming `rnd` is a FLOAT (or DOUBLE) populated with RAND() and INDEXed:
* Requirement: extra, indexed, FLOAT column
* Flaw: Fetches 10 adjacent rows (according to `rnd`), hence not good randomness
* Flaw: Near 'end' of table, can't find 10 rows.
```
SELECT r.*
FROM ( SELECT RAND() AS start FROM DUAL ) init
JOIN RandTest r
WHERE r.rnd >= init.start
ORDER BY r.rnd
LIMIT 10;
```
* These two variants attempt to resolve the end-of-table flaw:
```
SELECT r.*
FROM ( SELECT RAND() * ( SELECT rnd
FROM RandTest
ORDER BY rnd DESC
LIMIT 10,1 ) AS start
) AS init
JOIN RandTest r
WHERE r.rnd > init.start
ORDER BY r.rnd
LIMIT 10;
SELECT @start := RAND(),
@cutoff := CAST(1.1 * 10 + 5 AS DECIMAL(20,8)) / TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'dbname'
AND TABLE_NAME = 'RandTest'; -- 0.0030
SELECT d.*
FROM (
SELECT a.id
FROM RandTest a
WHERE rnd BETWEEN @start AND @start + @cutoff
) sample
JOIN RandTest d USING (id)
ORDER BY rand()
LIMIT 10;
```
### Case: UUID or MD5 column
* Requirement: UUID/GUID/MD5/SHA1 column exists and is indexed.
* Similar code/benefits/flaws to AUTO\_INCREMENT with gaps.
* Needs 6 random HEX digits:
```
RIGHT( HEX( (1<<24) * (1+RAND()) ), 6)
```
can be used as a `start` for adapting a gapped AUTO\_INCREMENT case. If the field is BINARY instead of hex, then
```
UNHEX(RIGHT( HEX( (1<<24) * (1+RAND()) ), 6))
```
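The arithmetic behind that expression can be checked in Python (a sketch mirroring the SQL, not server code): (1+RAND())·2^24 always falls in [2^24, 2^25), whose hexadecimal form is 7 digits beginning with '1', so RIGHT(..., 6) keeps exactly 6 random digits:

```python
import random

def rand_hex6(r=None):
    # Mirrors RIGHT(HEX((1<<24) * (1+RAND())), 6): (1 + r) * 2**24 falls in
    # [2**24, 2**25), whose hex form is 7 digits starting with '1';
    # taking the rightmost 6 keeps the low-order, random digits.
    if r is None:
        r = random.random()
    return format(int((1 << 24) * (1 + r)), 'X')[-6:]

print(rand_hex6(r=0.0))  # '000000'
print(len(rand_hex6()))  # 6
```

The `1 +` offset is the trick: without it, small RAND() values would produce fewer hex digits and the RIGHT() would no longer be uniform.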
See also
--------
Rick James graciously allowed us to use this article in the Knowledge Base.
[Rick James' site](http://mysql.rjweb.org/) has other useful tips, how-tos, optimizations, and debugging tips.
Original source: <http://mysql.rjweb.org/doc.php/random>
mariadb MariaDB ColumnStore software upgrade 1.0.6 to 1.0.7 MariaDB ColumnStore software upgrade 1.0.6 to 1.0.7
===================================================
MariaDB ColumnStore software upgrade 1.0.6 to 1.0.7
---------------------------------------------------
Note: Columnstore.xml modifications you made manually are not automatically carried forward on an upgrade. These modifications will need to be incorporated back into Columnstore.xml once the upgrade has occurred.
The previous configuration file will be saved as /usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave.
If you have specified a root database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user home directory with 600 file permissions with the following content (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
This file can be removed after the upgrade is complete.
### Changes for 1.0.7
#### MariaDB ColumnStore Schema Sync feature
There is a new prompt in postConfigure for the MariaDB ColumnStore Schema Sync feature. In previous versions, this defaulted to enabled. Starting with 1.0.7, postConfigure prompts the user with the option to disable it, in case you have another application that provides this functionality. If you upgrade with the '-u' option, this feature is left enabled.
#### Amazon AMI Certification Keys
In 1.0.6, if you were running on a multi-node system and utilizing the AWS EC2 API toolset, you were required to have the Certification Keys (Access and Secret) in files that were referenced during the postConfigure install. In 1.0.7, these keys are read from the IAM role that is provided, or from a file called .aws/certification. Please check the Amazon AMI Installation Guide for additional details.
So if you are upgrading from the Amazon AMI 1.0.6 to the 1.0.7 package and you are utilizing these keys, you will need to either have the keys in the IAM role that you used to launch the AMI, or in the .aws/certification file.
### Choosing the type of upgrade
#### Root User Installs
#### Upgrading MariaDB ColumnStore using RPMs
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.0.7-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.
* Shut down the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
`tar -zxf mariadb-columnstore-1.0.7-1-centos#.x86_64.rpm.tar.gz`
* Upgrade the RPMs. The MariaDB ColumnStore software will be installed in /usr/local/.
```
rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
rpm -ivh mariadb-columnstore-*1.0.7*rpm
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
For RPM Upgrade, the previous configuration file will be saved as:
/usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave
### Initial download/install of MariaDB ColumnStore binary package
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.0.7-1.x86\_64.bin.tar.gz (Binary 64-BIT) into the /usr/local directory on the server where you are installing MariaDB ColumnStore.
* Shut down the MariaDB ColumnStore system:
`mcsadmin shutdownsystem y`
* Run pre-uninstall script
`/usr/local/mariadb/columnstore/bin/pre-uninstall`
* Unpack the tarball, in the /usr/local/ directory.
`tar -zxvf mariadb-columnstore-1.0.7-1.x86_64.bin.tar.gz`
* Run post-install scripts
`/usr/local/mariadb/columnstore/bin/post-install`
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
`/usr/local/mariadb/columnstore/bin/postConfigure -u`
### Upgrading MariaDB ColumnStore using the DEB package
A DEB upgrade would be done on a system that supports DEBs like Debian or Ubuntu systems.
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /root directory
```
mariadb-columnstore-1.0.7-1.amd64.deb.tar.gz
```
(DEB 64-BIT) to the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate DEBs.
```
tar -zxf mariadb-columnstore-1.0.7-1.amd64.deb.tar.gz
```
* Remove, purge and install all MariaDB ColumnStore debs
```
cd /root/
dpkg -r mariadb-columnstore*deb
dpkg -P mariadb-columnstore*deb
dpkg -i mariadb-columnstore*deb
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
/usr/local/mariadb/columnstore/bin/postConfigure -u
```
#### Non-Root User Installs
### Initial download/install of MariaDB ColumnStore binary package
Upgrade MariaDB ColumnStore as the non-root user on the server designated as PM1:
* Download the package mariadb-columnstore-1.0.7-1.x86\_64.bin.tar.gz (Binary 64-BIT) into the /home/'non-root-user' directory on the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
`mcsadmin shutdownsystem y`
* Run pre-uninstall script
`$HOME/mariadb/columnstore/bin/pre-uninstall -i /home/guest/mariadb/columnstore`
* Unpack the tarball in the $HOME/ directory.
`tar -zxvf mariadb-columnstore-1.0.7-1.x86_64.bin.tar.gz`
* Run post-install scripts
`$HOME/mariadb/columnstore/bin/post-install -i /home/guest/mariadb/columnstore`
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
`$HOME/mariadb/columnstore/bin/postConfigure -u -i /home/guest/mariadb/columnstore`
mariadb Optimization and Tuning Optimization and Tuning
========================
Articles on how to get the most out of MariaDB, including new features.
| Title | Description |
| --- | --- |
| [Hardware Optimization](../hardware-optimization/index) | Better performance with hardware improvements |
| [Operating System Optimizations](../operating-system-optimizations/index) | Optimizations at the OS level |
| [Optimization and Indexes](../optimization-and-indexes/index) | Using indexes to improve query performance |
| [Query Optimizations](../query-optimizations/index) | Getting queries running more optimally |
| [Optimizing Tables](../optimizing-tables/index) | Different ways to optimize tables and data on disk |
| [MariaDB Memory Allocation](../mariadb-memory-allocation/index) | Basic issues in RAM allocation for MariaDB. |
| [System Variables](../system-variables/index) | Understanding, optimizing and tuning the server system variables |
| [Buffers, Caches and Threads](../buffers-caches-and-threads/index) | Buffering, caching, thread pool to improve performance |
| [Optimizing Data Structure](../optimizing-data-structure/index) | Designing the most optimal schemas, tables, and columns |
| [MariaDB Internal Optimizations](../mariadb-internal-optimizations/index) | Different optimizations strategies done internally in MariaDB |
| [Benchmarking](../benchmarking/index) | Various benchmark results for MariaDB. |
| [Compression](../optimization-and-tuning-compression/index) | Types of compression in MariaDB |
mariadb TIME TIME
====
Syntax
------
```
TIME [(<microsecond precision>)]
```
Description
-----------
A time. The range is `'-838:59:59.999999'` to `'838:59:59.999999'`. [Microsecond precision](../microseconds-in-mariadb/index) can be from 0 to 6; if not specified, 0 is used. Microseconds have been available since [MariaDB 5.3](../what-is-mariadb-53/index).
MariaDB displays `TIME` values in `'HH:MM:SS.ssssss'` format, but allows assignment of times in looser formats, including 'D HH:MM:SS', 'HH:MM:SS', 'HH:MM', 'D HH:MM', 'D HH', 'SS', or 'HHMMSS', as well as permitting dropping of any leading zeros when a delimiter is provided, for example '3:9:10'. For details, see [date and time literals](../date-and-time-literals/index).
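As an illustration of the day-prefixed forms (a Python sketch of the arithmetic, not MariaDB's parser), the days fold into the hour component, which is how a literal such as '12 09' can exceed 24 hours:

```python
def day_literal_to_hours(literal):
    # Handles the day-prefixed forms 'D HH', 'D HH:MM', 'D HH:MM:SS':
    # the day count folds into the hour component, which is why TIME
    # accepts values far beyond 24 hours.
    days, _, clock = literal.partition(' ')
    parts = clock.split(':') + ['0', '0']
    hours = int(days) * 24 + int(parts[0])
    return '%d:%02d:%02d' % (hours, int(parts[1]), int(parts[2]))

print(day_literal_to_hours('12 09'))       # '297:00:00'
print(day_literal_to_hours('1 02:30:05'))  # '26:30:05'
```

This matches the example output further down, where inserting '12 09' stores 297:00:00 (12 × 24 + 9 hours).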
**MariaDB starting with [10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/)**[MariaDB 10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/) introduced the [--mysql56-temporal-format](../server-system-variables/index#mysql56_temporal_format) option, on by default, which allows MariaDB to store TIMEs using the same low-level format MySQL 5.6 uses.
### Internal Format
In [MariaDB 10.1.2](https://mariadb.com/kb/en/mariadb-1012-release-notes/) a new temporal format was introduced from MySQL 5.6 that alters how the `TIME`, `DATETIME` and `TIMESTAMP` columns operate at lower levels. These changes allow these temporal data types to have fractional parts and negative values. You can disable this feature using the `[mysql56\_temporal\_format](../server-system-variables/index#mysql56_temporal_format)` system variable.
Tables that include `TIMESTAMP` values that were created on an older version of MariaDB or that were created while the `[mysql56\_temporal\_format](../server-system-variables/index#mysql56_temporal_format)` system variable was disabled continue to store data using the older data type format.
In order to update table columns from the older format to the newer format, execute an `[ALTER TABLE... MODIFY COLUMN](../alter-table/index#modify-column)` statement that changes the column to the \*same\* data type. This change may be needed if you want to export the table's tablespace and import it onto a server that has `mysql56_temporal_format=ON` set (see [MDEV-15225](https://jira.mariadb.org/browse/MDEV-15225)).
For instance, if you have a `TIME` column in your table:
```
SHOW VARIABLES LIKE 'mysql56_temporal_format';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| mysql56_temporal_format | ON |
+-------------------------+-------+
ALTER TABLE example_table MODIFY ts_col TIME;
```
When MariaDB executes the `[ALTER TABLE](../alter-table/index)` statement, it converts the data from the older temporal format to the newer one.
In the event that you have several tables and columns using temporal data types that you want to switch over to the new format, make sure the system variable is enabled, then perform a dump and restore using `mysqldump`. The columns using relevant temporal data types are restored using the new temporal format.
Starting from [MariaDB 10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/) columns with old temporal formats are marked with a `/* mariadb-5.3 */` comment in the output of `[SHOW CREATE TABLE](../show-create-table/index)`, `[SHOW COLUMNS](../show-columns/index)`, `[DESCRIBE](../describe/index)` statements, as well as in the `COLUMN_TYPE` column of the `[INFORMATION\_SCHEMA.COLUMNS Table](../information-schema-columns-table/index)`.
```
SHOW CREATE TABLE mariadb5312_time\G
*************************** 1. row ***************************
Table: mariadb5312_time
Create Table: CREATE TABLE `mariadb5312_time` (
`t0` time /* mariadb-5.3 */ DEFAULT NULL,
`t6` time(6) /* mariadb-5.3 */ DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1
```
Note that columns using the current format are not marked with a comment.
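Since the marker also appears in `COLUMN_TYPE`, the columns still stored in the old format can be listed with a query along these lines (a sketch; the marker is only present in [MariaDB 10.5.1](https://mariadb.com/kb/en/mariadb-1051-release-notes/) and later):

```sql
-- List all columns still using the old (pre-MySQL-5.6) temporal format
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TYPE
  FROM INFORMATION_SCHEMA.COLUMNS
 WHERE COLUMN_TYPE LIKE '%mariadb-5.3%';
```

Each column returned can then be converted with the `ALTER TABLE ... MODIFY COLUMN` technique shown above.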
Examples
--------
```
CREATE TABLE time (t TIME);
INSERT INTO time VALUES ('90:00:00'), ('800:00:00'), (800), (22), (151413), ('9:6:3'), ('12 09');
SELECT * FROM time;
+-----------+
| t |
+-----------+
| 90:00:00 |
| 800:00:00 |
| 00:08:00 |
| 00:00:22 |
| 15:14:13 |
| 09:06:03 |
| 297:00:00 |
+-----------+
```
See also
--------
* [Data Type Storage Requirements](../data-type-storage-requirements/index)
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
TIME\_MS column in INFORMATION\_SCHEMA.PROCESSLIST
==================================================
In MariaDB, an extra column `TIME_MS` has been added to the [INFORMATION\_SCHEMA.PROCESSLIST](../information-schema-processlist-table/index) table. This column shows the same information as the column '`TIME`', but in units of milliseconds with microsecond precision (the unit and precision of the `TIME` column is one second).
For details about microseconds support in MariaDB, see [microseconds in MariaDB](../microseconds-in-mariadb/index).
The value displayed in the `TIME` and `TIME_MS` columns is the period of time that the given thread has been in its current state. Thus it can be used, for example, to check how long a thread has been executing the current query, or how long it has been idle.
```
select id, time, time_ms, command, state from
information_schema.processlist, (select sleep(2)) t;
+----+------+----------+---------+-----------+
| id | time | time_ms | command | state |
+----+------+----------+---------+-----------+
| 37 | 2 | 2000.493 | Query | executing |
+----+------+----------+---------+-----------+
```
Note that, unlike in MySQL, in MariaDB the `TIME` column (and also the `TIME_MS` column) is not affected by any setting of [@TIMESTAMP](../server-system-variables/index#timestamp). This means that it can be used reliably even for threads that change `@TIMESTAMP` (such as the [replication](../replication/index) SQL thread). See also [MySQL Bug #22047](http://bugs.mysql.com/bug.php?id=22047).
As a consequence, the `TIME` column of `SHOW FULL PROCESSLIST` and `INFORMATION_SCHEMA.PROCESSLIST` cannot be used to determine whether a slave is lagging behind. Instead, use the `Seconds_Behind_Master` column in the output of [SHOW SLAVE STATUS](../show-slave-status/index).
The addition of the TIME\_MS column is based on the microsec\_process patch, developed by [Percona](http://www.percona.com/).
FOUND\_ROWS
===========
Syntax
------
```
FOUND_ROWS()
```
Description
-----------
A [SELECT](../select/index) statement may include a [LIMIT](../select/index#limit) clause to restrict the number of rows the server returns to the client. In some cases, it is desirable to know how many rows the statement would have returned without the LIMIT, but without running the statement again. To obtain this row count, include a [SQL\_CALC\_FOUND\_ROWS](../select/index#sql_calc_found_rows) option in the SELECT statement, and then invoke FOUND\_ROWS() afterwards.
You can also use FOUND\_ROWS() to obtain the number of rows returned by a [SELECT](../select/index) which does not contain a [LIMIT](../select/index#limit) clause. In this case you don't need to use the [SQL\_CALC\_FOUND\_ROWS](../select/index#sql_calc_found_rows) option. This can be useful for example in a [stored procedure](../stored-procedures/index).
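For instance, a stored procedure might use `FOUND_ROWS()` to return both a page of results and the total row count in one call (a sketch; the table, column, and procedure names are hypothetical):

```sql
DELIMITER //
CREATE PROCEDURE paged_list(IN p_offset INT, IN p_limit INT, OUT p_total BIGINT)
BEGIN
  -- Return one page of rows from a hypothetical table
  SELECT SQL_CALC_FOUND_ROWS * FROM my_table
  ORDER BY id
  LIMIT p_limit OFFSET p_offset;
  -- Number of rows the SELECT would have returned without the LIMIT
  SET p_total = FOUND_ROWS();
END //
DELIMITER ;
```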
Also, this function works with some other statements which return a result set, including [SHOW](../show/index), [DESC](../describe/index) and [HELP](../help-command/index). For [DELETE ... RETURNING](../delete/index) you should use [ROW\_COUNT()](../information-functions-row_count/index). It also works within a [prepared statement](../prepared-statements/index), or after executing a prepared statement.
Statements which don't return any results don't affect FOUND\_ROWS() - the previous value will still be returned.
**Warning:** When used after a [CALL](../call/index) statement, this function returns the number of rows selected by the last query in the procedure, not by the whole procedure.
Statements using the FOUND\_ROWS() function are not [safe for replication](../unsafe-statements-for-replication/index).
Examples
--------
```
SHOW ENGINES\G
*************************** 1. row ***************************
Engine: CSV
Support: YES
Comment: Stores tables as CSV files
Transactions: NO
XA: NO
Savepoints: NO
*************************** 2. row ***************************
Engine: MRG_MyISAM
Support: YES
Comment: Collection of identical MyISAM tables
Transactions: NO
XA: NO
Savepoints: NO
...
*************************** 8. row ***************************
Engine: PERFORMANCE_SCHEMA
Support: YES
Comment: Performance Schema
Transactions: NO
XA: NO
Savepoints: NO
8 rows in set (0.000 sec)
SELECT FOUND_ROWS();
+--------------+
| FOUND_ROWS() |
+--------------+
| 8 |
+--------------+
SELECT SQL_CALC_FOUND_ROWS * FROM tbl_name WHERE id > 100 LIMIT 10;
SELECT FOUND_ROWS();
+--------------+
| FOUND_ROWS() |
+--------------+
| 23 |
+--------------+
```
See Also
--------
* [ROW\_COUNT()](../information-functions-row_count/index)
sys\_get\_config
================
Syntax
------
```
sys.sys_get_config(name,default)
```
Description
-----------
`sys_get_config` is a [stored function](../stored-functions/index) available with the [Sys Schema](../sys-schema/index).
The function returns a configuration option value from the [sys\_config table](../sys-schema-sys_config-table/index). It takes two arguments; *name*, a configuration option name, and *default*, which is returned if the given option does not exist in the table.
Both arguments are VARCHAR(128) and can be NULL. Returns NULL if *name* is NULL, or if the given option is not found and *default* is NULL.
Examples
--------
```
SELECT sys.sys_get_config('ps_thread_trx_info.max_length',NULL);
+----------------------------------------------------------+
| sys.sys_get_config('ps_thread_trx_info.max_length',NULL) |
+----------------------------------------------------------+
| 65535 |
+----------------------------------------------------------+
```
See Also
--------
* [Sys Schema sys\_config Table](../sys-schema-sys_config-table/index)
MBROverlaps
===========
Syntax
------
```
MBROverlaps(g1,g2)
```
Description
-----------
Returns 1 or 0 to indicate whether the Minimum Bounding Rectangles of the two geometries `g1` and `g2` overlap. The term spatially overlaps is used if two geometries intersect and their intersection results in a geometry of the same dimension but not equal to either of the given geometries.
Examples
--------
```
SET @g1 = GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))');
SET @g2 = GeomFromText('Polygon((4 4,4 7,7 7,7 4,4 4))');
SELECT mbroverlaps(@g1,@g2);
+----------------------+
| mbroverlaps(@g1,@g2) |
+----------------------+
| 0 |
+----------------------+
SET @g1 = GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))');
SET @g2 = GeomFromText('Polygon((3 3,3 6,6 6,6 3,3 3))');
SELECT mbroverlaps(@g1,@g2);
+----------------------+
| mbroverlaps(@g1,@g2) |
+----------------------+
| 0 |
+----------------------+
SET @g1 = GeomFromText('Polygon((0 0,0 4,4 4,4 0,0 0))');
SET @g2 = GeomFromText('Polygon((3 3,3 6,6 6,6 3,3 3))');
SELECT mbroverlaps(@g1,@g2);
+----------------------+
| mbroverlaps(@g1,@g2) |
+----------------------+
| 1 |
+----------------------+
```
MariaDB ColumnStore software upgrade 1.1.6 GA to 1.2.2 GA
=========================================================
MariaDB ColumnStore software upgrade 1.1.6 GA to 1.2.2 GA
---------------------------------------------------------
This upgrade also applies to 1.2.0 Alpha to 1.2.2 GA upgrades.
### Changes in 1.2.1 and later
#### Non-distributed is the default distribution mode in postConfigure
The default distribution mode has changed from 'distributed' to 'non-distributed'. During an upgrade, however, the default is to use the distribution mode used in the original installation. The options '-d' and '-n' can always be used to override the default.
#### Non-root user sudo setup
Root-level permissions are no longer required to install or upgrade ColumnStore for some types of installations. Installations requiring some level of sudo access, and the instructions, are listed here: [https://mariadb.com/kb/en/library/preparing-for-columnstore-installation-121/#update-sudo-configuration-if-needed-by-root-user](../library/preparing-for-columnstore-installation-121/index#update-sudo-configuration-if-needed-by-root-user)
#### Running the mysql\_upgrade script
As part of the upgrade process to 1.2.2, the user is required to run the mysql\_upgrade script on all of the following nodes.
* All User Modules on a system configured with separate User and Performance Modules
* All Performance Modules on a system configured with separate User and Performance Modules and Local Query Feature is enabled
* All Performance Modules on a system configured with combined User and Performance Modules
mysql\_upgrade should be run once the upgrade has been completed.
This is an example of how it is run on a root user install:
```
/usr/local/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=/usr/local/mariadb/columnstore/mysql/my.cnf --force
```
This is an example of how it is run on a non-root user install, assuming ColumnStore is installed under the user's home directory:
```
$HOME/mariadb/columnstore/mysql/bin/mysql_upgrade --defaults-file=$HOME/mariadb/columnstore/mysql/my.cnf --force
```
### Setup
In this section, we will refer to the directory ColumnStore is installed in as <CSROOT>. If you installed the RPM or DEB package, then your <CSROOT> will be /usr/local. If you installed it from the tarball, <CSROOT> will be where you unpacked it.
#### Columnstore.xml / my.cnf
Configuration changes made manually are not automatically carried forward during the upgrade. These modifications will need to be made again manually after the upgrade is complete.
After the upgrade process the configuration files will be saved at:
* <CSROOT>/mariadb/columnstore/etc/Columnstore.xml.rpmsave
* <CSROOT>/mariadb/columnstore/mysql/my.cnf.rpmsave
#### MariaDB root user database password
If you have specified a root user database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user home directory with 600 file permissions with the following content (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
### Choosing the type of upgrade
Note, softlinks may cause a problem during the upgrade if you use the RPM or DEB packages. If you have linked a directory above /usr/local/mariadb/columnstore, the softlinks will be deleted and the upgrade will fail. In that case you will need to upgrade using the binary tarball instead. If you have only linked the data directories (i.e. /usr/local/mariadb/columnstore/data\*), the RPM/DEB package upgrade will work.
#### Root User Installs
##### Upgrading MariaDB ColumnStore using the tarball of RPMs (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Download the package mariadb-columnstore-1.2.2-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.**
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
```
# tar -zxf mariadb-columnstore-1.2.2-1-centos#.x86_64.rpm.tar.gz
```
* Uninstall the old packages, then install the new packages. The MariaDB ColumnStore software will be installed in /usr/local/.
```
# rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
# rpm -ivh mariadb-columnstore-*1.2.2*rpm
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using RPM Package Repositories (non-distributed mode)
The system can be upgraded when it was previously installed from the Package Repositories. This will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'yum' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# yum remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# yum --enablerepo=mariadb-columnstore clean metadata
# yum install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /usr/local directory mariadb-columnstore-1.2.2-1.x86\_64.bin.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# /usr/local/mariadb/columnstore/bin/pre-uninstall
```
* Unpack the tarball in the /usr/local/ directory.
```
# tar -zxvf mariadb-columnstore-1.2.2-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# /usr/local/mariadb/columnstore/bin/post-install
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using the DEB tarball (distributed mode)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package into the /root directory mariadb-columnstore-1.2.2-1.amd64.deb.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which contains DEBs.
```
# tar -zxf mariadb-columnstore-1.2.2-1.amd64.deb.tar.gz
```
* Remove and install all MariaDB ColumnStore debs
```
# cd /root/
# dpkg -r $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg --install mariadb-columnstore-*1.2.2-1*deb
```
* Run postConfigure using the upgrade option
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/index#running-the-mysql_upgrade-script)
##### Upgrading MariaDB ColumnStore using DEB Package Repositories (non-distributed mode)
The system can be upgraded when it was previously installed from the Package Repositories. This will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'apt-get' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
**Shutdown the MariaDB ColumnStore system:**
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# apt-get remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# apt-get update
# apt-get install mariadb-columnstore*
```
NOTE: On all modules except for PM1, start the columnstore service
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
* Run the mysql\_upgrade script on the nodes documented above for a root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/index#running-the-mysql_upgrade-script)
#### Non-Root User Installs
##### Upgrade MariaDB ColumnStore from the binary tarball without sudo access (non-distributed mode)
This upgrade method applies when root/sudo access is not an option.
The uninstall script for 1.1.6 requires root access to perform some operations. These operations are the following:
* removing /etc/profile.d/columnstore{Alias,Env}.sh to remove the aliases and environment variables from all users
* running '<CSROOT>/mariadb/columnstore/bin/syslogSetup.sh uninstall' to remove ColumnStore from the logging system
* removing the columnstore startup script
* removing /etc/ld.so.conf.d/columnstore.conf to remove the ColumnStore directories from the ld library search path
Because you are upgrading ColumnStore rather than uninstalling it, these operations are not necessary. If at some point you wish to uninstall it, you (or your sysadmin) will have to perform those operations by hand.
The upgrade instructions:
* Download the binary tarball to the current installation location on all nodes. See <https://downloads.mariadb.com/ColumnStore/>
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Copy Columnstore.xml to Columnstore.xml.rpmsave, and my.cnf to my.cnf.rpmsave
```
$ cp <CSROOT>/mariadb/columnstore/etc/Columnstore{.xml,.xml.rpmsave}
$ cp <CSROOT>/mariadb/columnstore/mysql/my{.cnf,.cnf.rpmsave}
```
* On all nodes, untar the new files in the same location as the old ones
```
$ tar zxf columnstore-1.2.2-1.x86_64.bin.tar.gz
```
* On all nodes, run post-install, specifying where ColumnStore is installed
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* On all nodes except for PM1, start the columnstore service
```
$ <CSROOT>/mariadb/columnstore/bin/columnstore start
```
* On PM1 only, run postConfigure, specifying the upgrade, non-distributed installation mode, and the location of the installation
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -n -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/index#running-the-mysql_upgrade-script)
##### Upgrade MariaDB ColumnStore from the binary tarball (distributed mode)
Upgrade MariaDB ColumnStore as user USER on the server designated as PM1:
* Download the package into the user's home directory mariadb-columnstore-1.2.2-1.x86\_64.bin.tar.gz
* Shutdown the MariaDB ColumnStore system:
```
$ mcsadmin shutdownsystem y
```
* Run the pre-uninstall script; this will require sudo access as you are running a script from 1.1.6.
```
$ <CSROOT>/mariadb/columnstore/bin/pre-uninstall --installdir=<CSROOT>/mariadb/columnstore
```
* Make the sudo changes as noted at the beginning of this document
* Unpack the tarball in the same place as the original installation
```
$ tar -zxvf mariadb-columnstore-1.2.2-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
$ <CSROOT>/mariadb/columnstore/bin/post-install --installdir=<CSROOT>/mariadb/columnstore
```
* Run postConfigure using the upgrade option
```
$ <CSROOT>/mariadb/columnstore/bin/postConfigure -u -i <CSROOT>/mariadb/columnstore
```
* Run the mysql\_upgrade script on the nodes documented above for a non-root user install
[https://mariadb.com/kb/en/library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/#running-the-mysql\_upgrade-script](../library/mariadb-columnstore-software-upgrade-116-ga-to-122-ga/index#running-the-mysql_upgrade-script)
CREATE LOGFILE GROUP
====================
The `CREATE LOGFILE GROUP` statement is not supported by MariaDB. It was originally inherited from MySQL NDB Cluster. See [MDEV-19295](https://jira.mariadb.org/browse/MDEV-19295) for more information.
Sys Schema Stored Procedures
=============================
This article is currently incomplete.
The following [stored procedures](../stored-procedures/index) are available in the [Sys Schema](../sys-schema/index).
| Title | Description |
| --- | --- |
| [create\_synonym\_db](../create_synonym_db/index) | Takes a source db and create a synonym db with views that point to all of t... |
| [statement\_performance\_analyzer](../statement_performance_analyzer/index) | Returns a report on running statements. |
| [table\_exists](../table_exists/index) | Given a database and table name, returns the table type. |
ColumnStore remote bulk data import: mcsimport
==============================================
Overview
--------
mcsimport is a high-speed bulk load utility that imports data into ColumnStore tables in a fast and efficient manner utilizing ColumnStore's [Bulk Write SDK](../columnstore-bulk-write-sdk/index). Unlike cpimport, mcsimport was designed to be executed from a remote machine that doesn't necessarily need to be a [UM](../columnstore-user-module/index) or [PM](../columnstore-performance-module/index). mcsimport can furthermore be run on both Windows and Linux operating systems.
Similar to cpimport, mcsimport accepts as input any flat file that contains a delimiter between fields of data (i.e. columns in a table). The default delimiter is a comma (‘**,**’), but other delimiters such as pipes may also be used. By default mcsimport expects the data values to be in the same order as in the create table statement, and a date format of ‘*YYYY-MM-DD HH:MM:SS*’. These settings can be overridden in a mapping file, which allows customizable mappings of input columns to ColumnStore columns, individual input-column-specific date formats in the [strptime](http://pubs.opengroup.org/onlinepubs/9699919799/functions/strptime.html) format, and default values for non-mapped target columns.
It is important to note that:
* The bulk loads are an append operation to a table so they allow existing data to be read and remain unaffected during the process.
* The bulk loads do not write their data operations to the transaction log; they are not transactional in nature but are considered an atomic operation at this time. Information markers, however, are placed in the transaction log so the DBA is aware that a bulk operation did occur.
* Upon completion of the load operation, a high water mark in each column file is moved in an atomic operation that allows for any subsequent queries to read the newly loaded data. This append operation provides for consistent read but does not incur the overhead of logging the data.
There are three primary steps to using the mcsimport utility:
1. Create the Columnstore.xml configuration file that holds the information of the ColumnStore instance to connect to.
2. Optionally create a mapping file that defines the mapping between input file and target ColumnStore table.
3. Run the mcsimport utility to perform the data import.
Installation
------------
On Linux systems mcsimport requires the installation of the ColumnStore Bulk Write SDK; on Windows systems the Bulk Write SDK is bundled with mcsimport and doesn't require a separate installation.
### RHEL, CentOS, Debian / Ubuntu Repositories
mcsimport can also be installed from our MariaDB ColumnStore Tools repository. Detailed information can be found [here](../installing-mariadb-ax-mariadb-columnstore-from-the-package-repositories-122/index#mariadb-columnstore-tools-package).
### RHEL / CentOS 7 Package
First, install the Bulk Write SDK and dependencies according to following [documentation](../columnstore-bulk-write-sdk/index#rhel-centos-7-package).
Afterwards, you can install mcsimport via:
```
sudo rpm -ivh mariadb-columnstore-tools*.rpm
```
### Ubuntu 16 / Debian 9 Package
First, install the Bulk Write SDK and dependencies according to following [documentation](../columnstore-bulk-write-sdk/index#ubuntu-16-debian-9-package).
Afterwards, you can install mcsimport via:
```
sudo dpkg -i mariadb-columnstore-tools*.deb
```
### Debian 8 Package
First, install the Bulk Write SDK and dependencies according to following [documentation](../columnstore-bulk-write-sdk/index#debian-8-package).
Afterwards, you can install mcsimport via:
```
sudo dpkg -i mariadb-columnstore-tools*.deb
```
### Windows 10 Package
To install mcsimport on Windows 10 you simply have to follow the installation wizard of the installer.
<http://downloads.mariadb.com/ColumnStore-Tools/latest/winx64-packages/>
### ColumnStore server configuration
As mcsimport is using the [Bulk Write SDK](../columnstore-bulk-write-sdk/index) for the injection, all ports required by the ColumnStore Bulk write SDK need to be accessible from the client executing mcsimport at the target ColumnStore server. These are in particular the TCP ports 8616, 8630, and 8800.
Syntax
------
```
mcsimport database table input_file [-m mapping_file] [-c Columnstore.xml] [-d delimiter]
[-n null_option] [-df date_format] [-default_non_mapped] [-E enclose_by_character]
[-C escape_character] [-rc read_cache_size] [-header] [-ignore_malformed_csv] [-err_log]
```
### -m mapping\_file
The mapping file is used to define the mapping between source csv columns and target ColumnStore columns, to define column-specific input date formats, and to set default values for ignored target columns. It follows the YAML 1.2 standard and can address the source csv columns implicitly or explicitly.
Source csv columns can only be identified by their position in the csv file, starting with 0; target ColumnStore columns can be identified either by their position or by their name.
The following snippet is an example of an implicit mapping file.
```
- column:
target: 0
- column:
- ignore
- column:
target: id
- column:
target: occurred
format: "%d %b %Y %H:%M:%S"
- target: 2
value: default
- target: salary
value: 20000
```
It defines that the first csv column (#0) is mapped to the first column in the ColumnStore table, that the second csv column (#1) is ignored and won't be injected into the target table, that the third csv column (#2) is mapped to the ColumnStore column with the name *id*, and that the fourth csv column (#3) is mapped to the ColumnStore column with the name *occurred* using a column-specific date format (defined in the [strptime](http://pubs.opengroup.org/onlinepubs/9699919799/functions/strptime.html) format). The mapping file further defines that the third ColumnStore target column (#2) will use its default value, and that the ColumnStore target column with the name *salary* will be set to 20000 for all injected rows.
Explicit mapping is also possible.
```
- column: 0
target: id
- column: 4
target: salary
- target: timestamp
value: 2018-09-13 12:00:00
```
Using this variant, the first csv source column (#0) is mapped to the target ColumnStore column with the name *id*, and the fifth csv source column (#4) is mapped to the target ColumnStore column with the name *salary*. It further defines that the target ColumnStore column *timestamp* uses a default value of *2018-09-13 12:00:00* for the injection.
### -c Columnstore.xml
As mcsimport is built upon ColumnStore's [Bulk Write SDK](../columnstore-bulk-write-sdk/index), it inherits its methods to connect to ColumnStore instances to ingest data. By default mcsimport uses the standard configuration file */usr/local/mariadb/columnstore/etc/Columnstore.xml*, or, if set, the one defined through the environment variable *COLUMNSTORE\_INSTALL\_DIR*, to connect to the remote ColumnStore instance. Individual configurations can be defined through the command line parameter -c. Instructions on how to prepare Columnstore.xml for remote ingestion can be found [here](../columnstore-bulk-write-sdk/index#environment-configuration).
### -d delimiter
The default delimiter of the CSV input file is a comma (‘**,**’) and can be changed through the command line parameter -d. Only single-character delimiters are currently supported.
### -df date\_format
By default mcsimport uses *YYYY-MM-DD HH:MM:SS* as input date format. An individual global date format can be specified via the command line parameter -df using the [strptime](http://pubs.opengroup.org/onlinepubs/9699919799/functions/strptime.html) format. Column specific input date formats can be defined in the mapping file and overwrite the global date format.
### -n null\_option
By default mcsimport treats input strings with the value "NULL" as data. If the null\_option is set to 1 strings with the value "NULL" are treated as *NULL* values.
### -default\_non\_mapped
mcsimport needs to inject values for all ColumnStore columns of the target table. In order to use the ColumnStore column's default values for all non-mapped target columns the global parameter *default\_non\_mapped* can be used. Target column specific default values in the mapping file overwrite the global default values of this parameter.
### -E enclose\_by\_character
By default mcsimport uses the double-quote character **"** as enclosing character. It can be changed through the command line parameter -E. The enclosing character's length is limited to 1.
### -C escape\_character
By default mcsimport uses the double-quote character **"** as escaping character. It can be changed through the command line parameter -C. The escaping character's length is limited to 1.
### -rc read\_cache\_size
By default mcsimport uses a read cache size of 20,971,520 (20 MiB) to cache chunks of the input file in RAM. It can be changed through the command line parameter -rc. A minimum cache size of 1,048,576 (1 MiB) is required.
### -header
Set this flag to treat the first line of the input CSV file as a header and ignore it. (It won't be injected.)
### -ignore\_malformed\_csv
By default mcsimport rolls back the entire bulk import if a malformed csv entry is found. With this option mcsimport ignores detected malformed csv entries and continues with the injection.
### -err\_log
With this option an optional error log file is written which states truncated, saturated, and invalid values during the injection. If the command line parameter *-ignore\_malformed\_csv* is chosen, it also states which lines were ignored.
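Putting the error-handling options together, an invocation might look like the following sketch. The names and paths are illustrative, and the location of the resulting error log file is left to mcsimport's defaults:

```
mcsimport test_db sample_table /tmp/sample.csv -ignore_malformed_csv -err_log
```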
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
UNINSTALL SONAME
================
Syntax
------
```
UNINSTALL SONAME [IF EXISTS] 'plugin_library'
```
Description
-----------
This statement is a variant of [UNINSTALL PLUGIN](../uninstall-plugin/index) statement, that removes all [plugins](../mariadb-plugins/index) belonging to a specified `plugin_library`. See [UNINSTALL PLUGIN](../uninstall-plugin/index) for details.
`plugin_library` is the name of the shared library that contains the plugin code. The file name extension (for example, `libmyplugin.so` or `libmyplugin.dll`) can be omitted (which makes the statement look the same on all architectures).
To use `UNINSTALL SONAME`, you must have the [DELETE privilege](../grant/index) for the `mysql.plugin` table.
#### IF EXISTS
**MariaDB starting with [10.4.0](https://mariadb.com/kb/en/mariadb-1040-release-notes/)**
If the `IF EXISTS` clause is used, MariaDB will return a note instead of an error if the plugin library does not exist. See [SHOW WARNINGS](../show-warnings/index).
Examples
--------
To uninstall the XtraDB plugin and all of its `information_schema` tables with one statement, use
```
UNINSTALL SONAME 'ha_xtradb';
```
From [MariaDB 10.4.0](https://mariadb.com/kb/en/mariadb-1040-release-notes/):
```
UNINSTALL SONAME IF EXISTS 'ha_example';
Query OK, 0 rows affected (0.099 sec)
UNINSTALL SONAME IF EXISTS 'ha_example';
Query OK, 0 rows affected, 1 warning (0.000 sec)
SHOW WARNINGS;
+-------+------+-------------------------------------+
| Level | Code | Message |
+-------+------+-------------------------------------+
| Note | 1305 | SONAME ha_example.so does not exist |
+-------+------+-------------------------------------+
```
See Also
--------
* [INSTALL SONAME](../install-soname/index)
* [SHOW PLUGINS](../show-plugins/index)
* [INSTALL PLUGIN](../install-plugin/index)
* [UNINSTALL PLUGIN](../uninstall-plugin/index)
* [INFORMATION\_SCHEMA.PLUGINS Table](../plugins-table-information-schema/index)
* [mysql\_plugin](../mysql_plugin/index)
* [List of Plugins](../list-of-plugins/index)
GeometryType
============
A synonym for [ST\_GeometryType](../st_geometrytype/index).
Pausing mysql-test-run.pl
=========================
Sometimes you need to work when your computer is busy running [mysql-test-run.pl](../mysql-test-runpl-options/index). The mysql-test-run.pl script allows you to stop it temporarily so you can use your computer and then restart the tests when you're ready.
There are two ways to enable this:
1. **Command-line:** The `--stop-file` and `--stop-keep-alive` options.
2. **Environment Variables:** If you are calling mysql-test-run.pl indirectly (i.e., from a script or program such as buildbot) you can set `MTR_STOP_FILE` and `MTR_STOP_KEEP_ALIVE`.
### Keep Alive
If you plan on using this feature with other programs, such as buildbot, you should set the `MTR_STOP_KEEP_ALIVE` environment variable or the `--stop-keep-alive` command-line option with a value in seconds. This will make the script print messages to whatever program is calling mysql-test-run.pl at the interval you set to prevent timeouts.
If you are calling mysql-test-run.pl directly, you do not need to specify a timeout.
### The mysql-test-run Stop File
The stop file is a temporary file that you create on your system when you want to pause the execution of mysql-test-run. When enabled via the command-line or environment variable options, mysql-test-run will periodically check for the existence of the file and if it exists it will stop until the file is no longer present.
### Examples
Command-line:
```
mysql-test-run.pl --stop-file="/path/to/stop/file" --stop-keep-alive=120
```
Environment Variables:
```
export MTR_STOP_FILE="/path/to/stop/file"
export MTR_STOP_KEEP_ALIVE=120
mysql-test-run.pl
```
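Pausing and resuming then reduces to creating and removing the stop file. A minimal sketch, using an example path under `/tmp`:

```shell
STOP_FILE=/tmp/mtr_stop_file

# Pause: mysql-test-run.pl stops once it notices this file exists
touch "$STOP_FILE"
[ -e "$STOP_FILE" ] && echo "paused"

# Resume: delete the file and the test run continues
rm "$STOP_FILE"
[ ! -e "$STOP_FILE" ] && echo "resumed"
```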
### Fixes
The following mysql-test-run bugs have been fixed in [MariaDB 5.1](../what-is-mariadb-51/index):
* Windows: mysql-test-run --log-error fixed to not add --console.
* mysql-test-run sometimes terminated mysqld early, causing loss of memory leak error reports from Valgrind and GCov test coverage output
Information Schema INNODB\_CMP and INNODB\_CMP\_RESET Tables
============================================================
The `INNODB_CMP` and `INNODB_CMP_RESET` tables contain status information on compression operations related to [compressed XtraDB/InnoDB tables](../innodb-storage-formats/index#compressed).
The `[PROCESS](../grant/index#global-privileges)` privilege is required to query this table.
These tables contain the following columns:
| Column Name | Description |
| --- | --- |
| `PAGE_SIZE` | Compressed page size, in bytes. This value is unique in the table; other values are totals which refer to pages of this size. |
| `COMPRESS_OPS` | How many times a page of the size `PAGE_SIZE` has been compressed. This happens when a new page is created because the compression log runs out of space. This value includes both successful operations and *compression failures*. |
| `COMPRESS_OPS_OK` | How many times a page of the size `PAGE_SIZE` has been successfully compressed. This value should be as close as possible to `COMPRESS_OPS`. If it is notably lower, either avoid compressing some tables, or increase the `KEY_BLOCK_SIZE` for some compressed tables. |
| `COMPRESS_TIME` | Time (in seconds) spent to compress pages of the size `PAGE_SIZE`. This value includes time spent in *compression failures*. |
| `UNCOMPRESS_OPS` | How many times a page of the size `PAGE_SIZE` has been uncompressed. This happens when an uncompressed version of a page is created in the buffer pool, or when a *compression failure* occurs. |
| `UNCOMPRESS_TIME` | Time (in seconds) spent to uncompress pages of the size `PAGE_SIZE`. |
These tables can be used to measure the effectiveness of XtraDB/InnoDB table compression. When you have to decide on a value for `KEY_BLOCK_SIZE`, you can create more than one version of the table (one for each candidate value) and run a realistic workload on them. Then, these tables can be used to compare how the operations performed with different page sizes.
`INNODB_CMP` and `INNODB_CMP_RESET` have the same columns and always contain the same values, but when `INNODB_CMP_RESET` is queried, both the tables are cleared. `INNODB_CMP_RESET` can be used, for example, if a script periodically logs compression performance over the last period of time. `INNODB_CMP` can be used to see the cumulative statistics.
Examples
--------
```
SELECT * FROM information_schema.INNODB_CMP\G
**************************** 1. row *****************************
page_size: 1024
compress_ops: 0
compress_ops_ok: 0
compress_time: 0
uncompress_ops: 0
uncompress_time: 0
...
```
See Also
--------
Other tables that can be used to monitor XtraDB/InnoDB compressed tables:
* [INNODB\_CMP\_PER\_INDEX and INNODB\_CMP\_PER\_INDEX\_RESET](../information_schemainnodb_cmp_per_index-and-innodb_cmp_per_index_reset-table/index)
* [INNODB\_CMPMEM and INNODB\_CMPMEM\_RESET](../information_schemainnodb_cmpmem-and-innodb_cmpmem_reset-tables/index)
Spider Table Parameters
=======================
When a table uses the [Spider](../spider/index) storage engine, the following Spider table parameters can be set in the `COMMENT` clause of the [CREATE TABLE](../create-table/index) statement. Many Spider table parameters have corresponding system variables, so they can be set for all Spider tables on the node. For additional information, see the [Spider System Variables](../spider-server-system-variables/index) page.
#### `access_balances`
* **Description:** Connection load balancing integer weight.
* **Default Table Value:** `0`
* **DSN Parameter Name:** `abl`
#### `active_link_count`
* **Description:** Number of active remote servers, for use in load balancing read connections
* **Default Table Value:** `all backends`
* **DSN Parameter Name:** `alc`
#### `casual_read`
* **Description:**
* **Default Table Value:**
* **DSN Parameter Name:**
* **Introduced:** Spider 3.2
#### `database`
* **Description:** Database name for reference table that exists on remote backend server.
* **Default Table Value:** `local table database`
* **DSN Parameter Name:** `database`
#### `default_file`
* **Description:** Configuration file used when connecting to remote servers. When the `[default\_group](#default_group)` table variable is set, this variable defaults to the values of the `--defaults-extra-file` or `--defaults-file` options. When the `[default\_group](#default_group)` table variable is not set, it defaults to `none`.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `dff`
#### `default_group`
* **Description:** Group name in configuration file used when connecting to remote servers.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `dfg`
#### `delete_all_rows_type`
* **Description:**
* **Default Table Value:**
* **DSN Parameter Name:**
* **Introduced:** Spider 3.2
#### `host`
* **Description:** Host name of remote server.
* **Default Table Value:** `localhost`
* **DSN Parameter Name:** `host`
#### `idx000`
* **Description:** When using an index on Spider tables for searching, Spider uses this hint to search the remote table. The remote table index is related to the Spider table index by this hint. The number represented by `000` is the index ID, which is the number of the index shown by the `[SHOW CREATE TABLE](../show-create-table/index)` statement. `000` is the Primary Key. For instance, `idx000 "force index(PRIMARY)"` (in abbreviated format `idx000 "f PRIMARY"`).
+ `f` force index
+ `u` use index
+ `ig` ignore index
* **Default Table Value:** `none`
#### `internal_delayed`
* **Description:** Whether to transmit existence of delay to remote servers when executing an `[INSERT DELAYED](../insert-delayed/index)` statement on local server.
+ `0` Doesn't transmit.
+ `1` Transmits.
* **Default Table Value:** `0`
* **DSN Parameter Name:** `idl`
#### `link_status`
* **Description:** Change status of the remote backend server link.
+ `0` Doesn't change status.
+ `1` Changes status to `OK`.
+ `2` Changes status to `RECOVERY`.
+ `3` Changes status to no more in group communication.
* **Default Table Value:** `0`
* **DSN Parameter Name:** `lst`
#### `monitoring_bg_interval`
* **Description:** Interval of background monitoring in microseconds.
* **Default Table Value:** `10000000`
* **DSN Parameter Name:** `mbi`
#### `monitoring_bg_kind`
* **Description:** Kind of background monitoring to use.
+ `0` Disables background monitoring.
+ `1` Monitors connection state.
+ `2` Monitors state of table without `WHERE` clause.
+ `3` Monitors state of table with `WHERE` clause (currently unsupported).
* **Default Table Value:** `0`
* **DSN Parameter Name:** `mbk`
#### `monitoring_kind`
* **Description:** Kind of monitoring.
+ `0` Disables monitoring
+ `1` Monitors connection state.
+ `2` Monitors state of table without `WHERE` clause.
+ `3` Monitors state of table with `WHERE` clause (currently unsupported).
* **Default Table Value:** `0`
* **DSN Parameter Name:** `mkd`
#### `monitoring_limit`
* **Description:** Limits the number of records in the monitoring table. This is only effective when Spider monitors the state of a table, which occurs when the `[monitoring\_kind](#monitoring_kind)` table variable is set to a value greater than `1`.
* **Default Table Value:** `1`
* **Range:** `0` upwards
* **DSN Parameter Name:** `mlt`
#### `monitoring_server_id`
* **Description:** Preferred monitoring `@@server_id` for each backend failure. You can use this to geo-localize backend servers and set the first Spider monitoring node to contact for failover. In the event that this monitor fails, other monitoring nodes are contacted. For multiple copy backends, you can set a lazy configuration with a single MSI instead of one per backend.
* **Default Table Value:** `server_id`
* **DSN Parameter Name:** `msi`
#### `password`
* **Description:** Remote server password.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `password`
#### `port`
* **Description:** Remote server port.
* **Default Table Value:** `3306`
* **DSN Parameter Name:** `port`
#### `priority`
* **Description:** Priority. Used to define the order of execution. For instance, Spider uses priority when deciding the order in which to lock tables on a remote server.
* **Default Table Value:** `1000000`
* **DSN Parameter Name:** `prt`
#### `query_cache`
* **Description:** Passes the option for the [Query Cache](../query-cache/index) when issuing `[SELECT](../select/index)` statements to the remote server.
+ `0` No option passed.
+ `1` Passes the `[SQL\_CACHE](../optimizer-hints/index#sql_cache-sql_no_cache)` option.
+ `2` Passes the `[SQL\_NO\_CACHE](../optimizer-hints/index#sql_cache-sql_no_cache)` option.
* **Default Table Value:** `0`
* **DSN Parameter Name:** `qch`
#### `read_rate`
* **Description:** Rate used to calculate the amount of time Spider requires when executing index scans.
* **Default Table Value:** `0.0002`
* **DSN Parameter Name:** `rrt`
#### `scan_rate`
* **Description:** Rate used to calculate the amount of time Spider requires when scanning tables.
* **Default Table Value:** `0.0001`
* **DSN Parameter Name:** `srt`
#### `server`
* **Description:** Server name. Used when generating connection information with `[CREATE SERVER](../create-server/index)` statements.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `srv`
#### `socket`
* **Description:** Remote server socket.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `socket`
#### `ssl_ca`
* **Description:** Path to the Certificate Authority file.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `sca`
#### `ssl_capath`
* **Description:** Path to directory containing trusted TLS CA certificates in PEM format.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `scp`
#### `ssl_cert`
* **Description:** Path to the certificate file.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `scr`
#### `ssl_cipher`
* **Description:** List of allowed ciphers to use with [TLS encryption](../secure-connections-overview/index).
* **Default Table Value:** `none`
* **DSN Parameter Name:** `sch`
#### `ssl_key`
* **Description:** Path to the key file.
* **Default Table Value:** `none`
* **DSN Parameter Name:** `sky`
#### `ssl_verify_server_cert`
* **Description:** Enables verification of the server's Common Name value in the certificate against the host name used when connecting to the server.
+ `0` Disables verification.
+ `1` Enables verification.
* **Default Table Value:** `0`
* **DSN Parameter Name:** `svc`
#### `table`
* **Description:** Destination table name.
* **Default Table Value:** `Same table name`
* **DSN Parameter Name:** `tbl`
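To illustrate how these parameters fit together, here is a sketch of a Spider table definition setting several of them in the `COMMENT` clause. The host, database, table, and password values are assumptions for illustration, not taken from this page:

```
CREATE TABLE orders_spider (
  id INT NOT NULL,
  amount DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE=Spider
COMMENT='host "backend1", port "3306", database "sales", table "orders", password "secret"';
```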
SHOW CREATE TABLE
=================
Syntax
------
```
SHOW CREATE TABLE tbl_name
```
Description
-----------
Shows the [CREATE TABLE](../create-table/index) statement that created the given table. The statement requires the [SELECT privilege](../select/index) for the table. This statement also works with [views](../views/index) and [SEQUENCE](../create-sequence/index).
`SHOW CREATE TABLE` quotes table and column names according to the value of the [sql\_quote\_show\_create](../server-system-variables/index#sql_quote_show_create) server system variable.
Certain [SQL\_MODE](../sql-mode/index) values can result in parts of the original CREATE statement not being included in the output. MariaDB-specific table options, column options, and index options are not included in the output of this statement if the [NO\_TABLE\_OPTIONS](../sql-mode/index#no_table_options), [NO\_FIELD\_OPTIONS](../sql-mode/index#no_field_options) and [NO\_KEY\_OPTIONS](../sql-mode/index#no_key_options) [SQL\_MODE](../sql-mode/index) flags are used. All MariaDB-specific table attributes are also not shown when a non-MariaDB/MySQL emulation mode is used, which includes [ANSI](../sql-mode/index#ansi), [DB2](../sql-mode/index#db2), [POSTGRESQL](../sql-mode/index#postgresql), [MSSQL](../sql-mode/index#mssql), [MAXDB](../sql-mode/index#maxdb) or [ORACLE](../sql-mode/index#oracle).
Invalid table options, column options and index options are normally commented out (note, that it is possible to create a table with invalid options, by altering a table of a different engine, where these options were valid). To have them uncommented, enable the [IGNORE\_BAD\_TABLE\_OPTIONS](../sql-mode/index#ignore_bad_table_options) [SQL\_MODE](../sql-mode/index). Remember that replaying a [CREATE TABLE](../create-table/index) statement with uncommented invalid options will fail with an error, unless the [IGNORE\_BAD\_TABLE\_OPTIONS](../sql-mode/index#ignore_bad_table_options) [SQL\_MODE](../sql-mode/index) is in effect.
Note that `SHOW CREATE TABLE` is not meant to provide metadata about a table. It provides information about how the table was declared, but the real table structure could differ a bit. For example, if an index has been declared as `HASH`, the `CREATE TABLE` statement returned by `SHOW CREATE TABLE` will declare that index as `HASH`; however, it is possible that the index is in fact a `BTREE`, because the storage engine does not support `HASH`.
**MariaDB starting with [10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/)**[MariaDB 10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/) permits [TEXT](../text/index) and [BLOB](../blob/index) data types to be assigned a [DEFAULT](../create-table/index#default) value. As a result, from [MariaDB 10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/), `SHOW CREATE TABLE` will append a `DEFAULT NULL` to nullable TEXT or BLOB fields if no specific default is provided.
**MariaDB starting with [10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/)**From [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/), numbers are no longer quoted in the `DEFAULT` clause in `SHOW CREATE` statement. Previously, MariaDB quoted numbers.
### Index Order
Indexes are sorted and displayed in the following order, which may differ from the order of the CREATE TABLE statement.
* PRIMARY KEY
* UNIQUE keys where all columns are NOT NULL
* UNIQUE keys that don't contain partial segments
* Other UNIQUE keys
* LONG UNIQUE keys
* Normal keys
* Fulltext keys
See sql/sql\_table.cc for details.
Examples
--------
```
SHOW CREATE TABLE t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE `t` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`s` char(60) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
With [sql\_quote\_show\_create](../server-system-variables/index#sql_quote_show_create) off:
```
SHOW CREATE TABLE t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE t (
id int(11) NOT NULL AUTO_INCREMENT,
s char(60) DEFAULT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
Unquoted numeric DEFAULTs, from [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/):
```
CREATE TABLE td (link TINYINT DEFAULT 1);
SHOW CREATE TABLE td\G
*************************** 1. row ***************************
Table: td
Create Table: CREATE TABLE `td` (
`link` tinyint(4) DEFAULT 1
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
Quoted numeric DEFAULTs, until [MariaDB 10.2.1](https://mariadb.com/kb/en/mariadb-1021-release-notes/):
```
CREATE TABLE td (link TINYINT DEFAULT 1);
SHOW CREATE TABLE td\G
*************************** 1. row ***************************
Table: td
Create Table: CREATE TABLE `td` (
`link` tinyint(4) DEFAULT '1'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
[SQL\_MODE](../sql-mode/index) impacting the output:
```
SELECT @@sql_mode;
+-------------------------------------------------------------------------------------------+
| @@sql_mode |
+-------------------------------------------------------------------------------------------+
| STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+-------------------------------------------------------------------------------------------+
CREATE TABLE `t1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`msg` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
;
SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`msg` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
SET SQL_MODE=ORACLE;
SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE "t1" (
"id" int(11) NOT NULL,
"msg" varchar(100) DEFAULT NULL,
PRIMARY KEY ("id")
```
See Also
--------
* [SHOW CREATE SEQUENCE](../show-create-sequence/index)
* [SHOW CREATE VIEW](../show-create-view/index)
MAX
===
Syntax
------
```
MAX([DISTINCT] expr)
```
Description
-----------
Returns the largest, or maximum, value of *`expr`*. `MAX()` can also take a string argument in which case it returns the maximum string value. The `DISTINCT` keyword can be used to find the maximum of the distinct values of *`expr`*, however, this produces the same result as omitting `DISTINCT`.
Note that [SET](../set/index) and [ENUM](../enum/index) fields are currently compared by their string value rather than their relative position in the set, so MAX() may produce a different highest result than ORDER BY DESC.
It is an [aggregate function](../aggregate-functions/index), and so can be used with the [GROUP BY](../group-by/index) clause.
From [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/), MAX() can be used as a [window function](../window-functions/index).
`MAX()` returns `NULL` if there were no matching rows.
Examples
--------
```
CREATE TABLE student (name CHAR(10), test CHAR(10), score TINYINT);
INSERT INTO student VALUES
('Chun', 'SQL', 75), ('Chun', 'Tuning', 73),
('Esben', 'SQL', 43), ('Esben', 'Tuning', 31),
('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88),
('Tatiana', 'SQL', 87), ('Tatiana', 'Tuning', 83);
SELECT name, MAX(score) FROM student GROUP BY name;
+---------+------------+
| name | MAX(score) |
+---------+------------+
| Chun | 75 |
| Esben | 43 |
| Kaolin | 88 |
| Tatiana | 87 |
+---------+------------+
```
MAX string:
```
SELECT MAX(name) FROM student;
+-----------+
| MAX(name) |
+-----------+
| Tatiana |
+-----------+
```
Be careful to avoid this common mistake, not grouping correctly and returning mismatched data:
```
SELECT name,test,MAX(SCORE) FROM student;
+------+------+------------+
| name | test | MAX(SCORE) |
+------+------+------------+
| Chun | SQL | 88 |
+------+------+------------+
```
Difference between ORDER BY DESC and MAX():
```
CREATE TABLE student2(name CHAR(10),grade ENUM('b','c','a'));
INSERT INTO student2 VALUES('Chun','b'),('Esben','c'),('Kaolin','a');
SELECT MAX(grade) FROM student2;
+------------+
| MAX(grade) |
+------------+
| c |
+------------+
SELECT grade FROM student2 ORDER BY grade DESC LIMIT 1;
+-------+
| grade |
+-------+
| a |
+-------+
```
As a [window function](../window-functions/index):
```
CREATE OR REPLACE TABLE student_test (name CHAR(10), test CHAR(10), score TINYINT);
INSERT INTO student_test VALUES
('Chun', 'SQL', 75), ('Chun', 'Tuning', 73),
('Esben', 'SQL', 43), ('Esben', 'Tuning', 31),
('Kaolin', 'SQL', 56), ('Kaolin', 'Tuning', 88),
('Tatiana', 'SQL', 87);
SELECT name, test, score, MAX(score)
OVER (PARTITION BY name) AS highest_score FROM student_test;
+---------+--------+-------+---------------+
| name | test | score | highest_score |
+---------+--------+-------+---------------+
| Chun | SQL | 75 | 75 |
| Chun | Tuning | 73 | 75 |
| Esben | SQL | 43 | 43 |
| Esben | Tuning | 31 | 43 |
| Kaolin | SQL | 56 | 88 |
| Kaolin | Tuning | 88 | 88 |
| Tatiana | SQL | 87 | 87 |
+---------+--------+-------+---------------+
```
See Also
--------
* [AVG](../avg/index) (average)
* [MIN](../min/index) (minimum)
* [SUM](../sum/index) (sum total)
* [MIN/MAX optimization](../minmax-optimization/index) used by the optimizer
* [GREATEST()](../greatest/index) returns the largest value from a list
Puppet and MariaDB
===================
General information and hints on how to automate MariaDB deployments and configuration with Puppet.
Puppet is an open source tool for deployment, configuration and operations.
| Title | Description |
| --- | --- |
| [Puppet Overview for MariaDB Users](../puppet-overview-for-mariadb-users/index) | Overview of Puppet and how it works with MariaDB. |
| [Bolt Examples](../bolt-examples/index) | How to invoke Bolt to run commands or apply roles on remote hosts. |
| [Puppet hiera Configuration System](../puppet-hiera-configuration-system/index) | Using hiera to handle Puppet configuration files. |
| [Deploying Docker Containers with Puppet](../deploying-docker-containers-with-puppet/index) | How to deploy and manage Docker containers with Puppet. |
| [Existing Puppet Modules for MariaDB](../existing-puppet-modules-for-mariadb/index) | Links to existing Puppet modules for MariaDB. |
ColumnStore Naming Conventions
==============================
This page lists the naming conventions enforced by ColumnStore, compared to the normal [MariaDB naming conventions](../mariadb/identifier-names).
* User names: 64 characters (MariaDB has 80)
* Table and column names are restricted to alphanumeric characters and underscores only, i.e. "A-Z a-z 0-9 \_".
* The first character of all table and column names should be an ASCII letter (a-z A-Z).
* ColumnStore reserves certain words that MariaDB does not, such as SELECT, CHAR and TABLE, so even wrapped in backticks these cannot be used.
Reserved words
--------------
In addition to MariaDB Server [reserved words](../reserved-words/index), ColumnStore has additional reserved words that cannot be used as table names, column names or user defined variables, functions or stored procedure names.
| Keyword |
| --- |
| ACTION |
| ADD |
| ALTER |
| AUTO\_INCREMENT |
| BIGINT |
| BIT |
| CASCADE |
| CHANGE |
| CHARACTER |
| CHARSET |
| CHECK |
| CLOB |
| COLUMN |
| COLUMNS |
| COMMENT |
| CONSTRAINT |
| CONSTRAINTS |
| CREATE |
| CURRENT\_USER |
| DATETIME |
| DEC |
| DECIMAL |
| DEFERRED |
| DEFAULT |
| DEFERRABLE |
| DOUBLE |
| DROP |
| ENGINE |
| EXISTS |
| FOREIGN |
| FULL |
| IDB\_BLOB |
| IDB\_CHAR |
| IDB\_DELETE |
| IDB\_FLOAT |
| IDB\_INT |
| IF |
| IMMEDIATE |
| INDEX |
| INITIALLY |
| INTEGER |
| KEY |
| MATCH |
| MAX\_ROWS |
| MIN\_ROWS |
| MODIFY |
| NO |
| NOT |
| NULL\_TOK |
| NUMBER |
| NUMERIC |
| ON |
| PARTIAL |
| PRECISION |
| PRIMARY |
| REAL |
| REFERENCES |
| RENAME |
| RESTRICT |
| SESSION\_USER |
| SET |
| SMALLINT |
| SYSTEM\_USER |
| TABLE |
| TIME |
| TINYINT |
| TO |
| TRUNCATE |
| UNIQUE |
| UNSIGNED |
| UPDATE |
| USER |
| VARBINARY |
| VARCHAR |
| VARYING |
| WITH |
| ZONE |
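As a consequence, a statement that is valid for other storage engines can fail for ColumnStore. The following sketch (table and column names illustrative) shows a reserved word being rejected and a compliant rename:

```
-- Fails: COMMENT is reserved in ColumnStore, even when wrapped in backticks
CREATE TABLE t1 (`comment` VARCHAR(100)) ENGINE=ColumnStore;

-- Works: use a non-reserved identifier instead
CREATE TABLE t1 (comment_text VARCHAR(100)) ENGINE=ColumnStore;
```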
DECLARE Variable
================
Syntax
------
```
DECLARE var_name [, var_name] ... [[ROW] TYPE OF] type [DEFAULT value]
```
Description
-----------
This statement is used to declare local variables within [stored programs](../stored-programs-and-views/index). To provide a default value for the variable, include a `DEFAULT` clause. The value can be specified as an expression (even subqueries are permitted); it need not be a constant. If the `DEFAULT` clause is missing, the initial value is `NULL`.
Local variables are treated like stored routine parameters with respect to data type and overflow checking. See [CREATE PROCEDURE](../create-procedure/index).
Local variables must be declared before `CONDITION`s, [CURSORs](../programmatic-and-compound-statements-cursors/index) and `HANDLER`s.
Local variable names are not case sensitive.
The scope of a local variable is within the `BEGIN ... END` block where it is declared. The variable can be referred to in blocks nested within the declaring block, except those blocks that declare a variable with the same name.
### TYPE OF / ROW TYPE OF
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**`TYPE OF` and `ROW TYPE OF` anchored data types for stored routines were introduced in [MariaDB 10.3](../what-is-mariadb-103/index).
Anchored data types allow a data type to be defined based on another object, such as a table row, rather than specifically set in the declaration. If the anchor object changes, so will the anchored data type. This can lead to routines being easier to maintain, so that if the data type in the table is changed, it will automatically be changed in the routine as well.
Variables declared with `ROW TYPE OF` will have the same features as implicit [ROW](../row/index) variables. It is not possible to use `ROW TYPE OF` variables in a [LIMIT](../limit/index) clause.
The real data type of `TYPE OF` and `ROW TYPE OF table_name` will become known at the very beginning of the stored routine call. [ALTER TABLE](../alter-table/index) or [DROP TABLE](../drop-table/index) statements performed inside the current routine on the tables that appear in anchors won't affect the data type of the anchored variables, even if the variable is declared after an [ALTER TABLE](../alter-table/index) or [DROP TABLE](../drop-table/index) statement.
The real data type of a `ROW TYPE OF cursor_name` variable will become known when execution enters into the block where the variable is declared. Data type instantiation will happen only once. In a cursor `ROW TYPE OF` variable that is declared inside a loop, its data type will become known on the very first iteration and won't change on further loop iterations.
The tables referenced in `TYPE OF` and `ROW TYPE OF` declarations will be checked for existence at the beginning of the stored routine call. [CREATE PROCEDURE](../create-procedure/index) or [CREATE FUNCTION](../create-function/index) will not check the referenced tables for existence.
Examples
--------
`TYPE OF` and `ROW TYPE OF` from [MariaDB 10.3](../what-is-mariadb-103/index):
```
DECLARE tmp TYPE OF t1.a;      -- get the data type from the column a in the table t1
DECLARE rec1 ROW TYPE OF t1;   -- get the row data type from the table t1
DECLARE rec2 ROW TYPE OF cur1; -- get the row data type from the cursor cur1
```
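A minimal sketch of local variable declarations inside a stored procedure, showing a constant and an expression `DEFAULT` (the procedure name and the table `t1` are illustrative):

```sql
DELIMITER //
CREATE PROCEDURE count_rows()
BEGIN
  -- declared before any cursors or handlers, as required
  DECLARE total INT DEFAULT 0;             -- constant default
  DECLARE today DATE DEFAULT CURRENT_DATE; -- expression default

  SELECT COUNT(*) INTO total FROM t1;
  SELECT total, today;
END //
DELIMITER ;
```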
See Also
--------
* [User-Defined variables](../user-defined-variables/index)
mariadb Database Design Phase 2: Conceptual Design Database Design Phase 2: Conceptual Design
==========================================
This article follows on from [Database Design Phase 1: Analysis](../database-design-phase-1-analysis/index).
The design phase is where the requirements identified in the previous phase are used as the basis to develop the new system. Another way of putting it is that the business understanding of the data structures is converted to a technical understanding. The *what* questions ("What data are required? What are the problems to be solved?") are replaced by the *how* questions ("How will the data be structured? How is the data to be accessed?").
This phase consists of three parts: the conceptual design, the logical design and the physical design. Some methodologies merge the logical design phase into the other two phases. This section is not aimed at being a definitive discussion of database design methodologies (there are whole books written on that!); rather it aims to introduce you to the topic.
Conceptual design
-----------------
The purpose of the conceptual design phase is to build a conceptual model based upon the previously identified requirements, but closer to the final physical model. A commonly-used conceptual model is called an *entity-relationship* model.
### Entities and attributes
*Entities* are basically people, places, or things you want to keep information about. For example, a library system may have the *book*, *library* and *borrower* entities. Learning to identify what should be an entity, what should be a number of entities, and what should be an *attribute* of an entity takes practice, but there are some good rules of thumb. The following questions can help to identify whether something is an entity:
* Can it vary in number independently of other entities? For example, *person height* is probably not an entity, as it cannot vary in number independently of *person*. It is not fundamental, so it cannot be an entity in this case.
* Is it important enough to warrant the effort of maintaining it? For example, *customer* may not be important for a small grocery store and will not be an entity in that case, but it will be important for a video store, and will be an entity in that case.
* Is it its own thing that cannot be separated into subcategories? For example, a car-rental agency may have different criteria and storage requirements for different kinds of vehicles. *Vehicle* may not be an entity, as it can be broken up into *car* and *boat*, which are the entities.
* Does it list a type of thing, not an instance? The video game *blow-em-up 6* is not an entity, rather an instance of the *game* entity.
* Does it have many associated facts? If it only contains one attribute, it is unlikely to be an entity. For example, *city* may be an entity in some cases, but if it contains only one attribute, *city name*, it is more likely to be an attribute of another entity, such as *customer*.
The following are examples of entities involving a university with possible attributes in parentheses.
* **Course** (name, code, course prerequisites)
* **Student** (first\_name, surname, address, age)
* **Book** (title, ISBN, price, quantity in stock)
An instance of an entity is one particular occurrence of that entity. For example, the student Rudolf Sono is one instance of the student entity. There will probably be many instances. If there is only one instance, consider whether the entity is warranted. The top level usually does not warrant an entity. For example, if the system is being developed for a particular university, *university* will not be an entity because the whole system is for that one university. However, if the system was developed to track legislation at all universities in the country, then *university* would be a valid entity.
### Relationships
Entities are related in certain ways. For example, a borrower may belong to a library and can take out books. A book can be found in a particular library. Understanding what you are storing data about, and how the data relate, leads you a large part of the way to a physical implementation in the database.
There are a number of possible relationships:
#### Mandatory
For each instance of entity A, there must exist one or more instances of entity B. This does not necessarily mean that for each instance of entity B, there must exist one or more instances of entity A. Relationships are optional or mandatory in one direction only, so the A-to-B relationship can be optional, while the B-to-A relationship is mandatory.
#### Optional
For each instance of entity A, there may or may not exist instances of entity B.
#### One-to-one (1:1)
This is where for each instance of entity A, there exists one instance of entity B, and vice-versa. If the relationship is optional, there can exist zero or one instances, and if the relationship is mandatory, there exists one and only one instance of the associated entity.
#### One-to-many (1:M)
For each instance of entity A, many instances of entity B can exist, while for each instance of entity B, only one instance of entity A exists. Again, these can be optional or mandatory relationships.
#### Many-to-many (M:N)
For each instance of entity A, many instances of entity B can exist, and vice versa. These can be optional or mandatory relationships.
There are numerous ways of showing these relationships. The image below shows *student* and *course* entities. In this case, each student must have registered for at least one course, but a course does not necessarily have to have students registered. The student-to-course relationship is mandatory, and the course-to-student relationship is optional.
The image below shows *invoice\_line* and *product* entities. Each invoice line must have at least one product (but no more than one); however each product can appear on many invoice lines, or none at all. The *invoice\_line-to-product* relationship is mandatory, while the *product-to-invoice\_line* relationship is optional.
The figure below shows husband and wife entities. In this system (others are of course possible), each husband must have one and only one wife, and each wife must have one, and only one, husband. Both relationships are mandatory.
An entity can also have a relationship with itself. Such an entity is called a *recursive entity*. Take a *person* entity. If you're interested in storing data about which people are brothers, you will have an "is brother to" relationship. In this case, the relationship is an M:N relationship.
Conversely, a *weak entity* is an entity that cannot exist without another entity. For example, in a school, the *scholar* entity is related to the weak entity *parent/guardian*. Without the scholar, the parent or guardian cannot exist in the system. Weak entities usually derive their primary key, in part or in totality, from the associated entity. *parent/guardian* could take the primary key from the scholar table as part of its primary key (or the entire key if the system only stored one parent/guardian per scholar).
The term *connectivity* refers to the relationship classification.
The term *cardinality* refers to the specific number of instances possible for a relationship. *Cardinality limits* list the minimum and maximum possible occurrences of the associated entity. In the husband and wife example, the cardinality limit is (1,1), and in the case of a student who can take between one and eight courses, the cardinality limits would be represented as (1,8).
### Developing an entity-relationship diagram
An entity-relationship diagram models how the entities relate to each other. It's made up of multiple relationships, the kind shown in the examples above. In general, these entities go on to become the database tables.
The first step in developing the diagram is to identify all the entities in the system. In the initial stage, it is not necessary to identify the attributes, but this may help to clarify matters if the designer is unsure about some of the entities. Once the entities are listed, relationships between these entities are identified and modeled according to their type: one-to-many, optional and so on. There are many software packages that can assist in drawing an entity-relationship diagram, but any graphical package should suffice.
Once the initial entity-relationship diagram has been drawn, it is often shown to the stakeholders. Entity-relationship diagrams are easy for non-technical people to understand, especially when guided through the process. This can help identify any errors that have crept in. Part of the reason for modeling is that models are much easier to understand than pages of text, and they are much more likely to be viewed by stakeholders, which reduces the chances of errors slipping through to the next stage, when they may be more difficult to fix.
It is important to remember that there is no one right or wrong answer. The more complex the situation, the more possible designs that will work. Database design is an acquired skill, though, and more experienced designers will have a good idea of what works and of possible problems at a later stage, having gone through the process before.
Once the diagram has been approved, the next stage is to replace many-to-many relationships with two one-to-many relationships. A DBMS cannot directly implement many-to-many relationships, so they are decomposed into two smaller relationships. To achieve this, you have to create an *intersection*, or *composite* entity type. Because intersection entities are less "real-world" than ordinary entities, they are sometimes difficult to name. In this case, you can name them according to the two entities being intersected. For example, you can intersect the many-to-many relationship between *student* and *course* by a *student-course* entity.
The same applies even if the entity is recursive. The person entity that has an M:N relationship "is brother to" also needs an intersection entity. You can come up with a good name for the intersection entity in this case: *brother*. This entity would contain two fields, one for each person of the brother relationship — in other words, the primary key of the first brother and the primary key of the other brother.
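The decomposition above can be sketched as tables. This is an illustrative mapping only (names, types, and key choices are assumptions, not a prescribed schema): the `student_course` intersection table replaces the M:N relationship with two 1:M relationships.

```sql
CREATE TABLE student (
  student_id INT PRIMARY KEY,
  first_name VARCHAR(50),
  surname    VARCHAR(50)
);

CREATE TABLE course (
  course_id INT PRIMARY KEY,
  name      VARCHAR(100)
);

-- Intersection entity: decomposes the M:N student-course relationship.
-- Each row records one registration; the composite key prevents duplicates.
CREATE TABLE student_course (
  student_id INT NOT NULL,
  course_id  INT NOT NULL,
  PRIMARY KEY (student_id, course_id),
  FOREIGN KEY (student_id) REFERENCES student (student_id),
  FOREIGN KEY (course_id)  REFERENCES course (course_id)
);
```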
mariadb TRIM TRIM
====
Syntax
------
```
TRIM([{BOTH | LEADING | TRAILING} [remstr] FROM] str), TRIM([remstr FROM] str)
```
From [MariaDB 10.3.6](https://mariadb.com/kb/en/mariadb-1036-release-notes/)
```
TRIM_ORACLE([{BOTH | LEADING | TRAILING} [remstr] FROM] str), TRIM_ORACLE([remstr FROM] str)
```
Description
-----------
Returns the string `str` with all `remstr` prefixes or suffixes removed. If none of the specifiers `BOTH`, `LEADING`, or `TRAILING` is given, `BOTH` is assumed. `remstr` is optional and, if not specified, spaces are removed.
Returns NULL if given a NULL argument. If the result is empty, returns either an empty string, or, from [MariaDB 10.3.6](https://mariadb.com/kb/en/mariadb-1036-release-notes/) with [SQL\_MODE=Oracle](../sql_modeoracle/index), NULL. `SQL_MODE=Oracle` is not set by default.
The Oracle mode version of the function can be accessed in any mode by using `TRIM_ORACLE` as the function name.
Examples
--------
```
SELECT TRIM(' bar ')\G
*************************** 1. row ***************************
TRIM(' bar '): bar
SELECT TRIM(LEADING 'x' FROM 'xxxbarxxx')\G
*************************** 1. row ***************************
TRIM(LEADING 'x' FROM 'xxxbarxxx'): barxxx
SELECT TRIM(BOTH 'x' FROM 'xxxbarxxx')\G
*************************** 1. row ***************************
TRIM(BOTH 'x' FROM 'xxxbarxxx'): bar
SELECT TRIM(TRAILING 'xyz' FROM 'barxxyz')\G
*************************** 1. row ***************************
TRIM(TRAILING 'xyz' FROM 'barxxyz'): barx
```
From [MariaDB 10.3.6](https://mariadb.com/kb/en/mariadb-1036-release-notes/), with [SQL\_MODE=Oracle](../sql_modeoracle/index) not set:
```
SELECT TRIM(''),TRIM_ORACLE('');
+----------+-----------------+
| TRIM('') | TRIM_ORACLE('') |
+----------+-----------------+
| | NULL |
+----------+-----------------+
```
From [MariaDB 10.3.6](https://mariadb.com/kb/en/mariadb-1036-release-notes/), with [SQL\_MODE=Oracle](../sql_modeoracle/index) set:
```
SELECT TRIM(''),TRIM_ORACLE('');
+----------+-----------------+
| TRIM('') | TRIM_ORACLE('') |
+----------+-----------------+
| NULL | NULL |
+----------+-----------------+
```
See Also
--------
* [LTRIM](../ltrim/index) - leading spaces removed
* [RTRIM](../rtrim/index) - trailing spaces removed
mariadb Vagrant Overview for MariaDB Users Vagrant Overview for MariaDB Users
==================================
Vagrant is a tool to create and manage development machines (Vagrant *boxes*). They are usually virtual machines on the localhost system, but they could also be Docker containers or remote machines. Vagrant is open source software maintained by HashiCorp and released under the MIT license.
Vagrant benefits include simplicity, and a system to create test boxes that is mostly independent from the technology used.
For information about installing Vagrant, see [Installation](https://www.vagrantup.com/docs/installation) in Vagrant documentation.
In this page we discuss basic Vagrant concepts.
Vagrant Concepts
----------------
A **Vagrant machine** is compiled from a box. It can be a virtual machine, a container or a remote server from a cloud service.
A **box** is a package that can be used to create Vagrant machines. We can download boxes from app.vagrantup.com, or we can build a new box from a Vagrantfile. A box can be used as a base for another box. The base boxes are usually operating system boxes downloaded from app.vagrantup.com.
A **provider** is responsible for providing the virtualization technology that will run our machine.
A **provisioner** is responsible for installing and configuring the necessary software on a newly created Vagrant machine.
### Example
The above concepts are probably easier to understand with an example.
We can use an Ubuntu box as a base to build a Vagrant machine with MariaDB. So we write a Vagrantfile for this purpose. In the Vagrantfile we specify VirtualBox as a provider. And we use the Ansible provisioner to install and configure MariaDB. Once we finish this Vagrantfile, we can run a Vagrant command to start a Vagrant machine, which is actually a VirtualBox VM running MariaDB on Ubuntu.
The following diagram should make the example clear:
### Vagrantfiles
A Vagrantfile is a file that describes how to create one or more Vagrant machines. Vagrantfiles use the Ruby language, as well as objects provided by Vagrant itself.
A Vagrantfile is often based on a box, which is usually an operating system in which we are going to install our software. For example, one can create a MariaDB Vagrantfile based on the `ubuntu/trusty64` box. A Vagrantfile can describe a box with a single server, like MariaDB, but it can also contain a whole environment, like LAMP. For most practical use cases, having the whole environment in a single box is more convenient.
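A minimal Vagrantfile along these lines might look as follows. This is a sketch, not a tested recipe: the box name, provider, and shell commands are illustrative assumptions.

```ruby
Vagrant.configure("2") do |config|
  # Base box: an Ubuntu operating system image from app.vagrantup.com
  config.vm.box = "ubuntu/trusty64"

  # Provider: run the machine as a VirtualBox VM
  config.vm.provider "virtualbox"

  # Provisioner: a shell script that installs MariaDB on first boot
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y mariadb-server
  SHELL
end
```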
Boxes can be searched in [Vagrant Cloud](https://app.vagrantup.com/boxes/search). Most of their Vagrantfiles are available on GitHub. Searches can be made, among other things, by keyword to find a specific technology, and by provider.
### Providers
A provider adds support for creating a specific type of machines. Vagrant comes with several providers, for example:
* `VirtualBox` allows one to create virtual machines with VirtualBox.
* `Microsoft-Hyper-V` allows one to create virtual machines with Microsoft Hyper-V.
* [Docker](../docker-and-mariadb/index) allows one to create Docker containers. On non-Linux systems, Vagrant will create a VM to run Docker.
Alternative providers are maintained by third parties or sold by HashiCorp. They allow one to create different types of machines, for example using VMWare.
Some examples of useful providers, recognized by the community:
* [Vagrant AWS Provider](https://github.com/mitchellh/vagrant-aws).
* [Vagrant Google Compute Engine (GCE) Provider](https://github.com/mitchellh/vagrant-google).
* [Vagrant Azure Provider](https://github.com/Azure/vagrant-azure).
* [OpenVZ](https://app.vagrantup.com/OpenVZ).
* [vagrant-lxc](https://github.com/fgrehm/vagrant-lxc).
If you need to create machines with different technologies, or deploy them to unsupported cloud platforms, you can develop a custom provider in Ruby language. To find out how, see [Plugin Development: Providers](https://www.vagrantup.com/docs/plugins/providers) in Vagrant documentation. The [Vagrant AWS](https://github.com/mitchellh/vagrant-aws) Provider was initially written as an example provider.
### Provisioners
A provisioner is a technology used to deploy software to the newly created machines.
The simplest provisioner is `shell`, which runs a shell file inside the Vagrant machine. `powershell` is also available.
Other providers use automation software to provision the machine. There are provisioners that allow one to use [Ansible](../ansible-and-mariadb/index), [Puppet](../automated-mariadb-deployment-and-administration-puppet-and-mariadb/index), Chef or Salt. Where relevant, there are different provisioners allowing the use of these technologies in a distributed way (for example, using Puppet apply) or in a centralized way (for example, using a Puppet server).
It is interesting to note that there is both a Docker provider and a Docker provisioner. This means that a Vagrant machine can be a Docker container, thanks to the `docker` provider. Or it could be any virtualisation technology with Docker running in it, thanks to the `docker` provisioner. In this case, Docker pulls images and starts containers to run the software that should be running in the Vagrant machine.
If you need to use an unsupported provisioning method, you can develop a custom provisioner in Ruby language. See [Plugin Development: Provisioners](https://www.vagrantup.com/docs/plugins/provisioners) in Vagrant documentation.
### Plugins
It is possible to install a plugin with this command:
```
vagrant plugin install <plugin_name>
```
A Vagrantfile can require that a plugin is installed in this way:
```
require 'plugin_name'
```
A plugin can be a Vagrant plugin or a Ruby gem installable from [rubygems.org](https://rubygems.org/). It is possible to install a plugin that only exists locally by specifying its path.
### Changes in Vagrant 3.0
HashiCorp published an article that describes its [plans for Vagrant 3.0](https://www.hashicorp.com/blog/toward-vagrant-3-0).
Vagrant will switch to a client-server architecture. Most of the logic will be stored in the server, while the development machines will run a thin client that communicates with the server. It will be possible to store the configuration in a central database.
Another notable change is that Vagrant is switching from Ruby to Go. For some time, it will still be possible to use Vagrantfiles and plugins written in Ruby. However, in the future Vagrantfiles and plugins should be written in one of the languages that support [gRPC](https://grpc.io/) (not necessarily Go). Vagrantfiles can also be written in [HCL](https://github.com/hashicorp/hcl), HashiCorp Configuration Language.
Vagrant Commands
----------------
This is a list of the most common Vagrant commands. For a complete list, see [Command-Line Interface](https://www.vagrantup.com/docs/cli) in Vagrant documentation.
To list the available machines:
```
vagrant box list
```
To start a machine from a box:
```
cd /box/directory
vagrant up
```
To connect to a machine:
```
vagrant ssh
```
To see all machines status and their id:
```
vagrant global-status
```
To destroy a machine:
```
vagrant destroy <id>
```
Vagrant Resources and References
--------------------------------
Here are some valuable websites and pages for Vagrant users.
* [Vagrant Up](https://www.vagrantup.com/).
* [app.vagrantup.com](https://app.vagrantup.com/).
* [Vagrant Community](https://www.vagrantup.com/community).
* [Vagrant on Wikipedia](https://en.wikipedia.org/wiki/Vagrant_(software)).
* [Vagrant on HashiCorp Learn](https://learn.hashicorp.com/vagrant).
---
Content initially contributed by [Vettabase Ltd](https://vettabase.com/).
mariadb ST_LENGTH ST\_LENGTH
==========
Syntax
------
```
ST_LENGTH(ls)
```
Description
-----------
Returns as a double-precision number the length of the [LineString](../linestring/index) value *`ls`* in its associated spatial reference.
Examples
--------
```
SET @ls = 'LineString(1 1,2 2,3 3)';
SELECT ST_LENGTH(ST_GeomFromText(@ls));
+---------------------------------+
| ST_LENGTH(ST_GeomFromText(@ls)) |
+---------------------------------+
| 2.82842712474619 |
+---------------------------------+
```
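The result is the planar length of the line, i.e. the sum of the Euclidean lengths of its segments. This can be checked outside SQL; a quick sketch in Python (the helper function is ours, not part of any MariaDB API):

```python
import math

def linestring_length(points):
    """Sum of Euclidean distances between consecutive vertices."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Same geometry as the SQL example: LineString(1 1, 2 2, 3 3).
# Two segments of length sqrt(2) each, so the total is 2 * sqrt(2).
length = linestring_length([(1, 1), (2, 2), (3, 3)])
print(length)  # 2.8284271247461903
```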
mariadb Current Status of the CONNECT Handler Current Status of the CONNECT Handler
=====================================
The current CONNECT handler is a GA (stable) release. It was written starting both from an aborted project written for MySQL in 2004 and from the “DBCONNECT” program. It was tested on all the examples described in this document, and is distributed with a set of 53 test cases. Here is a non-exhaustive list of future developments:
1. Adding more table types.
2. Making more test files (53 are already made)
3. Adding more data types, in particular unsigned ones (done for unsigned).
4. Supporting indexing on nullable and decimal columns.
5. Adding more optimize tools (block indexing, dynamic indexing, etc.) (done)
6. Supporting MRR (done)
7. Supporting partitioning (done)
8. Getting NOSQL data from the Net as answers from REST queries (done)
No programs are bug free, especially new ones. Please [report all bugs](../reporting-bugs/index) or documentation errors using the means provided by MariaDB.
mariadb Password Reuse Check Plugin Password Reuse Check Plugin
===========================
**MariaDB starting with [10.7](../what-is-mariadb-107/index)**`password_reuse_check` is a [password validation](../password-validation-plugin-api/index) plugin introduced in [MariaDB 10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/).
Description
-----------
The plugin is used to prevent a user from reusing a password, which can be a requirement in some security policies. The [password\_reuse\_check\_interval](../password_reuse_check_interval/index) system variable determines the retention period, in days, for a password. By default this is zero, meaning unlimited retention. Old passwords are stored in the [mysql.password\_reuse\_check\_history table](../mysqlpassword_reuse_check_history-table/index).
Note that passwords can be directly set as a hash, bypassing the password validation, if the [strict\_password\_validation](../server-system-variables/index#strict_password_validation) variable is `OFF` (it is `ON` by default).
### Installing the Plugin
Although the plugin's shared library is distributed with MariaDB by default, the plugin is not actually installed by MariaDB by default.
You can install the plugin dynamically, without restarting the server, by executing [INSTALL SONAME](../install-soname/index) or [INSTALL PLUGIN](../install-plugin/index). For example:
```
INSTALL SONAME 'password_reuse_check';
```
Alternatively, you can tell the server to load the plugin when it starts up. The plugin can be installed this way by providing the [--plugin-load](../mysqld-options/index#-plugin-load) or the [--plugin-load-add](../mysqld-options/index#-plugin-load-add) options. This can be specified as a command-line argument to [mysqld](../mysqld-options/index) or it can be specified in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index). For example:
```
[mariadb]
...
plugin_load_add = password_reuse_check
```
### Uninstalling the Plugin
You can uninstall the plugin dynamically by executing [UNINSTALL SONAME](../uninstall-soname/index) or [UNINSTALL PLUGIN](../uninstall-plugin/index). For example:
```
UNINSTALL SONAME 'password_reuse_check';
```
If you installed the plugin by providing the [--plugin-load](../mysqld-options/index#-plugin-load) or the [--plugin-load-add](../mysqld-options/index#-plugin-load-add) options in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index), then those options should be removed to prevent the plugin from being loaded the next time the server is restarted.
Example
-------
```
INSTALL SONAME 'password_reuse_check';
GRANT SELECT ON *.* TO user1@localhost identified by 'pwd1';
Query OK, 0 rows affected (0.038 sec)
GRANT SELECT ON *.* TO user1@localhost identified by 'pwd1';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
GRANT SELECT ON *.* TO user1@localhost identified by 'pwd2';
Query OK, 0 rows affected (0.003 sec)
GRANT SELECT ON *.* TO user1@localhost identified by 'pwd1';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
```
Versions
--------
| Version | Status | Introduced |
| --- | --- | --- |
| 1.0 | Alpha | [MariaDB 10.7.0](https://mariadb.com/kb/en/mariadb-1070-release-notes/) |
| 1.0 | Beta | [MariaDB 10.7.2](https://mariadb.com/kb/en/mariadb-1072-release-notes/) |
| 1.0 | Gamma | [MariaDB 10.7.4](https://mariadb.com/kb/en/mariadb-1074-release-notes/) |
See Also
--------
* [Password Validation](../password-validation/index)
* [10.7 preview feature: Password Reuse Check plugin](https://mariadb.org/10-7-preview-feature-password-reuse-check-plugin/) (mariadb.org blog post)
mariadb SQL statements That Cause an Implicit Commit SQL statements That Cause an Implicit Commit
============================================
Some SQL statements cause an implicit commit. As a rule of thumb, such statements are DDL statements. The same statements (except for [SHUTDOWN](../shutdown/index)) produce a 1400 error ([SQLSTATE](../sqlstate/index) 'XAE09') if a XA transaction is in effect.
Here is the list:
```
ALTER DATABASE ... UPGRADE DATA DIRECTORY NAME
ALTER EVENT
ALTER FUNCTION
ALTER PROCEDURE
ALTER SERVER
ALTER TABLE
ALTER VIEW
ANALYZE TABLE
BEGIN
CACHE INDEX
CHANGE MASTER TO
CHECK TABLE
CREATE DATABASE
CREATE EVENT
CREATE FUNCTION
CREATE INDEX
CREATE PROCEDURE
CREATE ROLE
CREATE SERVER
CREATE TABLE
CREATE TRIGGER
CREATE USER
CREATE VIEW
DROP DATABASE
DROP EVENT
DROP FUNCTION
DROP INDEX
DROP PROCEDURE
DROP ROLE
DROP SERVER
DROP TABLE
DROP TRIGGER
DROP USER
DROP VIEW
FLUSH
GRANT
LOAD INDEX INTO CACHE
LOCK TABLES
OPTIMIZE TABLE
RENAME TABLE
RENAME USER
REPAIR TABLE
RESET
REVOKE
SET PASSWORD
SHUTDOWN
START SLAVE
START TRANSACTION
STOP SLAVE
TRUNCATE TABLE
```
`SET autocommit = 1` causes an implicit commit if the value was 0.
All these statements cause an implicit commit before execution. This means that, even if the statement fails with an error, the transaction is committed. Some of them, like `CREATE TABLE ... SELECT`, also cause a commit immediately after execution. Such statements cannot be rolled back in any case.
If you are not sure whether a statement has implicitly committed the current transaction, you can query the [in\_transaction](../server-system-variables/index#in_transaction) server system variable.
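For example, one way to observe an implicit commit (a sketch; the table and column names are illustrative):

```sql
START TRANSACTION;
INSERT INTO t1 VALUES (1);
SELECT @@in_transaction;            -- 1: a transaction is active

CREATE INDEX idx_a ON t1 (a);       -- DDL: commits the transaction implicitly

SELECT @@in_transaction;            -- 0: the transaction was already committed
ROLLBACK;                           -- has no effect on the inserted row
```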
Note that when a transaction starts (not in autocommit mode), all locks acquired with [LOCK TABLES](../lock-tables-and-unlock-tables/index) are released, and acquiring such locks always commits the current transaction. To preserve data integrity between transactional and non-transactional tables, the [GET\_LOCK()](../get_lock/index) function can be used instead.
Exceptions
----------
These statements do not cause an implicit commit in the following cases:
* `CREATE TABLE` and `DROP TABLE`, when the `TEMPORARY` keyword is used.
+ However, `TRUNCATE TABLE` causes an implicit commit even when used on a temporary table, and `CREATE INDEX` and `DROP INDEX` cause commits even when used on temporary tables.
* `CREATE FUNCTION` and `DROP FUNCTION`, when used to create a UDF (instead of a stored function).
* `UNLOCK TABLES` causes a commit only if a `LOCK TABLES` was used on non-transactional tables.
* `START SLAVE`, `STOP SLAVE`, `RESET SLAVE` and `CHANGE MASTER TO` only cause an implicit commit since [MariaDB 10.0](../what-is-mariadb-100/index).
mariadb Information Schema TEMP_TABLES_INFO Table Information Schema TEMP\_TABLES\_INFO Table
===========================================
**MariaDB [10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/) - [10.2.3](https://mariadb.com/kb/en/mariadb-1023-release-notes/)**The `TEMP_TABLES_INFO` table was introduced in [MariaDB 10.2.2](https://mariadb.com/kb/en/mariadb-1022-release-notes/) and was removed in [MariaDB 10.2.4](https://mariadb.com/kb/en/mariadb-1024-release-notes/). See [MDEV-12459](https://jira.mariadb.org/browse/MDEV-12459) progress on an alternative.
The [Information Schema](../information_schema/index) `TEMP_TABLES_INFO` table contains information about active InnoDB temporary tables. All user and system-created temporary tables are reported when querying this table, with the exception of optimized internal temporary tables. The data is stored in memory.
Previously, InnoDB temporary table metadata was stored in the InnoDB system tables.
It has the following columns:
| Column | Description |
| --- | --- |
| `TABLE_ID` | Table ID. |
| `NAME` | Table name. |
| `N_COLS` | Number of columns in the temporary table, including three hidden columns that InnoDB creates (`DB_ROW_ID`, `DB_TRX_ID`, and `DB_ROLL_PTR`). |
| `SPACE` | Numerical identifier of the tablespace holding the temporary table. Compressed temporary tables are stored by default in separate per-table tablespaces in the temporary file directory. For non-compressed tables, the shared temporary tablespace is named `ibtmp1` and found in the data directory. Always a non-zero value, regenerated on server restart. |
| `PER_TABLE_TABLESPACE` | If `TRUE`, the temporary table resides in a separate per-table tablespace. If `FALSE`, it resides in the shared temporary tablespace. |
| `IS_COMPRESSED` | `TRUE` if the table is compressed. |
The `PROCESS` [privilege](../grant/index) is required to view the table.
Examples
--------
```
CREATE TEMPORARY TABLE t (i INT) ENGINE=INNODB;
SELECT * FROM INFORMATION_SCHEMA.INNODB_TEMP_TABLE_INFO;
+----------+--------------+--------+-------+----------------------+---------------+
| TABLE_ID | NAME | N_COLS | SPACE | PER_TABLE_TABLESPACE | IS_COMPRESSED |
+----------+--------------+--------+-------+----------------------+---------------+
| 39 | #sql1c93_3_1 | 4 | 64 | FALSE | FALSE |
+----------+--------------+--------+-------+----------------------+---------------+
```
Adding a compressed table:
```
SET GLOBAL innodb_file_format="Barracuda";
CREATE TEMPORARY TABLE t2 (i INT) ROW_FORMAT=COMPRESSED ENGINE=INNODB;
SELECT * FROM INFORMATION_SCHEMA.INNODB_TEMP_TABLE_INFO;
+----------+--------------+--------+-------+----------------------+---------------+
| TABLE_ID | NAME | N_COLS | SPACE | PER_TABLE_TABLESPACE | IS_COMPRESSED |
+----------+--------------+--------+-------+----------------------+---------------+
| 40 | #sql1c93_3_3 | 4 | 65 | TRUE | TRUE |
| 39 | #sql1c93_3_1 | 4 | 64 | FALSE | FALSE |
+----------+--------------+--------+-------+----------------------+---------------+
```
mariadb NUMBER NUMBER
======
**MariaDB starting with [10.3](../what-is-mariadb-103/index)**
```
NUMBER[(M[,D])] [SIGNED | UNSIGNED | ZEROFILL]
```
In [Oracle mode from MariaDB 10.3](../sql_modeoracle-from-mariadb-103/index#synonyms-for-basic-sql-types), `NUMBER` is a synonym for [DECIMAL](../decimal/index).
mariadb MariaDB ColumnStore software upgrade 1.1.4 GA to 1.1.5 GA MariaDB ColumnStore software upgrade 1.1.4 GA to 1.1.5 GA
=========================================================
Additional dependency packages exist for 1.1.5, so make sure you install them as described in the "Preparing for ColumnStore Installation" guide.
Note: modifications you made manually to Columnstore.xml are not automatically carried forward on an upgrade. These modifications will need to be incorporated back into the new Columnstore.xml once the upgrade has completed.
The previous configuration file will be saved as /usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave.
If you have specified a root database password (which is good practice), then you must configure a .my.cnf file with user credentials for the upgrade process to use. Create a .my.cnf file in the user home directory, with 600 file permissions, containing the following (updating PASSWORD as appropriate):
```
[mysqladmin]
user = root
password = PASSWORD
```
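A minimal sketch of creating such a file with the required permissions follows. The `mktemp` path here is only a placeholder for illustration; on a real server the file would be `$HOME/.my.cnf`:

```shell
# Sketch: create a credentials file readable only by its owner.
# The mktemp path is a stand-in for "$HOME/.my.cnf" on a real server.
umask 077                       # new files get no group/other permissions
CNF="$(mktemp -d)/.my.cnf"
cat > "$CNF" <<'EOF'
[mysqladmin]
user = root
password = PASSWORD
EOF
stat -c '%a' "$CNF"             # prints 600
```

Setting `umask 077` before creating the file yields the 600 mode without a separate `chmod` step.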
### Choosing the type of upgrade
As noted in the "Preparing for ColumnStore Installation" guide, you can install MariaDB ColumnStore with the use of soft-links. If the soft-links are set up at the data directory level, like mariadb/columnstore/data and mariadb/columnstore/dataX, then your upgrade will happen without any issues. If you have a soft-link at the top directory, like /usr/local/mariadb, you will need to upgrade using the binary package: if you upgrade using the RPM package and tool instead, this soft-link will be deleted during the upgrade process and the upgrade will fail.
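A quick, hedged check for which situation applies is to test whether the install root is a soft-link (`/usr/local/mariadb` is the default root-install location; adjust the path for your system):

```shell
# Hedged check: decide the upgrade path based on whether the install root
# is a top-level soft-link. /usr/local/mariadb is the default root location.
TOP=/usr/local/mariadb
if [ -L "$TOP" ]; then
  echo "top-level soft-link found: upgrade with the binary package"
else
  echo "no top-level soft-link: the RPM/DEB upgrade path is safe"
fi
```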
#### Root User Installs
#### Upgrading MariaDB ColumnStore using RPMs tar package
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.1.5-1-centos#.x86\_64.rpm.tar.gz to the PM1 server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate a set of RPMs that will reside in the /root/ directory.
```
# tar -zxf mariadb-columnstore-1.1.5-1-centos#.x86_64.rpm.tar.gz
```
* Upgrade the RPMs. The MariaDB ColumnStore software will be installed in /usr/local/.
```
# rpm -e --nodeps $(rpm -qa | grep '^mariadb-columnstore')
# rpm -ivh mariadb-columnstore-*1.1.5*rpm
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
For RPM Upgrade, the previous configuration file will be saved as:
/usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave
#### Upgrading MariaDB ColumnStore using RPM Package Repositories
The system can be upgraded in this way if it was previously installed from the package repositories. The upgrade will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'yum' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# yum remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# yum --enablerepo=mariadb-columnstore clean metadata
# yum install mariadb-columnstore*
```
Note: on the non-PM1 modules, start the ColumnStore service:
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
For RPM Upgrade, the previous configuration file will be saved as:
/usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave
### Upgrading MariaDB ColumnStore using the binary package
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.1.5-1.x86\_64.bin.tar.gz (binary 64-bit) into the /usr/local directory on the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# /usr/local/mariadb/columnstore/bin/pre-uninstall
```
* Unpack the tarball in the /usr/local/ directory.
```
# tar -zxvf mariadb-columnstore-1.1.5-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# /usr/local/mariadb/columnstore/bin/post-install
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
### Upgrading MariaDB ColumnStore using the DEB package
A DEB upgrade is done on systems that support DEB packages, such as Debian or Ubuntu.
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Download the package mariadb-columnstore-1.1.5-1.amd64.deb.tar.gz (DEB 64-bit) into the /root directory on the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Unpack the tarball, which will generate DEBs.
```
# tar -zxf mariadb-columnstore-1.1.5-1.amd64.deb.tar.gz
```
* Remove, purge and install all MariaDB ColumnStore debs
```
# cd /root/
# dpkg -r $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg -P $(dpkg --list | grep 'mariadb-columnstore' | awk '{print $2}')
# dpkg --install mariadb-columnstore-*1.1.5-1*deb
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u
```
#### Upgrading MariaDB ColumnStore using DEB Package Repositories
The system can be upgraded in this way if it was previously installed from the package repositories. The upgrade will need to be run on each module in the system.
Additional information can be found in this document on how to setup and install using the 'apt-get' package repo command:
[https://mariadb.com/kb/en/library/installing-mariadb-ax-from-the-package-repositories](../library/installing-mariadb-ax-from-the-package-repositories)
Upgrade MariaDB ColumnStore as user root on the server designated as PM1:
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Uninstall MariaDB ColumnStore Packages
```
# apt-get remove mariadb-columnstore*
```
* Install MariaDB ColumnStore Packages
```
# apt-get update
# apt-get install mariadb-columnstore*
```
Note: on the non-PM1 modules, start the ColumnStore service:
```
# /usr/local/mariadb/columnstore/bin/columnstore start
```
* Run postConfigure using the upgrade and non-distributed options, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# /usr/local/mariadb/columnstore/bin/postConfigure -u -n
```
The previous configuration file will be saved as:
/usr/local/mariadb/columnstore/etc/Columnstore.xml.rpmsave
#### Non-Root User Installs
### Upgrading MariaDB ColumnStore using the binary package
Upgrade MariaDB ColumnStore as the non-root user on the server designated as PM1:
* Download the package mariadb-columnstore-1.1.5-1.x86\_64.bin.tar.gz (binary 64-bit) into the /home/'non-root-user' directory on the server where you are installing MariaDB ColumnStore.
* Shutdown the MariaDB ColumnStore system:
```
# mcsadmin shutdownsystem y
```
* Run pre-uninstall script
```
# $HOME/mariadb/columnstore/bin/pre-uninstall --installdir=/home/guest/mariadb/columnstore
```
* Unpack the tarball in the $HOME/ directory.
```
# tar -zxvf mariadb-columnstore-1.1.5-1.x86_64.bin.tar.gz
```
* Run post-install scripts
```
# $HOME/mariadb/columnstore/bin/post-install --installdir=/home/guest/mariadb/columnstore
```
* Run postConfigure using the upgrade option, which will utilize the configuration from the Columnstore.xml.rpmsave
```
# $HOME/mariadb/columnstore/bin/postConfigure -u -i /home/guest/mariadb/columnstore
```
mariadb How to Produce a Full Stack Trace for mysqld How to Produce a Full Stack Trace for mysqld
============================================
Partial Stack Traces in the Error Log
-------------------------------------
When `mysqld` crashes, it will write a stack trace in the [error log](../error-log/index) by default. This is because the `[stack\_trace](../mysqld-options/index#-stack-trace)` option defaults to `ON`. With a normal release build, this stack trace in the [error log](../error-log/index) may look something like this:
```
2019-03-28 23:31:08 0x7ff4dc62d700 InnoDB: Assertion failure in file /home/buildbot/buildbot/build/mariadb-10.2.23/storage/innobase/rem/rem0rec.cc line 574
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
InnoDB: about forcing recovery.
190328 23:31:08 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.2.23-MariaDB-10.2.23+maria~stretch
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=234
max_threads=752
thread_count=273
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1783435 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7ff4d8001f28
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7ff4dc62ccc8 thread_stack 0x49000
*** buffer overflow detected ***: /usr/sbin/mysqld terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x70bfb)[0x7ffa09af5bfb]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7ffa09b7e437]
/lib/x86_64-linux-gnu/libc.so.6(+0xf7570)[0x7ffa09b7c570]
/lib/x86_64-linux-gnu/libc.so.6(+0xf93aa)[0x7ffa09b7e3aa]
/usr/sbin/mysqld(my_addr_resolve+0xe2)[0x55ca42284922]
/usr/sbin/mysqld(my_print_stacktrace+0x1bb)[0x55ca4226b1eb]
/usr/sbin/mysqld(handle_fatal_signal+0x41d)[0x55ca41d0a01d]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110e0)[0x7ffa0b4180e0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf)[0x7ffa09ab7fff]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7ffa09ab942a]
/usr/sbin/mysqld(+0x40f971)[0x55ca41ab8971]
/usr/sbin/mysqld(+0x887df6)[0x55ca41f30df6]
/usr/sbin/mysqld(+0x863673)[0x55ca41f0c673]
/usr/sbin/mysqld(+0x96648e)[0x55ca4200f48e]
/usr/sbin/mysqld(+0x89b559)[0x55ca41f44559]
/usr/sbin/mysqld(+0x8a15e4)[0x55ca41f4a5e4]
/usr/sbin/mysqld(+0x8a2187)[0x55ca41f4b187]
/usr/sbin/mysqld(+0x8b1a20)[0x55ca41f5aa20]
/usr/sbin/mysqld(+0x7f5c04)[0x55ca41e9ec04]
/usr/sbin/mysqld(_ZN7handler12ha_write_rowEPh+0x107)[0x55ca41d140d7]
/usr/sbin/mysqld(_Z12write_recordP3THDP5TABLEP12st_copy_info+0x72)[0x55ca41b4b992]
/usr/sbin/mysqld(_Z12mysql_insertP3THDP10TABLE_LISTR4ListI4ItemERS3_IS5_ES6_S6_15enum_duplicatesb+0x1206)[0x55ca41b560f6]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x3f68)[0x55ca41b6bee8]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_statebb+0x28a)[0x55ca41b70e4a]
/usr/sbin/mysqld(+0x4c864f)[0x55ca41b7164f]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcjbb+0x1a7c)[0x55ca41b737fc]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x176)[0x55ca41b748a6]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP7CONNECT+0x25a)[0x55ca41c3ec0a]
/usr/sbin/mysqld(handle_one_connection+0x3d)[0x55ca41c3ed7d]
/usr/sbin/mysqld(+0xb75791)[0x55ca4221e791]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7ffa0b40e4a4]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7ffa09b6dd0f]
```
If you plan to [report a bug](../mariadb-community-bug-reporting/index) about the problem, then this information can be very useful for MariaDB's developers to track down the root cause. However, notice that some of the function names in the call stack are missing. In some cases, this partial stack trace may not be enough to find out exactly where the problem is.
A full stack trace can only be produced if you have debugging symbols for your `mysqld` binary.
Obtaining Debugging Symbols for Your `mysqld` Binary
----------------------------------------------------
If you want to get full stack traces, then your `mysqld` binary must have debugging symbols. These debugging symbols can be obtained in one of the ways listed below.
If your server is running on Linux, then you can do the following:
* If your `mysqld` binary is not stripped, then it will have debugging symbols. See [Checking Whether Your `mysqld` Binary is Stripped on Linux](#checking-whether-your-mysqld-binary-is-stripped-on-linux) for more information.
* If your `mysqld` binary is a release build instead of a debug build, then `debuginfo` packages that contain debugging symbols may be available on some Linux distributions. See [Installing Debug Info Packages on Linux](#installing-debug-info-packages-on-linux) for more information.
If your server is running on Windows, then you can do the following:
* If your `mysqld` binary is a release build instead of a debug build, then debug symbols may be available to install. See [Installing Debugging Symbols on Windows](#installing-debugging-symbols-on-windows) for more information.
If none of the above fit your situation, then you can do the following, regardless of the platform:
* You could compile a debug build of `mysqld`. See [Installing a Debug Build](#installing-a-debug-build) for more information.
Once you have a `mysqld` binary with debugging symbols, you can get full stack traces from one of the following sources:
* If the server crashes, then a full stack trace can be found in the [error log](../error-log/index).
* If the server crashes, then the server can also create a core dump, which is essentially an image of the crashed program. The full stack trace can be obtained from the resulting core file. This will only occur if [core dumps are enabled](#enabling-core-dumps).
* If the server hangs or stalls, then a full stack trace can be obtained from the running `mysqld` process by using a debugger like `gdb`.
### Checking Whether Your `mysqld` Binary is Stripped on Linux
On Linux, you can find out if your `mysqld` binary is stripped by executing the following command:
```
file /usr/sbin/mysqld
```
If the output does not say 'stripped', then your binary has debugging symbols and you are fine. If it does, then you need to use one of the other options to obtain a `mysqld` binary with debugging symbols.
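As a sketch, the same check can be scripted; `/bin/sh` stands in here for `/usr/sbin/mysqld`, and the `, stripped` pattern assumes the usual GNU/Linux `file` output:

```shell
# Sketch: report whether a binary was stripped of its symbols.
# /bin/sh is only a stand-in for /usr/sbin/mysqld.
BIN=/bin/sh
if file "$BIN" | grep -q ', stripped'; then
  echo "$BIN is stripped: no debugging symbols"
else
  echo "$BIN is not stripped (or 'file' gave no verdict)"
fi
```

Note that `grep ', stripped'` does not match the ", not stripped" annotation, so it distinguishes the two cases.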
### Installing Debug Info Packages on Linux
On some Linux distributions, you may be able to install `debuginfo` packages that contain debugging symbols.
Currently, `debuginfo` packages may not allow the server to print a nicer stack trace in the error log, but they do allow users to extract full stack traces from core dumps. See [MDEV-20738](https://jira.mariadb.org/browse/MDEV-20738) for more information.
#### Installing Debug Info Packages with yum/dnf
**MariaDB starting with [5.5.64](https://mariadb.com/kb/en/mariadb-5564-release-notes/)**The MariaDB `yum` repository first added `[debuginfo](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Developer_Guide/intro.debuginfo.html)` packages in [MariaDB 5.5.64](https://mariadb.com/kb/en/mariadb-5564-release-notes/), [MariaDB 10.1.39](https://mariadb.com/kb/en/mariadb-10139-release-notes/), [MariaDB 10.2.23](https://mariadb.com/kb/en/mariadb-10223-release-notes/), [MariaDB 10.3.14](https://mariadb.com/kb/en/mariadb-10314-release-notes/), and [MariaDB 10.4.4](https://mariadb.com/kb/en/mariadb-1044-release-notes/).
On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant [RPM package](../rpm/index) from MariaDB's repository using `[yum](../yum/index)` or `[dnf](https://en.wikipedia.org/wiki/DNF_(software))`. Starting with RHEL 8 and Fedora 22, `yum` has been replaced by `dnf`, which is the next major version of `yum`. However, `yum` commands still work on many systems that use `dnf`. For example:
```
sudo yum install MariaDB-server-debuginfo
```
See [Installing MariaDB with yum/dnf: Installing Debug Info Packages with YUM](../yum/index#installing-debug-info-packages-with-yum) for more information.
#### Installing Debug Info Packages with zypper
**MariaDB starting with [5.5.64](https://mariadb.com/kb/en/mariadb-5564-release-notes/)**The MariaDB `zypper` repository first added `[debuginfo](https://en.opensuse.org/openSUSE:Packaging_guidelines#Debuginfo)` packages in [MariaDB 5.5.64](https://mariadb.com/kb/en/mariadb-5564-release-notes/), [MariaDB 10.1.39](https://mariadb.com/kb/en/mariadb-10139-release-notes/), [MariaDB 10.2.23](https://mariadb.com/kb/en/mariadb-10223-release-notes/), [MariaDB 10.3.14](https://mariadb.com/kb/en/mariadb-10314-release-notes/), and [MariaDB 10.4.4](https://mariadb.com/kb/en/mariadb-1044-release-notes/).
On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant [RPM package](../rpm/index) from MariaDB's repository using `[zypper](../installing-mariadb-with-zypper/index)`. For example:
```
sudo zypper install MariaDB-server-debuginfo
```
See [Installing MariaDB with zypper: Installing Debug Info Packages with ZYpp](../installing-mariadb-with-zypper/index#installing-debug-info-packages-with-zypp) for more information.
#### Installing Debug Info Packages from MariaDB Debian or Ubuntu
These instructions apply when you have already installed MariaDB from a MariaDB mirror. For Ubuntu, an additional repository step is needed:
```
sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el,s390x] https://ftp.osuosl.org/pub/mariadb/repo/10.5/ubuntu focal main/debug'
```
Adjust `10.5` to the major version you are debugging and `focal` to the required distribution.
```
apt-get update && apt-get install -y mariadb-server-core-10.5-dbgsym
```
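The two substitutions above can be templated so the series and codename are easy to swap. This is only a convenience sketch using the values from the example:

```shell
# Sketch: build the debug-symbols repo line for a given MariaDB series and
# Ubuntu codename. "10.5" and "focal" mirror the example above.
SERIES=10.5
CODENAME=focal
printf 'deb [arch=amd64,arm64,ppc64el,s390x] https://ftp.osuosl.org/pub/mariadb/repo/%s/ubuntu %s main/debug\n' \
  "$SERIES" "$CODENAME"
```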
#### Installing Debug Info Packages from Ubuntu or Debian
If you used the MariaDB versions provided by Debian or Ubuntu see the following links.
For Debian see <https://wiki.debian.org/AutomaticDebugPackages>
For Ubuntu see <https://wiki.ubuntu.com/Debug%20Symbol%20Packages>
### Installing Debugging Symbols on Windows
Debugging symbols are available to install on Windows.
#### Installing Debugging Symbols with the MSI Installer on Windows
Debugging symbols can be installed with the [MSI](../installing-mariadb-msi-packages-on-windows/index) installer. Debugging symbols are not installed by default. You must perform a custom installation and explicitly choose to install debugging symbols.
The [MSI](../installing-mariadb-msi-packages-on-windows/index) installer can be downloaded from the [MariaDB downloads page](https://downloads.mariadb.org).
#### Installing Debugging Symbols with the ZIP Package on Windows
MariaDB also provides a [ZIP](../installing-mariadb-windows-zip-packages/index) package that contains debugging symbols on Windows.
The [ZIP](../installing-mariadb-windows-zip-packages/index) package that contains debugging symbols can be downloaded from the [MariaDB downloads page](https://downloads.mariadb.org).
### Installing a Compiled Debug Build
When trying to find the root cause of complicated problems, it can be helpful to use a [debug build](../compiling-mariadb-for-debugging/index) of `mysqld`.
If a debug build of `mysqld` crashes, then it will provide the following information:
* If the binaries are [not stripped](#checking-whether-your-mysqld-binary-is-stripped-on-linux), then the [error log](../error-log/index) will contain a more precise stack trace.
* If [core dumps are enabled](../enabling-core-dumps/index), then a core file will be created.
+ On Linux, the name of the core file is usually something like `core` or `core.${PID}`, and the core file is usually located in the `[datadir](../server-system-variables/index#datadir)` by default. However, the file name and location of the core file are configurable. See [Enabling Core Dumps: Setting the Path on Linux](../enabling-core-dumps/index#setting-the-path-on-linux) for more information.
* Debug builds contain more runtime checks than release builds, so the [error log](../error-log/index) may contain a more-detailed assertion failure.
The suggested steps are:
1. [Compile the debug build](../compiling-mariadb-for-debugging/index).
2. Temporarily install the debug `mysqld` binary instead of your release `mysqld` binary.
3. If your issue involves a crash, then [enable core dumps](#enabling-core-dumps).
4. Reproduce your issue with the debug `mysqld` binary.
5. When you've collected enough information, reinstall your release `mysqld` binary.
This process assumes that you are using Linux; it would differ on other platforms, such as Windows.
Enabling Core Dumps
-------------------
To enable core dumps, see [Enabling Core Dumps](../enabling-core-dumps/index) for details.
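Before trying to reproduce a crash, it can save time to verify that core dumps are actually possible in the current environment. A quick, Linux-only sketch:

```shell
# Quick sanity checks before reproducing a crash (Linux):
ulimit -c                                              # "0" means core dumps are disabled for this shell
cat /proc/sys/kernel/core_pattern 2>/dev/null || true  # where/how the kernel writes core files
```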
Analyzing a Core File with `gdb` on Linux
-----------------------------------------
To analyze the core file on Linux, you can use `[gdb](https://www.gnu.org/software/gdb/documentation)`.
For example, to open a core file with `[gdb](https://www.gnu.org/software/gdb/documentation)`, you could execute the following:
```
sudo gdb /usr/sbin/mysqld /var/lib/mysql/core.932
```
Be sure to replace `/usr/sbin/mysqld` with the path to your `mysqld` binary and to also replace `/var/lib/mysql/core.932` with the path to your core file.
Once `[gdb](https://www.gnu.org/software/gdb/documentation)` has opened the core file, if you want to [log all output to a file](https://sourceware.org/gdb/current/onlinedocs/gdb/Logging-Output.html#Logging-Output), then you could execute the following commands:
```
set logging file /tmp/gdb_output.log
set logging on
```
If you do not execute `set logging file`, then the `set logging on` command creates a `gdb.txt` in your current working directory. Redirecting the output to a file is useful, because it can make it easier to analyze. It also makes it easier to send the information to a MariaDB developer, if that becomes necessary.
Do any commands that you would like to do. For example, you could [get the backtraces](#getting-backtraces-with-gdb-on-linux).
Once you are done, you can exit `[gdb](https://www.gnu.org/software/gdb/documentation)` by executing the `[quit](https://sourceware.org/gdb/current/onlinedocs/gdb/Quitting-GDB.html#Quitting-GDB)` command.
Getting Backtraces with `gdb` on Linux
--------------------------------------
On Linux, once you have debugging symbols for your `mysqld` binary, you can use the `[gdb](https://www.gnu.org/software/gdb/documentation)` utility to get [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace), which are what `gdb` calls stack traces. Backtraces can be obtained from a core file or from a running `mysqld` process.
Full [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) are preferred, as they contain function arguments, which can include useful information such as query strings, making the information easier to analyze.
To get a **full** [backtrace](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) of the main thread, you could execute the following:
```
bt -frame-arguments all full
```
If you want to get a **full** [backtrace](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) of **all** threads, then you could execute the following:
```
thread apply all bt -frame-arguments all full
```
If you want to write a full backtrace to a file in order to report a bug, the recommended way is to use gdb:
```
set logging on
set pagination off
set print frame-arguments all
thread apply all bt full
set logging off
```
This will write the full backtrace into the file `gdb.txt`.
### Getting Full Backtraces For All Threads From a Core File
Sometimes it can be helpful to get **full** [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) for all threads. The full [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) will contain function arguments, which can contain useful information such as query strings, so it can make the information easier to analyze.
To get **full** [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) for all threads from a `mysqld` core file, execute a command like the following:
```
sudo gdb --batch --eval-command="thread apply all bt -frame-arguments all full" /usr/sbin/mysqld /var/lib/mysql/core.932 > mysqld_full_bt_all_threads.txt
```
Be sure to replace `/usr/sbin/mysqld` with the path to your `mysqld` binary and to also replace `/var/lib/mysql/core.932` with the path to your core dump.
The [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) will be output to the file `mysqld_full_bt_all_threads.txt`.
### Getting Full Backtraces For All Threads From a Running `mysqld` Process
Sometimes it can be helpful to get **full** [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) for all threads. The full backtraces will contain function arguments, which can contain useful information such as query strings, so it can make the information easier to analyze.
To get **full** [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) for all threads from a running `mysqld` process, execute a command like the following:
```
sudo gdb --batch --eval-command="thread apply all bt -frame-arguments all full" /usr/sbin/mysqld $(pgrep -xn mysqld) > mysqld_full_bt_all_threads.txt
```
Be sure to replace `/usr/sbin/mysqld` with the path to your `mysqld` binary.
The [backtraces](https://sourceware.org/gdb/current/onlinedocs/gdb/Backtrace.html#Backtrace) will be output to the file `mysqld_full_bt_all_threads.txt`.
Running a Copy of the Database Directory
----------------------------------------
If you are concerned about running debuggers against your production database, you can also copy the database directory to another location.
This is useful when you know which statement crashed the server.
Start `mysqld` with the options `--datadir=/copy-of-original-data-directory --core-file --stack-trace --socket=/tmp/mysqld-alone.sock --skip-networking`.
Disabling Stack Traces in the Error Log
---------------------------------------
In order to disable stack traces in the [error log](../error-log/index), you can configure the `[skip\_stack\_trace](../mysqld-options/index#-stack-trace)` option either on the command-line or in a relevant server [option group](../configuring-mariadb-with-option-files/index#option-groups) in an [option file](../configuring-mariadb-with-option-files/index). For example:
```
[mariadb]
...
skip_stack_trace
```
Reporting the Problem
---------------------
If you encounter some problem in MariaDB, then MariaDB's developers would appreciate if you would [report a bug](../mariadb-community-bug-reporting/index) at the [MariaDB JIRA bug tracker](https://jira.mariadb.org). Please include the following information:
* Your full stack trace.
* Your [error log](../error-log/index).
* Your [option files](../configuring-mariadb-with-option-files/index).
* How to reproduce the problem.
* [SHOW CREATE TABLE](../show-create-table/index) output for each table in the query, and [EXPLAIN {query}](../explain/index) output, if the crash is query-related.
For very difficult or critical errors, you should consider uploading the following information to the [MariaDB FTP server](../mariadb-ftp-server/index) in a `.tar.gz` or `.zip` archive:
* Your debug build of `mysqld` (if you compiled it), otherwise version information on the mariadb-server package.
* Your core file.
* Your contact information.
* The associated [JIRA issue identifier](https://jira.mariadb.org) for the bug, if you [reported a bug](../mariadb-community-bug-reporting/index).
This information will allow the MariaDB developers at the MariaDB Corporation to analyze it and try to create a fix.
Content reproduced on this site is the property of its respective owners, and this content is not reviewed in advance by MariaDB. The views, information and opinions expressed by this content do not necessarily represent those of MariaDB or any other party.
Latitude/Longitude Indexing
===========================
The problem
-----------
You want to find the nearest 10 pizza parlors, but you cannot figure out how to do it efficiently in your huge database. Database indexes are good at one-dimensional indexing, but poor at two dimensions.
You might have tried
* INDEX(lat), INDEX(lon) -- but the optimizer used only one
* INDEX(lat,lon) -- but it still had to work too hard
* Sometimes you ended up with a full table scan -- Yuck.
* WHERE [SQRT(...)](../sqrt/index) < ... -- No chance of using any index.
* WHERE lat BETWEEN ... AND lng BETWEEN ... -- This has some chance of using such indexes.
The goal is to look only at records "close", in both directions, to the target lat/lng.
A solution -- first, the principles
-----------------------------------
[PARTITIONs](../managing-mariadb-partitioning/index) in MariaDB and MySQL sort of give you a way to have two clustered indexes. So, if we could slice up (partition) the globe in one dimension and use ordinary indexing in the other dimension, maybe we can get something approximating a 2D index. This 2D approach keeps the number of disk hits significantly lower than 1D approaches, thereby speeding up "find nearest" queries.
It works. Not perfectly, but better than the alternatives.
What to PARTITION on? It seems like latitude or longitude would be a good idea. Note that longitudes vary in width, from 69 miles (111 km) at the equator, to 0 at the poles. So, latitude seems like a better choice.
How many PARTITIONs? It does not matter a lot. Some thoughts:
* 90 partitions - 2 degrees each. (I don't like tables with too many partitions; 90 seems like plenty.)
* 50-100 - evenly populated. (This requires code. For 2.7M placenames, 85 partitions varied from 0.5 degrees to very wide partitions at the poles.)
* Don't have more than 100 partitions; there are inefficiencies in the partition implementation.
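The "evenly populated" option above requires a bit of code: sample the stored (scaled) latitudes and take quantiles as the `PARTITION BY RANGE` cut points. A minimal sketch, with illustrative function name and data (not from the article):

```python
# Sketch: derive roughly evenly populated PARTITION BY RANGE boundaries
# from a sample of scaled latitudes (Deg*10000). Illustrative only.

def partition_boundaries(scaled_lats, n_partitions):
    """Return n_partitions - 1 cut points splitting the data evenly."""
    s = sorted(scaled_lats)
    step = len(s) / n_partitions
    return [s[int(i * step)] for i in range(1, n_partitions)]

# Toy sample: uniform latitudes (real data would cluster near cities)
lats = [x * 100 for x in range(-9000, 9000, 5)]
cuts = partition_boundaries(lats, 8)
# Each cut becomes "PARTITION pN VALUES LESS THAN (<cut>)" in the DDL.
```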
How to PARTITION? Well, MariaDB and MySQL are very picky. So [FLOAT](../float/index)/[DOUBLE](../double/index) are out. [DECIMAL](../decimal/index) is out. So, we are stuck with some kludge. Essentially, we need to convert Lat/Lng to some size of [INT](../int/index) and use PARTITION BY RANGE.
Representation choices
----------------------
To get to a datatype that can be used in PARTITION, you need to "scale" the latitude and longitude. (Consider only the \*INTs; the other datatypes are included for comparison)
```
Datatype Bytes resolution
------------------ ----- --------------------------------
Deg*100 (SMALLINT) 4 1570 m 1.0 mi Cities
DECIMAL(4,2)/(5,2) 5 1570 m 1.0 mi Cities
SMALLINT scaled 4 682 m 0.4 mi Cities
Deg*10000 (MEDIUMINT) 6 16 m 52 ft Houses/Businesses
DECIMAL(6,4)/(7,4) 7 16 m 52 ft Houses/Businesses
MEDIUMINT scaled 6 2.7 m 8.8 ft
FLOAT 8 1.7 m 5.6 ft
DECIMAL(8,6)/(9,6) 9 16cm 1/2 ft Friends in a mall
Deg*10000000 (INT) 8 16mm 5/8 in Marbles
DOUBLE 16 3.5nm ... Fleas on a dog
```
(Sorted by resolution)
What these mean...
Deg\*100 ([SMALLINT](../smallint/index)) -- you take the lat/lng, multiply by 100, round, and store into a SMALLINT. That will take 2 bytes for each dimension, for a total of 4 bytes. Two items might be 1570 meters apart, but register as having the same latitude and longitude.
[DECIMAL(4,2)](../decimal/index) for latitude and DECIMAL(5,2) for longitude will take 2+3 bytes and have no better resolution than Deg\*100.
SMALLINT scaled -- Convert latitude into a SMALLINT SIGNED by doing (degrees / 90 \* 32767) and rounding; longitude by (degrees / 180 \* 32767).
[FLOAT](../float/index) has 24 significant bits; [DOUBLE](../double/index) has 53. (They don't work with PARTITIONing but are included for completeness. Often people use DOUBLE without realizing how much an overkill it is, and how much space it takes.)
Sure, you could do DEG\*1000 and other "in between" cases, but there is no advantage. DEG\*1000 takes as much space as DEG\*10000, but has less resolution.
So, go down the list to see how much resolution you need, then pick an encoding you are comfortable with. However, since we are about to use latitude as a "partition key", it must be limited to one of the INTs. For the sample code, I will use Deg\*10000 ([MEDIUMINT](../mediumint/index)).
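Two of the encodings above can be sketched as follows (a hedged illustration; these helper names are ours, not part of MariaDB):

```python
# Sketch of the "Deg*10000" and "SMALLINT scaled" encodings. Illustrative only.

def scale_deg1e4(deg):
    """Deg*10000 -> MEDIUMINT: e.g. 35.15 degrees becomes 351500."""
    return round(deg * 10000)

def scale_smallint(deg, max_deg):
    """'SMALLINT scaled': map [-max_deg .. +max_deg] onto [-32767 .. 32767].
    Use max_deg=90 for latitude, 180 for longitude."""
    return round(deg / max_deg * 32767)

# Going the other way (for display) just divides by the same factor.
```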
GCDist -- compute "great circle distance"
-----------------------------------------
GCDist is a helper FUNCTION that correctly computes the distance between two points on the globe.
The code has been benchmarked at about 20 microseconds per call on a 2011-vintage PC. If you had to check a million points, that would take 20 seconds -- far too much for a web application. So, one goal of the Procedure that uses it will be to minimize the usage of this function. With the code presented here, the function need be called only a few dozen or few hundred times, except in pathological cases.
Sure, you could use the Pythagorean formula. And it would work for most applications. But it does not take extra effort to do the GC. Furthermore, GC works across a pole and across the dateline. And, a Pythagorean function is not that much faster.
For efficiency, GCDist understands the scaling you picked and has that stuff hardcoded. I am picking "Deg\*10000", so the function expects 350000 for representing 35 degrees. If you choose a different scaling, you will need to change the code.
GCDist() takes 4 scaled DOUBLEs -- lat1, lon1, lat2, lon2 -- and returns a scaled number of "degrees" representing the distance.
The table of representation choices says 52 feet of resolution for Deg\*10000 and DECIMAL(x,4). Here is how it was calculated: measuring the diagonal between lat/lng (0,0) and (0.0001, 0.0001) (one 'unit in the last place'): GCDist(0,0,1,1) \* 69.172 / 10000 \* 5280 = 51.65, where
* 69.172 miles/degree of latitude
* 10000 units per degree for the scaling chosen
* 5280 feet / mile.
(No, this function does not compensate for the Earth being an oblate spheroid, etc.)
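The resolution arithmetic above can be verified with a direct Python port of the great-circle formula (`gc_dist` is our name, mirroring the SQL function GCDist in the reference code):

```python
import math

def gc_dist(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given as Deg*10000
    scaled values; returns scaled 'degrees' (multiply by .0069172
    for miles). A port of the SQL GCDist function."""
    deg2rad = math.pi / 1800000.0   # scaled-by-1e4 degrees -> radians
    rlat1 = deg2rad * lat1
    rlat2 = deg2rad * lat2
    rlond = deg2rad * (lon1 - lon2)
    # chord length between the two points on a unit sphere
    m = math.cos(rlat2)
    x = math.cos(rlat1) - m * math.cos(rlond)
    y = m * math.sin(rlond)
    z = math.sin(rlat1) - math.sin(rlat2)
    n = math.sqrt(x * x + y * y + z * z)
    return 2 * math.asin(n / 2) / deg2rad

# One 'unit in the last place' diagonally: about sqrt(2) scaled degrees,
# i.e. roughly 52 feet, matching the representation table.
feet = gc_dist(0, 0, 1, 1) * 0.0069172 * 5280
```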
Required table structure
------------------------
There will be one table (plus normalization tables as needed). The one table must be partitioned and indexed as indicated below.
Fields and indexes
* PARTITION BY RANGE(lat)
* lat -- scaled latitude (see above)
* lon -- scaled longitude
* PRIMARY KEY(lon, lat, ...) -- lon must be first; something must be added to make it UNIQUE
* id -- (optional) you may need to identify the rows for your purposes; AUTO\_INCREMENT if you like
* INDEX(id) -- if `id` is [AUTO\_INCREMENT](../auto_increment/index), then this plain INDEX (not UNIQUE, not PRIMARY KEY) is necessary
* ENGINE=[InnoDB](../innodb/index) -- so the PRIMARY KEY will be "clustered"
* Other indexes -- keep to a minimum (this is a general performance rule for large tables)
For most of this discussion, lat is assumed to be MEDIUMINT -- scaled from -90 to +90 by multiplying by 10000. Similarly for lon and -180 to +180.
The PRIMARY KEY must
* start with `lon` since the algorithm needs the "clustering" that InnoDB will provide, and
* include `lat` somewhere, since it is the PARTITION key, and
* contain something to make the key UNIQUE (lon+lat is unlikely to be sufficient).
The FindNearest PROCEDURE will do multiple SELECTs something like this:
```
WHERE lat BETWEEN @my_lat - @dlat
AND @my_lat + @dlat -- PARTITION Pruning and bounding box
AND lon BETWEEN @my_lon - @dlon
AND @my_lon + @dlon -- first part of PK
AND condition -- filter out non-pizza parlors
```
The query planner will
* Do PARTITION "pruning" based on the latitude; then
* Within a PARTITION (which is effectively a table), use lon to do a 'clustered' range scan; then
* Use the "condition" to filter down to the rows you desire, plus recheck lat. This design leads to very few disk blocks needing to be read, which is the main goal of the design.
Note that this does not even call GCDist. That comes in the last pass when the ORDER BY and LIMIT are used.
The [stored procedure](../stored-procedures/index) has a loop. At least two SELECTs will be executed, but with proper tuning, usually no more than about 6 will be performed. Because it searches by the PRIMARY KEY, each SELECT hits only one block of the table, sometimes a few more. Counting the number of blocks hit is a crude but effective way of comparing the performance of multiple designs. By comparison, a full table scan will probably touch thousands of blocks. A simple INDEX(lat) probably leads to hitting hundreds of blocks.
Filtering... An argument to the FindNearest procedure includes a boolean expression ("condition") for a WHERE clause. If you don't need any filtering, pass in "1". To avoid "SQL injection", do not let web users put arbitrary expressions; instead, construct the "condition" from inputs they provide, thereby making sure it is safe.
The algorithm
-------------
The algorithm is embodied in a [stored procedure](../stored-procedures/index) because of its complexity.
* You feed it a starting width for a "square" and a number of items to find.
* It builds a "square" around where you are.
* A SELECT is performed to see how many items are in the square.
* Loop, doubling the width of the square, until enough items are found.
* Now, a 'last' SELECT is performed to get the exact distances, sort them (ORDER BY) and LIMIT to the desired number.
* If spanning a pole or the dateline, a more complex SELECT is used.
The next section ("Performance") should make this a bit clearer as it walks through some examples.
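The steps above can be sketched as a toy in-memory version (a stand-in for the SQL SELECTs; it omits the SQRT(2) expansion and the dateline/pole handling of the real procedure, and all names are ours):

```python
import math

def find_nearest(points, my_lat, my_lon, start_deg, max_deg, limit):
    """Toy version of the stored procedure's loop. `points` is a list of
    (lat, lon) pairs in plain degrees. Grow a lat/lon 'square' until
    `limit` points fall inside (or the cap is hit), then order the
    finalists by exact great-circle distance."""
    dlat = start_deg
    while True:
        # widen the longitude range to compensate for narrowing longitude lines
        dlon = dlat / max(math.cos(math.radians(my_lat)), 1e-9)
        square = [p for p in points
                  if abs(p[0] - my_lat) <= dlat and abs(p[1] - my_lon) <= dlon]
        if len(square) >= limit or dlat >= max_deg:
            break
        dlat = min(2 * dlat, max_deg)   # double the square, as in the SP

    def gc(p):  # exact angular distance, computed only for the finalists
        cos_angle = (math.sin(math.radians(my_lat)) * math.sin(math.radians(p[0]))
                     + math.cos(math.radians(my_lat)) * math.cos(math.radians(p[0]))
                     * math.cos(math.radians(my_lon - p[1])))
        return math.acos(min(1.0, max(-1.0, cos_angle)))

    return sorted(square, key=gc)[:limit]
```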
Performance
-----------
Because of all the variations, it is hard to get a meaningful benchmark. So, here is some hand-waving instead.
Each SELECT is constrained by a "square" defined by a latitude range and a longitude range. (See the WHERE clause mentioned above, or in the sample code below.) Because of the way longitude lines warp, the longitude range of the "square" will be more degrees than the latitude range. Let's say the latitude partitioning is 3 degrees wide in the area where you are searching. That is over 200 miles (over 300km), so you are very likely to have a latitude range smaller than the partition width. Still, if you are reaching from the edge of a latitude stripe, the square could span two partitions. After partition pruning down to one (sometimes more) partition, the query is then constrained by a longitude range. (Remember, the PRIMARY KEY starts with `lon`.) If an InnoDB data block contains 100 rows (a handy Rule of Thumb), the select will touch one (or a few) block. If the square spans two (or more) partitions, then the same logic applies to each partition.
So, scanning the square will involve as little as one block; rarely more than a few blocks. The number of blocks is mostly independent of the dataset size.
The primary use case for this algorithm is when the data is significantly larger than will fit into cache (the buffer\_pool). Hence, the main goal is to minimize the number of disk hits.
Now let's look at some edge cases, and argue that the number of blocks is still better (usually) than with traditional indexing techniques.
What if you are looking for Starbucks in a dense city? There would be dozens, maybe hundreds per square mile. If you start the guess at 100 miles, the SELECTs would be hitting lots of blocks -- not efficient. In this case, the "starting distance" should be small, say, 2 miles. Let's say your app wants the closest 10 stores. In this example, you would probably find more than 10 Starbucks within 2 miles in 1 InnoDB block in one partition. Even though there is a second SELECT to finish off the query, it would be hitting the same block. Total: One block hit == cheap.
Let's say you start with a 5 mile square. Since there are upwards of 200 Starbucks within a 5-miles radius in some dense cities of the world, that might imply 300 in our "square". That maps to about 4 disk blocks, and a modest amount of CPU to chew through the 300 records. Still not bad.
Now, suppose you are on an ocean liner somewhere in the Pacific. And there is one Starbucks onboard, but you are looking for the nearest 10. If you again start with 2 miles, it will take several iterations to find 10 sites. But, let's walk through it anyway. The first probe will hit one partition (maybe 2), and find just one hit. The second probe doubles the width of the square; 4 miles will still give you one hit -- the same hit in the same block, which is now cached, so we won't count it as a second disk I/O. Eventually the square will be wide enough to span multiple partitions. Each extra partition will be one new disk hit to discover no sites in the square. Finally, the square will hit Chile or Hawaii or Fiji and find some more sites, perhaps enough to stop the iteration. Since the main criteria in determining the number of disk hits is the number of partitions hit, we do not want to split the world into too many partitions. If there are, say, 40 partitions, then I have just described a case where there might be 20 disk hits.
2-degree partitions might be good for a global table of stores or restaurants. A 5-mile starting distance might be good when filtering for Starbucks. 20 miles might be better for a department store.
Now, let's discuss the 'last' SELECT, wherein the square is expanded by SQRT(2) and the Great Circle formula is used to precisely order the N results. The SQRT(2) is in case the N items were all at the corners of the 'square'. Growing the square by this much allows us to catch any other sites that were just outside the old square.
First, note that this 'last' SELECT is hitting the same block(s) that the iteration hit, plus possibly hitting some more blocks. It is hard to predict how many extra blocks might be hit. Here's a pathological case. You are in the middle of a desert; the square grows and grows. Eventually it finds N sites. There is a big city just outside the final square from the iterating. Now the 'last' SELECT kicks in, and it includes lots of sites in this big city. "Lots of sites" --> lots of blocks --> lots of disk hits.
Discussion of reference code
----------------------------
Here's the gist of the [stored procedure](../stored-procedures/index) FindNearest().
* Make a guess at how close to "me" to look.
* See how many items are in a 'square' around me, after filtering.
* If not enough, repeat, doubling the width of the square.
* After finding enough, or giving up because we are looking "too far", make one last pass to get all the data, ORDERed and LIMITed
Note that the loop merely uses 'squares' of lat/lng ranges. This is crude, but works well with the partitioning and indexing, and avoids calling to GCDist (until the last step). In the sample code, I picked 15 miles as starting value. Adjusting this will have some impact on the Procedure's performance, but the impact will vary with the use cases. A rough way to set the radius is to guess what will find the desired LIMIT about half the time. (This value is hardcoded in the PROCEDURE.)
Parameters passed into FindNearest():
* your Latitude -- -90..90 (not scaled -- see hardcoded conversion in PROCEDURE)
* your Longitude -- -180..180 (not scaled)
* Start distance -- (miles or km) -- see discussion below
* Max distance -- in miles or km -- see hardcoded conversion in PROCEDURE
* Limit -- maximum number of items to return
* Condition -- something to put after 'AND' (more discussion above)
The function will find the nearest items, up to Limit that meet the Condition. But it will give up at Max distance. (If you are at the South Pole, why bother searching very far for the tenth pizza parlor?)
Because of the "scaling", "hardcoding", "Condition", the table name, etc, this PROCEDURE is not truly generic; the code must be modified for each application. Yes, I could have designed it to pass all that stuff in. But what a mess.
The "\_start\_dist" gives some control over the performance. Making this too small leads to extra iterations; too big leads to more rows being checked. If you choose to tune the Stored Procedure, do the following. "SELECT @iterations" after calling the SP for a number of typical values. If the value is usually 1, then decrease \_start\_dist. If it is usually 2 or more, then increase it.
Timing: Under 10ms for "typical" usage; any dataset size. Slower for pathological cases (low min distance, high max distance, crossing dateline, bad filtering, cold cache, etc)
End-cases:
* By using GC distance, not Pythagoras, distances are 'correct' even near poles.
* Poles -- Even if the "nearest" is almost 360 degrees away (longitude), it can find it.
* Dateline -- There is a small, 'contained', piece of code for crossing the Dateline. Example: you are at +179 deg longitude, and the nearest item is at -179.
The procedure returns one resultset, SELECT \*, distance.
* Only rows that meet your Condition, within Max distance are returned
* At most Limit rows are returned
* The rows will be ordered, "closest" first.
* "dist" will be in miles or km (based on a hardcoded constant in the SP)
Reference code, assuming deg\*10000 and 'miles'
-----------------------------------------------
This version is based on scaling "Deg\*10000 (MEDIUMINT)".
```
DELIMITER //
drop function if exists GCDist //
CREATE FUNCTION GCDist (
_lat1 DOUBLE, -- Scaled Degrees north for one point
_lon1 DOUBLE, -- Scaled Degrees west for one point
_lat2 DOUBLE, -- other point
_lon2 DOUBLE
) RETURNS DOUBLE
DETERMINISTIC
CONTAINS SQL -- SQL but does not read or write
SQL SECURITY INVOKER -- No special privileges granted
-- Input is a pair of latitudes/longitudes multiplied by 10000.
-- For example, the south pole has latitude -900000.
-- Multiply output by .0069172 to get miles between the two points
-- or by .0111325 to get kilometers
BEGIN
-- Hardcoded constant:
DECLARE _deg2rad DOUBLE DEFAULT PI()/1800000; -- For scaled by 1e4 to MEDIUMINT
DECLARE _rlat1 DOUBLE DEFAULT _deg2rad * _lat1;
DECLARE _rlat2 DOUBLE DEFAULT _deg2rad * _lat2;
-- compute as if earth's radius = 1.0
DECLARE _rlond DOUBLE DEFAULT _deg2rad * (_lon1 - _lon2);
DECLARE _m DOUBLE DEFAULT COS(_rlat2);
DECLARE _x DOUBLE DEFAULT COS(_rlat1) - _m * COS(_rlond);
DECLARE _y DOUBLE DEFAULT _m * SIN(_rlond);
DECLARE _z DOUBLE DEFAULT SIN(_rlat1) - SIN(_rlat2);
DECLARE _n DOUBLE DEFAULT SQRT(
_x * _x +
_y * _y +
_z * _z );
RETURN 2 * ASIN(_n / 2) / _deg2rad; -- again--scaled degrees
END;
//
DELIMITER ;
DELIMITER //
-- FindNearest (about my 6th approach)
drop procedure if exists FindNearest //
CREATE
PROCEDURE FindNearest (
IN _my_lat DOUBLE, -- Latitude of me [-90..90] (not scaled)
IN _my_lon DOUBLE, -- Longitude [-180..180]
IN _START_dist DOUBLE, -- Starting estimate of how far to search: miles or km
IN _max_dist DOUBLE, -- Limit how far to search: miles or km
IN _limit INT, -- How many items to try to get
IN _condition VARCHAR(1111) -- will be ANDed in a WHERE clause
)
DETERMINISTIC
BEGIN
-- lat and lng are in degrees -90..+90 and -180..+180
-- All computations done in Latitude degrees.
-- Thing to tailor
-- *Locations* -- the table
-- Scaling of lat, lon; here using *10000 in MEDIUMINT
-- Table name
-- miles versus km.
-- Hardcoded constant:
DECLARE _deg2rad DOUBLE DEFAULT PI()/1800000; -- For scaled by 1e4 to MEDIUMINT
-- Cannot use params in PREPARE, so switch to @variables:
-- Hardcoded constant:
SET @my_lat := _my_lat * 10000,
@my_lon := _my_lon * 10000,
@deg2dist := 0.0069172, -- 69.172 for miles; 111.325 for km *** (mi vs km)
@start_deg := _start_dist / @deg2dist, -- Start with this radius first (eg, 15 miles)
@max_deg := _max_dist / @deg2dist,
@cutoff := @max_deg / SQRT(2), -- (slightly pessimistic)
@dlat := @start_deg, -- note: must stay positive
@lon2lat := COS(_deg2rad * @my_lat),
@iterations := 0; -- just debugging
-- Loop through, expanding search
-- Search a 'square', repeat with bigger square until find enough rows
-- If the initial probe found _limit rows, then probably the first
-- iteration here will find the desired data.
-- Hardcoded table name:
-- This is the "first SELECT":
SET @sql = CONCAT(
"SELECT COUNT(*) INTO @near_ct
FROM Locations
WHERE lat BETWEEN @my_lat - @dlat
AND @my_lat + @dlat -- PARTITION Pruning and bounding box
AND lon BETWEEN @my_lon - @dlon
AND @my_lon + @dlon -- first part of PK
AND ", _condition);
PREPARE _sql FROM @sql;
MainLoop: LOOP
SET @iterations := @iterations + 1;
-- The main probe: Search a 'square'
SET @dlon := ABS(@dlat / @lon2lat); -- good enough for now -- note: must stay positive
-- Hardcoded constants:
SET @dlon := IF(ABS(@my_lat) + @dlat >= 900000, 3600001, @dlon); -- near a Pole
EXECUTE _sql;
IF ( @near_ct >= _limit OR -- Found enough
@dlat >= @cutoff ) THEN -- Give up (too far)
LEAVE MainLoop;
END IF;
-- Expand 'square':
SET @dlat := LEAST(2 * @dlat, @cutoff); -- Double the radius to search
END LOOP MainLoop;
DEALLOCATE PREPARE _sql;
-- Out of loop because found _limit items, or going too far.
-- Expand range by about 1.4 (but not past _max_dist),
-- then fetch details on nearest 10.
-- Hardcoded constant:
SET @dlat := IF( @dlat >= @max_deg OR @dlon >= 1800000,
@max_deg,
GCDist(ABS(@my_lat), @my_lon,
ABS(@my_lat) - @dlat, @my_lon - @dlon) );
-- ABS: go toward equator to find farthest corner (also avoids poles)
-- Dateline: not a problem (see GCDist code)
-- Reach for longitude line at right angle:
-- sin(dlon)*cos(lat) = sin(dlat)
-- Hardcoded constant:
SET @dlon := IFNULL(ASIN(SIN(_deg2rad * @dlat) /
COS(_deg2rad * @my_lat))
/ _deg2rad -- precise
, 3600001); -- must be too near a pole
-- This is the "last SELECT":
-- Hardcoded constants:
IF (ABS(@my_lon) + @dlon < 1800000 OR -- Usual case - not crossing dateline
ABS(@my_lat) + @dlat < 900000) THEN -- crossing pole, so dateline not an issue
-- Hardcoded table name:
SET @sql = CONCAT(
"SELECT *,
@deg2dist * GCDist(@my_lat, @my_lon, lat, lon) AS dist
FROM Locations
WHERE lat BETWEEN @my_lat - @dlat
AND @my_lat + @dlat -- PARTITION Pruning and bounding box
AND lon BETWEEN @my_lon - @dlon
AND @my_lon + @dlon -- first part of PK
AND ", _condition, "
HAVING dist <= ", _max_dist, "
ORDER BY dist
LIMIT ", _limit
);
ELSE
-- Hardcoded constants and table name:
-- Circle crosses dateline, do two SELECTs, one for each side
SET @west_lon := IF(@my_lon < 0, @my_lon, @my_lon - 3600000);
SET @east_lon := @west_lon + 3600000;
-- One of those will be beyond +/- 180; this gets points beyond the dateline
SET @sql = CONCAT(
"( SELECT *,
@deg2dist * GCDist(@my_lat, @west_lon, lat, lon) AS dist
FROM Locations
WHERE lat BETWEEN @my_lat - @dlat
AND @my_lat + @dlat -- PARTITION Pruning and bounding box
AND lon BETWEEN @west_lon - @dlon
AND @west_lon + @dlon -- first part of PK
AND ", _condition, "
HAVING dist <= ", _max_dist, " )
UNION ALL
( SELECT *,
@deg2dist * GCDist(@my_lat, @east_lon, lat, lon) AS dist
FROM Locations
WHERE lat BETWEEN @my_lat - @dlat
AND @my_lat + @dlat -- PARTITION Pruning and bounding box
AND lon BETWEEN @east_lon - @dlon
AND @east_lon + @dlon -- first part of PK
AND ", _condition, "
HAVING dist <= ", _max_dist, " )
ORDER BY dist
LIMIT ", _limit
);
END IF;
PREPARE _sql FROM @sql;
EXECUTE _sql;
DEALLOCATE PREPARE _sql;
END;
//
DELIMITER ;
```
Sample
------
Find the 5 cities with non-zero population (out of 3 million) nearest to (+35.15, -90.05). Start with a 10-mile bounding box and give up at 100 miles.
```
CALL FindNearest(35.15, -90.05, 10, 100, 5, 'population > 0');
+---------+--------+---------+---------+--------------+--------------+-------+------------+--------------+---------------------+------------------------+
| id | lat | lon | country | ascii_city | city | state | population | @gcd_ct := 0 | dist | @gcd_ct := @gcd_ct + 1 |
+---------+--------+---------+---------+--------------+--------------+-------+------------+--------------+---------------------+------------------------+
| 3023545 | 351494 | -900489 | us | memphis | Memphis | TN | 641608 | 0 | 0.07478733189367963 | 3 |
| 2917711 | 351464 | -901844 | us | west memphis | West Memphis | AR | 28065 | 0 | 7.605683607627499 | 2 |
| 2916457 | 352144 | -901964 | us | marion | Marion | AR | 9227 | 0 | 9.3994963998986 | 1 |
| 3020923 | 352044 | -898739 | us | bartlett | Bartlett | TN | 43264 | 0 | 10.643941157860604 | 7 |
| 2974644 | 349889 | -900125 | us | southaven | Southaven | MS | 38578 | 0 | 11.344042217329935 | 5 |
+---------+--------+---------+---------+--------------+--------------+-------+------------+--------------+---------------------+------------------------+
5 rows in set (0.00 sec)
Query OK, 0 rows affected (0.04 sec)
SELECT COUNT(*) FROM ll_table;
+----------+
| COUNT(*) |
+----------+
| 3173958 |
+----------+
1 row in set (5.04 sec)
FLUSH STATUS;
CALL...
SHOW SESSION STATUS LIKE 'Handler%';
show session status like 'Handler%';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| Handler_read_first | 1 |
| Handler_read_key | 3 |
| Handler_read_next | 1307 | -- some index, some tmp, but far less than 3 million.
| Handler_read_rnd | 5 |
| Handler_read_rnd_next | 13 |
| Handler_write | 12 | -- it needed a tmp
+----------------------------+-------+
```
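The "reach for longitude line at right angle" step in the procedure (sin(dlon)\*cos(lat) = sin(dlat)) can be checked numerically. A sketch (the function name is ours):

```python
import math

def dlon_for(dlat_deg, lat_deg):
    """Longitude half-width whose nearest edge is still dlat_deg away,
    from sin(dlon) * cos(lat) = sin(dlat). Returns None when too near
    a pole, meaning: search all longitudes."""
    s = math.sin(math.radians(dlat_deg)) / math.cos(math.radians(lat_deg))
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

# At the equator the 'square' is square; at 60 degrees north, a degree
# of longitude is half as wide, so the reach roughly doubles.
```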
Postlog
-------
There is a "Haversine" algorithm that is twice as fast as the GCDist function here. But it has a fatal flaw of sometimes returning NULL for the distance between a point and itself. (This is because of computing a number slightly bigger than 1.0, then trying to take the ACOS of it.)
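That failure mode is easy to reproduce: floating-point rounding can push the acos() argument one ulp past 1.0 for a point compared with itself. A sketch of the problem and the usual clamping repair (our code, not from the article):

```python
import math

# acos() has domain [-1, 1]; an argument one ulp past 1.0 fails:
try:
    math.acos(1.0 + 2.0 ** -52)
    overflowed = False
except ValueError:
    overflowed = True   # "math domain error"

def safe_acos(x):
    """Clamp into [-1, 1] before taking acos, the common repair."""
    return math.acos(max(-1.0, min(1.0, x)))
```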
See also
--------
* [Cities used for testing](https://www.maxmind.com/en/worldcities)
* [A forum thread](http://forums.mysql.com/read.php?20,619712,619712)
* [StackOverflow discussion](http://stackoverflow.com/questions/29058863/mysql-query-takes-long-time/)
* [Sample](http://dba.stackexchange.com/questions/134028/select-the-minimum-of-a-calculated-distance-value-without-sorting)
* [Z-ordering](https://en.wikipedia.org/wiki/Geohash)
Rick James graciously allowed us to use this article in the Knowledge Base.
[Rick James' site](http://mysql.rjweb.org/) has other useful tips, how-tos, optimizations, and debugging tips.
Original source: <http://mysql.rjweb.org/doc.php/latlng>
| programming_docs |