I installed MySQL through yum just now, and Fedora installed MariaDB for me. I know MariaDB is a new branch of MySQL, but I can't understand why it never asked me to set a password. I have tried 123456 and so on, but failed. My Fedora install is new, and this is the first time I have installed MySQL/MariaDB. What should I do?
I had the same problem. It's true that the password is empty, but even so the error message is shown. The solution is just to use sudo, so

$ sudo mysql

will open the mysql client. For securing the database, you should use sudo again:

$ sudo mysql_secure_installation
MariaDB
20,270,879
105
I want to know how to check whether MySQL strict mode is on or off in localhost (XAMPP): if it is on, which modes are set and how to turn it off; if it is off, how to turn it on. I already followed http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sql-mode-full and https://mariadb.com/kb/en/mariadb/sql_mode/ and other related sites too, but I didn't find an exact answer to my question.
STRICT_TRANS_TABLES is the mode responsible for MySQL strict mode.

To check whether strict mode is enabled, run:

SHOW VARIABLES LIKE 'sql_mode';

If one of the values is STRICT_TRANS_TABLES, then strict mode is enabled; otherwise it is not. In my case it gave:

+--------------+------------------------------------------+
|Variable_name |Value                                     |
+--------------+------------------------------------------+
|sql_mode      |STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION|
+--------------+------------------------------------------+

Hence strict mode is enabled in my case, as one of the values is STRICT_TRANS_TABLES.

To disable strict mode, run:

set global sql_mode='';

(or any mode list that does not include STRICT_TRANS_TABLES, e.g. set global sql_mode='NO_ENGINE_SUBSTITUTION';)

To enable strict mode again, run:

set global sql_mode='STRICT_TRANS_TABLES';
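The check above boils down to inspecting the comma-separated sql_mode value returned by SHOW VARIABLES. A small helper, sketched in Python (parsing logic only; fetching the value from a live connection is left out, and the function name is my own), shows the test:

```python
def is_strict_mode(sql_mode_value: str) -> bool:
    """Return True if STRICT_TRANS_TABLES appears in a comma-separated
    sql_mode string, as returned by SHOW VARIABLES LIKE 'sql_mode'."""
    modes = {m.strip().upper() for m in sql_mode_value.split(",") if m.strip()}
    return "STRICT_TRANS_TABLES" in modes

print(is_strict_mode("STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"))  # True
print(is_strict_mode("NO_ENGINE_SUBSTITUTION"))                      # False
```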
MariaDB
40,881,773
96
As part of our introduction to computer security course, we have a short unit on SQL injections. One of the homework assignments is a basic unsanitized login page. The expected solution is something along the lines of the classic ' or 1=1; --, but we always welcome students to find unconventional solutions. One such solution was presented to me recently: inputting '--' for the password seems to perform a successful SQL injection. This causes the query to be evaluated as something like:

SELECT * FROM users WHERE name='admin' AND password=''--'';

Here, --''; is not parsed as a comment, since MariaDB requires -- to be followed by whitespace for it to start a comment. In fact, if the double dash were parsed as a comment, this query wouldn't return anything at all; equivalently, we would have password='', which would evaluate to false, assuming a non-empty password. The extra pair of quotes at the end appears to be necessary: leaving it as password=''--; or inserting other data behind it (password=''--1;) causes the conditional to be evaluated as false, as expected. Some quick testing fails to reproduce this behavior in other databases; as far as I can tell, this is MariaDB-specific behavior. The documentation confirms that two dashes without a space are not parsed as a comment, but does not elaborate on what they are parsed as instead.

EDIT: Somehow I managed to miss the fact that this happens in MySQL too; in fact, this behavior occurs in any MySQL fork, not just MariaDB.

What does -- do when it is not followed by a space, and why is it causing comparisons to evaluate as true?
A toy example:

CREATE TABLE users (
  userid INTEGER PRIMARY KEY,
  username TEXT NOT NULL,
  password TEXT NOT NULL
);

INSERT INTO users VALUES (0001, 'admin', 'S3cur3P4ssw0rd!');
INSERT INTO users VALUES (0002, 'generic_user', 'Password');

SELECT * FROM users WHERE username='admin' AND password='';      -- empty password, query returns no users
SELECT * FROM users WHERE username='admin' AND password=''-- ''; -- parsed as comment, equivalent to above query, returns no users
SELECT * FROM users WHERE username='admin' AND password=''--'';  -- query returns admin user
SELECT * FROM users WHERE username='admin' AND password=''--;    -- query returns zero users
SELECT * FROM users WHERE username='admin' AND password=''--1;   -- query returns zero users
This specific SQL injection exploits two behaviors.

First, -- without the space, together with a value behind it, is parsed as a subtraction of a negative number. When you run the query

SELECT '10'--'9'

you will get the following result:

+-----------+
| '10'--'9' |   -- interpreted as 10 minus -9 (negating the negative)
+-----------+
|        19 |
+-----------+

So when you run the query

SELECT ''--''

you get the result of the first number (0) minus the negated second number (-0), which results in 0:

+--------+
| ''--'' |
+--------+
|      0 |
+--------+

The second part is that MariaDB (and MySQL) tries to convert a string to a number when the current context requires the value to be numeric (as in arithmetic, or in a numeric comparison where one operand is already numeric). See the following query:

SELECT 'some value', '42 apples';

This results in nothing special:

+------------+-----------+
| some value | 42 apples |
+------------+-----------+
| some value | 42 apples |
+------------+-----------+

but when you do arithmetic like this

SELECT 'some value'+0, '42 apples'+0;

you get this:

+----------------+---------------+
| 'some value'+0 | '42 apples'+0 |
+----------------+---------------+
|              0 |            42 |
+----------------+---------------+

This can be used against you, together with the arithmetic above, whenever the injected expression evaluates to 0. The query

SELECT password, ''--'', password+0, password=0, password=''--'' FROM users;

shows the following results (I added a new user with a number at the beginning of the password):

+-----------------+--------+------------+------------+-----------------+
| password        | ''--'' | password+0 | password=0 | password=''--'' |
+-----------------+--------+------------+------------+-----------------+
| S3cur3P4ssw0rd! |      0 |          0 |          1 |               1 |
| Password        |      0 |          0 |          1 |               1 |
| 42 apples       |      0 |         42 |          0 |               0 |
+-----------------+--------+------------+------------+-----------------+

Here you see that the password is first converted to a number, most likely 0, and then compared against "your" 0 value from ''--'', resulting in a successful SQL injection.
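The coercion rule can be mimicked outside the database. The Python sketch below is illustrative only (the real server has more edge cases, and `mysql_numeric_cast` is my own name): a string is cast to its longest leading numeric prefix, or 0 if there is none, which is exactly why password = ''--'' compares equal for non-numeric passwords:

```python
import re

def mysql_numeric_cast(s: str) -> float:
    """Rough sketch of MySQL/MariaDB's implicit string-to-number cast:
    the longest leading numeric prefix is used; anything else yields 0.
    Illustrative only -- the real server behavior has more edge cases."""
    m = re.match(r"\s*[-+]?(\d+(\.\d*)?|\.\d+)", s)
    return float(m.group(0)) if m else 0.0

# password = ''--''  is parsed as  '' - (-'')  =>  0 - (-0)  =>  0
injected_value = mysql_numeric_cast("") - (-mysql_numeric_cast(""))
print(injected_value)                          # 0.0
print(mysql_numeric_cast("42 apples"))         # 42.0
print(mysql_numeric_cast("S3cur3P4ssw0rd!"))   # 0.0

# The stored password also casts to 0, so 0 = 0 -> the comparison is true.
print(mysql_numeric_cast("S3cur3P4ssw0rd!") == injected_value)  # True
```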
MariaDB
78,431,167
73
When migrating my DB, this error appears. Below is my code, followed by the error that I get when trying to run the migration.

Code

public function up()
{
    Schema::create('meals', function (Blueprint $table) {
        $table->increments('id');
        $table->integer('user_id')->unsigned();
        $table->integer('category_id')->unsigned();
        $table->string('title');
        $table->string('body');
        $table->string('meal_av');
        $table->timestamps();

        $table->foreign('user_id')
            ->references('id')
            ->on('users')
            ->onDelete('cascade');

        $table->foreign('category_id')
            ->references('id')
            ->on('categories')
            ->onDelete('cascade');
    });
}

Error message

[Illuminate\Database\QueryException]
SQLSTATE[HY000]: General error: 1005 Can't create table meal.#sql-11d2_14 (errno: 150 "Foreign key constraint is incorrectly formed")
(SQL: alter table meals add constraint meals_category_id_foreign foreign key (category_id) references categories (id) on delete cascade)
When creating a new table in Laravel, the generated migration contains:

$table->bigIncrements('id');

instead of (in older Laravel versions):

$table->increments('id');

When the referenced table uses bigIncrements, the foreign key column must be a bigInteger instead of an integer. So your code would look like this:

public function up()
{
    Schema::create('meals', function (Blueprint $table) {
        $table->increments('id');
        $table->unsignedBigInteger('user_id');     // changed this line
        $table->unsignedBigInteger('category_id'); // changed this line
        $table->string('title');
        $table->string('body');
        $table->string('meal_av');
        $table->timestamps();

        $table->foreign('user_id')
            ->references('id')
            ->on('users')
            ->onDelete('cascade');

        $table->foreign('category_id')
            ->references('id')
            ->on('categories')
            ->onDelete('cascade');
    });
}

You could also use increments instead of bigIncrements on the referenced tables, as Kiko Sejio said. The difference between Integer and BigInteger is the size:

int    => 32-bit
bigint => 64-bit
MariaDB
32,669,880
71
I am getting this error for my Java code:

Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'type = MyISAM' at line 1

This is the query passed by Hibernate:

Hibernate: create table EMPLOYEE (emp_id integer not null, FNAME varchar(255), LNAME varchar(255), primary key (emp_id)) type=MyISAM

I have looked at all the questions related to this error, but in all of them the user is passing the query with "type = MyISAM" themselves, so they can change "type" to "engine". Here, however, Hibernate is responsible for creating the table, so I don't understand where the mistake is or how to fix it.

This is my configuration file:

<hibernate-configuration>
  <session-factory>
    <property name="hibernate.hbm2ddl.auto">create</property>
    <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="hibernate.connection.url">jdbc:mysql://localhost/servletcheck</property>
    <property name="hibernate.connection.username">root</property>
    <property name="hibernate.connection.password"> </property>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="hibernate.show_sql">true</property>
    <mapping resource="Employee.hbm.xml"/>
  </session-factory>
</hibernate-configuration>

This is my mapping file:

<hibernate-mapping>
  <class name="first.Employee" table="EMPLOYEE">
    <id name="id" type="int">
      <column name="emp_id" />
      <generator class="assigned" />
    </id>
    <property name="fname" type="java.lang.String">
      <column name="FNAME" />
    </property>
    <property name="lname" type="java.lang.String">
      <column name="LNAME" />
    </property>
  </class>
</hibernate-mapping>

This is my class file:

public class StoreData {
    public static void main(String[] args) {
        Configuration cfg = new Configuration();
        cfg.configure("hibernate.cfg.xml");
        SessionFactory factory = cfg.buildSessionFactory();
        Session session = factory.openSession();
        org.hibernate.Transaction t = session.beginTransaction();

        Employee e = new Employee();
        e.setId(1);
        e.setFname("yogesh");
        e.setLname("Meghnani");

        session.persist(e);
        t.commit();
    }
}
The problem is that, in Hibernate 5.x and earlier, the dialect org.hibernate.dialect.MySQLDialect is for MySQL 4.x or earlier. The fragment TYPE=MYISAM generated by this dialect was deprecated in MySQL 4.0 and removed in 5.5.

Given that you use MariaDB, you need to use one of the following (depending on the version of MariaDB and, possibly, the version of Hibernate):

org.hibernate.dialect.MariaDBDialect
org.hibernate.dialect.MariaDB53Dialect

or higher versions (e.g. org.hibernate.dialect.MariaDB106Dialect).

If you are using MySQL, or if the above two dialects for MariaDB don't exist in your version of Hibernate:

org.hibernate.dialect.MySQL5Dialect
org.hibernate.dialect.MySQL55Dialect
org.hibernate.dialect.MySQL57Dialect
org.hibernate.dialect.MySQL8Dialect

or variants of these dialects (e.g. org.hibernate.dialect.MySQL57InnoDBDialect).

NOTE: With Hibernate 6, you should once again use MySQLDialect or MariaDBDialect, as Hibernate 6 dialects configure themselves based on the actual connected version.
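Applied to the hibernate.cfg.xml shown in the question, the fix is a one-line change of the dialect property. MariaDB53Dialect is used here as an example on the assumption that the Hibernate version in use ships it; pick whichever of the dialects above your version provides:

```xml
<!-- was: org.hibernate.dialect.MySQLDialect (generates TYPE=MYISAM) -->
<property name="hibernate.dialect">org.hibernate.dialect.MariaDB53Dialect</property>
```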
MariaDB
43,716,068
61
I logged into MariaDB/MySQL and entered: SHOW COLLATION; I see utf8mb4_unicode_ci and utf8mb4_unicode_520_ci among the available collations. What is the difference between these two collations and which should we be using?
Well, you can read about the differences in the documentation. I can't tell you what you should be using, because every project is different.

10.1.3 Collation Naming Conventions

MySQL collation names follow these conventions:

A collation name starts with the name of the character set with which it is associated, followed by one or more suffixes indicating other collation characteristics. For example, utf8_general_ci and latin1_swedish_ci are collations for the utf8 and latin1 character sets, respectively.

A language-specific collation includes a language name. For example, utf8_turkish_ci and utf8_hungarian_ci sort characters for the utf8 character set using the rules of Turkish and Hungarian, respectively.

Case sensitivity for sorting is indicated by _ci (case insensitive), _cs (case sensitive), or _bin (binary; character comparisons are based on character binary code values). For example, latin1_general_ci is case insensitive, latin1_general_cs is case sensitive, and latin1_bin uses binary code values.

For Unicode, collation names may include a version number to indicate the version of the Unicode Collation Algorithm (UCA) on which the collation is based. UCA-based collations without a version number in the name use the version-4.0.0 UCA weight keys. For example:

utf8_unicode_ci (with no version number in the name) is based on UCA 4.0.0 weight keys (http://www.unicode.org/Public/UCA/4.0.0/allkeys-4.0.0.txt).
utf8_unicode_520_ci is based on UCA 5.2.0 weight keys (http://www.unicode.org/Public/UCA/5.2.0/allkeys.txt).

For Unicode, the xxx_general_mysql500_ci collations preserve the pre-5.1.24 ordering of the original xxx_general_ci collations and permit upgrades for tables created before MySQL 5.1.24. For more information, see Section 2.11.3, "Checking Whether Tables or Indexes Must Be Rebuilt", and Section 2.11.4, "Rebuilding or Repairing Tables or Indexes".

Source
MariaDB
37,307,146
58
In MySQL/MariaDB, the most efficient way to store a UUID is in a BINARY(16) column. However, sometimes you want to obtain it as a formatted UUID string. Given the following table structure, how would I obtain all UUIDs in the default formatted way?

CREATE TABLE foo (uuid BINARY(16));
The following produces the result I was after:

SELECT LOWER(CONCAT(
    SUBSTR(HEX(uuid), 1, 8), '-',
    SUBSTR(HEX(uuid), 9, 4), '-',
    SUBSTR(HEX(uuid), 13, 4), '-',
    SUBSTR(HEX(uuid), 17, 4), '-',
    SUBSTR(HEX(uuid), 21)
)) FROM foo;
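As a cross-check of the formatting logic, the same transformation can be reproduced in Python: given the 16 raw bytes stored in the BINARY(16) column, the uuid module yields the same canonical lowercase, dash-separated form as the HEX/SUBSTR/CONCAT expression above (the hex literal is just sample data):

```python
import uuid

# Sample 16 raw bytes, as they would come out of a BINARY(16) column.
raw = bytes.fromhex("0123456789abcdef0123456789abcdef")

# str(uuid.UUID(...)) formats as 8-4-4-4-12 lowercase hex with dashes.
formatted = str(uuid.UUID(bytes=raw))
print(formatted)  # 01234567-89ab-cdef-0123-456789abcdef
```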
MariaDB
37,168,797
56
MySQL is awesome! I am currently involved in a major server migration. Previously, our small database was hosted on the same server as the client, so we used to do this:

SELECT * INTO OUTFILE ....
LOAD DATA INFILE ....

Now we moved the database to a different server, and SELECT * INTO OUTFILE .... no longer works. That is understandable (security reasons, I believe). But, interestingly, LOAD DATA INFILE .... can be changed to LOAD DATA LOCAL INFILE .... and, bam, it works.

I am not complaining, nor am I expressing disgust towards MySQL; the alternative just added two lines of extra code and a system call from a .sql script. All I wanted to know is why LOAD DATA LOCAL INFILE works and why there is no such thing as SELECT INTO OUTFILE LOCAL. I did my homework and couldn't find a direct answer to the questions above; I couldn't find a feature request at MySQL either. If someone can clear that up, that would be awesome! Is MariaDB capable of handling this problem?
From the manual:

    The SELECT ... INTO OUTFILE statement is intended primarily to let you very quickly dump a table to a text file on the server machine. If you want to create the resulting file on some client host other than the server host, you cannot use SELECT ... INTO OUTFILE. In that case, you should instead use a command such as mysql -e "SELECT ..." > file_name to generate the file on the client host.

http://dev.mysql.com/doc/refman/5.0/en/select.html

An example:

mysql -h my.db.com -u usrname --password=pass db_name -e 'SELECT foo FROM bar' > /tmp/myfile.txt
MariaDB
2,867,607
50
I set up my first Ubuntu server with Ubuntu 16.04, nginx, PHP 7.0, MariaDB, Nextcloud, and external DynDNS using this tutorial: Install Nextcloud 9 on Ubuntu 16.04. Everything worked fine, but since I restarted the server the next day, Nextcloud just shows me a blank page. After clicking through all the logs of nginx, MariaDB, and Nextcloud, I found out that the mysql service just doesn't start. So I ran service mysql start and everything worked fine again (calling Nextcloud from the server as well as from other workstations). I just wondered why the terminal didn't "close" the line, as if it were still working on the command. After about 5 minutes, the line "closes" and the following message appears:

"Job for mariadb.service failed because a timeout was exceeded. See "systemctl status mariadb.service" and "journalctl -xe" for details."

Then the clients again just get a blank page in Nextcloud. When I run the command and close the terminal immediately, clients get access as well, but it is still lost after 5 minutes. I tried backing up Nextcloud and the SQL data and running apt-get purge --auto-remove mariadb-server, then running the MariaDB installation steps from the tutorial again, importing the backup SQL instead of creating a new one. That didn't change anything. The next try was update-rc.d mysql defaults and update-rc.d mysql enable, but after a restart, just the blank page again; access is only possible for 5 minutes by starting the service manually. I also tried BUM (BootUpManager), but the service seems to be enabled. You can start services from it manually as well, so I tried it with mysql and, surprise: Nextcloud was available for 5 minutes while BUM just hung up. I also found mariadb.com/kb/en/mariadb/starting-and-stopping-mariadb-automatically/ but tried nothing from it, because it seems like there is something else really wrong.
root@s1:~# systemctl status mariadb.service:

● mariadb.service - MariaDB database server
   Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset:
  Drop-In: /etc/systemd/system/mariadb.service.d
           └─migrated-from-my.cnf-settings.conf
   Active: failed (Result: timeout) since Di 2016-12-06 14:52:51 CET; 55s ago
  Process: 3565 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WS
  Process: 3415 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR
  Process: 3409 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START
  Process: 3405 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/ru
 Main PID: 3565 (code=exited, status=0/SUCCESS)

Dez 06 14:52:48 s1 mysqld[3565]: 2016-12-06 14:52:48 3067387712 [Note] /usr/sbin
Dez 06 14:52:48 s1 mysqld[3565]: 2016-12-06 14:52:48 3067387712 [Note] Event Sch
Dez 06 14:52:48 s1 mysqld[3565]: 2016-12-06 14:52:48 2147785536 [Note] InnoDB: F
Dez 06 14:52:48 s1 mysqld[3565]: 2016-12-06 14:52:48 3067387712 [Note] InnoDB: S
Dez 06 14:52:49 s1 mysqld[3565]: 2016-12-06 14:52:49 3067387712 [Note] InnoDB: W
Dez 06 14:52:50 s1 mysqld[3565]: 2016-12-06 14:52:50 3067387712 [Note] InnoDB: S
Dez 06 14:52:50 s1 mysqld[3565]: 2016-12-06 14:52:50 3067387712 [Note] /usr/sbin
Dez 06 14:52:51 s1 systemd[1]: Failed to start MariaDB database server.
Dez 06 14:52:51 s1 systemd[1]: mariadb.service: Unit entered failed state.
Dez 06 14:52:51 s1 systemd[1]: mariadb.service: Failed with result 'timeout'.

root@s1:~# journalctl -xe:

Dez 06 14:52:48 s1 mysqld[3565]: 2016-12-06 14:52:48 3067387712 [Note] Event Scheduler: Purging the queue. 0 events
Dez 06 14:52:48 s1 mysqld[3565]: 2016-12-06 14:52:48 2147785536 [Note] InnoDB: FTS optimize thread exiting.
Dez 06 14:52:48 s1 mysqld[3565]: 2016-12-06 14:52:48 3067387712 [Note] InnoDB: Starting shutdown...
Dez 06 14:52:49 s1 mysqld[3565]: 2016-12-06 14:52:49 3067387712 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer po
Dez 06 14:52:50 s1 mysqld[3565]: 2016-12-06 14:52:50 3067387712 [Note] InnoDB: Shutdown completed; log sequence number 111890806
Dez 06 14:52:50 s1 mysqld[3565]: 2016-12-06 14:52:50 3067387712 [Note] /usr/sbin/mysqld: Shutdown complete
Dez 06 14:52:50 s1 audit[3648]: AVC apparmor="DENIED" operation="sendmsg" info="Failed name lookup - disconnected path" error=-13 profi
Dez 06 14:52:50 s1 kernel: audit: type=1400 audit(1481032370.973:29): apparmor="DENIED" operation="sendmsg" info="Failed name lookup -
Dez 06 14:52:50 s1 audit[3565]: AVC apparmor="DENIED" operation="sendmsg" info="Failed name lookup - disconnected path" error=-13 profi
Dez 06 14:52:50 s1 kernel: audit: type=1400 audit(1481032370.973:30): apparmor="DENIED" operation="sendmsg" info="Failed name lookup -
Dez 06 14:52:51 s1 systemd[1]: Failed to start MariaDB database server.
-- Subject: Unit mariadb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mariadb.service has failed.
--
-- The result is failed.
Dez 06 14:52:51 s1 systemd[1]: mariadb.service: Unit entered failed state.
Dez 06 14:52:51 s1 systemd[1]: mariadb.service: Failed with result 'timeout'.
Dez 06 14:54:54 s1 x11vnc[2665]: 06/12/2016 14:54:54 cursor_noshape_updates_clients: 1
Dez 06 14:55:16 s1 ntpd[1244]: 46.4.1.155 local addr 192.168.178.50 -> <null>
Dez 06 14:57:30 s1 ntpd[1244]: 89.238.66.98 local addr 192.168.178.50 -> <null>

Content of /etc/init.d (if useful):

#!/bin/bash
#
### BEGIN INIT INFO
# Provides:          mysql
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Should-Start:      $network $named $time
# Should-Stop:       $network $named $time
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start and stop the mysql database server daemon
# Description:       Controls the main MariaDB database server daemon "mysqld"
#                    and its wrapper script "mysqld_safe".
### END INIT INFO
#
set -e
set -u
${DEBIAN_SCRIPT_DEBUG:+ set -v -x}

test -x /usr/sbin/mysqld || exit 0

. /lib/lsb/init-functions

SELF=$(cd $(dirname $0); pwd -P)/$(basename $0)
CONF=/etc/mysql/my.cnf
MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"

# priority can be overriden and "-s" adds output to stderr
ERR_LOGGER="logger -p daemon.err -t /etc/init.d/mysql -i"

# Safeguard (relative paths, core dumps..)
cd /
umask 077

# mysqladmin likes to read /root/.my.cnf. This is usually not what I want
# as many admins e.g. only store a password without a username there and
# so break my scripts.
export HOME=/etc/mysql/

# Source default config file.
[ -r /etc/default/mariadb ] && . /etc/default/mariadb

## Fetch a particular option from mysql's invocation.
#
# Usage: void mysqld_get_param option
mysqld_get_param() {
  /usr/sbin/mysqld --print-defaults \
    | tr " " "\n" \
    | grep -- "--$1" \
    | tail -n 1 \
    | cut -d= -f2
}

## Do some sanity checks before even trying to start mysqld.
sanity_checks() {
  # check for config file
  if [ ! -r /etc/mysql/my.cnf ]; then
    log_warning_msg "$0: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz"
    echo "WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz" | $ERR_LOGGER
  fi

  # check for diskspace shortage
  datadir=`mysqld_get_param datadir`
  if LC_ALL=C BLOCKSIZE= df --portability $datadir/. | tail -n 1 | awk '{ exit ($4>4096) }'; then
    log_failure_msg "$0: ERROR: The partition with $datadir is too full!"
    echo "ERROR: The partition with $datadir is too full!" | $ERR_LOGGER
    exit 1
  fi
}

## Checks if there is a server running and if so if it is accessible.
#
# check_alive insists on a pingable server
# check_dead also fails if there is a lost mysqld in the process list
#
# Usage: boolean mysqld_status [check_alive|check_dead] [warn|nowarn]
mysqld_status () {
  ping_output=`$MYADMIN ping 2>&1`; ping_alive=$(( ! $? ))

  ps_alive=0
  pidfile=`mysqld_get_param pid-file`
  if [ -f "$pidfile" ] && ps `cat $pidfile` >/dev/null 2>&1; then ps_alive=1; fi

  if [ "$1" = "check_alive" -a $ping_alive = 1 ] ||
     [ "$1" = "check_dead" -a $ping_alive = 0 -a $ps_alive = 0 ]; then
    return 0 # EXIT_SUCCESS
  else
    if [ "$2" = "warn" ]; then
      echo -e "$ps_alive processes alive and '$MYADMIN ping' resulted in\n$ping_output\n" | $ERR_LOGGER -p daemon.debug
    fi
    return 1 # EXIT_FAILURE
  fi
}

#
# main()
#

case "${1:-''}" in
  'start')
    sanity_checks;
    # Start daemon
    log_daemon_msg "Starting MariaDB database server" "mysqld"
    if mysqld_status check_alive nowarn; then
      log_progress_msg "already running"
      log_end_msg 0
    else
      # Could be removed during boot
      test -e /var/run/mysqld || install -m 755 -o mysql -g root -d /var/run/mysqld

      # Start MariaDB!
      /usr/bin/mysqld_safe "${@:2}" > /dev/null 2>&1 &

      # 6s was reported in #352070 to be too little
      for i in $(seq 1 "${MYSQLD_STARTUP_TIMEOUT:-60}"); do
        sleep 1
        if mysqld_status check_alive nowarn ; then break; fi
        log_progress_msg "."
      done
      if mysqld_status check_alive warn; then
        log_end_msg 0
        # Now start mysqlcheck or whatever the admin wants.
        output=$(/etc/mysql/debian-start)
        [ -n "$output" ] && log_action_msg "$output"
      else
        log_end_msg 1
        log_failure_msg "Please take a look at the syslog"
      fi
    fi
    ;;

  'stop')
    # * As a passwordless mysqladmin (e.g. via ~/.my.cnf) must be possible
    #   at least for cron, we can rely on it here, too. (although we have
    #   to specify it explicit as e.g. sudo environments points to the normal
    #   users home and not /root)
    log_daemon_msg "Stopping MariaDB database server" "mysqld"
    if ! mysqld_status check_dead nowarn; then
      set +e
      shutdown_out=`$MYADMIN shutdown 2>&1`; r=$?
      set -e
      if [ "$r" -ne 0 ]; then
        log_end_msg 1
        [ "$VERBOSE" != "no" ] && log_failure_msg "Error: $shutdown_out"
        log_daemon_msg "Killing MariaDB database server by signal" "mysqld"
        killall -15 mysqld
        server_down=
        for i in `seq 1 600`; do
          sleep 1
          if mysqld_status check_dead nowarn; then server_down=1; break; fi
        done
        if test -z "$server_down"; then killall -9 mysqld; fi
      fi
    fi

    if ! mysqld_status check_dead warn; then
      log_end_msg 1
      log_failure_msg "Please stop MariaDB manually and read /usr/share/doc/mariadb-server-10.1/README.Debian.gz!"
      exit -1
    else
      log_end_msg 0
    fi
    ;;

  'restart')
    set +e; $SELF stop; set -e
    $SELF start
    ;;

  'reload'|'force-reload')
    log_daemon_msg "Reloading MariaDB database server" "mysqld"
    $MYADMIN reload
    log_end_msg 0
    ;;

  'status')
    if mysqld_status check_alive nowarn; then
      log_action_msg "$($MYADMIN version)"
    else
      log_action_msg "MariaDB is stopped."
      exit 3
    fi
    ;;

  'bootstrap')
    # Bootstrap the cluster, start the first node
    # that initiates the cluster
    log_daemon_msg "Bootstrapping the cluster" "mysqld"
    $SELF start "${@:2}" --wsrep-new-cluster
    ;;

  *)
    echo "Usage: $SELF start|stop|restart|reload|force-reload|status|bootstrap"
    exit 1
    ;;
esac

Unfortunately, Google can't help me. I have tried to explain as much as I can; maybe this helps you in helping me. Thanks a lot!
In case you are bitten by this bug, the solution is given as a suggestion in the bug report. All of these have to be done as root, so either run sudo -i first or prefix each command with sudo:

echo "/usr/sbin/mysqld { }" > /etc/apparmor.d/usr.sbin.mysqld

(with sudo, the second part of this one is ... | sudo tee /etc/apparmor.d/usr.sbin.mysqld -- thank you @dvlcube)

apparmor_parser -v -R /etc/apparmor.d/usr.sbin.mysqld
systemctl restart mariadb

Background

If you previously had MySQL installed, it activated an AppArmor profile which is incompatible with MariaDB. apt-get remove --purge only removes the profile, but does not deactivate/unload it. Only manually unloading it lets MariaDB work unhindered by AppArmor.
MariaDB
40,997,257
47
I am new to SQL, and I was trying to change a column name in one of my database's tables. I am using XAMPP with MariaDB (OS: Ubuntu 18.04). I tried all of the following:

ALTER TABLE subject RENAME COLUMN course_number TO course_id;
ALTER TABLE subject CHANGE course_number course_id;
ALTER TABLE subject CHANGE 'course_number' 'course_id';
ALTER TABLE subject CHANGE COLUMN 'course_number' course_id varchar(255);
ALTER TABLE subject CHANGE 'course_number' 'course_id' varchar(255);

But the only output I got was:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'column course_number to course_id' at line 1

Could someone please tell me the correct syntax? I have no idea what to do further.
Table names, column names, etc. may need quoting with backticks, but never with apostrophes (') or double quotes ("). Also note that CHANGE requires the full column definition, and RENAME COLUMN is only available in newer MariaDB versions (10.5.2 and later).

ALTER TABLE subject
    CHANGE COLUMN `course_number`  -- old name; notice the optional backticks
                  course_id        -- new name
                  varchar(255);    -- must include all the datatype info
MariaDB
53,735,305
43
I am looking for a solution on how to update MariaDB in XAMPP (32-bit, on Windows), but I haven't found any article on that; I just found this link. Please help me with how to update. I want JSON support, which is why I am looking to update from v10.1 to v10.2. Or if there is any other way to do this, please let me know. The current version is 10.1.19-MariaDB.
1. Shut down or quit your XAMPP server from the XAMPP control panel.
2. Download the ZIP version of MariaDB.
3. Rename the xampp/mysql folder to mysql_old.
4. Unzip or extract the contents of the MariaDB ZIP file into your XAMPP folder.
5. Rename the MariaDB folder, called something like mariadb-5.5.37-win32, to mysql.
6. Rename xampp/mysql/data to data_old.
7. Copy the xampp/mysql_old/data folder to xampp/mysql/.
8. Copy the xampp/mysql_old/backup folder to xampp/mysql/.
9. Copy the xampp/mysql_old/scripts folder to xampp/mysql/.
10. Copy mysql_uninstallservice.bat and mysql_installservice.bat from xampp/mysql_old/ into xampp/mysql/.
11. Copy xampp/mysql_old/bin/my.ini into xampp/mysql/bin.
12. Edit xampp/mysql/bin/my.ini using a text editor like Notepad. Find skip-federated and add a # in front (to the left) of it to comment out the line, if it exists. Save and exit the editor.
13. Start up XAMPP.

Note: if you can't get MySQL to start from the XAMPP control panel, add the statement skip-grant-tables anywhere in the xampp/mysql/bin/my.ini file.

14. Run xampp/mysql/bin/mysql_upgrade.exe.
15. Shut down and restart MariaDB (MySQL).

If MySQL still does not start, follow the steps below (important!). Check the MySQL error log; if it contains a line like

c:\xampp\mysql\bin\mysqld.exe: unknown variable 'innodb_additional_mem_pool_size=2M'

then remove or comment out that statement in the xampp/mysql/bin/my.ini file.

Help from this link.
MariaDB
44,027,926
41
In Hibernate I am using MariaDB, but I couldn't find the dialect class name for MariaDB. In Hibernate, the MySQL5 dialect's name is:

<property name="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</property>

For Oracle 10g:

<property name="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</property>

What is the dialect class name for MariaDB?
Update note: Hibernate automatic dialect resolution In older versions of Hibernate, you are required to specify the dialect. But starting with version 3.2, hibernate uses dialect resolution to automatically determine the target database and the dialect that should be used. The Hibernate 5.0 userguide says: In most cases Hibernate will be able to determine the proper Dialect to use by asking some questions of the JDBC Connection during bootstrap. If for some reason it is not able to determine the proper one or you want to use a custom Dialect, you will need to set the hibernate.dialect setting. Hibernate 6.3 userguide even discourages you to set the property: In Hibernate 6, it’s no longer necessary to explicitly specify a dialect using the configuration property hibernate.dialect, and so setting that property is now discouraged. (An exception is the case of custom user-written Dialects.) The Hibernate 6.3 api doc says: As of Hibernate 6, this property should not be explicitly specified, except when using a custom user-written implementation of Dialect. Instead, applications should allow Hibernate to select the Dialect automatically. Very short answer Updated answer (Oct 05, 2023): org.hibernate.dialect.MariaDBDialect for MariaDB server 10.3 and later Update: some of these classes are now deprecated or even removed: org.hibernate.dialect.MariaDB106Dialect for MariaDB server 10.6 and later, provides skip locked support. org.hibernate.dialect.MariaDB103Dialect for MariaDB server 10.3 and later, provides sequence support. org.hibernate.dialect.MariaDB102Dialect for MariaDB server 10.2 org.hibernate.dialect.MariaDB10Dialect for MariaDB server 10.0 and 10.1 org.hibernate.dialect.MariaDB53Dialect for MariaDB server 5.3, and later 5.x versions. org.hibernate.dialect.MariaDBDialect for MariaDB server 5.1 and 5.2. Short answer When using a MariaDB server, you should use MariaDB Connector/J and MariaDB Hibernate dialects, not the MySQL ones. 
Even though MariaDB was created as a drop-in replacement, and even though basic features will likely work when using the MySQL versions of those, subtle problems may occur or you may miss certain features.

A complete list of available MariaDB dialects is currently not mentioned in the Hibernate User Guide, but it can be found in the Hibernate JavaDoc. Depending on your MariaDB server version, you should select the corresponding dialect version. The current dialects as of this writing are:

org.hibernate.dialect.MariaDB102Dialect for MariaDB server 10.2
org.hibernate.dialect.MariaDB103Dialect for MariaDB server 10.3 and later, provides sequence support.
org.hibernate.dialect.MariaDB10Dialect for MariaDB server 10.0 and 10.1
org.hibernate.dialect.MariaDB53Dialect for MariaDB server 5.3, and later 5.x versions.
org.hibernate.dialect.MariaDBDialect for MariaDB server 5.1 and 5.2.

Note that for detailed usage information, you'll sometimes have to look in the dialect source code. (There are non-JavaDoc usage comments in some dialect sources.)

If you want to change or explicitly mention the storage engine for the MariaDB dialect, you can use the storage_engine Hibernate variable. For example: hibernate.dialect.storage_engine = innodb. IMO, you should do this explicitly, because the default can change when switching to a different MariaDB server version.

If you're using a MariaDB server older than 10.1.2 (which doesn't support fractional seconds), then you may want to provide the parameter useFractionalSeconds=false to the JDBC URL; otherwise MariaDB Connector/J will not truncate timestamps internally, which can cause time comparison problems when those values are used in comparison queries (even when using plain JDBC), which in turn can cause Hibernate versioning problems and optimistic locking problems for temporal types.

Long answer

The MariaDB dialect for Hibernate (5.3 as of this writing) is mentioned in the Hibernate User Guide.
The mentioned dialect "short names" followed by remarks are:

MariaDB: Support for the MariadB database. May work with newer versions
MariaDB53: Support for the MariadB database, version 5.3 and newer.

However, a complete list of the available official MariaDB dialects can be found in the Hibernate JavaDoc, which currently lists:

org.hibernate.dialect.MariaDB102Dialect for MariaDB server 10.2
org.hibernate.dialect.MariaDB103Dialect for MariaDB server 10.3 and later, provides sequence support.
org.hibernate.dialect.MariaDB10Dialect for MariaDB server 10.0 and 10.1
org.hibernate.dialect.MariaDB53Dialect for MariaDB server 5.3, and later 5.x versions.
org.hibernate.dialect.MariaDBDialect for MariaDB server 5.1 and 5.2.

Each dialect successor inherits the settings from the previous dialect version. So the inheritance hierarchy for MariaDB is: MariaDB103Dialect > MariaDB102Dialect > MariaDB10Dialect > MariaDB53Dialect > MariaDBDialect > MySQL5Dialect > MySQLDialect > Dialect

MariaDB was designed as a drop-in replacement for MySQL, but the databases are likely going to diverge as time goes by. Most basic features probably work without problems, allowing you to swap Connector/J clients (MariaDB client on MySQL server and vice versa) and to swap dialects (MySQL dialect on MariaDB client and vice versa). But there are subtle differences that may cause unexpected problems. For example, the MySQL Connector/J client contains hardcoded checks for the server version, which will fail when using a MariaDB server, causing some features to be disabled in the client, such as the MySQL sendFractionalSeconds client parameter. This will cause fractional seconds to be disabled, so the fractions will be truncated in the MySQL client but not in the MariaDB client. (This may even lead to optimistic locking problems when using versioning with date/time types in combination with non-max-precision SQL date/time types. In these cases, use the max precision of 6.)
Also, the MariaDB dialects are expected to provide specific functionality for MariaDB: http://in.relation.to/2017/02/16/mariadb-dialects/

In time, we will add new Dialects based on newer capabilities introduced by MariaDB. If you are using MariaDB, it’s best to use the MariaDB-specific Dialects from now on since it’s much easier to match the MariaDB version with its appropriate Hibernate Dialect.

And https://hibernate.atlassian.net/browse/HHH-11457 says:

since MySQL and MariaDB have gone in different directions, we might want to provide MariaDB Dialects as well. For instance, it's not very intuitive for a Hibernate user to figure out that they need to use the MySQLInnoDb57Dialect to handle Timestamps with microsecond precision which have been available since MariaDB 5.3:

The Hibernate User Guide doesn't provide all usage information about how to use the dialects. Even the User Guide combined with the API docs may not be enough. Sometimes you'll have to look in the source code for usage information. For example, MariaDB53Dialect.java contains hidden non-JavaDoc comments that may be useful.

Previously, to select a MySQL storage engine, such as MyISAM or InnoDB or default, you could switch between, for example, MySQL57InnoDBDialect and MySQL57Dialect. But they refactored the MySQL dialect hierarchy starting from Hibernate 5.2.8, as mentioned in a Hibernate blog post. Note that to select a storage engine, you should use an environment variable or system property: hibernate.dialect.storage_engine. For example: hibernate.dialect.storage_engine = innodb.

XtraDB was the default MariaDB storage engine for MariaDB 10.1 and earlier, but since 10.2 it's InnoDB. So there may be cases where you want to explicitly mention the storage engine that Hibernate selects, and then you'll have to use the storage_engine variable. Info about the storage_engine variable (which isn't mentioned in the User Guide) can be found in the source of AvailableSettings.java.
If you're using a MariaDB server older than 10.1.2 (which doesn't support fractional seconds), then you may want to provide the parameter useFractionalSeconds=false to the JDBC URL; otherwise MariaDB Connector/J will not truncate timestamps internally, which can cause time comparison problems, which in turn can cause Hibernate versioning problems and optimistic locking problems for temporal types.
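If you're unsure which of the dialect versions listed above applies to your setup, checking the server version from any SQL client is a quick sanity check (the sample comment describes typical output, not a guaranteed format):

```sql
-- Check the MariaDB server version, then pick the matching dialect
-- from the lists above (e.g. a 10.3 server pairs with MariaDB103Dialect).
SELECT VERSION();
```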
MariaDB
37,066,024
39
I've been using PDO in PHP with MySQL for a while now. However, recent developments have made me think that MySQL will start fading out, replaced by MariaDB, especially since MariaDB:

Considers itself many developer-years ahead of MySQL, without putting new developments into paid areas (clustering, for example).
Has the majority of the main MySQL developers, who moved to MariaDB after Oracle took over.
Is becoming the default database replacing MySQL on various Linux distributions.
Is a drop-in replacement for MySQL, and large companies are starting to adopt MariaDB, such as Wikipedia (read the blog post here).

So my question is: since MariaDB doesn't appear to be listed in the PDO drivers, and seeing as MariaDB is designed to be a "drop-in replacement" and could potentially phase out MySQL in the future, can I use the MySQL PDO driver with a MariaDB database, at least until an official MariaDB driver becomes available?

Links
PDO
MySQL
MariaDB
MariaDB and MySQL are ~~100%~~ 99% compatible. This includes connector compatibility.

Edit: up to the point that MariaDB tools are shipped as MySQL tools (e.g. mysqldump), and data files are binary compatible, too.
MariaDB
16,195,013
38
I'm in an environment setup running OS X with MariaDB 10.0.12-MariaDB via Homebrew. I screwed up the installation, so I completely removed MySQL and MariaDB from my setup and started again. After finishing installing MariaDB, I reimported my databases (InnoDB) via a DB dump from the production server. It worked fine. After a reboot, the day after, I can no longer access the databases:

Table 'my.table' doesn't exist in engine

What's causing this and what's the solution? I do see the structure of my database, but when I try to access it, it gives me this error message. I did try mysql_upgrade --force and deleting the log files (rm ib_logfile1 ib_logfile0). The data loss is not a problem here; the problem is that I can't spend 30 minutes reinstalling each database every time I reboot. Here are some logs:

140730  9:24:13 [Note] Server socket created on IP: '127.0.0.1'.
140730  9:24:14 [Note] Event Scheduler: Loaded 0 events
140730  9:24:14 [Warning] InnoDB: Cannot open table mysql/gtid_slave_pos from the internal data dictionary of InnoDB though the .frm file for the table exists. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting.html for how you can resolve the problem.
140730  9:24:14 [Warning] Failed to load slave replication state from table mysql.gtid_slave_pos: 1932: Table 'mysql.gtid_slave_pos' doesn't exist in engine
140730  9:24:14 [Note] /usr/local/Cellar/mariadb/10.0.12/bin/mysqld: ready for connections.
Version: '10.0.12-MariaDB'  socket: '/tmp/mysql.sock'  port: 3306  Homebrew
140730 16:26:28 [Warning] InnoDB: Cannot open table db/site from the internal data dictionary of InnoDB though the .frm file for the table exists. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting.html for how you can resolve the problem.
Something has deleted your ibdata1 file, where InnoDB keeps the data dictionary. It's definitely not MySQL itself that does this.
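To confirm the diagnosis, you can compare the orphaned .frm files on disk against what InnoDB's internal dictionary still knows about. A sketch, assuming the information_schema.INNODB_SYS_TABLES view available in MariaDB 10.0 ('db' is the schema name from the question's log):

```sql
-- Tables listed here exist in InnoDB's internal dictionary (stored in ibdata1).
-- A table with a .frm file on disk but missing from this list triggers
-- the "doesn't exist in engine" error.
SELECT NAME
FROM information_schema.INNODB_SYS_TABLES
WHERE NAME LIKE 'db/%';
```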
MariaDB
25,039,927
36
I have a process that exports the data from an AWS RDS MariaDB using mysqldump, which has been running successfully in a Docker image on Concourse for years. Since two nights ago the process has started failing with the error:

mysqldump: Couldn't execute 'FLUSH TABLES WITH READ LOCK': Access denied for user 'admin'@'%' (using password: YES) (1045)

The official AWS explanation seems to be that because they do not allow super privileges to the master user or GLOBAL READ LOCK, the mysqldump fails if the --master-data option is set. I do not have that option set. I'm running with these flags:

mysqldump -h ${SOURCE_DB_HOST} ${SOURCE_CREDENTIALS} ${SOURCE_DB_NAME} --single-transaction --compress | grep -v '^SET .*;$' > /tmp/dump.sql

mysqldump works fine when executed from my local Mac. It fails with the error that it couldn't execute FLUSH TABLES WITH READ LOCK only from the Linux environment. My question is, does anyone know how to disable the FLUSH TABLES WITH READ LOCK command in mysqldump on Linux?

EDIT: Happy to accept @sergey-payu's answer below as having fixed my problem, but here's a link to the MySQL bug report for the benefit of anyone else coming across this issue: https://bugs.mysql.com/bug.php?id=109685
Along with granting my user the PROCESS privilege, I found that the --set-gtid-purged=OFF option worked for me.

mysqldump --single-transaction --set-gtid-purged=OFF -h {host} {schema}

Hitting AWS RDS MySQL 8.0.33 from Ubuntu 22.04
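For completeness, a sketch of granting the PROCESS privilege mentioned above (the 'admin'@'%' user/host combination is taken from the error message; run this as a user allowed to grant global privileges):

```sql
-- PROCESS is a global privilege, so it must be granted ON *.*
GRANT PROCESS ON *.* TO 'admin'@'%';
FLUSH PRIVILEGES;
```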
MariaDB
75,183,032
35
After I installed MariaDB 10, MySQL Workbench and the JPDB client both connect and work fine, so the next step was to get programming with Python (using SQLAlchemy), which seems to require MySQL-python. So I went to update that and got: "mysql_config not found". I looked in the "usual places" and did not see a file... So I followed some ideas from an earlier question on SO and tried to install: apt-get install libmysqlclient-dev which got me to:

The following packages have unmet dependencies:
 libmysqlclient-dev : Depends: libmysqlclient18 (= 5.5.35-0ubuntu0.13.10.2) but 10.0.10+maria-1~saucy is to be installed
E: Unable to correct problems, you have held broken packages.

which kind of hits a brick wall for me
For CentOS 7.0, install the following:
yum install mariadb-devel
For Fedora 23+:
dnf install mariadb-devel
MariaDB
22,949,654
32
I have a Spring Boot application on the same host as the MariaDB, and both have been running fine for some time. But after between 12 hours and 2 days, it seems that the Spring Boot application loses the connection to the database (stack trace below) and does not recover from that. When I restart the Spring application, all is fine again for some time. The application is not under load, and when it loses the connection the application is still working, but the DB connection does not recover. The DB did not restart in the meantime (uptime 4 weeks). Only the monitoring service pings the application, which pings the DB once a minute (Spring Boot health). Other Java applications that are connected to the same DB are running fine and do not have any issues.

My question is: Why does Spring not recover from that error and try to reconnect to the DB again? How can I set up Spring to reconnect to the DB?

2015-02-19 15:25:48.392  INFO 4931 [qtp92662861-19] --- o.s.b.f.xml.XmlBeanDefinitionReader : Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
2015-02-19 15:25:48.580  INFO 4931 [qtp92662861-19] --- o.s.jdbc.support.SQLErrorCodesFactory : SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase]
2015-02-19 15:25:48.616  WARN 4931 [qtp92662861-19] --- o.s.jdbc.support.SQLErrorCodesFactory : Error while extracting database product name - falling back to empty error codes
org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:296) at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:320) at org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes(SQLErrorCodesFactory.java:214) at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.setDataSource(SQLErrorCodeSQLExceptionTranslator.java:134) at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.<init>(SQLErrorCodeSQLExceptionTranslator.java:97) at org.springframework.jdbc.support.JdbcAccessor.getExceptionTranslator(JdbcAccessor.java:99) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:413) at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:468) at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:478) at org.springframework.boot.actuate.health.DataSourceHealthIndicator.doDataSourceHealthCheck(DataSourceHealthIndicator.java:98) at org.springframework.boot.actuate.health.DataSourceHealthIndicator.doHealthCheck(DataSourceHealthIndicator.java:87) at org.springframework.boot.actuate.health.AbstractHealthIndicator.health(AbstractHealthIndicator.java:38) at org.springframework.boot.actuate.endpoint.HealthEndpoint.invoke(HealthEndpoint.java:67) at org.springframework.boot.actuate.endpoint.HealthEndpoint.invoke(HealthEndpoint.java:34) at org.springframework.boot.actuate.endpoint.mvc.HealthMvcEndpoint.invoke(HealthMvcEndpoint.java:102) at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132) at 
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:689) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:938) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:870) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1667) at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:110) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650) at org.springframework.boot.actuate.autoconfigure.EndpointWebMvcAutoConfiguration$ApplicationContextHeaderFilter.doFilterInternal(EndpointWebMvcAutoConfiguration.java:280) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650) at 
org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:186) at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650) at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650) at onlinevalidation.CorsFilter.doFilter(CorsFilter.java:20) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650) at org.springframework.boot.actuate.autoconfigure.MetricFilterAutoConfiguration$MetricsFilter.doFilterInternal(MetricFilterAutoConfiguration.java:90) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) 
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) at java.lang.Thread.run(Thread.java:745) Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at com.mysql.jdbc.Util.handleNewInstance(Util.java:377) at com.mysql.jdbc.Util.getInstance(Util.java:360) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:935) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:924) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:870) at com.mysql.jdbc.ConnectionImpl.throwConnectionClosedException(ConnectionImpl.java:1232) at com.mysql.jdbc.ConnectionImpl.checkClosed(ConnectionImpl.java:1225) at com.mysql.jdbc.ConnectionImpl.getMetaData(ConnectionImpl.java:2932) at com.mysql.jdbc.ConnectionImpl.getMetaData(ConnectionImpl.java:2927) at sun.reflect.GeneratedMethodAccessor76.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.tomcat.jdbc.pool.ProxyConnection.invoke(ProxyConnection.java:126) at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109) at org.apache.tomcat.jdbc.pool.DisposableConnectionFacade.invoke(DisposableConnectionFacade.java:80) at 
com.sun.proxy.$Proxy68.getMetaData(Unknown Source) at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:285) ... 66 common frames omitted Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet successfully received from the server was 758,805 milliseconds ago. The last packet sent successfully to the server was 37 milliseconds ago. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at com.mysql.jdbc.Util.handleNewInstance(Util.java:377) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1036) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3427) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3327) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3814) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2526) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2484) at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1446) at org.springframework.jdbc.core.JdbcTemplate$1QueryStatementCallback.doInStatement(JdbcTemplate.java:452) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:402) ... 60 common frames omitted Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2914) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3337) ... 
69 common frames omitted

@Configuration
@ComponentScan(value = "com.demo.validation", scopedProxy = TARGET_CLASS)
@EnableAutoConfiguration
@EnableAspectJAutoProxy(proxyTargetClass = true)
@EnableCaching(proxyTargetClass = true)
@EnableAsync(proxyTargetClass = true)
@EnableJpaRepositories
@EnableTransactionManagement(proxyTargetClass = true)
public class Configuration {
    main(...)
}

The Configuration

spring.datasource.url=jdbc:mysql://localhost/validation
spring.datasource.username=validation
spring.datasource.password=****
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

Gradle.Build

dependencies {
    //Boot
    compile 'org.codehaus.groovy:groovy-all:2.3.7:indy'
    compile 'org.springframework.boot:spring-boot-starter-actuator:1.1.8.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-security:1.1.8.RELEASE'
    compile 'org.springframework:spring-aspects:4.0.7.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-aop:1.1.8.RELEASE'
    compile 'org.springframework:spring-instrument:4.0.7.RELEASE'
    compile('org.springframework.boot:spring-boot-starter-web:1.1.8.RELEASE'){
        exclude module: 'spring-boot-starter-tomcat'
    }
    //servlet container
    compile 'org.eclipse.jetty:jetty-webapp:9.2.3.v20140905'
    compile 'org.eclipse.jetty:jetty-servlets:9.2.3.v20140905'
    //DB
    compile 'org.springframework.boot:spring-boot-starter-data-jpa:1.1.8.RELEASE'
    compile 'mysql:mysql-connector-java:5.1.34'
    //compile 'org.mariadb.jdbc:mariadb-java-client:1.1.8'
    runtime 'com.h2database:h2:1.4.182'
}
Per a senior member in the Spring forums, the Spring DataSource is not intended for production use:

The above answers are only part of the solution. Indeed you need proper transaction management AND you need a connection pool. The DriverManagerDataSource is NOT meant for production; it opens and closes a database connection each time it needs one. Instead you can use C3P0 as your DataSource, which handles the reconnect and is much better in performance.

Here's a quick example of a potential configuration in a Spring XML configuration:

<bean id="c3p0DataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="driverClass" value="com.mysql.jdbc.Driver" />
    <property name="jdbcUrl" value="#{systemProperties.dbhost}" />
    <property name="user" value="#{systemProperties.dbuser}" />
    <property name="password" value="#{systemProperties.dbpass}" />
    <property name="maxPoolSize" value="25" />
    <property name="minPoolSize" value="10" />
    <property name="maxStatements" value="100" />
    <property name="testConnectionOnCheckout" value="true" />
</bean>

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <constructor-arg ref="c3p0DataSource" />
</bean>
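The testConnectionOnCheckout property above makes the pool validate each connection before handing it out, which is what lets the application recover after the server drops idle connections. The validation conventionally boils down to a trivially cheap query (for c3p0 it can be set explicitly via preferredTestQuery; verify the exact property name against your pool's documentation):

```sql
-- Typical connection-validation query: cheap, side-effect free,
-- and fails fast when the underlying connection is dead.
SELECT 1;
```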
MariaDB
28,609,565
32
I'm attempting to install MariaDB on Ubuntu 12.04 LTS. I've followed the instructions provided at https://askubuntu.com/questions/64772/how-to-install-mariadb and from MariaDB.org that appear when you choose the download. The last step is sudo apt-get install mariadb-server which returns:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mariadb-server : Depends: mariadb-server-5.5 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

The dependency issue is an acknowledged issue (https://mariadb.atlassian.net/browse/MDEV-3882), but I believe the broken package prevents me from working around this. If I try to install libmariadbclient18 I get the following:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.30-mariadb1~precise) but 5.5.31-0ubuntu0.12.04.1 is to be installed
E: Unable to correct problems, you have held broken packages.

I've tried to correct the broken package using sudo apt-get install -f, but I still can't install mariadb-server or libmariadbclient18.
sudo apt-get install libmysqlclient18=5.5.30-mariadb1~precise mysql-common=5.5.30-mariadb1~precise
sudo apt-get install mariadb-server

The first one reverts the two mysql libs that were bumped Ubuntu-side to the older mariadb ones. The second one can then proceed normally.

Packages got removed because something like apt-get dist-upgrade was run. The GUI actually warns you that something's amiss. To prevent this issue from cropping up again, tell apt to favor the MariaDB repo via pinning by creating a file in /etc/apt/preferences.d:

$ cat /etc/apt/preferences.d/MariaDB.pref
Package: *
Pin: origin <mirror-domain>
Pin-Priority: 1000

Also, be sure to install libmariadbclient-dev if you need to compile anything (like Ruby gems).
MariaDB
16,214,517
30
There are a lot of recommendations on the Internet on how to enable SUPER privileges in case someone hits the following error:

"ERROR 1419 (HY000): You do not have the SUPER Privilege and Binary Logging is Enabled"

But I wasn't able to find WHY MySQL requires this privilege when the binary logging option is on. Are there some issues with replication if I use e.g. triggers which modify the DB, or something else? Is it safe, and if not, what kind of issues could I hit, and under which circumstances, if I grant the SUPER privilege back? I think there should be some rationale behind this restriction, but I don't understand what it is. Does anybody have an answer to this? Thank you.
Here is some detailed explanation I found in the documentation. Hopefully it helps you understand.

The CREATE FUNCTION and INSERT statements are written to the binary log, so the slave will execute them. Because the slave SQL thread has full privileges, it will execute the dangerous statement. Thus, the function invocation has different effects on the master and slave and is not replication-safe.

To guard against this danger for servers that have binary logging enabled, stored function creators must have the SUPER privilege, in addition to the usual CREATE ROUTINE privilege that is required. Similarly, to use ALTER FUNCTION, you must have the SUPER privilege in addition to the ALTER ROUTINE privilege. Without the SUPER privilege, an error will occur:

ERROR 1419 (HY000): You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

If you do not want to require function creators to have the SUPER privilege (for example, if all users with the CREATE ROUTINE privilege on your system are experienced application developers), set the global log_bin_trust_function_creators system variable to 1. You can also set this variable by using the --log-bin-trust-function-creators=1 option when starting the server. If binary logging is not enabled, log_bin_trust_function_creators does not apply. SUPER is not required for function creation unless, as described previously, the DEFINER value in the function definition requires it.

Source: https://dev.mysql.com/doc/refman/8.0/en/stored-programs-logging.html
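Following the quoted documentation, the variable can be flipped at runtime; note that a SET GLOBAL does not survive a server restart unless the setting is also added to the option file:

```sql
-- Allow users without SUPER to create stored functions/triggers
-- while binary logging is enabled.
SET GLOBAL log_bin_trust_function_creators = 1;

-- Verify the current setting:
SHOW GLOBAL VARIABLES LIKE 'log_bin_trust_function_creators';
```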
MariaDB
56,389,698
30
I have tried this query but there is an error. Has anybody solved this error?

MariaDB [mysql]> UPDATE user SET Host='%' WHERE User='root';
ERROR 1356 (HY000): View 'mysql.user' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
Since MariaDB 10.4, mysql.user is a view rather than a table. It's recommended to stop copying from old blogs when making authentication-related changes in MySQL and MariaDB; the mechanisms have been updated and the old instructions no longer apply. Always check the official documentation.

Use SET PASSWORD or ALTER USER to manage user authentication. Also, modifying the user/host component of a username will put triggers, events, plugins, grants, roles, etc. out of sync with the combined username (i.e. broken). So just DROP/CREATE users rather than manipulating them.
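A sketch of the recommended approach (the user names, hosts, passwords, and database below are placeholders):

```sql
-- Change a password the supported way instead of editing mysql.user:
ALTER USER 'app_user'@'localhost' IDENTIFIED BY 'new_password';

-- To change the host part, drop and recreate the account
-- so that grants and roles stay consistent:
DROP USER 'app_user'@'localhost';
CREATE USER 'app_user'@'%' IDENTIFIED BY 'new_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
```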
MariaDB
64,841,185
30
If I replace a MySQL 5.1 server with a MariaDB Server (Maria & XtraDB storages) instead of MySQL (MyISAM & InnoDB), will most of MySQL client software (incl. applications made with PHP 5.2 and Java SE 1.6) ... just remain working without any changes (with minor regressions maybe)? Or will I have to replace/reconfigure client drivers (like use another JDBC driver class and connection string)? Or will I have even to change application code?
http://kb.askmonty.org/v/mariadb-versus-mysql

All MySQL connectors (PHP, Perl, Python, Java, MyODBC, Ruby, the MySQL C connector, etc.) work unchanged with MariaDB.
MariaDB
4,106,315
29
Recently, I read news that MariaDB is a drop-in replacement for MySQL, since MySQL has unfriendly pricing for the clustered/enterprise version according to Google. Now I can't find anything relevant about EF for MariaDB on Google, so I'm hoping someone knows about it. Is it OK to use the MySQL driver for this since it is 100% compatible? Any thoughts?

Update

I just found out that Red Hat is also switching from MySQL to MariaDB as its default database management system. So it is necessary for my current project to switch to MariaDB.
I was able to use MariaDB 10 with Entity Framework, although it required a bit of work, mainly because the MySQL tools are a bit buggy.

To work with MySQL/MariaDB in Visual Studio 2010/2012, you need to install MySQL for Visual Studio using the MySQL Installer. I used the Web version as I only wanted to download the connectors and the extensions. Once you do this, you can add connections to MariaDB and create EF models.

This is not enough to run your code though. First you need to add the MySQL Connector using NuGet. Unfortunately, MySQL for Visual Studio adds a reference to an older provider version (mentioned here) and can't load the newer version. To fix this, I added the following section in my app.config:

<system.data>
  <DbProviderFactories>
    <remove invariant="MySql.Data.MySqlClient"/>
    <add name="MySQL Data Provider"
         invariant="MySql.Data.MySqlClient"
         description=".Net Framework Data Provider for MySQL"
         type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.7.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
  </DbProviderFactories>
</system.data>

This replaces the old reference with a new one. Note that I used <remove invariant="MySql.Data.MySqlClient"/>, not <remove name="MySql Data Provider"/>, in the remove element. Currently, MySQL for Visual Studio isn't supported in Visual Studio 2013.

UPDATE - 2017

Connector/.NET is essentially stagnant, with the same problems it had in 2013, e.g. no true asynchronous calls. The "async" calls are fake - they are run on separate threads, defeating the very purpose of using async. That alone makes it unsuitable for web applications, where one wants to serve as many requests as possible using the minimum number of threads/CPU. Never mind about .NET Core support. That's why in the past few years people have built their own, truly asynchronous providers.
Some of the more popular ones are:

MySqlConnector offers a truly asynchronous provider for .NET and .NET Core
Pomelo offers EF Core support on top of MySqlConnector

With about 100K NuGet downloads each, frequent versions and active maintenance. They aren't "official", but definitely worth trying.

Lockdown Update - April 2020

It seems MySqlConnector and Pomelo have really taken off. Connector/.NET finally released a couple of versions after almost two years, with the latest, 8.0.19, getting 233K downloads. MySqlConnector, on the other hand, got 496K downloads for version 0.61.0. Minor updates are frequent, with the latest, 0.63.2, coming 8 hours before this post. That's probably a bit too frequent, but far better than 2 years.

I haven't checked features or MySQL 8 compatibility yet. If I had to choose though (which I will probably do for a project next week), I'd start with MySqlConnector. I suspect Connector/.NET will be forced to offer far more frequent updates going forward, to keep pace with .NET Core releases, but that's just speculation at this point.
MariaDB
20,183,781
28
I've installed mariadb from the Ubuntu 15.04 repositories using the Ubuntu Software Center or at the command prompt (apt-get install mariadb-server), but no password was asked for the root user. Now I'm able to connect to mysql on the command line without a password, but connecting using MySQL Workbench or the Python mysqldb library fails with the "Access denied for user 'root'@'localhost'" message.
Starting with MariaDB 10.4, the root@localhost account is created with the ability to use two authentication plugins:

First, it is configured to try to use the unix_socket authentication plugin. This allows the root@localhost user to login without a password via the local Unix socket file defined by the socket system variable, as long as the login is attempted from a process owned by the operating system root user account.

Second, if authentication fails with the unix_socket authentication plugin, then it is configured to try to use the mysql_native_password authentication plugin. However, an invalid password is initially set, so in order to authenticate this way, a password must be set with SET PASSWORD.

That is why you don't need a password to log in on a fresh install. But then another quote:

When the plugin column is empty, MariaDB defaults to authenticating accounts with either the mysql_native_password or the mysql_old_password plugins. It decides which based on the hash used in the value for the Password column. When there's no password set or when the 4.1 password hash is used, (which is 41 characters long), MariaDB uses the mysql_native_password plugin. The mysql_old_password plugin is used with pre-4.1 password hashes, (which are 16 characters long).

So setting plugin = '' will force it to use password-based authentication. Make sure you set a password before that.

sudo mysql -u root
[mysql] use mysql;
[mysql] update user set plugin='' where User='root';
[mysql] flush privileges;
[mysql] \q
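The answer says to set a password first but doesn't show the statement; here is a sketch (the password value is a placeholder), run from the root-owned sudo mysql session before clearing the plugin column:

```sql
-- Set a real password for root@localhost so that password-based logins
-- (Workbench, Python mysqldb) can authenticate afterwards:
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password_here');
FLUSH PRIVILEGES;
```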
MariaDB
30,815,971
28
I've created a small docker-compose.yml which used to work like a charm to deploy small WordPress instances. It looks like this:

wordpress:
  image: wordpress:latest
  links:
    - mysql
  ports:
    - "1234:80"
  environment:
    WORDPRESS_DB_USER: wordpress
    WORDPRESS_DB_NAME: wordpress
    WORDPRESS_DB_PASSWORD: "password"
    WORDPRESS_DB_HOST: mariadb
    MYSQL_PORT_3306_TCP: 3306
  volumes:
    - /srv/wordpress/:/var/www/html/
mysql:
  image: mariadb:latest
  mem_limit: 256m
  container_name: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: "password"
    MYSQL_DATABASE: wordpress
    MYSQL_USER: wordpress
    MYSQL_PASSWORD: "password"
  volumes:
    - /srv/mariadb:/var/lib/mysql

But when I start it now (maybe since the Docker update to Docker version 1.9.1, build a34a1d5), it fails:

wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 10
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused

When I cat /etc/hosts of wordpress_1 there are entries for MySQL:

172.17.0.10 mysql
12a564fdbc56 mariadb

and I am able to ping the MariaDB server. When I docker-compose up, WordPress gets installed and after several restarts the MariaDB container prints:

Version: '10.0.22-MariaDB-1~jessie' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution

Which should indicate it is running, shouldn't it? How do I get WordPress to be able to connect to the MariaDB container?
To fix this issue, the first thing to do is:

Add the following code to the wordpress & database containers (in the docker-compose file):

restart: unless-stopped

This will make sure your database is started and initialized before the wordpress container tries to connect to it. Then restart the docker engine:

sudo restart docker

or (for Ubuntu 15+):

sudo service docker restart

Here is the full configuration that worked for me, to set up WordPress with MariaDB:

version: '2'
services:
  wordpress:
    image: wordpress:latest
    links:
      - database:mariadb
    environment:
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_NAME=mydbname
      - WORDPRESS_TABLE_PREFIX=ab_
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_HOST=mariadb
      - MYSQL_PORT_3306_TCP=3306
    restart: unless-stopped
    ports:
      - "test.dev:80:80"
    working_dir: /var/www/html
    volumes:
      - ./wordpress/:/var/www/html/
  database:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=mydbname
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
    restart: unless-stopped
    ports:
      - "3306:3306"
MariaDB
34,068,671
28
I'm getting this error:

SQLSTATE[22007]: Invalid datetime format: 1366 Incorrect string value: '\xBD Inch...' for column 'column-name' at row 1

My database, table, and column have the collation utf8mb4_unicode_ci; also column-name is of type text and nullable. This is the value of column-name:

[column-name] => Some text before 11 ▒ and other text after, and after.

However, I expected Laravel to add quotes to the column's values, because the values are separated by commas (,). It should be as follows:

[column-name] => 'Some text before 11 ▒ and other text after, and after.'

See below the schema:

Schema::create('mws_orders', function (Blueprint $table) {
    $table->string('custom-id');
    $table->string('name');
    $table->string('description')->nullable();
    $table->string('comment')->nullable();
    $table->integer('count')->nullable();
    $table->text('column-name')->nullable();
    $table->timestamps();
    $table->primary('custom-id');
});

I have been looking on Google but haven't found any solution yet. Does anyone have an idea how to solve this issue? I'm using Laravel 5.5 and MariaDB 10.2.11.
I solved it by encoding to UTF-8 all string columns that generated this error, before inserting. For example, the column that generated the error was column-name; I encoded it as shown below. I also found another column with the same error and used this solution for it too.

$data = [
    // key => values
];
$myModel = new MyModel();
$data['column-name'] = DB::connection()->getPdo()->quote(utf8_encode($data['column-name']));
$myModel->insert($data);
MariaDB
48,270,374
27
I was asked to check MariaDB, as CentOS does not provide MySQL 5.5 for the moment. I have read that XtraDB serves as a drop-in for InnoDB. What are the advantages of using one or the other? If they were identical, they would not have different names. Do you think that I should switch to MariaDB? What kind of problems might I face in the future because of updates, if any? I know that the founder of MySQL is behind MariaDB, and Oracle is managing MySQL now. It seems a tricky decision. Thank you in advance for your opinion.

Update: I asked the question here because Google did not display any recent updates, only some old comparisons published prior to 2012.
XtraDB is InnoDB with several patches added. The patches themselves stem from Google, Facebook and others. XtraDB is maintained by Percona and the heart of Percona Server. You may think of Percona as a distributor who collects, coordinates and maintains patches and distributes an enhanced version of the MySQL server. A feature comparison between stock MySQL and Percona Server can be seen here: http://www.percona.com/software/percona-server/feature-comparison The XtraDB engine is also shipped as default InnoDB implementation in MariaDB. MariaDB includes also stock InnoDB as pluggable storage engine, so you can chose. Benchmarks show that XtraDB scales better on massively parallel architectures and especially XtraDB is much better suited for write-heavy workload. The InnoDB engine in MySQL 5.6 will incorporate many of the features and advantages that have so far been available in XtraDB only.
MariaDB
12,037,363
26
I get the error from this line SELECT table.field FROM table WHERE table.month = 'october' AND DATEDIFF(day, table.start_date, table.end_date) < 30 The dates in my column are in the format m-d-yy Do I need to convert this to a different format? If so how? Using MariaDB
According to the documentation for MariaDB, DATEDIFF only takes two arguments:

Syntax

DATEDIFF(expr1,expr2)

Description

DATEDIFF() returns (expr1 – expr2) expressed as a value in days from one date to the other. expr1 and expr2 are date or date-and-time expressions. Only the date parts of the values are used in the calculation.
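Applied to the question's query, a sketch of the two-argument form (assuming start_date/end_date are real DATE columns; the commented STR_TO_DATE variant covers the m-d-yy strings the asker mentions):

```sql
-- DATEDIFF(later, earlier) returns the difference in days.
SELECT t.field
FROM t
WHERE t.month = 'october'
  AND DATEDIFF(t.end_date, t.start_date) < 30;

-- If the dates are stored as 'm-d-yy' text, convert them first:
--   AND DATEDIFF(STR_TO_DATE(t.end_date, '%m-%d-%y'),
--                STR_TO_DATE(t.start_date, '%m-%d-%y')) < 30
```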
MariaDB
23,250,555
25
I'm using the latest version of Laravel (5.1) in a Homestead virtual machine (Vagrant). I connect my project to a local MariaDB server, in which I have some tables and 2 db-views. Whenever I run selects on the db-view tables only, I randomly get back this error:

General error: 1615 Prepared statement needs to be re-prepared

As of today, I always get this error when selecting from the db-views. If I open phpMyAdmin and run the same select, it returns the correct result. I tried opening php artisan tinker and selecting one record of the db-view, but it returns the same error:

// Select one user from user table
>>> $user = new App\User
=> <App\User #000000006dc32a890000000129f667d2> {}
>>> $user = App\User::find(1);
=> <App\User #000000006dc32a9e0000000129f667d2> {
     id: 1,
     name: "Luca",
     email: "[email protected]",
     customerId: 1,
     created_at: "2015-08-06 04:17:57",
     updated_at: "2015-08-11 12:39:01"
   }
>>>
// Select one source from Source db-view
>>> $source = new App\Source
=> <App\Source #000000006dc32a820000000129f667d2> {}
>>> $source = App\Source::find(1);
Illuminate\Database\QueryException with message 'SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: select * from `sources` where `sources`.`id` = 1 limit 1)'

How can I fix that? I read about a problem with mysqldump (but that's not my case) and about increasing the value of table_definition_cache, but it is not certain that would work and I can't modify it. Is this a kind of Laravel bug? How can I figure it out?

Edit: As asked, I add my model source code.
Source.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Source extends Model {

    protected $table = 'sources';

    /*
    |--------------------------------------------------------------------------
    | FOREIGN KEYS
    |--------------------------------------------------------------------------
    */

    /**
     *
     * @return [type] [description]
     */
    public function customersList(){
        return $this->hasMany("App\CustomerSource", "sourceId", "id");
    }

    /**
     *
     * @return [type] [description]
     */
    public function issues(){
        return $this->hasMany("App\Issue", "sourceId", "id");
    }
}

Edit 2: If I execute the same query in the project with mysqli, it works:

$db = new mysqli(getenv('DB_HOST'), getenv('DB_USERNAME'), getenv('DB_PASSWORD'), getenv('DB_DATABASE'));

if($db->connect_errno > 0){
    dd('Unable to connect to database [' . $db->connect_error . ']');
}

$sql = "SELECT * FROM `sources` WHERE `id` = 4";

if(!$result = $db->query($sql)){
    dd('There was an error running the query [' . $db->error . ']');
}

dd($result->fetch_assoc());

EDIT 3: After 2 months, I'm still there. Same error and no solution found. I decided to try a little in artisan tinker, but no good news.
I report what I've tried: First try to fetch a table model: >>> $user = \App\User::find(1); => App\User {#697 id: 1, name: "Luca", email: "[email protected]", customerId: 1, created_at: "2015-08-06 04:17:57", updated_at: "2015-10-27 11:28:14", } Now try to fetch a view table model: >>> $ir = \App\ContentRepository::find(15); Illuminate\Database\QueryException with message 'SQLSTATE[42S02]: Base table or view not found: 1146 Table 'dbname.content_repositories' doesn't exist (SQL: select * from `content_repositories` where `content_repositories`.`id` = 1 limit 1)' When contentRepository doesn't have correct table name setup inside the model ContentRepository.php: >>> $pdo = DB::connection()->getPdo(); => PDO {#690 inTransaction: false, errorInfo: [ "00000", 1146, "Table 'dbname.content_repositories' doesn't exist", ], attributes: [ "CASE" => NATURAL, "ERRMODE" => EXCEPTION, "AUTOCOMMIT" => 1, "PERSISTENT" => false, "DRIVER_NAME" => "mysql", "SERVER_INFO" => "Uptime: 2513397 Threads: 12 Questions: 85115742 Slow queries: 6893568 Opens: 1596 Flush tables: 1 Open tables: 936 Queries per second avg: 33.864", "ORACLE_NULLS" => NATURAL, "CLIENT_VERSION" => "mysqlnd 5.0.11-dev - 20120503 - $Id: id_here $", "SERVER_VERSION" => "5.5.5-10.0.17-MariaDB-1~wheezy-wsrep-log", "STATEMENT_CLASS" => [ "PDOStatement", ], "EMULATE_PREPARES" => 0, "CONNECTION_STATUS" => "localiphere via TCP/IP", "DEFAULT_FETCH_MODE" => BOTH, ], } >>> CHANGE TABLE VALUE INSIDE model ContentRepository.php: >>> $ir = \App\ContentRepository::find(15); Illuminate\Database\QueryException with message 'SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: select * from `contentRepository` where `contentRepository`.`id` = 15 limit 1)' When it is correct, pay attention to "errorInfo" that is missing: >>> $pdo = DB::connection()->getPdo(); => PDO {#690 inTransaction: false, attributes: [ "CASE" => NATURAL, "ERRMODE" => EXCEPTION, "AUTOCOMMIT" => 1, "PERSISTENT" => false, 
"DRIVER_NAME" => "mysql", "SERVER_INFO" => "Uptime: 2589441 Threads: 13 Questions: 89348013 Slow queries: 7258017 Opens: 1604 Flush tables: 1 Open tables: 943 Queries per second avg: 34.504", "ORACLE_NULLS" => NATURAL, "CLIENT_VERSION" => "mysqlnd 5.0.11-dev - 20120503 - $Id: id_here $", "SERVER_VERSION" => "5.5.5-10.0.17-MariaDB-1~wheezy-wsrep-log", "STATEMENT_CLASS" => [ "PDOStatement", ], "EMULATE_PREPARES" => 0, "CONNECTION_STATUS" => "localIPhere via TCP/IP", "DEFAULT_FETCH_MODE" => BOTH, ], } Show db's tables: >>> $tables = DB::select('SHOW TABLES'); => [ {#702 +"Tables_in_dbname": "table_name_there", }, {#683 +"Tables_in_dbname": "table_name_there", }, {#699 +"Tables_in_dbname": "table_name_there", }, {#701 +"Tables_in_dbname": "table_name_there-20150917-1159", }, {#704 +"Tables_in_dbname": "contentRepository", */ VIEW TABLE IS THERE!!!! /* }, {#707 +"Tables_in_dbname": "table_name_there", }, {#684 +"Tables_in_dbname": "table_name_there", }, ] Try with normal select: >>> $results = DB::select('select * from dbname.contentRepository limit 1'); Illuminate\Database\QueryException with message 'SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: select * from dbname.contentRepository limit 1)' Try unprepared query: >>> DB::unprepared('select * from dbname.contentRepository limit 1') => false Try second time unprepared query: >>> DB::unprepared('select * from dbname.contentRepository limit 1') Illuminate\Database\QueryException with message 'SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. 
(SQL: select * from dbname.contentRepository limit 1)' Try PDOStatement::fetchAll(): >>> DB::fetchAll('select * from dbname.contentRepository limit 1'); PHP warning: call_user_func_array() expects parameter 1 to be a valid callback, class 'Illuminate\Database\MySqlConnection' does not have a method 'fetchAll' in /Users/luca/company/Laravel/dbname/vendor/laravel/framework/src/Illuminate/Database/DatabaseManager.php on line 296 Try second PDOStatement::fetchAll(): >>> $pdo::fetchAll('select * from dbname.contentRepository limit 1'); [Symfony\Component\Debug\Exception\FatalErrorException] Call to undefined method PDO::fetchAll() Try statement... : >>> $pdos = DB::statement('select * from dbname.contentRepository limit 1') Illuminate\Database\QueryException with message 'SQLSTATE[HY000]: General error: 1615 Prepared statement needs to be re-prepared (SQL: select * from dbname.contentRepository limit 1)' Thank you
It seems to work after adding

'options' => [
    \PDO::ATTR_EMULATE_PREPARES => true
]

inside the projectName/config/database.php file, in the DB configuration. It will be like this:

'mysql' => [
    'driver'    => 'mysql',
    'host'      => env('DB_HOST', 'localhost'),
    'database'  => env('DB_DATABASE', 'forge'),
    'username'  => env('DB_USERNAME', 'forge'),
    'password'  => env('DB_PASSWORD', ''),
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
    'strict'    => false,
    'options'   => [
        \PDO::ATTR_EMULATE_PREPARES => true
    ]
],

Laravel 5.1. Hope it will help!

Edit: I'm currently on Laravel 8 and this solution is still working.
MariaDB
31,957,441
25
I've just updated MariaDB using apt-get dist-upgrade. Now it won't start using service mysql start anymore. I can, however, run it as root, or do:

sudo -u mysql mysqld_safe

Then MariaDB starts up fine. The folder /home/mysql is owned by the mysql user and group. I've found the error to be thrown in this function: https://github.com/MariaDB/server/blob/7ff44b1a832b005264994cbdfc52f93f69b92cdc/sql/mysqld.cc#L9865 I can't figure out what to do next. Any pointers?
To run MariaDB with its data under /home, in the file /usr/lib/systemd/system/mariadb.service or /lib/systemd/system/mariadb.service, just change:

ProtectHome=true

to:

ProtectHome=false
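As a variation on the same fix: editing the packaged unit file can be undone by the next package upgrade, so the setting can instead live in a drop-in override (a sketch assuming a systemd-based install; systemctl edit creates the override file for you):

```ini
# sudo systemctl edit mariadb
# creates /etc/systemd/system/mariadb.service.d/override.conf containing:
[Service]
ProtectHome=false
```

Followed by sudo systemctl daemon-reload and sudo systemctl restart mariadb.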
MariaDB
38,529,205
24
I have an issue with the official dockerized image of MariaDB. When my application tries to make some queries I get the following error:

DB Error: unknown error QUERY : INSERT INTO

It seems this error comes from the SQL_MODE, which is set as follows in this image:

STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

I have a normal Linux server with MariaDB installed, I don't have this STRICT_TRANS_TABLES value in my SQL_MODE, and my application works without any problem. How can I remove the STRICT_TRANS_TABLES value in my container when I run docker-compose with my docker-compose file, without the need for a custom Dockerfile?
In your docker-compose.yml set command: --sql_mode="". Here is an example:

db-service:
  build:
    context: .
    dockerfile: db.dockerfile
  image: example/repo:db
  volumes:
    - ./data/db-data:/var/lib/mysql
    - ./data/db-init:/docker-entrypoint-initdb.d/
  ports:
    - "3306:3306"
  environment:
    MYSQL_USER: root
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: your_database
  command: mysqld --sql_mode="" --character-set-server=utf8 --collation-server=utf8_slovenian_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
  restart: on-failure
  networks:
    - yournet

It works fine for me.
MariaDB
48,924,667
24
My main question is: after I have created a docker container for my MariaDB with the command

docker run --name db -e MYSQL_ROOT_PASSWORD=test -d -p 3306:3306 mariadb

how can I access the SQL db? Somewhere I have seen a solution using a temporary container (deleted after exit), but I cannot find it anymore. I am searching for a command like: sudo docker exec -it [other flags] [command] db.
First access the container terminal:

docker exec -it some-mariadb bash

('some-mariadb' is the MySQL container name.) Then access the db directly using the mysql terminal command:

mysql -u root -p
MariaDB
33,170,489
23
In Debian Jessie I installed MariaDB server 10.0.30 and I am trying to increase the max key length. AFAIU it depends on the config parameter innodb_large_prefix being enabled. According to the docs, it also requires the Barracuda file format and innodb_file_per_table. After setting them in the config and restarting the server, I see in the client that those parameters are set correctly:

> SHOW GLOBAL VARIABLES LIKE 'innodb_large%';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| innodb_large_prefix | ON    |
+---------------------+-------+
1 row in set (0.00 sec)

> SHOW GLOBAL VARIABLES LIKE 'innodb_file%';
+--------------------------+-----------+
| Variable_name            | Value     |
+--------------------------+-----------+
| innodb_file_format       | Barracuda |
| innodb_file_format_check | OFF       |
| innodb_file_format_max   | Antelope  |
| innodb_file_per_table    | ON        |
+--------------------------+-----------+
4 rows in set (0.00 sec)

> SHOW GLOBAL VARIABLES LIKE 'innodb_page%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| innodb_page_size | 16384 |
+------------------+-------+
1 row in set (0.00 sec)

I am not sure why innodb_file_format_max is set to Antelope, but while innodb_file_format_check is OFF, it should not matter. Actually, even when I also set it to Barracuda, it made no difference. If I now try to create a table with a large index like:

CREATE TABLE `some_table` (
  `some_tableID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `column` varchar(750) COLLATE utf8mb4_estonian_ci NOT NULL DEFAULT '',
  PRIMARY KEY (`some_tableID`),
  KEY `column` (`column`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_estonian_ci;

I get the error:

ERROR 1709 (HY000): Index column size too large. The maximum column size is 767 bytes.

On Ubuntu 16.04 with MySQL server 5.7.17 all related settings are the same (by default) and there is no problem with a large index (for utf8mb4 it is 750*4 = 3000). What is wrong with my MariaDB setup?
It requires more than just those two settings...

SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=ON;
SET GLOBAL innodb_large_prefix=1;

Log out and log back in (to pick up the global values), then:

ALTER TABLE tbl ROW_FORMAT=DYNAMIC;  -- or COMPRESSED

Perhaps all you need is to add ROW_FORMAT=... to your CREATE TABLE. These instructions are needed for 5.6.3 up to 5.7.7. Beginning with 5.7.7, the system defaults correctly handle larger fields.

Alternatively, you could use a "prefix" index: INDEX(column(191)). (But prefix indexing is flawed in many ways.)

"If the server later creates a higher table format, innodb_file_format_max is set to that value" implies that that setting is not an issue.
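Applied to the question's table, a sketch assuming the three SET GLOBAL statements above have already been run and a fresh session started:

```sql
-- Same table as in the question, with an explicit row format so the
-- 750-character utf8mb4 index (3000 bytes) fits within the larger
-- prefix limit instead of the 767-byte default:
CREATE TABLE `some_table` (
  `some_tableID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `column` varchar(750) COLLATE utf8mb4_estonian_ci NOT NULL DEFAULT '',
  PRIMARY KEY (`some_tableID`),
  KEY `column` (`column`)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC
  DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_estonian_ci;
```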
MariaDB
43,379,717
23
I'm using the XAMPP control panel, and from there I start the processes for Apache and MySQL. Then I go to MySQL Workbench, and the server status seems to be OK. Here is some info:

Host: Windows-PC
Socket: C:/xampp/mysql/mysql.sock
Port: 3306
Version: 10.1.31-MariaDB mariadb.org binary distribution
Compiled For: Win32 (32)
Configuration File: unknown

Then every time I try to add the foreign key for my dummy schema like:

ALTER TABLE `puppies`.`animals`
ADD INDEX `Breed_idx` (`BreedID` ASC) VISIBLE;
;
ALTER TABLE `puppies`.`animals`
ADD CONSTRAINT `Breed`
  FOREIGN KEY (`BreedID`)
  REFERENCES `puppies`.`breeds` (`Breed`)
  ON DELETE NO ACTION
  ON UPDATE NO ACTION;

I get the following error:

ERROR 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' at line 2
SQL Statement:
ALTER TABLE `puppies`.`animals`
ADD INDEX `Breed_idx` (`BreedID` ASC) VISIBLE

So what can I do so that XAMPP will start using MySQL syntax over MariaDB's? Or, if I'm wrong in my understanding of the problem, what should I do so that I don't have to face this kind of issue again when using XAMPP?
The problem is the word VISIBLE; remove it and it will work. Indexes are visible by default.

Your question: "If I remove VISIBLE it works just fine, so why did MySQL Workbench decide to add VISIBLE?"

My answer: The option to mark an index invisible is not yet implemented in MariaDB (AFAIK!).

Update: The syntax for MariaDB is different; please see this reference: https://jira.mariadb.org/browse/MDEV-7317
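For reference, the question's statements with VISIBLE removed, which should parse on MariaDB 10.1:

```sql
ALTER TABLE `puppies`.`animals`
  ADD INDEX `Breed_idx` (`BreedID` ASC);

ALTER TABLE `puppies`.`animals`
  ADD CONSTRAINT `Breed`
    FOREIGN KEY (`BreedID`)
    REFERENCES `puppies`.`breeds` (`Breed`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION;
```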
MariaDB
50,393,245
23
I am new to Docker. I was trying to create a docker container of MariaDB for my application, but when I start running the mariadb container it shows:

Access denied for user 'root'@'localhost' (using password: YES)

Following is the docker-compose file I am using:

version: '3'
services:
  mysql:
    image: mariadb
    container_name: mariadb
    volumes:
      - dbvolume:/var/lib/mysql
      - ./AppDatabase.sql:/docker-entrypoint-initdb.d/AppDatabase.sql
    environment:
      MYSQL_ROOT_PASSWORD: root123
      MYSQL_ROOT_USER: root
      MYSQL_USER: root
      MYSQL_PASSWORD: root123
      MYSQL_DATABASE: appdata
    ports:
      - "3333:3306"
volumes:
  dbvolume:

After trying multiple times by referring to a few links, I was able to connect my application to the docker container, but it failed to import the AppDatabase.sql script at the time of creating the docker container. And now, using the same docker-compose file, I am not able to connect MariaDB to my application, and I think it's not even importing the SQL script to the database (based on the logs I have observed). Following is the docker log generated while running docker compose:

$ docker logs 3fde358ff015
2019-09-24 17:40:37 0 [Note] mysqld (mysqld 10.4.8-MariaDB-1:10.4.8+maria~bionic) starting as process 1 ...
2019-09-24 17:40:37 0 [Note] InnoDB: Using Linux native AIO
2019-09-24 17:40:37 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2019-09-24 17:40:37 0 [Note] InnoDB: Uses event mutexes
2019-09-24 17:40:37 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2019-09-24 17:40:37 0 [Note] InnoDB: Number of pools: 1
2019-09-24 17:40:37 0 [Note] InnoDB: Using SSE2 crc32 instructions
2019-09-24 17:40:37 0 [Note] mysqld: O_TMPFILE is not supported on /tmp (disabling future attempts)
2019-09-24 17:40:37 0 [Note] InnoDB: Initializing buffer pool, total size = 256M, instances = 1, chunk size = 128M
2019-09-24 17:40:37 0 [Note] InnoDB: Completed initialization of buffer pool
2019-09-24 17:40:37 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed.
See the man page of setpriority(). 2019-09-24 17:40:37 0 [Note] InnoDB: Upgrading redo log: 2*50331648 bytes; LSN=21810033 2019-09-24 17:40:38 0 [Note] InnoDB: Starting to delete and rewrite log files. 2019-09-24 17:40:38 0 [Note] InnoDB: Setting log file ./ib_logfile101 size to 50331648 bytes 2019-09-24 17:40:38 0 [Note] InnoDB: Setting log file ./ib_logfile1 size to 50331648 bytes 2019-09-24 17:40:38 0 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0 2019-09-24 17:40:38 0 [Note] InnoDB: New log files created, LSN=21810033 2019-09-24 17:40:38 0 [Note] InnoDB: 128 out of 128 rollback segments are active. 2019-09-24 17:40:38 0 [Note] InnoDB: Creating shared tablespace for temporary tables 2019-09-24 17:40:38 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ... 2019-09-24 17:40:38 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB. 2019-09-24 17:40:38 0 [Note] InnoDB: Waiting for purge to start 2019-09-24 17:40:38 0 [Note] InnoDB: 10.4.8 started; log sequence number 21810033; transaction id 14620 2019-09-24 17:40:38 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 2019-09-24 17:40:38 0 [Note] Plugin 'FEEDBACK' is disabled. 2019-09-24 17:40:38 0 [Note] Server socket created on IP: '::'. 2019-09-24 17:40:38 0 [Warning] 'proxies_priv' entry '@% root@c980daa43351' ignored in --skip-name-resolve mode. 2019-09-24 17:40:38 0 [Note] InnoDB: Buffer pool(s) load completed at 190924 17:40:38 2019-09-24 17:40:38 0 [Note] Reading of all Master_info entries succeeded 2019-09-24 17:40:38 0 [Note] Added new Master_info '' to hash table 2019-09-24 17:40:38 0 [Note] mysqld: ready for connections. 
Version: '10.4.8-MariaDB-1:10.4.8+maria~bionic' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution SQL Script I am trying to import: create database appdata; use appdata; CREATE TABLE `appdatadetails` ( `Name` varchar(8) NOT NULL, `appIndex` int(11) NOT NULL, `connector` varchar(16) DEFAULT NULL, `intName` varchar(12) DEFAULT NULL, `intIndex` int(11) DEFAULT NULL, PRIMARY KEY (`Name`,`appIndex`) ) Please help me to understand what I am doing wrong, I have tried all possible solution posted on different blogs. Update: Latest Update: I was able to up and running mariadb docker image with 10.1. But if I attach volume then still I am facing issue. Docker Compose: version: '3' services: mysql: image: mariadb:10.1 container_name: mariadb volumes: - container-volume:/var/lib/mysql - ./AppDatabase.sql:/docker-entrypoint-initdb.d/AppDatabase.sql environment: MYSQL_ROOT_PASSWORD: root123 MYSQL_DATABASE: appdata ports: - "3333:3306" volumes: container-volume: And the log error message, If I attach container-volume volume. Creating mariadb ... done Attaching to mariadb mariadb | 2019-09-25 6:56:26 140542855440384 [Note] mysqld (mysqld 10.1.41-MariaDB-1~bionic) starting as process 1 ... 
mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Using mutexes to ref count buffer pool pages mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: The InnoDB memory heap is disabled mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Using Linux native AIO mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Using SSE crc32 instructions mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Initializing buffer pool, size = 256.0M mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2019-09-25 6:56:26 140542855440384 [Note] InnoDB: Highest supported file format is Barracuda. mariadb | InnoDB: No valid checkpoint found. mariadb | InnoDB: A downgrade from MariaDB 10.2.2 or later is not supported. mariadb | InnoDB: If this error appears when you are creating an InnoDB database, mariadb | InnoDB: the problem may be that during an earlier attempt you managed mariadb | InnoDB: to create the InnoDB data files, but log file creation failed. mariadb | InnoDB: If that is the case, please refer to mariadb | InnoDB: http://dev.mysql.com/doc/refman/5.6/en/error-creating-innodb.html mariadb | 2019-09-25 6:56:26 140542855440384 [ERROR] Plugin 'InnoDB' init function returned error. mariadb | 2019-09-25 6:56:26 140542855440384 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. mariadb | 2019-09-25 6:56:26 140542855440384 [Note] Plugin 'FEEDBACK' is disabled. 
mariadb | 2019-09-25 6:56:26 140542855440384 [ERROR] Unknown/unsupported storage engine: InnoDB
mariadb | 2019-09-25 6:56:26 140542855440384 [ERROR] Aborting
mariadb |
mariadb exited with code 1

If I remove container-volume, it imports the .sql script and runs well.

Updated with working script: Before, I was using mariadb 10.4.8 (latest) and facing issues accessing the DB and attaching an external volume. Now I have downgraded (as suggested by @Adiii) and tried again. It runs perfectly, and we do not need to specify external: true in the volumes section.

version: '3'
services:
  mysql:
    image: mariadb:10.1
    container_name: mariadb
    volumes:
      - ./dbvolume:/var/lib/mysql
      - ./AppDatabase.sql:/docker-entrypoint-initdb.d/AppDatabase.sql
    environment:
      MYSQL_ROOT_PASSWORD: root123
      MYSQL_DATABASE: appdata
    ports:
      - "3333:3306"
An existing data volume will keep the old users and keys even if you create a fresh container. Remove the mount location, because the server picks up its user names and passwords from there, and it also skips your DB init script:

volumes:
  - dbvolume:/var/lib/mysql

You also do not need create database appdata; the database is already created by this part of the environment:

MYSQL_ROOT_PASSWORD: root123
MYSQL_ROOT_USER: root
MYSQL_DATABASE: appdata

Update: remove the user root, as root is already defined. You can try:

version: '3.7'
services:
  mysql:
    environment:
      MYSQL_ROOT_PASSWORD: root123
      MYSQL_DATABASE: appdata
    image: mariadb

Or give the second user a different name:

version: '3.7'
services:
  mysql:
    environment:
      MYSQL_ROOT_PASSWORD: root123
      MYSQL_DATABASE: appdata
      MYSQL_USER: test
      MYSQL_PASSWORD: root123
    image: mariadb

Then you can try:

docker exec -it container_name bash -c "mysql -u test -proot123"

or

docker exec -it mysql bash -c "mysql -u root -proot123"

If there is still an issue, remove the DB image and pull a new one, or try image tag 10.1.

As for "If I remove container-volume then it is importing the .sql script and running well": when you mount the location, the init script will not run, because the container assumes a database already exists there. Either remove the named volume and create a new one, or mount the location, import the DB with the mysql command once, and reuse the mount location from then on.
MariaDB
58,085,851
23
I just read online that MariaDB (which SQLZoo uses) is based on MySQL, so I thought that I could use the ROW_NUMBER() function. However, when I try this function in SQLZoo:

SELECT *
FROM (SELECT * FROM route) TEST7
WHERE ROW_NUMBER() < 10

then I get this error:

Error: FUNCTION gisq.ROW_NUMBER does not exist
You can use the limit clause:

SELECT *
FROM route
LIMIT 10

This can, of course, be used on a sorted query too:

SELECT *
FROM route
ORDER BY some_field
LIMIT 10
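To see why ORDER BY ... LIMIT covers the top-N use case without ROW_NUMBER() (which only became available as a window function in MariaDB 10.2), here is a small sketch using Python's stdlib sqlite3 module, whose syntax for this particular query is the same; the table contents are invented for the demo and do not match SQLZoo's real route table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE route (num INTEGER, company TEXT)")
conn.executemany("INSERT INTO route VALUES (?, ?)",
                 [(n, "LRT") for n in range(1, 21)])

# Top 10 rows by num, descending: no ROW_NUMBER() needed.
rows = conn.execute(
    "SELECT num FROM route ORDER BY num DESC LIMIT 10"
).fetchall()
print([r[0] for r in rows])  # -> [20, 19, 18, 17, 16, 15, 14, 13, 12, 11]
```

The same statement runs unchanged on any MariaDB version, old or new.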
MariaDB
27,133,374
22
How can I use MariaDB instead of MySQL in my Rails project? When I try to install the mysql2 gem it returns an error, because mysqlclient was not found. Here is a solution, but I didn't find any libmariadbd-dev package on my openSUSE 12.3.
It doesn't look like openSUSE has a MariaDB client development package. You must install the libmysqlclient-devel package. Since MariaDB is billed as a drop-in replacement for MySQL, it has to support the MySQL clients, though you may lose tiny bits of MariaDB improvements. It appears that the mysql2 gem should function with the MariaDB client libraries. Other options are hoping the mariadb-client package is enough, finding a third-party package for the libraries, or installing them yourself.

Update

There is now a MariaDB repository for openSUSE. It includes a development package and very good instructions. Place the following in a file under /etc/zypp/repos.d/

# MariaDB 10.1 openSUSE repository list - created 2015-10-20 16:37 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/opensuse13-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

Now you can run

zypper install MariaDB-client MariaDB-devel

Your question isn't about Ubuntu, but I'm sure a lot of people Googling Ubuntu (or its derivatives like Mint) will land on this page. In Ubuntu, you have to install all the packages using the MariaDB Foundation's PPA, but they include all the development libraries and support the 5.5, 10.0 and 10.1 releases. Then you can install packages such as mariadb-server, mariadb-client, libmariadbclient-dev, and libmariadbclient-dev:i386 (32-bit client). The instructions are simple and detailed on the site. For example, installing just the 10.1 dev libraries in Ubuntu 14.04:

sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
sudo add-apt-repository 'deb http://mirrors.syringanetworks.net/mariadb/repo/10.1/ubuntu trusty main'
sudo apt-get update
sudo apt-get install libmariadbclient-dev
MariaDB
16,304,311
21
I have SQL like this:

UPDATE "user_login a"
LEFT OUTER JOIN "p_pegawai b" ON a.id_pegawai = b.id
SET a.password = 'Keluarga1'
WHERE b.NIP = '195812'

I have tried this: MySql Update A Joined Table, but it always gives me an error:

You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '"user_login a" LEFT OUTER JOIN "p_pegawai b" ON a.id_pegawai = b.id SET a.passw' at line 1

I am using MariaDB, not MySQL; what could be wrong with my query?
Use backticks in MySQL, but do not apply them to the combined table name and alias; they must be treated as separate items:

UPDATE `user_login` a
LEFT OUTER JOIN `p_pegawai` b ON a.id_pegawai = b.id
SET a.password = 'Keluarga1'
WHERE b.NIP = '195812'
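If the multi-table UPDATE syntax ever becomes a portability concern, the same change can also be written with a correlated subquery, which works on virtually every SQL engine. A minimal sketch using Python's stdlib sqlite3 module (SQLite has no UPDATE ... JOIN at all, so the subquery form is the natural one there); the table contents are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_login (id_pegawai INTEGER, password TEXT);
    CREATE TABLE p_pegawai  (id INTEGER, NIP TEXT);
    INSERT INTO user_login VALUES (1, 'old'), (2, 'old');
    INSERT INTO p_pegawai  VALUES (1, '195812'), (2, '111111');
""")

# Equivalent of the UPDATE ... JOIN: restrict via a subquery instead.
conn.execute("""
    UPDATE user_login
       SET password = 'Keluarga1'
     WHERE id_pegawai IN (SELECT id FROM p_pegawai WHERE NIP = '195812')
""")
rows = conn.execute(
    "SELECT id_pegawai, password FROM user_login ORDER BY id_pegawai"
).fetchall()
print(rows)  # only the login for employee NIP 195812 is changed
```

In real application code, the password and NIP would of course be passed as bound parameters rather than spliced into the SQL string.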
MariaDB
47,546,152
21
In my understanding, the MySQL binlog can fully function as InnoDB's redo log. So, after the binlog is enabled, why does InnoDB have to write a redo log at the same time instead of just switching to use the binlog? Doesn't this significantly slow down database write performance? In addition to simplifying design and implementation, is there any benefit to doing this?

AFAIK, to guarantee ACID compliance with two logs enabled at the same time, the following problems occur:

Each log record with the same meaning must be written twice, separately.
Two logs must be flushed each time a transaction or transaction group commits.
To ensure consistency between the two log files, a complex and inefficient mechanism such as XA (2PC) is used.

Therefore, all other products seem to use only one set of logs (SQL Server calls it the transaction log, Oracle the redo log, and PostgreSQL the WAL) to do all the relevant work. Is MySQL the only one that must keep two sets of logs open at the same time to ensure both ACID compliance and strongly consistent master-slave replication? Is there a way to achieve ACID compliance and strongly consistent semi-synchronous replication while only one of them is enabled?
This is an interesting topic. For a long time, I have been advocating the idea of merging the InnoDB write-ahead log and the binlog. The biggest motivation for that would be that the need to synchronize two separate logs would go away. But I am afraid that this might not happen any time soon.

At MariaDB, we are taking some steps to reduce the fsync() overhead. The idea of MDEV-18959 Engine transaction recovery through persistent binlog is to guarantee that the binlog is never behind the InnoDB redo log, and by this, to allow a durable, crash-safe transaction commit with only one fsync() call, on the binlog file.

While the binlog implements logical logging, the InnoDB redo log implements physical logging (covering changes to persistent data pages that implement undo logs and index trees). As I explained in M|18 Deep Dive: InnoDB Transactions and Write Paths, a user transaction is divided into multiple mini-transactions, each of which can atomically modify multiple data pages. The redo log is the ‘glue’ that makes changes to multiple data pages atomic. I think that the redo log is absolutely essential for implementing atomic changes of update-in-place data structures. Append-only data file structures, such as LSM trees, could be logs by themselves and would not necessarily need a separate log.

For an InnoDB table that contains secondary indexes, every single row operation is actually divided into multiple mini-transactions, operating on each index separately. Thus, the transaction layer requires more ‘glue’ that makes the indexes of a table consistent with each other. That ‘glue’ is provided by the undo log, which is implemented in persistent data pages. InnoDB performs changes to the index pages upfront, and commit is a quick operation, merely changing the state of the transaction in the undo log header. But rollback is very expensive, because the undo log will have to be replayed backwards (and more redo log will be written to cover those index page changes).
In MariaDB Server, MyRocks is another transactional storage engine, which does the opposite: Buffer changes in memory until the very end, and at commit, apply them to the data files. This makes rollback very cheap, but the size of a transaction is limited by the amount of available memory. I have understood that MyRocks could be made to work in the way that you propose.
MariaDB
57,983,490
21
What is the default port number of MariaDB? I am new to programming. I am creating my first Java application that connects to MariaDB. I need to specify the database port number.
The default port number of MariaDB is 3306. It is the same as MySQL.
MariaDB
59,113,646
21
I have been trying to run pip3 install mariadb on my Raspberry Pi running Ubuntu 18.04 and I was unsuccessful. I have tried installing the following packages as suggested in other answers:

sudo apt-get install mariadb-server
sudo apt-get install libmariadbclient-dev
sudo apt-get install libmysqlclient-dev
pip3 install mysqlclient
pip3 install mysql-connector-python-rf

However, I'm still running into the problem as shown:

ubuntu@ubuntu:~$ pip3 install mariadb
Collecting mariadb
  Using cached https://files.pythonhosted.org/packages/8f/c9/7050899dc1066409a17e1147d3afe1b078e582afdb755c6d3cb9c9a5c3ab/mariadb-1.0.0.tar.gz
  Complete output from command python setup.py egg_info:
  /bin/sh: 1: mariadb_config: not found
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-build-2gdw_t_r/mariadb/setup.py", line 26, in <module>
      cfg = get_config(options)
    File "/tmp/pip-build-2gdw_t_r/mariadb/mariadb_posix.py", line 49, in get_config
      cc_version = mariadb_config(config_prg, "cc_version")
    File "/tmp/pip-build-2gdw_t_r/mariadb/mariadb_posix.py", line 27, in mariadb_config
      "mariadb_config not found.\nPlease make sure, that MariaDB Connector/C is installed on your system, edit the configuration file 'site.cfg' and set the 'mariadb_config'\noption, which should point to the mariadb_config utility.")
  OSError: mariadb_config not found.
  Please make sure, that MariaDB Connector/C is installed on your system, edit the configuration file 'site.cfg' and set the 'mariadb_config' option, which should point to the mariadb_config utility.
  ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-2gdw_t_r/mariadb/

I do have an /etc/mysql/my.cnf file.
You probably need to install the MariaDB database development files: https://packages.debian.org/unstable/libmariadb-dev. You need this package for your Python connector to work properly on Linux. Follow these steps:

sudo apt-get update -y
sudo apt-get install -y libmariadb-dev
pip3 install mariadb

Here you first update the list of packages that need an upgrade on your system, then install the previously mentioned dev library. The last step is to install mariadb with pip3, which should now work as expected.
MariaDB
63,027,020
21
I understand that collations are a set of rules for making comparisons over a character set. MySQL / MariaDB has table and database collations in addition to column collation. I was wondering what was the difference between a collation on these three (database, table and column). Thanks.
MySQL's character sets and collations can be interpreted as a top-down list of prioritized items: the topmost has the least priority and the bottommost the most.

Order of precedence, with the topmost being the least precedence:

Server collation
Connection-specific collation
Database collation
Table collation
Column collation
Query collation (using CAST or CONVERT)

The server collation is set by the server, configured either inside my.cnf or when the server was built from source code. By default, this will usually be latin1 or utf8, depending on your platform.

The connection-specific collation is set by the client using a query like SET NAMES 'utf8' COLLATE 'utf8_unicode_ci';. Most clients don't set a connection-specific collation, so the server will use its own default as explained above.

The database collation is set during database creation, or manually by updating it later. If you don't specify one, it will use the next higher-level collation, which would be either the connection-specific or the server collation.

The table collation works the same way: if left blank, it will use the database collation as its default, then the connection-specific one, and finally the server's collation.

The column collation uses the table's collation as its default, and if there is no collation set, it will follow up the chain to find a collation to use, stopping at the server's if none of the others were set.

The query collation is specified in the query by using CAST or CONVERT, but otherwise will use the next available collation in the chain. There's no way to set this unless you use a function.

Please also refer to the manual page Character Set Support.
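The fallback behaviour described above is easy to model: walk the chain from the most specific level to the least specific one and take the first level that actually has a collation set. A toy sketch for illustration only (the level names and dictionary are invented here, not a MySQL API):

```python
# Precedence from most to least specific, mirroring the list above in reverse.
PRECEDENCE = ["query", "column", "table", "database", "connection", "server"]

def effective_collation(settings):
    """Return (level, collation) for the first level that has a collation set."""
    for level in PRECEDENCE:
        collation = settings.get(level)
        if collation:
            return level, collation
    raise ValueError("server collation should always be set")

# Column has no explicit collation, so it falls back to the table's.
settings = {"server": "latin1_swedish_ci", "table": "utf8_unicode_ci"}
print(effective_collation(settings))  # ('table', 'utf8_unicode_ci')
```

With only the server collation defined, everything inherits it; as soon as a more specific level defines one, that level wins.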
MariaDB
24,356,090
20
Background: I downloaded a *.sql backup of my WordPress site's database, and replaced all instances of the old database table prefix with a new one (e.g. from the default wp_ to something like asdfghjkl_). I've just learnt that WordPress uses serialized PHP strings in the database, and what I did will have messed with the integrity of the serialized string lengths. The thing is, I deleted the backup file just before I learnt about this (as my website was still functioning fine), and have installed a number of plugins since. So, there's no way I can revert back, and I therefore would like to know two things:

How can I fix this, if at all possible?
What kind of problems could this cause? (This article states that a WordPress blog, for instance, could lose its settings and widgets. But that doesn't seem to have happened to me, as all the settings for my blog are still intact. I have no clue, though, as to what could be broken on the inside, or what issues it'd pose in the future. Hence this question.)
Visit this page: http://unserialize.onlinephpfunctions.com/ On that page you should see this sample serialized string: a:1:{s:4:"Test";s:17:"unserialize here!";}. Take a piece of it-- s:4:"Test";. That means "string", 4 characters, then the actual string. I am pretty sure that what you did caused the numeric character count to be out of sync with the string. Play with the tool on the site mentioned above and you will see that you get an error if you change "Test" to "Tes", for example. What you need to do is get those character counts to match your new string. If you haven't corrupted any of the other encoding-- removed a colon or something-- that should fix the problem.
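Since repairing those counts by hand across a whole dump is error-prone, the fix-up can be automated: find every s:N:"..."; token and recompute N from the string that is actually there. A rough Python sketch; note the assumption that no quoted value contains an embedded '";' sequence (usually true for option names and prefixes, but not guaranteed, so check the result before importing):

```python
import re

def fix_serialized_lengths(data):
    """Recompute the byte length in every s:N:"...", token of a PHP-serialized blob."""
    def repair(match):
        value = match.group(2)
        # PHP stores the *byte* length, hence the utf-8 encode.
        return 's:%d:"%s";' % (len(value.encode("utf-8")), value)
    return re.sub(r's:(\d+):"(.*?)";', repair, data)

# "Tes" is 3 bytes, but the prefix still claims 4 after an edit.
broken = 'a:1:{s:4:"Tes";s:17:"unserialize here!";}'
print(fix_serialized_lengths(broken))
# a:1:{s:3:"Tes";s:17:"unserialize here!";}
```

The safer long-term route is a search-and-replace tool that understands PHP serialization, but for a one-off table-prefix rename this kind of pass usually restores the counts.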
MariaDB
15,138,893
19
I am trying to connect to a database in MariaDB through a simple Java application, but the connection is reported as unsuccessful and an exception is thrown. I have made a similar connection using MySQL and it worked correctly. The problem is maybe with the driver here.

try {
    Class.forName("org.mariadb.jdbc.Driver");
    Connection connection = DriverManager.getConnection(
        "jdbc:mariadb://localhost:3306/project", "root", "");
    Statement statement = connection.createStatement();
    String uname = "xyz", pass = "abc";
    statement.executeUpdate("insert into user values('" + uname + "','" + pass + "')");
} //end of try block

I looked on the internet for help and read that the driver class provided by the MariaDB Client Library for Java Applications is not com.mysql.jdbc.Driver but org.mariadb.jdbc.Driver! I changed it accordingly, but it seems the problem is with the very first line inside the try block. The driver is not loading at all. Also, I have added the mysql jar file to the libraries of my Java application as in the screenshot below. Please help me through this.
It appears that you are trying to use jdbc:mariadb://... to establish a connection to a MariaDB server instance using the MySQL JDBC driver. That probably won't work, because the MySQL JDBC driver would use jdbc:mysql://..., regardless of whether it is connecting to a MySQL server or a MariaDB server. That is, the connection string must match the driver that is being used (rather than the database server being accessed). The MySQL and MariaDB drivers are supposed to be somewhat interchangeable, but it only seems prudent to use the MariaDB connector when accessing a MariaDB server.

For what it's worth, the combination of mariadb-java-client-1.1.7.jar and

Connection con = DriverManager.getConnection(
    "jdbc:mariadb://localhost/project", "root", "whatever");

worked for me. I downloaded the MariaDB Client Library for Java from here: https://downloads.mariadb.org/client-java/1.1.7/ which I arrived at via https://downloads.mariadb.org/

Additional notes:

There is no need for a Class.forName() statement in your Java code.
The default configuration for MariaDB under Mageia may include the skip-networking directive in /etc/my.cnf. You will need to remove (or comment out) that directive if you want to connect to the database via JDBC, because JDBC connections always look like "network" connections to MySQL/MariaDB, even if they are connections from localhost. (You may need to tweak the bind-address value to something like 0.0.0.0 as well.)
MariaDB
23,020,857
19
I used the following query with MySQL 5.5 (or previous versions) for years without any problems:

SELECT t2.Code
FROM (SELECT Country.Code
      FROM Country
      ORDER BY Country.Code DESC) AS t2;

The order of the result was always descending, as I needed. Last week, I migrated to a new MySQL version (in fact, I migrated to MariaDB 10.0.14), and now the same query with the same database is no longer sorted descending. It is sorted ascending (or in natural order; I am not sure, in fact). So, could somebody tell me whether this is a bug or a change of behaviour in recent versions of MySQL/MariaDB?
After a bit of digging, I can confirm both your scenarios:

MySQL 5.1 does apply the ORDER BY inside the subquery.
MariaDB 5.5.39 on Linux does not apply the ORDER BY inside the subquery when no LIMIT is supplied. It does however correctly apply the order when a corresponding LIMIT is given:

SELECT t2.Code
FROM (
    SELECT Country.Code
    FROM Country
    ORDER BY Country.Code DESC
    LIMIT 2
) AS t2;

Without that LIMIT, there isn't a good reason to apply the sort inside the subquery. It can be equivalently applied to the outer query.

Documented behavior: As it turns out, MariaDB has documented this behavior and it is not regarded as a bug:

A "table" (and subquery in the FROM clause too) is - according to the SQL standard - an unordered set of rows. Rows in a table (or in a subquery in the FROM clause) do not come in any specific order. That's why the optimizer can ignore the ORDER BY clause that you have specified. In fact, the SQL standard does not even allow the ORDER BY clause to appear in this subquery (we allow it, because ORDER BY ... LIMIT ... changes the result, the set of rows, not only their order).

You need to treat the subquery in the FROM clause as a set of rows in some unspecified and undefined order, and put the ORDER BY on the top-level SELECT.

So MariaDB also recommends applying the ORDER BY in the outermost query, or a LIMIT if necessary.

Note: I don't currently have access to a proper MySQL 5.5 or 5.6 to confirm whether the behavior is the same there (and SQLFiddle.com is malfunctioning). Comments on the original bug report (closed as not-a-bug) suggest that MySQL 5.6 probably behaves the same way as MariaDB.
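The recommended fix (sorting in the outermost query) can be demonstrated with SQLite from Python's standard library, since the derived-table shape of the query is the same; the sample Country codes are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Country (Code TEXT)")
conn.executemany("INSERT INTO Country VALUES (?)",
                 [("DEU",), ("AFG",), ("USA",), ("BRA",)])

# The ORDER BY belongs on the top-level SELECT, not inside the derived table.
rows = conn.execute("""
    SELECT t2.Code
      FROM (SELECT Country.Code FROM Country) AS t2
     ORDER BY t2.Code DESC
""").fetchall()
print([r[0] for r in rows])  # ['USA', 'DEU', 'BRA', 'AFG']
```

Written this way, the descending order is part of the query's defined result rather than an optimizer-dependent accident, so it survives engine upgrades.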
MariaDB
26,372,511
19
How can I apply a WHERE clause on a JSON column to perform a SELECT query on a table that has two columns (id Integer, attr JSON)? The JSON is nested, and only one key-value pair of the JSON is allowed in the filter condition. This key-value pair can be anywhere in the JSON.

+----+-----------------------------------------------------------------------------------------------+
| id | attr                                                                                          |
+----+-----------------------------------------------------------------------------------------------+
|  1 | {"id":"0001","type":"donut","name":"Cake","ppu":0.55}                                         |
|  2 | {"id":"0002","type":"donut","name":"Cake","ppu":0.55,"batters":{"batter1":100,"batter2":200}} |
+----+-----------------------------------------------------------------------------------------------+
In MariaDB 10.2, you can use the JSON functions. For example, if you want to SELECT all donuts from your database, you do:

SELECT * FROM t WHERE JSON_CONTAINS(attr, '"donut"', '$.type');

Note: In MariaDB, JSON functions work with all text data types (VARCHAR, TEXT etc.). The JSON type is simply an alias for LONGTEXT.
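On servers older than 10.2, where no JSON functions exist, the same filter can be applied client-side after fetching the rows. A minimal Python illustration of what JSON_CONTAINS(attr, '"donut"', '$.type') checks (the rows list stands in for a fetched result set):

```python
import json

# (id, attr) pairs as they would come back from the text column.
rows = [
    (1, '{"id":"0001","type":"donut","name":"Cake","ppu":0.55}'),
    (2, '{"id":"0003","type":"bagel","name":"Plain","ppu":0.35}'),
]

# Keep rows whose attr document has the value "donut" at path $.type.
donuts = [row_id for row_id, attr in rows
          if json.loads(attr).get("type") == "donut"]
print(donuts)  # [1]
```

Doing this server-side with JSON_CONTAINS is preferable when available, since it avoids shipping every row to the client.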
MariaDB
48,804,649
19
I have a Drupal multisite that needs to have one database for each site, and want it to run in ddev, but ddev just has the one database by default, named 'db'. How can I get a second database?
You can import additional databases directly with ddev import-db --database=newdb --file=/path/to/file.sql.gz. The created database will already have permissions, etc.

You can also manually create and manage databases (although this is rarely necessary). The root password for the db server is 'root', so you can mysql -uroot -proot in there (or use ddev mysql -uroot -proot):

ddev mysql -uroot -proot
CREATE DATABASE newdb;
GRANT ALL ON newdb.* to 'db'@'%' IDENTIFIED BY 'db';

Now, if you want to load from a db dump:

ddev import-db --database=newdb --file=dumpfile.sql

Your normal web user can now access this alternate db, and it can be used in the settings.php for your alternate multisite. There are many other things you'll want to do for your Drupal multisite; there is a full tutorial at https://github.com/ddev/ddev-contrib/tree/master/recipes/drupal8-multisite

More details about database management are at https://ddev.readthedocs.io/en/stable/users/usage/database-management/
MariaDB
49,785,023
19
I have read somewhere that there is no driver for MariaDB in Laravel 5 and that we can use the mysql driver to use MariaDB in Laravel 5. But even then, I am getting this error when I give my MariaDB username and password. The password is "root" and the username is also "root".

SQLSTATE[HY000] [1045] Access denied for user 'root'@'localhost' (using password: YES)

Can someone guide me on how to configure MariaDB to be used with Laravel 5?
'mysql' => [
    'driver'    => 'mysql',
    'host'      => env('DB_HOST', 'localhost'),
    'port'      => env('DB_PORT', '3307'),
    'database'  => env('DB_DATABASE', 'doctorsondemand'),
    'username'  => env('DB_USERNAME', 'root'),
    'password'  => env('DB_PASSWORD', 'root'),
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
    'strict'    => false,
],

Well, the problem was with the port. By default it is not mentioned and is taken as 3306, so we have to include that line and specify that the port is 3307. That solved the problem. Hope this helps.
MariaDB
31,650,972
18
MariaDB is not starting on my Windows 10 machine. I am getting the following in the logs:

Cannot find checkpoint record at LSN (1,0x5c8f)
2019-12-19 9:18:13 0 [ERROR] mysqld.exe: Aria recovery failed. Please run aria_chk -r on all Aria tables and delete all aria_log.######## files
2019-12-19 9:18:13 0 [ERROR] Plugin 'Aria' init function returned error.
2019-12-19 9:18:13 0 [ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
InnoDB: using atomic writes.
2019-12-19 9:18:13 0 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2019-12-19 9:18:13 0 [Note] InnoDB: Uses event mutexes
2019-12-19 9:18:13 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2019-12-19 9:18:13 0 [Note] InnoDB: Number of pools: 1
2019-12-19 9:18:13 0 [Note] InnoDB: Using SSE2 crc32 instructions
2019-12-19 9:18:13 0 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M
2019-12-19 9:18:13 0 [Note] InnoDB: Completed initialization of buffer pool
2019-12-19 9:18:14 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2019-12-19 9:18:14 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2019-12-19 9:18:14 0 [Note] InnoDB: Setting file 'C:\xampp\mysql\data\ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2019-12-19 9:18:14 0 [Note] InnoDB: File 'C:\xampp\mysql\data\ibtmp1' size is now 12 MB.
2019-12-19 9:18:14 0 [Note] InnoDB: Waiting for purge to start
2019-12-19 9:18:14 0 [Note] InnoDB: 10.4.10 started; log sequence number 42992145; transaction id 110929
2019-12-19 9:18:14 0 [Note] InnoDB: Loading buffer pool(s) from C:\xampp\mysql\data\ib_buffer_pool
2019-12-19 9:18:14 0 [Note] Plugin 'FEEDBACK' is disabled.
2019-12-19 9:18:14 0 [ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
2019-12-19 9:18:14 0 [Note] InnoDB: Buffer pool(s) load completed at 191219 9:18:14
2019-12-19 9:18:14 0 [ERROR] Failed to initialize plugins.
2019-12-19 9:18:14 0 [ERROR] Aborting

I have searched around and could not find a fix for this issue. XAMPP was working fine yesterday but did not start today. What I did yesterday was clone a WordPress website to my PC using XCloner. After I cloned the website, everything was working fine. Then I stopped MySQL and Apache and shut down my PC. This morning, I got this issue. I have faced the same issue before and have reinstalled XAMPP and WordPress (Bitnami versions), but I keep getting the problem. Any help will be greatly appreciated.
With the help of @RiggsFolly, the following solved my issue.

Open cmd.
In cmd, go to the xampp/mysql/data folder. In my case: cd C:\xampp\mysql\data
Run aria_chk -r in that directory for all .MAI tables in the mysql subfolder. In my case: ..\bin\aria_chk -r mysql\*.MAI
Delete all aria_log.######## files. They are present in the C:\xampp\mysql\data folder. I just renamed them, for just in case (added old_ to the beginning of their names).
Start xampp again and it should be working.
MariaDB
59,410,692
18
Just installed MariaDB (with homebrew). Everything looks like it's working, but I can't figure out how to have it automatically startup on boot on my Mac. I can't find any Mac-specific docs for this. The installation output says: To start mysqld at boot time you have to copy support-files/mysql.server to the right place for your system I guess I don't know where the right place is.
From brew info mariadb:

To have launchd start mariadb now and restart at login:
  brew services start mariadb
Or, if you don't want/need a background service you can just run:
  mysql.server start

So just run brew services start mariadb in the terminal.
MariaDB
17,461,170
17
After replacing MySQL with MariaDB, I encountered the following error:

PHP Fatal error: Uncaught exception 'PDOException' with message 'could not find driver' in /var/www/inlcude/config.php:5\nStack trace:\n#0 /var/www/inlcude/config.php(5): PDO->__construct('mysql:dbname=my...', 'apache', 'ABCDE...')\n#1 /var/www/html/index(21): require('/var/www/inlcude/con...')\n#2 {main}\n thrown in /var/www/inlcude/config.php on line 5

I have read through the following two related questions, but can't find the answer there: PDO and MariaDB, and PDOException "could not find driver". yum list pdo_mysql, yum list php5-mysql, and yum list php5-mariadb all return no matching package. What is the name of the PDO driver for MariaDB to be used on Fedora 20 (Red Hat)? Just to add, php-pdo is already installed.
By trial and error, it turns out that I need to install mysqlnd to get the PDO driver:

yum install php-mysqlnd

Don't ask me why or how it works, because I have absolutely no idea. (For the curious: on Fedora, the php-mysqlnd package ships the mysqlnd-based mysql, mysqli and pdo_mysql extensions, which is where the missing PDO driver comes from.)
MariaDB
20,898,711
17
I've used:

Connection connection = DriverManager.getConnection(
    "jdbc:mysql://localhost:3306/test", "username", "password");
Statement stmt = connection.createStatement();
stmt.executeUpdate("CREATE TABLE a (id int not null primary key, value varchar(20))");
stmt.close();
connection.close();

but it gives the error "No route to host".
Java MariaDB example:

//STEP 1. Import required packages
package mariadb;

import java.sql.*;

public class Mariadb {

   // JDBC driver name and database URL
   static final String JDBC_DRIVER = "org.mariadb.jdbc.Driver";
   static final String DB_URL = "jdbc:mariadb://192.168.100.174/db";

   // Database credentials
   static final String USER = "root";
   static final String PASS = "root";

   public static void main(String[] args) {
      Connection conn = null;
      Statement stmt = null;
      try {
         //STEP 2: Register JDBC driver
         Class.forName("org.mariadb.jdbc.Driver");

         //STEP 3: Open a connection
         System.out.println("Connecting to a selected database...");
         conn = DriverManager.getConnection(
             "jdbc:mariadb://192.168.100.174/db", "root", "root");
         System.out.println("Connected database successfully...");

         //STEP 4: Execute a query
         System.out.println("Creating table in given database...");
         stmt = conn.createStatement();
         String sql = "CREATE TABLE REGISTRATION " +
                      "(id INTEGER not NULL, " +
                      " first VARCHAR(255), " +
                      " last VARCHAR(255), " +
                      " age INTEGER, " +
                      " PRIMARY KEY ( id ))";
         stmt.executeUpdate(sql);
         System.out.println("Created table in given database...");
      } catch (SQLException se) {
         //Handle errors for JDBC
         se.printStackTrace();
      } catch (Exception e) {
         //Handle errors for Class.forName
         e.printStackTrace();
      } finally {
         //finally block used to close resources
         try {
            if (stmt != null) {
               stmt.close(); // close the statement here, not the connection
            }
         } catch (SQLException se) {
         }// do nothing
         try {
            if (conn != null) {
               conn.close();
            }
         } catch (SQLException se) {
            se.printStackTrace();
         }//end finally try
      }//end try
      System.out.println("Goodbye!");
   }//end main
}//end JDBCExample

I use this example. I changed the bind address to 127.10.230.440 and restarted the server with sudo /etc/init.d/mysql start.
MariaDB
37,909,487
17
How can I switch the database from MySQL to MariaDB in WAMP 3.1.0? I have been looking for the option, but I cannot find it.
From the image you show, it looks like both MySQL and MariaDB are already running! NOTE: That is a bit memory hungry!

Simple test to see if both MySQL and MariaDB are running: launch phpMyAdmin and look at the login screen. If both are running, you should see a Server Choice dropdown under the Username and Password fields, with 2 options like below.

To pick MySQL or MariaDB, right click on the wampmanager icon in the system tray and you should see this menu. Just click on either MySQL or MariaDB to enable or disable either or both database servers. If there is a green tick beside the database server name, like above against MySQL, then that database server is configured to run; if no tick exists, that server is not configured to run.

Alternatively, just look at the services.msc snap-in to see if the database server is (a) installed and (b) running (started).

Small note: WAMPServer is now at V3.1.2; the update can be found here. This contains a fix that, if I remember correctly, is relevant. This is the WAMPServer backup repo, but it is a lot easier to navigate than SourceForge and is often more up to date, as Oto does not have to jump through all the SourceForge hoops to keep it current.

Also note that MariaDB and MySQL cannot both run on the same port, i.e. 3306. So by default MySQL runs on 3306 and MariaDB runs on 3307. When you come to write PHP code, you will have to specify port 3307 in your database connection code to make the connection to MariaDB if you are going to run both at the same time. Alternatively, if you want to use just MariaDB, turn off MySQL and then switch MariaDB to use port 3306. There are menu items that make this quite easy if you look for them.
MariaDB
47,813,733
17
I've done quite a bit of reading before asking this, so let me preface by saying I am not running out of connections, or memory, or CPU, and from what I can tell, I am not running out of file descriptors either. Here's what PHP throws at me when MySQL is under heavy load:

Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (11 "Resource temporarily unavailable")

This happens randomly under load - but the more I push, the more frequently PHP throws this at me. While this is happening I can always connect locally through the console, and from PHP through 127.0.0.1 instead of "localhost", which uses the faster unix socket. Here are a few system variables to weed out the usual problems:

cat /proc/sys/fs/file-max = 4895952
lsof | wc -l = 215778 (during "outages")
Highest usage of available connections: 26% (261/1000)
InnoDB buffer pool / data size: 10.0G/3.7G (plenty of room)
soft nofile 999999
hard nofile 999999

I am actually running MariaDB (Server version: 10.0.17-MariaDB MariaDB Server). These results are generated both under normal load and by running mysqlslap during off hours, so slow queries are not an issue - just high connections. Any advice? I can report additional settings/data if necessary - mysqltuner.pl says everything is a-ok, and again, the revealing thing here is that connecting via IP works just fine and is fast during these outages - I just can't figure out why.
Edit: here is my my.ini (some values may seem a bit high from my recent troubleshooting changes, and please keep in mind that there are no errors in the MySQL logs, system logs, or dmesg):

socket=/var/lib/mysql/mysql.sock
skip-external-locking
skip-name-resolve
table_open_cache=8092
thread_cache_size=16
back_log=3000
max_connect_errors=10000
interactive_timeout=3600
wait_timeout=600
max_connections=1000
max_allowed_packet=16M
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=1M
read_buffer_size=1M
read_rnd_buffer_size=8M
join_buffer_size=1M
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_buffer_pool_size=10G

[mysql.server]
user=mysql

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
open-files-limit=65535
Most likely it is due to net.core.somaxconn. What is the value of /proc/sys/net/core/somaxconn?

net.core.somaxconn is the maximum number of "backlogged sockets" - connections in the queue which are not yet accepted. The default is 128; anything beyond that queue will be rejected. I suspect this is what is happening in your case. Try increasing it according to your load: as the root user, run

echo 1024 > /proc/sys/net/core/somaxconn
MariaDB
28,828,141
16
I'm using DBeaver to work with databases.

/MariaDB-mysql
|-- /information_schema
|-- /mysql
|-- /performance_schema
|-- /autoparanaiba

I want to run a .sql file against autoparanaiba, but I can't find any IMPORT option.
Right click on autoparanaiba --> Tools --> Restore Database
MariaDB
61,685,353
16
When we try to GRANT ALL permissions to a user for a specific database, the admin (superuser) user of the database receives the following error:

Access denied for user 'admin'@'%' to database 'the_Db'

After looking at other questions on Stack Overflow I could not find the solution. I already tried to change * -> %, without success; that is the approach suggested in the following source: http://www.fidian.com/problems-only-tyler-has/using-grant-all-with-amazons-mysql-rds

I think there is an underlying configuration on RDS so I can't grant all permissions for the users, but I don't know how to detect what is happening.

Update: After doing some workarounds I noticed that the "Delete versioning rows" permission is the one that causes the problem. I can add all permissions but that one. https://mariadb.com/kb/en/grant/

So the only "way" I could grant the other permissions was to specify each one of them with a script like this:

GRANT Alter ON *.* TO 'user_some_app'@'%';
GRANT Create ON *.* TO 'user_some_app'@'%';
GRANT Create view ON *.* TO 'user_some_app'@'%';
GRANT Delete ON *.* TO 'user_some_app'@'%';
GRANT Drop ON *.* TO 'user_some_app'@'%';
GRANT Grant option ON *.* TO 'user_some_app'@'%';
GRANT Index ON *.* TO 'user_some_app'@'%';
GRANT Insert ON *.* TO 'user_some_app'@'%';
GRANT References ON *.* TO 'user_some_app'@'%';
GRANT Select ON *.* TO 'user_some_app'@'%';
GRANT Show view ON *.* TO 'user_some_app'@'%';
GRANT Trigger ON *.* TO 'user_some_app'@'%';
GRANT Update ON *.* TO 'user_some_app'@'%';
GRANT Alter routine ON *.* TO 'user_some_app'@'%';
GRANT Create routine ON *.* TO 'user_some_app'@'%';
GRANT Create temporary tables ON *.* TO 'user_some_app'@'%';
GRANT Execute ON *.* TO 'user_some_app'@'%';
GRANT Lock tables ON *.* TO 'user_some_app'@'%';
Try this:

mysql -h your-rds-host-name -P 3306 -u rds-master-user -p

CREATE DATABASE sitedb;
CREATE USER 'siteuser'@'%' IDENTIFIED BY 'Password';

-- For MySQL 5.7 or less
GRANT ALL ON sitedb.* TO 'siteuser'@'%' IDENTIFIED BY 'Password' WITH GRANT OPTION;

-- For MariaDB 10 and up
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, CREATE VIEW, EVENT, TRIGGER, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EXECUTE ON `sitedb`.* TO 'siteuser'@'%';

FLUSH PRIVILEGES;
EXIT
MariaDB
64,974,214
16
I'm trying to connect to a MariaDB database in a C script and I can't find the necessary documentation. I installed libmariadbclient-dev, but I couldn't find any accompanying documentation such as a man page. There's a basic description and limited documentation here, but the documentation only includes descriptions of functions. The fact is, despite having scoured all sorts of Google results, I don't even know what to import to get this to work, much less how to use it. Is there any guide or documentation on how to use a MariaDB database in C?
The MariaDB Client Library for C has exactly the same API as the MySQL Connector/C for MySQL 5.5. Here it is: http://dev.mysql.com/doc/refman/5.5/en/c-api-function-overview.html

Another one: http://zetcode.com/db/mysqlc/

You can compile a minimal test like

#include <my_global.h>
#include <mysql.h>

int main(int argc, char **argv)
{
    MYSQL *con = mysql_init(NULL);

    if (con == NULL) {
        fprintf(stderr, "%s\n", mysql_error(con));
        exit(1);
    }

    if (mysql_real_connect(con, "localhost", "root", "root_pswd",
                           NULL, 0, NULL, 0) == NULL) {
        fprintf(stderr, "%s\n", mysql_error(con));
        mysql_close(con);
        exit(1);
    }

    if (mysql_query(con, "CREATE DATABASE testdb")) {
        fprintf(stderr, "%s\n", mysql_error(con));
        mysql_close(con);
        exit(1);
    }

    mysql_close(con);
    exit(0);
}

using

gcc -o mysql-test mysql-test.c $(mysql_config --cflags --libs)

(the --cflags flag lets the compiler find the my_global.h and mysql.h headers).
MariaDB
17,265,471
15
I have a MariaDB Galera Cluster (3 nodes). I set uid to increase automatically and to be the primary key of the table:

uid | int(11) | NO | PRI | NULL | auto_increment

MariaDB [hello_cluster]> select uid from table order by uid limit 10;
+-----+
| uid |
+-----+
| 3 |
| 6 |
| 9 |
| 12 |
| 15 |
| 18 |
| 21 |
| 24 |
| 27 |
| 30 |
+-----+

I tried the following command, and it does not work:

alter table uid AUTO_INCREMENT=1
This is by design and is reported in MariaDB Galera Cluster - Known Limitations:

Do not rely on auto-increment values to be sequential. Galera uses a mechanism based on autoincrement increment to produce unique non-conflicting sequences, so on every single node the sequence will have gaps.

The rationale is explained in Managing Auto Increments with Multi Masters, and it is also why the observed auto-increment step equals the number of nodes in the cluster.

MySQL has the system variables auto_increment_increment and auto_increment_offset for managing auto increment 'sequences' in a multi-master environment. Using these variables, it is possible to set up a multi-master replication where the auto increment sequences on each master node interleave, and no conflicts should happen in the cluster, no matter which master(s) get the INSERTs.

Even without clusters, it is rarely a "good" idea to rely on auto-increment columns being dense sequences, due to transaction rollbacks and deleted records.
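The interleaving scheme described above is just arithmetic: each node is given a distinct offset, the increment is set to the cluster size, and the resulting per-node sequences can never collide. A small illustrative sketch (the function is mine, not part of Galera):

```python
def node_sequence(offset, increment, count):
    """IDs one node hands out: offset, offset + increment, offset + 2*increment, ..."""
    return [offset + i * increment for i in range(count)]

# A 3-node cluster: auto_increment_increment = 3, offsets 1, 2 and 3.
n1 = node_sequence(1, 3, 5)   # [1, 4, 7, 10, 13]
n2 = node_sequence(2, 3, 5)   # [2, 5, 8, 11, 14]
n3 = node_sequence(3, 3, 5)   # [3, 6, 9, 12, 15]

# The sequences are pairwise disjoint, which is the whole point...
assert set(n1).isdisjoint(n2) and set(n2).isdisjoint(n3) and set(n1).isdisjoint(n3)
# ...and a step of 3 matches the uid values in the question (3, 6, 9, ... = offset 3).
print(n3)
```

This also makes clear why resetting AUTO_INCREMENT=1 on one node cannot produce a dense 1, 2, 3, ... sequence: every node skips the values reserved for its peers.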
MariaDB
23,022,574
15
What is the reason that the following two queries give wildly different results?

MariaDB [mydatabase]> SELECT COUNT(DISTINCT(`price`)) FROM `products`;  -- Good
+--------------------------+
| COUNT(DISTINCT(`price`)) |
+--------------------------+
| 2059 |
+--------------------------+
1 row in set (0.01 sec)

MariaDB [mydatabase]> SELECT COUNT(DISTINCT('price')) FROM `products`;  -- Bad
+--------------------------+
| COUNT(DISTINCT('price')) |
+--------------------------+
| 1 |
+--------------------------+
1 row in set (0.01 sec)

I've googled around for an explanation of the difference between backticks and apostrophes (aka single quotes), but I am unable to find any indication as to why they would be interpreted differently for a column name like in the above. Is it that the single-quoted string in the latter query is actually not interpreted as a column name, but just as an arbitrary string literal, of which there could be said to be "1"? If so, it isn't easy to find any pages expounding on this meaning of the apostrophe.
'price' (apostrophes or quotes) is a string. It never changes, so the count is always 1.

`price` (backticks) refers to the column price, so the count could be more than 1.

The inner parentheses are irrelevant: COUNT(DISTINCT price) is the same as your backtick version.

SELECT COUNT(*) FROM tbl WHERE ... is a common way to ask how many rows.
SELECT foo, COUNT(*) FROM tbl GROUP BY foo is a common way to ask how many rows for each distinct value of foo.
SELECT foo, COUNT(foo) FROM tbl GROUP BY foo is the same as above, but does not count rows where foo IS NULL.
SELECT DISTINCT ... GROUP BY ... is a nonsense statement. Either use DISTINCT or use GROUP BY.
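The literal-vs-identifier distinction is easy to reproduce without a MySQL server, since SQLite also accepts MySQL-style backticks for identifiers. A sketch with an invented table and data (not the asker's 2059-value table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (price REAL)")
conn.executemany("INSERT INTO products (price) VALUES (?)",
                 [(9.99,), (19.99,), (19.99,), (4.50,)])

# Backticks: `price` names the column, so its distinct values are counted.
col = conn.execute("SELECT COUNT(DISTINCT `price`) FROM products").fetchone()[0]

# Apostrophes: 'price' is a constant string, so there is exactly one distinct value.
lit = conn.execute("SELECT COUNT(DISTINCT 'price') FROM products").fetchone()[0]

print(col, lit)  # 3 1
```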
MariaDB
29,402,361
15
I'm trying to execute a raw query that is built dynamically. To ensure that the parameters are inserted at the right positions, I'm using named parameters. This seems to work for SQLite without any problems, but when I run the same code against MariaDB it fails.

A simple example query:

SELECT u.* FROM users_gigyauser AS u
WHERE u.email like :u_email
GROUP BY u.id
ORDER BY u.last_login DESC
LIMIT 60 OFFSET 0

Parameters are:

{'u_email': '%test%'}

The error I get is a plain syntax error, because the parameter is not replaced. I tried using '%' as an indicator, but this resulted in SQL trying to parse %u[_email] and that returned a type error.

I'm executing the query like this:

raw_queryset = GigyaUser.objects.raw(self.sql_fetch, self._query_object['params'])

Or, when counting:

cursor.execute(self.sql_count, self._query_object['params'])

Both give the same error on MariaDB but work on SQLite (using the ':' indicator). Now, what am I missing?
Edit: The placeholder format needs an s suffix, as follows: %(u_email)s
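The trailing s exists because MySQL drivers such as MySQLdb use Python's "pyformat" parameter style, where %(name)s is an ordinary Python string-conversion specifier. A rough pure-Python illustration of the mechanics (real code should always pass the parameters to the driver, e.g. cursor.execute(sql, params), since the driver also escapes the values):

```python
sql = "SELECT u.* FROM users_gigyauser AS u WHERE u.email LIKE %(u_email)s"
params = {"u_email": "'%test%'"}  # a real driver would quote/escape this itself

# This mirrors what pyformat-style drivers do under the hood:
rendered = sql % params
print(rendered)
# SELECT u.* FROM users_gigyauser AS u WHERE u.email LIKE '%test%'

# Without the trailing 's' the template is not a complete conversion specifier:
try:
    "WHERE u.email LIKE %(u_email)" % params
except ValueError as exc:
    print("invalid placeholder:", exc)
```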
MariaDB
36,311,528
15
So basically I am installing MariaDB with MySQL on my Mac using Homebrew. These are the steps I made:

brew doctor -> worked
brew update -> worked
brew install mariadb -> worked
mysql_install_db -> failed

WARNING: The host 'Toms-MacBook-Pro.local' could not be looked up with /usr/local/Cellar/mariadb/10.4.6_1/bin/resolveip. This probably means that your libc libraries are not 100 % compatible with this binary MariaDB version. The MariaDB daemon, mysqld, should work normally with the exception that host name resolving will not work. This means that you should use IP addresses instead of hostnames when specifying MariaDB privileges!

mysql.user table already exists!

Running mysql_upgrade afterwards gave me the following error:

Version check failed. Got the following error when calling the 'mysql' command line client
ERROR 1698 (28000): Access denied for user 'root'@'localhost'
FATAL ERROR: Upgrade failed

I can't enter mysql like this:

mysql -uroot
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

but I can like this:

sudo mysql -u root

The user table returns this:

MariaDB [(none)]> USE mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [mysql]> SELECT User, Host, plugin FROM mysql.user;
+---------------+-------------------------+-----------------------+
| User | Host | plugin |
+---------------+-------------------------+-----------------------+
| root | localhost | mysql_native_password |
| toms | localhost | mysql_native_password |
| | localhost | |
| | toms-macbook-pro.local | |
+---------------+-------------------------+-----------------------+
4 rows in set (0.004 sec)
You could try to update the root password and access it afterwards:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';

Exit MySQL and try to log in:

mysql -uroot -p
# then use root as the password
MariaDB
57,803,604
15
I'm running MariaDB 10.1.36-MariaDB and I get the following error:

EXPLAIN ANALYZE select 1

MySQL said: Documentation
1064 - You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'ANALYZE select 1' at line 1

What else do I need to do here? My PHP version is 7.2.11.
As you can see in the docs https://mariadb.com/kb/en/explain-analyze/:

The syntax for the EXPLAIN ANALYZE feature was changed to the ANALYZE statement, available since MariaDB 10.1.0. See ANALYZE statement.

So just use ANALYZE ... without the EXPLAIN keyword and you'll get the same output you got in the past. In the ANALYZE docs you have the info for the ANALYZE statement; you can see it is the same as the deprecated EXPLAIN ANALYZE:

The ANALYZE statement is similar to the EXPLAIN statement. The ANALYZE statement will invoke the optimizer, execute the statement, and then produce EXPLAIN output instead of the result set. The EXPLAIN output will be annotated with statistics from statement execution.

The syntax is

ANALYZE explainable_statement;

where the statement is any statement for which one can run EXPLAIN.
MariaDB
60,797,825
15
I have a simple question here. I have a database with around 1 billion records and a server with 200GB of RAM to handle it. What do you suggest for best performance: MySQL 5, MySQL 6 or MariaDB?
MariaDB 5.3 should give you the best performance: It uses the XtraDB (InnoDB improved) storage engine from Percona as default. The optimizer is greatly improved to handle big data. Replication is a magnitude faster in MariaDB if you have lots of concurrent updates to InnoDB. See http://kb.askmonty.org/en/what-is-mariadb-53 for a list of features.
MariaDB
7,547,121
14
I get this annoying error when I try to insert data from db1 to db2 in MaridaDB 10 using mysql CLI. This is while all the columns exist. INSERT INTO db2.thread (threadid, title, postuserid, dateline, views) SELECT `nid`, `title`, `uid`, ‍‍`created`, `comment` from db1.node where type = 'forum' and status = 1; When I execute the same query in PHPMyAdmin, I get: #1054 - Unknown column '†I tried different syntax like 'like' etc. with no avail. Appreciate your hints
Looks like there are invisible garbage characters in your query. Try retyping the query (don't copy and paste or you'll most likely include the garbage character) and it should work.
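A quick way to confirm this diagnosis is to scan the pasted statement for non-ASCII code points; zero-width characters (U+200B, U+200C, U+200D) are the usual culprits when SQL is copied from a web page. A sketch (the sample string deliberately embeds a zero-width joiner, mimicking the question's query):

```python
def find_suspicious_chars(sql):
    """Return (index, character, codepoint) for every non-ASCII char in sql."""
    return [(i, c, f"U+{ord(c):04X}") for i, c in enumerate(sql) if ord(c) > 127]

sql = "SELECT `nid`, `title`, \u200d`created` FROM node"
print(find_suspicious_chars(sql))  # [(23, '\u200d', 'U+200D')]
```

Any hit right before the column name the error message complains about is the invisible garbage to delete.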
MariaDB
16,910,652
14
I'm creating a table like this:

Schema::create('booking_segments', function (Blueprint $table) {
    $table->increments('id');
    $table->datetime('start')->index();
    $table->integer('duration')->unsigned();
    $table->string('comments');
    $table->integer('booking_id')->unsigned();
    $table->foreign('booking_id')->references('id')->on('bookings')->onDelete('cascade');
});

But I want to add one extra column. It looks like this in raw SQL:

ALTER TABLE booking_segments ADD COLUMN `end` DATETIME AS (DATE_ADD(`start`, INTERVAL duration MINUTE)) PERSISTENT AFTER `start`

How can I add it in my migration? I will also need to create an index on it.
I know this is an old question, but there is a way to do it using the schema builder since Laravel 5.3, so I thought I would put it here for completeness. You can use the Laravel 5.3 column modifiers virtualAs or storedAs.

To create a virtual generated column, computed on every query, you would create the column like this:

$table->dateTime('created_at')->virtualAs('DATE_ADD(`start`, INTERVAL duration MINUTE)');

To create a stored generated column, you would create the column like this instead:

$table->dateTime('created_at')->storedAs('DATE_ADD(`start`, INTERVAL duration MINUTE)');
MariaDB
27,749,887
14
I am using node.js, Sequelize and MariaDB and I am running into the following error, which I am not sure how to resolve:

Error: Naming collision between attribute 'playlist' and association 'playlist' on model playlist_entry. To remedy this, change either foreignKey or as in your association definition

My Javascript:

Entities = function (settings, context) {
    sequelize = context.sequelize;

    var entities = {
        Playlist: this.sequelize.define('playlist', {
            name: Sequelize.STRING,
            description: Sequelize.STRING
        }),
        PlaylistEntry: this.sequelize.define('playlist_entry', {
            playlist: Sequelize.INTEGER
            //track: Sequelize.INTEGER
        })
    };

    entities.PlaylistEntry.belongsTo(
        entities.Playlist,
        { foreignKey: { name: 'fk_playlist' }});

    return entities;
}

My tables:

CREATE TABLE `playlist` (
  `id` int(11) unsigned NOT NULL,
  `name` varchar(255) NOT NULL,
  `description` varchar(255) DEFAULT NULL,
  `createdAt` timestamp NULL DEFAULT NULL,
  `updatedAt` timestamp NULL DEFAULT NULL,
  `external_id` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `playlist_entry` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `playlist` int(11) unsigned DEFAULT NULL,
  `track` int(11) unsigned DEFAULT NULL,
  `createdAt` timestamp NULL DEFAULT NULL,
  `updatedat` timestamp NULL DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `track_idx` (`track`),
  KEY `playlist_idx` (`playlist`),
  CONSTRAINT `fk_playlist` FOREIGN KEY (`playlist`) REFERENCES `playlist` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
I faced exactly this problem while playing with Sequelize. It happens because the column name and the association alias are the same.

Wrong implementation:

module.exports = (sequelize, DataTypes) => {
  const session = sequelize.define('session', {
    menteeId: DataTypes.INTEGER,
  }, {});
  session.associate = (models) => {
    session.belongsTo(models.user, {
      foreignKey: 'menteeId',
      as: 'menteeId',
      onDelete: 'CASCADE',
    });
  };
  return session;
};

Here the column name (menteeId) and the alias name (menteeId) are the same. To resolve this you just need to change the alias name.

Correct implementation:

module.exports = (sequelize, DataTypes) => {
  const session = sequelize.define('session', {
    menteeId: DataTypes.INTEGER,
  }, {});
  session.associate = (models) => {
    session.belongsTo(models.user, {
      foreignKey: 'menteeId',
      as: 'MenteeId', // changes applied here
      onDelete: 'CASCADE',
    });
  };
  return session;
};

In your case, you can do this:

entities.PlaylistEntry.belongsTo(
  entities.Playlist,
  {
    foreignKey: { name: 'fk_playlist' },
    as: 'PlayListAlias', // appropriate name
  },
);
MariaDB
37,121,882
14
I'm trying to get MariaDB Galera 10.1 working under Debian 8 Jessie. I've installed all necessary components and done the configuration, but I cannot get it to work. The nodes are built as VPSes.

Configuration of node 1:

[mysqld]
# Cluster node configurations
wsrep_cluster_address="gcomm://172.16.0.102,172.16.0.112"
wsrep_node_address="172.16.0.102"
wsrep_node_name='n1'
wsrep_cluster_name='cluster'
innodb_buffer_pool_size=400M

# Mandatory settings to enable Galera
wsrep_provider=/usr/lib/galera/libgalera_smm.so
binlog_format=ROW
default-storage-engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
query_cache_size=0
bind-address=0.0.0.0

# Galera synchronisation configuration
wsrep_sst_method=rsync

Configuration of node 2:

[mysqld]
# Cluster node configurations
wsrep_cluster_address="gcomm://172.16.0.102,172.16.0.112"
wsrep_node_address="172.16.0.112"
wsrep_node_name='n2'
wsrep_cluster_name='cluster'
innodb_buffer_pool_size=400M

# Mandatory settings to enable Galera
wsrep_provider=/usr/lib/galera/libgalera_smm.so
binlog_format=ROW
default-storage-engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
query_cache_size=0
bind-address=0.0.0.0

# Galera synchronisation configuration
wsrep_sst_method=rsync

When I try to run the bootstrap command on node 1:

service mysql bootstrap

it fails with these errors:

May 13 15:59:28 test mysqld[2397]: 2016-05-13 15:59:28 139843152635840 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
May 13 15:59:28 test mysqld[2397]: at gcomm/src/pc.cpp:connect():162
May 13 15:59:28 test mysqld[2397]: 2016-05-13 15:59:28 139843152635840 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
May 13 15:59:28 test mysqld[2397]: 2016-05-13 15:59:28 139843152635840 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1379: Failed to open channel 'cluster' at 'gcomm://172.16.0.102,172.16.0.112': -110 (Connection timed out)
May 13 15:59:28 test mysqld[2397]: 2016-05-13 15:59:28 139843152635840 [ERROR] WSREP: gcs connect failed: Connection timed out
May 13 15:59:28 test mysqld[2397]: 2016-05-13 15:59:28 139843152635840 [ERROR] WSREP: wsrep::connect(gcomm://172.16.0.102,172.16.0.112) failed: 7
May 13 15:59:28 test mysqld[2397]: 2016-05-13 15:59:28 139843152635840 [ERROR] Aborting

The network configuration is private. I'm using 2x dedicated servers installed with Proxmox VE 4.0; the servers are in a vRack. The network is configured on the VPSes as:

node1: 172.16.0.102  // node 1 is on server 1
node2: 172.16.0.112  // node 2 is on server 2

They're able to ping each other on the private network.
Since MariaDB 10.1.8, systemd is the new init and it affects the way Galera is bootstrapped on RPM- and Debian-based Linux distributions (in my case Ubuntu 16.04). On previous versions you would use something like

service mysql start --wsrep-new-cluster

or

service mysqld bootstrap

but that doesn't work any more; it fails with:

[ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)

To fix this issue run:

galera_new_cluster

Note that you only need to run this script on the 'first' server. To test if it is running, enter mysql with

mysql -u [your mysql user] -p

and then run

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';

You should see something like:

+--------------+
| cluster size |
+--------------+
| 1 |
+--------------+

Just in case it is useful to anyone, this is my my.cnf (MariaDB 10.1.16):

[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://[first ip],[second ip]"
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

More details here: MariaDB systemd and galera_new_cluster; Galera Cluster System Variables.
MariaDB
37,212,127
14
How can i tell if a server I'm connecting to is Percona or MySQL or MariaDB? Is there any standard way of doing this? I'm currently using SHOW VERSION to test the server version, but I would also need to display the server name in the app I'm working on.
You can get specific information with:

SHOW VARIABLES LIKE '%vers%'

version and version_comment are very specific.
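In application code you can then branch on those two variables; MariaDB servers typically embed "MariaDB" in the version string itself, and forks usually identify themselves in version_comment. A hedged sketch (the helper and the sample strings below are illustrative values, not taken from any specific server):

```python
def server_flavor(version, version_comment=""):
    """Best-effort guess at the server flavour from SHOW VARIABLES output."""
    haystack = (version + " " + version_comment).lower()
    if "mariadb" in haystack:
        return "MariaDB"
    if "percona" in haystack:
        return "Percona"
    return "MySQL"

print(server_flavor("10.1.36-MariaDB", "MariaDB Server"))           # MariaDB
print(server_flavor("5.7.44-log", "MySQL Community Server (GPL)"))  # MySQL
print(server_flavor("5.7.30-33", "Percona Server (GPL)"))           # Percona
```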
MariaDB
37,317,869
14
I have Ubuntu 16.04 and MySQL 5.7.12-0ubuntu1.1. When I type:

sudo mysql -u root

I can log in to the mysql console, but when I type it without sudo:

mysql -u root

I get the error:

ERROR 1698 (28000): Access denied for user 'root'@'localhost'

My problem occurred when I installed and removed MariaDB. I remember that in PostgreSQL it matters which unix user logs in to the database, but how do I handle this in MySQL? I solved this problem by following: https://askubuntu.com/questions/766334/cant-login-as-mysql-user-root-from-normal-user-account-in-ubuntu-16-04
This problem seems to be primarily caused by the auth_socket plugin, which is now used by default if the root user doesn't have a password. (Formerly, the apt-get install process asked for a password for root, but it doesn't seem to do that anymore, so auth_socket gets enabled.)

For either query, first log in as root by using

sudo mysql

For MySQL or MariaDB >= 10.2:

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'test';

For others who may be using MariaDB < 10.2 (which doesn't support ALTER USER), you'll want to run these queries:

SET PASSWORD = PASSWORD('test');
update mysql.user set plugin = 'mysql_native_password' where User='root';
FLUSH PRIVILEGES;
MariaDB
38,098,505
14
I have a root user on MariaDB on Ubuntu 16.04. By default the root user is authenticated by the unix_socket authentication plugin. I can switch the authentication method to the password method by setting:

update mysql.user set plugin='' where user='root';

This works fine. But... is there a possibility to authenticate the root user by unix_socket (from a root shell) or by password (when connecting via localhost:3306)?
A reliable and straightforward way would be to create another super-user and use it when you want to connect by password:

CREATE USER admin@localhost IDENTIFIED BY 'password';
GRANT ALL ON *.* TO admin@localhost WITH GRANT OPTION;
-- etc
MariaDB
41,846,000
14
There is something that bothers me. I've tried to find one clear answer but no luck so far. I'm using Symfony3, Doctrine2 and MariaDB. Let's assume that I've created something like this in my entity:

/**
 * @ORM\Column(
 *     name="status",
 *     type="boolean",
 *     options={"default": 0}
 * )
 */
private $status;

Now, thanks to this, I have a field with a default value of 0 in the database:

`status` tinyint(1) NOT NULL DEFAULT '0',

But what's the point of having this when every time I try to save data into the database (saving, for example, only 1 out of 10 fields):

$story->setContent('Test Content');
$em = $this->getDoctrine()->getManager();
$em->persist($story);
$em->flush();

I get:

SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'status' cannot be null

Because, of course, the rest of the fields on the object are null. I can work around this by setting the default values in the constructor or by allowing null values in the DB. What if I don't want to do this? Is there any other way that I am missing here?

So what I would like to know:
- are setting a default value in the entity or allowing nulls in the DB the only ways to do this?
- is there something wrong with my logic here?
- is there any cooler and cleaner way of doing this?
Like @Cerad commented, you just need to initialize the property in your actual entity class:

/**
 * @ORM\Column(
 *     name="status",
 *     type="boolean",
 *     options={"default": 0}
 * )
 */
private $status = 0;
MariaDB
43,195,874
14
We currently use MySQL 5.7 and store passwords via the mysql-config-editor. It stores the login-path in an encrypted file, .mylogin.cnf. MariaDB does not support this functionality (and considers it a bad idea). So what is the MariaDB way of doing this? PostgreSQL offers ~/.pgpass for this purpose.
You can use an unencrypted options file. Create a new options file in your home directory, readable only by you, like this:

[client]
host='<your-db-host>'
port='<your-db-port>'
socket='<your-db-socket>'
database='<your-db-name>'
user='<your-db-user>'
password='<your-db-password>'

You can then use a --defaults-extra-file= option when running one of the clients. See Configuring MariaDB with Option Files. You can create a different config file for each login path you would have created.

The MariaDB team take the position that the obfuscated file provides only a false sense of security, so you're better off facing up to the fact that you're placing your password in your filesystem in plaintext. If you have your file permissions configured correctly, this can be an acceptable situation.
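A concrete sketch of the setup (the file name prod-db.cnf and the credentials below are placeholders). Note that --defaults-extra-file must come before any other option on the client command line, and the file should be readable only by its owner; the Python below just creates the file and locks down its permissions the way you would with chmod 600:

```python
import os
import stat

# Hypothetical options file; a real one would usually live in your home directory.
path = "prod-db.cnf"
content = (
    "[client]\n"
    "host=db.example.com\n"
    "port=3306\n"
    "user=appuser\n"
    "password=s3cret\n"
)
with open(path, "w") as f:
    f.write(content)

os.chmod(path, 0o600)  # owner read/write only, like `chmod 600 prod-db.cnf`

# The client would then be invoked as (first option on the command line!):
#   mysql --defaults-extra-file=prod-db.cnf
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600
```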
MariaDB
49,109,592
14
This is my model:

class Subscriber(models.Model):
    ...
    tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE, null=True)
    ...

This is the generated SQL, according to sqlmigrate (and to manual inspection of the database):

ALTER TABLE `myapp_subscriber`
  ADD CONSTRAINT `myapp_subscriber_tenant_id_b52815ee_fk_myapp_tenant_id`
  FOREIGN KEY (`tenant_id`) REFERENCES `myapp_tenant` (`id`);

I was expecting something like this:

CREATE TABLE child (
    id INT,
    parent_id INT,
    INDEX par_ind (parent_id),
    FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE
) ENGINE=INNODB;

With the ON DELETE CASCADE. MySQL (MariaDB actually) complains when I delete:

SQL Error (1451): Cannot delete or update a parent row: a foreign key constraint fails

Which makes sense, since there is no ON DELETE CASCADE clause. Why is Django 2.1.5 not honoring the ON DELETE CASCADE clause?
From the docs: on_delete doesn’t create a SQL constraint in the database. Support for database-level cascade options may be implemented later It will perform the cascade in Django itself, so if you delete a Tenant object using Django delete() your Subscriber object will also be deleted. But not if you do it in SQL.
MariaDB
54,751,466
14
I am building an Alpine-based image of a Django application with MariaDB and I can't figure out which dependency I should add to my Dockerfile so that my app can properly connect to the DB:

django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module. Did you install mysqlclient?

Well, I thought I did. From what I read in this article and this discussion, mariadb-dev in Alpine is the equivalent of default-libmysqlclient-dev in Debian-based systems. Furthermore, the mysql-client package in Alpine is merely a dummy package (containing mariadb-dev, mariadb-client, etc.).

Here is the Dockerfile:

# pull official base image
FROM python:3.7-alpine

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# set work directory
WORKDIR /usr/src/cms

# install mysqlclient
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add --no-cache mariadb-dev \
    && apk del build-deps

# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/cms/Pipfile
RUN pipenv install --skip-lock --system --dev

# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/cms/entrypoint.sh

# copy project
COPY . /usr/src/cms/

# run entrypoint.sh
ENTRYPOINT ["/usr/src/cms/entrypoint.sh"]

I tried to add mariadb-client, and to use mysql-client instead. I also tried to add RUN pip install django-mysql. Nothing seems to change. I successfully build Postgres Django apps without any problem but, when it comes to MySQL vs MariaDB and Debian vs Alpine, I feel confused. Any insight?
It seems you're missing the MySQLdb Python module, for which you should install the mysqlclient Python package: pip install mysqlclient. On Alpine, pip will build mysqlclient from source, so you'll need gcc and musl-dev for this setup step; hence you'll need to postpone apk del build-deps to after the Python modules are installed.

Fixed Dockerfile snippet:

RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add --no-cache mariadb-dev
...
RUN pip install mysqlclient
RUN apk del build-deps
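Putting the answer's advice together, the install step would look roughly like this (a sketch: the package set is taken from the question, and the key point is only that build-deps is removed after pip has compiled mysqlclient):

```dockerfile
FROM python:3.7-alpine

RUN apk update \
    # mariadb-dev stays installed: it provides the client library mysqlclient links against
    && apk add --no-cache mariadb-dev \
    # toolchain needed only while pip compiles mysqlclient from source
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && pip install mysqlclient \
    && apk del build-deps
```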
MariaDB
56,048,631
14
Problem

Suppose I have this table tab (fiddle available):

| g | a | b | v     |
---------------------
| 1 | 3 | 5 | foo   |
| 1 | 4 | 7 | bar   |
| 1 | 2 | 9 | baz   |
| 2 | 1 | 1 | dog   |
| 2 | 5 | 2 | cat   |
| 2 | 5 | 3 | horse |
| 2 | 3 | 8 | pig   |

I'm grouping rows by g, and for each group I want one value from column v. However, I don't want just any value: I want the value from the row with maximal a and, of all of those, the one with maximal b. In other words, my result should be:

| 1 | bar   |
| 2 | horse |

Current solution

I know of a query to achieve this:

SELECT grps.g,
       (SELECT v FROM tab
        WHERE g = grps.g
        ORDER BY a DESC, b DESC
        LIMIT 1) AS r
FROM (SELECT DISTINCT g FROM tab) grps

Question

But I consider this query rather ugly, mostly because it uses a dependent subquery, which feels like a real performance killer. So I wonder whether there is an easier solution to this problem.

Expected answers

The most likely answer I expect to this question would be some kind of add-on or patch for MySQL (or MariaDB) which provides a feature for this. But I'll welcome other useful inspirations as well. Anything which works without a dependent subquery would qualify as an answer. If your solution only works for a single ordering column, i.e. couldn't distinguish between cat and horse, feel free to suggest that answer as well, as I expect it to still be useful to the majority of use cases. For example, 100*a+b would be a likely way to order the above data by both columns while still using only a single expression. I have a few pretty hackish solutions in mind, and might add them after a while, but I'll first look and see whether some nice new ones pour in.

Benchmark results

As it is pretty hard to compare the various answers just by looking at them, I've run some benchmarks on them. This was run on my own desktop, using MySQL 5.1. The numbers won't compare to any other system, only to one another.
You probably should be doing your own tests with your real-life data if performance is crucial to your application. When new answers come in, I might add them to my script, and re-run all the tests.

100,000 items, 1,000 groups to choose from, InnoDB:

0.166s for MvG (from question)
0.520s for RichardTheKiwi
2.199s for xdazz
19.24s for Dems (sequential sub-queries)
48.72s for acatt

100,000 items, 50,000 groups to choose from, InnoDB:

0.356s for xdazz
0.640s for RichardTheKiwi
0.764s for MvG (from question)
51.50s for acatt
too long for Dems (sequential sub-queries)

100,000 items, 100 groups to choose from, InnoDB:

0.163s for MvG (from question)
0.523s for RichardTheKiwi
2.072s for Dems (sequential sub-queries)
17.78s for xdazz
49.85s for acatt

So it seems that my own solution so far isn't all that bad, even with the dependent subquery. Surprisingly, the solution by acatt, which uses a dependent subquery as well and which I would therefore have considered about the same, performs much worse. Probably something the MySQL optimizer can't cope with. The solution RichardTheKiwi proposed seems to have good overall performance as well. The other two solutions heavily depend on the structure of the data. With many small groups, xdazz's approach outperforms all others, whereas the solution by Dems performs best (though still not exceptionally well) for few large groups.
This way doesn't use a sub-query.

SELECT t1.g, t1.v
FROM tab t1
LEFT JOIN tab t2
  ON t1.g = t2.g
 AND (t1.a < t2.a OR (t1.a = t2.a AND t1.b < t2.b))
WHERE t2.g IS NULL

Explanation: The LEFT JOIN works on the basis that when (t1.a, t1.b) is at its maximum value within the group, there is no t2 row with a greater value, so the t2 columns will be NULL.
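As a sanity check of what that join computes, here is a small Python sketch (an illustration on the question's sample data, not part of the original answer) that picks, per group, the v from the row with the greatest (a, b):

```python
# Rows mirror the question's sample table: (g, a, b, v).
rows = [
    (1, 3, 5, "foo"), (1, 4, 7, "bar"), (1, 2, 9, "baz"),
    (2, 1, 1, "dog"), (2, 5, 2, "cat"), (2, 5, 3, "horse"), (2, 3, 8, "pig"),
]

def groupwise_max(rows):
    """For each group g, keep the v from the row with the greatest (a, b)."""
    best = {}
    for g, a, b, v in rows:
        # Tuples compare lexicographically -- a first, then b -- which is
        # the same tie-breaking as ORDER BY a DESC, b DESC.
        if g not in best or (a, b) > best[g][:2]:
            best[g] = (a, b, v)
    return {g: v for g, (a, b, v) in best.items()}

print(groupwise_max(rows))  # {1: 'bar', 2: 'horse'}
```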
MariaDB
12,726,549
13
I am totally new to databases. I would like to create a database; I am going to make a small project which will use a DB. I am going to use MariaDB as it is totally free for commercial use. The question is: can I use the MySQL Workbench program to create a database and then transform/change it to MariaDB?
From my experience -- Sure, you can use MySQL Workbench with MariaDB. However, I have tried basic functionalities only, like queries, schema design etc. Not sure about compatibility of advanced features.
MariaDB
22,616,861
13
I want to build and execute a query like this with jOOQ:

SELECT EXISTS( subquery )

For example:

SELECT EXISTS(SELECT 1 FROM icona_etiqueta WHERE pvp IS NULL AND unitat_venda = 'GRAMS')

How can I do it? Can it be done?
Found it. I was looking for a selectExists method and got confused by the DSL.exists() predicate constructor. There is a much more convenient fetchExists(subquery). My specific example is resolved like this:

create.fetchExists(
    create.selectOne()
          .from(ICONA_ETIQUETA)
          .where(ICONA_ETIQUETA.PVP.isNull(),
                 ICONA_ETIQUETA.UNITAT_VENDA.eq("GRAMS"))
);

which directly returns a boolean.
MariaDB
42,221,544
13
CREATE TABLE `files` (
  `did` int(10) unsigned NOT NULL DEFAULT '0',
  `filename` varbinary(200) NOT NULL,
  `ext` varbinary(5) DEFAULT NULL,
  `fsize` double DEFAULT NULL,
  `filetime` datetime DEFAULT NULL,
  PRIMARY KEY (`did`,`filename`),
  KEY `fe` (`filetime`,`ext`),  -- This?
  KEY `ef` (`ext`,`filetime`)   -- or This?
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;

There are a million rows in the table. The filetimes are mostly distinct. There are a finite number of ext values. So, filetime has a high cardinality and ext has a much lower cardinality. The query involves both ext and filetime:

WHERE ext = '...'
  AND filetime BETWEEN ... AND ...

Which of those two indexes is better? And why?
First, let's try FORCE INDEX to pick either ef or fe. The timings are too short to get a clear picture of which is faster, but EXPLAIN shows a difference:

Forcing the range on filetime first. (Note: The order in WHERE has no impact.)

mysql> EXPLAIN SELECT COUNT(*), AVG(fsize) FROM files FORCE INDEX(fe) WHERE ext = 'gif' AND filetime >= '2015-01-01' AND filetime < '2015-01-01' + INTERVAL 1 MONTH;
+----+-------------+-------+-------+---------------+------+---------+------+-------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+-------+-----------------------+
| 1 | SIMPLE | files | range | fe | fe | 14 | NULL | 16684 | Using index condition |
+----+-------------+-------+-------+---------------+------+---------+------+-------+-----------------------+

Forcing the low-cardinality ext first:

mysql> EXPLAIN SELECT COUNT(*), AVG(fsize) FROM files FORCE INDEX(ef) WHERE ext = 'gif' AND filetime >= '2015-01-01' AND filetime < '2015-01-01' + INTERVAL 1 MONTH;
+----+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+
| 1 | SIMPLE | files | range | ef | ef | 14 | NULL | 538 | Using index condition |
+----+-------------+-------+-------+---------------+------+---------+------+------+-----------------------+

Clearly, the rows column says ef is better. But let's check with the Optimizer trace. The output is rather bulky; I'll show only the interesting parts. No FORCE is needed; the trace will show both options then pick the better.

... "potential_range_indices": [ ...
{ "index": "fe", "usable": true, "key_parts": [ "filetime", "ext", "did", "filename" ] }, { "index": "ef", "usable": true, "key_parts": [ "ext", "filetime", "did", "filename" ] } ], ... "analyzing_range_alternatives": { "range_scan_alternatives": [ { "index": "fe", "ranges": [ "2015-01-01 00:00:00 <= filetime < 2015-02-01 00:00:00" ], "index_dives_for_eq_ranges": true, "rowid_ordered": false, "using_mrr": false, "index_only": false, "rows": 16684, "cost": 20022, <-- Here's the critical number "chosen": true }, { "index": "ef", "ranges": [ "gif <= ext <= gif AND 2015-01-01 00:00:00 <= filetime < 2015-02-01 00:00:00" ], "index_dives_for_eq_ranges": true, "rowid_ordered": false, "using_mrr": false, "index_only": false, "rows": 538, "cost": 646.61, <-- Here's the critical number "chosen": true } ], ... "attached_conditions_computation": [ { "access_type_changed": { "table": "`files`", "index": "ef", "old_type": "ref", "new_type": "range", "cause": "uses_more_keyparts" <-- Also interesting } } With fe (range column first), the range could be used, but it estimated scanning through 16684 rows fishing for ext='gif'. With ef (low cardinality ext first), it could use both columns of the index and drill down more efficiently in the BTree. Then it found an estimated 538 rows, all of which are useful for the query -- no further filtering needed. Conclusions: INDEX(filetime, ext) used only the first column. INDEX(ext, filetime) used both columns. Put columns involved in = tests first in the index regardless of cardinality. The query plan won't go beyond the first 'range' column. "Cardinality" is irrelevant for composite indexes and this type of query. ("Using index condition" means that the Storage Engine (InnoDB) will use columns of the index beyond the one used for filtering.)
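The effect is easy to simulate outside MySQL. The sketch below (made-up data, with Python's bisect standing in for a B-tree lookup; not from the original answer) counts how many index entries each column order has to touch for a query like ext = 'gif' AND filetime BETWEEN lo AND hi:

```python
import bisect
import datetime as dt

# Hypothetical index entries: one row per hour, ext cycling over 4 values.
exts = ["gif", "jpg", "png", "txt"]
base = dt.datetime(2015, 1, 1)
rows = [(base + dt.timedelta(hours=i), exts[i % 4]) for i in range(1000)]

lo, hi = dt.datetime(2015, 1, 1), dt.datetime(2015, 1, 8)

# Index (filetime, ext): the range column comes first, so the scan must
# walk every entry in the date range and filter ext afterwards.
fe = sorted((t, e) for t, e in rows)
start = bisect.bisect_left(fe, (lo,))
end = bisect.bisect_left(fe, (hi,))
touched_fe = end - start
matches = sum(1 for t, e in fe[start:end] if e == "gif")

# Index (ext, filetime): equality on ext first, so the scan drills straight
# down to the exact (ext, filetime-range) slice -- every entry is a match.
ef = sorted((e, t) for t, e in rows)
start = bisect.bisect_left(ef, ("gif", lo))
end = bisect.bisect_left(ef, ("gif", hi))
touched_ef = end - start

print(touched_fe, touched_ef, matches)  # 168 42 42
```

The (ext, filetime) order touches only the 42 matching entries, while (filetime, ext) touches all 168 entries in the date range, mirroring the 16684-vs-538 row estimates above.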
MariaDB
50,239,658
13
I just installed MariaDB 10.1.29 on Ubuntu 18.04. From the command line I can connect using sudo:

sudo mysql -u root -p

But not without sudo. Also, if I try to connect to the database through DBeaver, I get:

Could not connect: Access denied for user 'root'@'localhost'

even though the credentials are correct. I tried installing MySQL 5.7, but I'm experiencing the exact same issue there as well. What am I missing?
As it turns out, this is expected behaviour for MariaDB and MySQL. To overcome the issue, it is advisable to create a separate user and grant access to all databases created. Here is a step by step guide on how to create databases using the command line and grant permissions to the user of your choice. Log in to MariaDB/MySQL $ sudo mysql -u root -p Create a database mysql> CREATE DATABASE `database_name`; Create a user mysql> CREATE USER 'user_name' IDENTIFIED BY 'a_secure_password'; Grant user permissions mysql> GRANT ALL PRIVILEGES ON database_name.* TO 'user_name'@'%' WITH GRANT OPTION; Apply changes mysql> FLUSH PRIVILEGES;
MariaDB
50,453,078
13
So I have read a few posts on here but I can't seem to get this working on MySQL. Pretty much I have a count of records with an itemid that I want to update into my items table, matched on the itemid. This is what I have tried:

Update items SET items.popularity = countitems.countofscriptiD
FROM items
INNER JOIN (SELECT Count(scripts.ScriptID) AS CountOfScriptID, scripts.ItemID
            FROM scripts
            GROUP BY scripts.ItemID) as countitems
ON items.itemid = countitems.itemid

Which returns a MySQL error:

1064 - You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'FROM items INNER JOIN (SELECT Count(scripts.ScriptID) AS CountOfScriptID, scri' at line 3

If I change this to a SELECT query it works fine, either selecting from items or the count query; however, the update statement is failing! Any advice would be great; from what I have read I can't see where I'm going wrong with this.
The right way to do this is to perform the join of the tables before the SET clause:

UPDATE items
INNER JOIN (SELECT Count(scripts.ScriptID) AS CountOfScriptID, scripts.ItemID
            FROM scripts
            GROUP BY scripts.ItemID) as countitems
ON items.itemid = countitems.itemid
SET items.popularity = countitems.CountOfScriptID

See https://dev.mysql.com/doc/refman/8.0/en/update.html
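For readers who want to see what the statement computes, here is the same aggregate-then-assign logic sketched in Python on hypothetical stand-in data (the dicts below are invented for illustration, not taken from the question's schema):

```python
from collections import Counter

# Hypothetical stand-ins for the two tables.
items = {10: {"popularity": 0}, 20: {"popularity": 0}, 30: {"popularity": 0}}
scripts = [  # each script row carries the item it belongs to
    {"script_id": 1, "item_id": 10},
    {"script_id": 2, "item_id": 10},
    {"script_id": 3, "item_id": 20},
]

# The derived table: COUNT(ScriptID) ... GROUP BY ItemID.
counts = Counter(row["item_id"] for row in scripts)

# The UPDATE ... INNER JOIN: only items present in the derived table change.
for item_id, n in counts.items():
    if item_id in items:
        items[item_id]["popularity"] = n

print(items)  # item 30 keeps its old popularity: no matching scripts
```

Note the INNER JOIN semantics: item 30 has no scripts, so it is untouched rather than being set to zero.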
MariaDB
51,977,955
13
I have just installed MariaDB via homebrew on my Mac. At the end of the installation I got the following error: Warning: The post-install step did not complete successfully You can try again using `brew postinstall mariadb` If I run brew postinstall mariadb I get: ==> /usr/local/Cellar/mariadb/5.5.34/bin/mysql_install_db --verbose --user=andrew --basedir=/usr/loca MariaDB is hosted on launchpad; You can find the latest source and email lists at http://launchpad.net/maria Please check all of the above before mailing us! And remember, if you do mail us, you should use the /usr/local/Cellar/mariadb/5.5.34/scripts/mysqlbug script! READ THIS: https://github.com/mxcl/homebrew/wiki/troubleshooting Which isn't helpful! The tutorial I was following told me to run unset TEMPDIR, then mysql_install_db --user=mysql --basedir=$(brew --prefix mariadb); running those results in the following: /usr/local/opt/mariadb/bin/my_print_defaults: Can't read dir of '/usr/local/etc/my.cnf.d' (Errcode: 2) Fatal error in defaults handling. Program aborted chown: ./data: Operation not permitted Cannot change ownership of the database directories to the 'mysql' user. Check that you have the necessary permissions and try again. I suspect the problem has something to do with the /usr/local/etc/my.cnf.d folder. I've seen this referred to in a couple of things I've tried, but it doesn't exist on my machine. I have tried a few different mysql_install_db commands I've found in other tutorials, but they all throw up a (different) error message. Thanks for any help!
Just open the config file at /usr/local/etc/my.cnf with your editor and comment out the following line: !includedir /usr/local/etc/my.cnf.d
MariaDB
20,448,822
12
I have a column in this database with a space in its name, which I want to change.

ALTER TABLE . CHANGE COLUMN `Anzahl Personen` AnzahlPersonen int(11);

After using this line in the command line, the output is as follows:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'CHANGE COLUMN `Anzahl Personen` AnzahlPersonen int(11)' at line 1

Yeah, I have no idea what I am doing wrong.
If you are using a dot (.) instead of the table name, that is why you get the error. You have to specify the table name:

ALTER TABLE `table_name` CHANGE COLUMN `Anzahl Personen` AnzahlPersonen int(11);
MariaDB
28,507,987
12
I want to write a simple Python application and put it in a Docker container with a Dockerfile. My Dockerfile is:

FROM ubuntu:saucy

# Install required packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb

# Add our python app code to the image
RUN mkdir -p /app
ADD . /app
WORKDIR /app

# Set the default command to execute
CMD ["python", "main.py"]

In my Python application I only want to connect to the database. main.py looks something like this:

import MySQLdb as db

connection = db.connect(
    host='localhost',
    port=3306,
    user='root',
    passwd='password',
)

When I built the docker image with:

docker build -t myapp .

and ran the docker image with:

docker run -i myapp

I got the error:

_mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")

What is the problem?
The problem is that you've never started the database - you need to explicitly start services in most Docker images. But if you want to run two processes in Docker (the DB and your python program), things get a little more complex. You either have to use a process manager like supervisor, or be a bit cleverer in your start-up script. To see what I mean, create the following script, and call it cmd.sh:

#!/bin/bash
mysqld &
python main.py

Add it to the Dockerfile:

FROM ubuntu:saucy

# Install required packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb

# Add our python app code to the image
RUN mkdir -p /app
ADD . /app
WORKDIR /app

# Set the default command to execute
COPY cmd.sh /cmd.sh
RUN chmod +x /cmd.sh
CMD /cmd.sh

Now build and try again. (Apologies if this doesn't work, it's off the top of my head and I haven't tested it). Note that this is not a good solution; mysql will not be getting signals proxied to it, so probably won't shut down properly when the container stops. You could fix this by using a process manager like supervisor, but the easiest and best solution is to use separate containers. You can find stock containers for mysql and also for python, which would save you a lot of trouble. To do this:

Take the mysql installation stuff out of the Dockerfile
Change localhost in your python code to mysql or whatever you want to call your MySQL container.
Start a MySQL container with something like docker run -d --name mysql mysql
Start your container and link to the mysql container, e.g.: docker run --link mysql:mysql myapp
MariaDB
29,420,870
12
We are trying to upgrade to the latest SonarQube 5.5. We have MariaDB 10.1 (latest), and until now we had no problems with SonarQube. Now, with the upgrade, SonarQube will not boot. It says:

Unsupported mysql version: 5.5. Minimal supported version is 5.6.

Is there any trick we can use to make SonarQube "think" we are using MySQL 5.6?
You could change the MINIMAL_SUPPORTED_DB_VERSIONS member in SonarQube's class https://github.com/SonarSource/sonarqube/blob/master/sonar-db/src/main/java/org/sonar/db/DatabaseChecker.java

private static final Map<String, Version> MINIMAL_SUPPORTED_DB_VERSIONS = ImmutableMap.of(
  // MsSQL 2008 is 10.x
  // MsSQL 2012 is 11.x
  // MsSQL 2014 is 12.x
  // https://support.microsoft.com/en-us/kb/321185
  MsSql.ID, Version.create(10, 0, 0),
  MySql.ID, Version.create(5, 6, 0),
  Oracle.ID, Version.create(11, 0, 0),
  PostgreSql.ID, Version.create(8, 0, 0)
);

and build the project again. But if they have that requirement, it's possible that after the change not everything is going to work fine.
MariaDB
37,026,631
12
I'm trying to connect to an old MySQL 3.23 server from an Ubuntu 16 client with UnixODBC and pyodbc 3.07. I've tried three (3) versions of MySQL Connector/ODBC and two (2) from MariaDB:

MySQL-ODBC 5.3.9
Supports only the new mysql authentication method. Therefore it can not connect.

MySQL-ODBC 5.1.13
Has a switch for the authentication method but tells me on pyodbc.connect(dsn):
[MySQL][ODBC 5.1 Driver]Driver does not support server versions under 4.1.1

MySQL-ODBC 3.51
Has two issues:
Fails with [MySQL][ODBC 3.51 Driver]Transactions are not enabled (4000) (SQLSetConnnectAttr(SQL_ATTR_AUTOCOMMIT)) as pyodbc sets autocommit to false as a default.
Gives me a connection when I connect with pyodbc.connect(dsn, autocommit=True). The connection gives me a cursor, but all cursor.execute(sql) throw the exception ('HY000', 'The driver did not supply an error!').

Testing the connection with isql from the shell via isql -v [dsn] gives me a session but fails on all statements with [ISQL]ERROR: Could not SQLExecute. So this seems to be a unixodbc problem.

I installed mysql-client. But the program mysql fails to connect to the server. mariadb-client can connect to the database and even execute statements. That looks more promising.

I downloaded the MariaDB ODBC-Driver 3.0.2. Using that driver with isql returns the error: [S1000][unixODBC][ma-3.0.2]Plugin old_password could not be loaded: lib/mariadb/plugin/old_password.so: cannot open shared object file: No such file or directory. That is a response one could work with. There is an ODBC option PLUGIN_DIR, but I don't know where to get the plugin.

MariaDB ODBC-Driver 2.0.13 gives me ('HY000', "[HY000] [unixODBC][ma-2.0.13]You have an error in your SQL syntax near 'SQL_AUTO_IS_NULL=0' at line 1 (1064) (SQLDriverConnect)") on connect. As there seems to be no option to change this, dead end here.

I would like to know if there is a way to access this old MySQL via unixodbc/pyodbc?
Or does somebody know where to get the plugin old_password.so for MariaDB? The mariadb-client installed via apt-get can connect, so there has to be a way.
I spent a day or so poking at this, and don't think it's possible without significant alterations to driver code, or an extremely difficult-to-create build environment for old versions. I am putting this in an answer so that other people don't fall down the same rabbit hole I did (or, better yet, so other people can pick up where I left off and actually fix the problem!) ...and it didn't fit in a comment. This will be a bit of a tome, sorry.

Overview

I was able to reproduce each of the error conditions you mentioned in your post (thanks for a thorough and excellent question!) using a pair of Ubuntu 16.04 containers, the MySQL 3.23 download available from Oracle, and all of the client libs you mentioned, and a few others. Below are what I found while trying to find additional solutions in each place you mentioned, followed by some "next steps"-type info and some proselytizing about the moral of the story. All of these tests were conducted with the latest versions of Python 2, UnixODBC, and pyodbc (via pip) available for a stock Ubuntu 16.04 Docker container as of 26/11/2017. All the URLs used are linked, but, if history is any indication, they may die as time goes on, considering that a lot of this software is verging on two decades old. I am also happy to post any/all of my shellscripts/Dockerfiles/modified driver sources if you like; just ping me in the comments.

old_password.so and MariaDB Connector/ODBC 3.0.2

You were right that this was the troubleshooting option with the most potential. Here's what I did: First, I installed the Connector/ODBC 3.0.2 binary and attempted to connect to it via Python.
I hit the same error you did after configuring my ODBC .ini files for a data source named "maria", namely:

> pyodbc.connect('DRIVER={maria};Server=mysql;Database=mysql;User=admin;Password=admin')
pyodbc.Error: ('HY000', u'[HY000] [unixODBC][ma-3.0.2]Plugin old_password could not be loaded: lib/mariadb/plugin/old_password.so: cannot open shared object file: No such file or directory (2059) (SQLDriverConnect)')

The ODBC code attempts, when presented with a MySQL server announcing an authentication protocol old enough, to load compiled plugins built for the Connector/C MariaDB driver. strace-ing the output of the ODBC connect attempts determined this. old_password.so turns out to be a component of the Connector/C MariaDB driver, but is not a library included with that driver's binary releases. Interesting. It turns out that there are a bunch of plugin modules similar to old_password included with the source of the Connector/C driver. I downloaded the Connector/C 3.0.2 sources and opened up the documentation, sources, and build system for those "auth"-type plugins, which were distributed as .so files, to see what I could find. I found that various components of the Connector/C can be compiled either as plugins "statically" linked into the main driver library, or as dynamic libraries themselves. I say "statically" in quotes, because the build process for the C driver creates both a static (.a) and dynamic (.so) version of mariadbclient, but if a specific plugin is declared as static in the build system, that plugin's code is statically included in both mariadbclient artifacts. The sources for the old_password.so file appeared to be in a single small source file at plugins/auth/old_password.c. It seemed that it would be possible to change the build system (CMake) to generate a dynamic library for the old_password plugin. In the Connector/C sources there is a cmake/plugins.cmake file which acts as a "registry" for all plugins.
It contains a cmake macro REGISTER_PLUGIN which takes a STATIC or DYNAMIC argument. I searched in that file for old_password and found the following line:

REGISTER_PLUGIN("AUTH_OLDPASSWORD" "${CC_SOURCE_DIR}/plugins/auth/old_password.c" "old_password_client_plugin" "STATIC" "" 0)

That looked promising. Modelling off of similar lines that did generate .so files for their plugin, I changed that line to the following and ran the build:

REGISTER_PLUGIN("AUTH_OLDPASSWORD" "${CC_SOURCE_DIR}/plugins/auth/old_password.c" "old_password_client_plugin" "DYNAMIC" "old_password" 1)

The build failed a few times due to missing dependencies. I had to install a few -dev packages and other tools, but in the end I was able to build cleanly (for just the plugins, it turns out you don't need CURL or OpenSSL). Sure enough, a file called mysql_old_password.so was created in the plugins/auth directory as a build artifact.

Now, I needed my Python code to find that plugin; it still gave me the error about failing to find lib/mariadb/plugin/old_password.so. I supplied the PLUGIN_DIR argument you mentioned in your question to the ODBC connection string, renamed my compiled mysql_old_password.so to old_password.so, and ran the following code ... and got a new error! Progress!

conn = pyodbc.connect('DRIVER={maria};Server=mysql;Database=mysql;User=admin;Password=admin;PLUGIN_DIR=/home/mysql/zclient/mdb-c/plugins/auth')
pyodbc.Error: ('HY000', u'[HY000] [unixODBC][ma-3.0.2]Plugin old_password could not be loaded: /home/mysql/zclient/mdb-c/plugins/auth/old_password.so: undefined symbol: ma_scramble_323 (2059) (SQLDriverConnect)')

Looks like the compiled artifact is broken, missing the ma_scramble_323 function definition. Since the plugin is dynamically loaded at runtime, the program will still start, but when it tries to dload the plugin it'll blow up.
Worse yet, that function looks like it's the main password-hashing entry point for the "old" MySQL protocol authentication mechanism, so I couldn't just ditch it. In the Connector/C sources, I found the declaration for that function and the header (mariadb_com.h), but including that in various places in the old_password.c source file didn't seem to do the trick. My hunch is that this is the interaction of two unfortunate behaviors. First, the plugins compiled by the Connector/C build system are set up assuming that they will only be linked by the Connector/C plugin, or something similar. This means that the plugins themselves don't link to the "common" Connector/C functionality when they are compiled, since that stuff should already be available in the thing loading the plugin. Since we're using Connector/ODBC, and not Connector/C, those common functions aren't present or accessible. Now, building Connector/ODBC from source requires Connector/C, so it might be possible to compile a new Connector/ODBC library in such a way that it includes the right functions, but I didn't want to start down that rabbit hole. Second, even when told to build the old_password plugin in standalone (don't compile anything else) mode, CMake's dependency analysis did not discover or link the files that described the ma_scramble_323. That might be a CMake problem, but it is probably because the build system is not configured with this use case in mind as mentioned above. Here, I got very lucky. The ma_scramble_323 function is defined in libmariadb/ma_password.c, which is a very small, simple source file with no significant dependencies on any other libraries in the Connector/C project that were not already depended-on by the old_password plugin. I did "poor man's linking" (yuck) and just copied the sources of the ma_scramble_323 function into the old_password.c file. Those functions called other functions in the ma_password.c file, so I copied those too.
Again, this was only easy (or an option at all) since the ma_password.c file was so simple. If it had itself had dependencies or had been more complex, I would have had to stop, drop, and learn advanced CMake-fu to resolve the issue the "right" way. I am absolutely sure there is a better way to do this. (Aside) at this point I had to cron up a regular run of mysqladmin flush-hosts on my DB server since my testing was causing so many failed attempts that I had to do this frequently. There's probably a better way around this, too, but I don't know it and I know cron. With the newly-"inlined" sources, the mysql_old_password.so library compiled, I renamed it, and ran my test script again. This time, I got:

pyodbc.Error: ('HY000', u'[HY000] [unixODBC][ma-3.0.2]Plugin old_password could not be loaded: name mismatch (2059) (SQLDriverConnect)')

I figured this had something to do with the fact I was renaming the file so that ODBC could find it (it's looking for old_password.so not mysql_old_password.so). I tried the shotgun approach. In the plugins/auth/CMakeLists.txt build system config, I replaced all instances of mysql_old_password with old_password and compiled. Compilation succeeded, but it still didn't work. It turns out that the plugin sources themselves (old_password.c in this case) have a struct declaration at the top that announces their name, and this one announced its name as mysql_old_password. This may well be a pre-existing issue (i.e. this has never worked), and I started to feel a bit of a chill: when you're building code that feels like nobody has built it or tested it in a given configuration before, your odds of success are not good. Regardless, I did the same s/mysql_old_password/old_password/ on the source file as well, and compiled. This time it generated an artifact with the right old_password.so name.
I ran my test script again and got:

conn = pyodbc.connect('DRIVER={maria};Server=mysql;Database=mysql;User=admin;Password=admin;PLUGIN_DIR=/home/mysql/zclient/mdb-c/plugins/auth')
pyodbc.Error: ('HY000', u"[HY000] [unixODBC][ma-3.0.2]Access denied for user: 'admin@hostname' (Using password: NO) (1045) (SQLDriverConnect)")

This was odd. I had the mysql commandline client that came with the 3.23 server installed (via tarball, not in the system library path) on my client-testing box as well, and it could connect fine with those credentials (I couldn't test with isql because I couldn't get it to properly use PLUGIN_DIR and couldn't figure out where it wanted me to put the plugins; it wasn't in the system /usr directory, nor the relative ones). I could not figure out a way through this. I had set up my MySQL server with all of the usual "ultra-promiscuous, testing only" GRANTs, for localhost and %, for every database, for the admin user and eponymous password. I gave up and set the password to empty/null, disabling password auth, making sure I could still log in via mysql on the command line, and trying one last time:

pyodbc.Error: ('HY000', u'[HY000] [unixODBC][ma-3.0.2]Error in server handshake (2012) (SQLDriverConnect)')

This proved to be the death knell. Researching this error, I found this GitHub issue, in which folks seemed pretty convinced that this represented a fundamental client/server protocol incompatibility. At this point I gave up on the old_password.so approach. It seems that the 3.0.2 version of the MariaDB driver code (C or ODBC) doesn't speak an old enough dialect of MySQL's protocol to work, though there are probably a lot of possible fixes I missed in that process.

Other paths tried

I tried a few other things you mentioned in your question which I'll briefly go over here: As you probably found, trying to disable SQL_AUTO_IS_NULL behavior in the MariaDB 2.0 ODBC driver family doesn't work well.
This bug thread and the ODBC Connector parameters list have a couple of suggestions on how to disable the setting of that field (Option=8388608 is obvious and clear, right?), but none of those attempts to forcibly disable or enable the flag changed the behavior, whether they were in the connection string or the ODBC .ini files. The MySQL archive site has old versions of the ODBC connector available. Unfortunately, all of their compiled versions are for 32-bit Linux, which I don't have. I tried building from source, and it was a massive chore even to get the toolchain configured. At the point where I had to hand-install system identification files from 1999 I knew it was probably a lost cause, but I got all the deps and ancient versions installed and tried to compile it. The sheer number and variety of compile errors caused me to abandon this approach (C standard mismatches, plus a lack of compatibility with what appeared to be nearly every part of UnixODBC). It's totally possible there are simple fixes to these issues that I missed; I am not a C coder or old-linux-build-system expert. I tried some third party MySQL ODBC connectors, which didn't work; same errors as with the 5.* family. I compiled the 2.50.39 version of the Connector/ODBC library (only the sources were available on the archive). To do that, I first compiled the libmysqlclient.so.10 files for the 3.23 version of the server. This required altering the sources of the 3.23 server to solve some errno related problems (remove the #define clause for extern int errno in my_sys.h), copying the libtool OS definition files to various locations in the source directory (/usr/share/libtool/build-aux/config.{guess,sub} got copied to the ., mit-pthreads, and mit-pthreads/config/, if it matters). After that, I was able to compile and build the libmysqlclient libraries with the --with-mit-threads --without-server --without-docs --without-bench configure switches. 
Compilation failed with several inscrutable errors while evaluating the macros for the mysql client program after that, but the .so files for libmysqlclient had already been generated, so I grabbed them and moved on. After the libmysqlclient.so.10 library was compiled, I built the 2.50.39 version of Connector/ODBC from the archive. That required changing some of the sources from the main MySQL include files (removing references to asm/atomic.h), and the same system-identification libtool hack as the other libraries. It failed to find the iodbc libs (installed on Ubuntu via the libiodbc2-dev package) since they are now in /usr/include rather than /usr/local/include. I finally configured it with the switches --with-mysql-includes=$path_to_3.23_mysql_binary_dir/include --with-mysql-libs=$path_to_compiled_libmysqlclient.so.10_files_from_mysql_server_3.23_sources --with-iodbc-includes=/usr/include/iodbc, and it built without trouble other than the aforementioned atomic.h issue. However, after all that, connecting via my newly-compiled libmyodbc.so caused a segfault in Python/UnixODBC. Valgrind, gdb, and other tools were not useful in determining why; perhaps someone better versed in debugging compiled library interoperability issues could solve this problem. The MySQL archive has old, binary RPM versions of the Connector/ODBC. They are all 32-bit, and almost all modern Linuxes are 64-bit. I tried shimming those files by installing the i386 architecture and required libraries. The 64-bit Python/UnixODBC was unable to load the myodbc plugins successfully, returning the generic "file not found" error, which I eventually traced back to a failed call to dlopen. Libtool's dlopen wrappers (used by UnixODBC) are considered not-very-debuggable by most people, and after a significant hassle my rudimentary Valgrind tricks seemed to indicate, as I expected, that it wasn't possible to dynamically load an architecture-incompatible (i386 vs x86-64) ODBC backend. 
Solutions/Remaining Options

In general it's probably going to be easier to rewrite the code on your end. For example, you could make a Python module that wraps a legacy Python non-ODBC MySQL driver (as @FlipperPA suggested in the comments on the question), hack 'enough' of the pyodbc interface onto that module that you don't have to refactor too much of your code that calls it, and thoroughly test before deploying. I know that sucks and is risky, but it is probably your best bet. In writing such a module, you may be able to make use of some of the internal code in pyodbc that handles generic ODBC syntax/et cetera. You could even develop a "fake" ODBC backend for pyodbc that just called a non-ODBC Python MySQL driver, but I suspect that would be difficult, since pyodbc's backend-pluggability seems primarily geared towards compiled libraries rather than "dummy" shim code. I'm not an expert in this stuff, so there totally might be solutions I've missed!

There are a few other possibilities I considered and gave up on:

- You could file a bug with the MariaDB folks and it might be fixed. I don't have a good sense of whether the protocol error I ended up with is "this is fundamentally incompatible at every level" or "the auth system just needs a tweak then everything will work". It might be worth a shot.
- Since there are 32-bit RPMs available of the version 2.50 Connector/ODBC code (they won't load into a 64-bit Python/UnixODBC environment), you could conceivably convert your whole stack (or even operating system distribution) to 32-bit code. However, if you use any non-common compiled stuff, this would likely be a significant hassle. While Ubuntu/Debian especially is very good about making packages available on old architectures, this still might be tricky. Even if you got everything converted, some behaviors might change, and the old 32-bit characteristics would be an ongoing piece of strangeness for anyone working on your app.
And that's only if the 2.50 driver works when accessed from a 32-bit runtime; there may be other issues after that which crop up. I'd only recommend trying this if the maintenance burden for all of your client code is likely to be very low in the future (if the project is small or unlikely to change).

Morals of the story

Software rots away bloody fast. Unless a project is continually committed to doing work to maintain backwards compatibility, things will quickly stop working, especially in web software. It's not that the product itself breaks, it's that the universe changes out from under it in a million little ways. Unless someone is enough of a generalist and has been around long enough to know all those little changes and how to reverse them, it's very hard to move everything backwards in time/revisions to a place where things "just work".

If you get binaries for something, even if it's something supposedly "common" like a MySQL driver, keep them around. Ideally, share them with the internet. If you have sources for something, rigorously document the entire list of dependencies/toolchain they need, and document it for humans. Assume that the tools needed to read programmatic dependency lists for e.g. autotools will themselves go obsolete. Nothing is too "obvious" to document; not architecture, kernel ABI, libc behavior--nothing. Now that we have "in a box on any kernel" things like Docker, you may be able to store more of your dependencies in a programmatic way, but don't count on it.
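For what it's worth, the "fake pyodbc interface" shim mentioned above can be quite small. Here is a minimal sketch of the pattern; it delegates to Python's built-in sqlite3 driver purely so the example runs anywhere (a real shim would wrap a legacy MySQL driver instead), and all class and function names here are invented for illustration, not part of pyodbc:

```python
import sqlite3


class ShimConnection:
    """Minimal stand-in exposing the handful of pyodbc-style calls a
    typical app uses, delegating to an underlying DB-API driver."""

    def __init__(self, dbapi_connection):
        self._conn = dbapi_connection

    def cursor(self):
        return ShimCursor(self._conn.cursor())

    def commit(self):
        self._conn.commit()

    def close(self):
        self._conn.close()


class ShimCursor:
    def __init__(self, dbapi_cursor):
        self._cur = dbapi_cursor

    def execute(self, sql, *params):
        # pyodbc passes query parameters positionally after the SQL string,
        # so collect them into the tuple the DB-API driver expects
        self._cur.execute(sql, params)
        return self

    def fetchall(self):
        return self._cur.fetchall()


def connect(connection_string=None):
    # A real shim would parse the ODBC connection string and call the
    # legacy MySQL driver's connect(); sqlite3 in-memory is used here
    # only so the sketch is self-contained and runnable.
    return ShimConnection(sqlite3.connect(":memory:"))
```

Calling code written against pyodbc (connect, cursor, execute, fetchall) can then run unchanged against the shim, which is the point of the exercise.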
MariaDB
47,350,382
12
I have developed a website using PhalconPHP. The website works perfectly fine on my local computer with the following specifications:

PHP Version 7.0.22
Apache/2.4.18
PhalconPHP 3.3.1

and also on my previous server (with DirectAdmin):

PHP Version 5.6.26
Apache 2
PhalconPHP 3.0.1

But recently I have migrated to a new VPS, with cPanel:

CENTOS 7.4 vmware [server]
cPanel v68.0.30
PHP Version 5.6.34 (multiple versions available, this one selected by myself)
PhalconPHP 3.2.2

On the new VPS my website always gives me Error 500. In my Apache error log file:

[cgi:error] End of script output before headers: ea-php70, referer: http://mywebsitedomain.net

What I suspect is the new database system: the new one is not MySQL, it is MariaDB 10.1. I tried to downgrade to MySQL 5.6, but WHM says there is no way I could downgrade to lower versions.

This is my config file:

[database]
adapter = Mysql
host = localhost
username = root
password = XXXXXXXXXXXX
dbname = XXXXXXXXXXXX
charset = utf8

and my Services.php:

protected function initDb()
{
    $config = $this->get('config')->get('database')->toArray();

    $dbClass = 'Phalcon\Db\Adapter\Pdo\\' . $config['adapter'];
    unset($config['adapter']);

    return new $dbClass($config);
}

And in my controllers... for example this code throws Error 500:

$this->view->files = Patients::query()->orderBy("id ASC")->execute();

but changing id to fname fixes the problem:

$this->view->files = Patients::query()->orderBy("fname ASC")->execute();

or even the following code throws Error 500:

$user = Users::findFirst(array(
    "conditions" => "id = :id:",
    "bind" => array("id" => $this->session->get("userID"))
));

Is there a problem with the compatibility of PhalconPHP and MariaDB?
MariaDB was built to be mostly compatible with MySQL clients, it's unlikely to be the reason for your problems. If you're still concerned, you can switch from MariaDB to MySQL (and vice versa) by dumping (exporting) your tables, switching over, and importing them again. More likely, the error line you're showing indicates that your new server is actually running PHP7 (ea-php70) and not PHP5.6 as you thought you selected. The error End of script output before headers means the CGI script (in this case PHP7 itself) did not produce any HTTP headers before terminating. I suspect that your version of PhalconPHP is incompatible with PHP7 and therefore just crashes immediately. If cPanel doesn't let you properly configure your infrastructure you likely have no other option but to drop it and set up your stack manually. But since you probably paid for cPanel, you could try opening a support ticket with them first: https://cpanel.com/support/
MariaDB
49,319,731
12
I'm trying to write a little log procedure for my database. I create a procedure with this statement:

create procedure prc_wirte_log
(
    in p_schema varchar(255),
    in p_item varchar(255),
    in p_message varchar(255)
)
begin
    insert into weather.log (`schema`, item, message)
    values (p_schema, p_item, p_message);
end;

I get the error:

Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' at line 7 0.063 sec

Why? MySQL Workbench reports an incomplete statement, expecting ; after the insert query. What could I do?
Multistatement procedures (assumed when BEGIN...END is present) require delimiter overrides to prevent the statements they contain from terminating the procedure definition prematurely. Typically, you need to do something like:

DELIMITER //

CREATE PROCEDURE blah()
BEGIN
    statements;
END//

DELIMITER ;

The first example on the documentation here demonstrates this (though the last two on that page seem to repeat your mistake).
MariaDB
52,412,225
12
I installed mariadb via homebrew to set up a wordpress environment. It is meant to work with laravel valet. I am currently using the zsh shell. I installed it without a problem (10.3.12), but when I run mysql.server start I get the following error:

mysql.server start
Starting MariaDB
.190206 11:26:18 mysqld_safe Logging to '/usr/local/var/mysql/chriss-mbp.lan.err'.
190206 11:26:18 mysqld_safe Starting mysqld daemon with databases from /usr/local/var/mysql
/usr/local/bin/mysql.server: line 260: kill: (55179) - No such process
 ERROR!

Can anybody help me narrow down why I'm getting this error? I'm new to terminal and mariadb, so I'm hoping it's just a silly error that I wasn't aware of.
Brew has its own service manager included. Via brew services list you get all installed services listed. MariaDB should be there. To start it call brew services start mariadb.
MariaDB
54,560,307
12
Trying to enable regular password-based auth according to the below page:

https://mariadb.com/kb/en/library/authentication-plugin-unix-socket/

The page suggests the following code:

ALTER USER root@localhost IDENTIFIED VIA mysql_native_password;
SET PASSWORD = PASSWORD('foo');

but on my machine it fails with a syntax error:

MariaDB [(none)]> ALTER USER root@localhost IDENTIFIED VIA mysql_native_password;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'USER root@localhost IDENTIFIED VIA mysql_native_password' at line 1
MariaDB [(none)]> SET PASSWORD = PASSWORD('foo');
Query OK, 0 rows affected, 1 warning (0.00 sec)
If you're running MariaDB < 10.2, the ALTER USER command will not work, as stated above. To change the authentication method, use: UPDATE mysql.user SET plugin = 'mysql_native_password' WHERE user = 'root';
MariaDB
56,052,177
12
I'm currently working on a closed-source commercial web-project which uses MariaDB as the database. I wonder about the licensing of MariaDB. Do we have to get a license to use it with our commercial project? On the website, they mention the "GNU General Public License, version 2". What exactly does that mean? http://kb.askmonty.org/v/mariadb-license
The GPL (GNU General Public License) states that you can use the software free of charge, but you cannot modify and sell it unless you release the source code. This means you can use it in your closed-source project. MySQL was originally under the GPL, but has some different licensing issues since it was bought up by Oracle. You may still use it under the GPL, but Oracle also offers commercial licenses.
MariaDB
3,978,963
11
I am trying to use the official mariadb docker image along with php-fpm and nginx to run my Symfony 2 app. The idea is to keep all the db files in a mounted folder. While on Mac OS it works just fine - on the Windows machine I am getting an error every time MariaDB attempts to start. The weird part is that it's actually able to create some files - I can see the ibdata1 file but the size of it is 0 bytes. There are also two aria_log files with a few KBs of data, which means that mysql is actually able to write there.

- I'm using docker for windows 1.12.2 beta. But tried the stable one too.
- The windows disk I'm storing my project on is shared (through the "Shared drives" section of the Docker for Windows UI)
- The dir is 100% writeable since nginx and even mysql itself are able to put their logs in there
- I'm totally not "out of disk space" as logs suggest

Here is my docker-compose file:

version: '2'
services:
  db:
    image: mariadb
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test
      MYSQL_USER: root
      MYSQL_PASSWORD: root
    volumes:
      - ./docker-runtime/mysql:/var/lib/mysql
  php:
    build: ./docker/php-fpm
    volumes:
      - ./:/var/www/symfony
      - ./docker-runtime/logs/symfony:/var/www/symfony/app/logs
    links:
      - db
  nginx:
    build: ./docker/nginx
    ports:
      - "80:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./docker-runtime/logs/nginx/:/var/log/nginx

And here is what the log says when I do docker-compose up:

db_1 | 2016-10-18 13:14:06 7f79eed7f7c0 InnoDB: Error: Write to file ./ibdata1 failed at offset 0.
db_1 | InnoDB: 1048576 bytes should have been written, only 0 were written.
db_1 | InnoDB: Operating system error number 22.
db_1 | InnoDB: Check that your OS and file system support files of this size.
db_1 | InnoDB: Check also that the disk is not full or a disk quota exceeded.
db_1 | InnoDB: Error number 22 means 'Invalid argument'.
db_1 | InnoDB: Some operating system error numbers are described at
db_1 | InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
db_1 | 2016-10-18 13:14:06 140161674901440 [ERROR] InnoDB: Error in creating ./ibdata1: probably out of disk space
db_1 | 2016-10-18 13:14:06 140161674901440 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
db_1 | 2016-10-18 13:14:06 140161674901440 [ERROR] Plugin 'InnoDB' init function returned error.
db_1 | 2016-10-18 13:14:06 140161674901440 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
db_1 | 2016-10-18 13:14:06 140161674901440 [ERROR] Unknown/unsupported storage engine: InnoDB
db_1 | 2016-10-18 13:14:06 140161674901440 [ERROR] Aborting

I would really appreciate any ideas on this issue as I'm currently at the point of pulling my hair out.
You need to add this option when starting MariaDB (in the Dockerfile's CMD or docker-compose command:):

--innodb-flush-method=fsync

It's documented here: https://github.com/docker-library/mariadb/issues/95

If it does not help, also add --innodb-use-native-aio=0. Asynchronous I/O is not supported on Windows nor Mac OS X: https://dev.mysql.com/doc/refman/8.0/en/innodb-linux-native-aio.html
MariaDB
40,109,596
11
I'm trying to upgrade mysql in xamp. I'm using laravel which requires mariaDB v10.2.2, so I downloaded the latest msi package from the mariaDB website. Then I followed these steps to install it:

1. Install MySQL to C:\TEMP.
2. Rename the old installation folder to mysql_old.
3. Copy the folders "bin, include, lib, share, support-files" to the xamp\mysql\ folder. I didn't copy the data folder.
4. Copied the my.ini file from the old installation to the new installation in the xamp\mysql\bin\ folder.
5. Copied the old data folder to the new mysql folder.

Now after doing this I tried to start mysql from the control panel and the following error displays. While checking the error log I found this:

2017-02-25 12:31:56 5736 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2017-02-25 12:31:56 5736 [Note] InnoDB: Uses event mutexes
2017-02-25 12:31:56 5736 [Note] InnoDB: Compressed tables use zlib 1.2.3
2017-02-25 12:31:56 5736 [Note] InnoDB: Number of pools: 1
2017-02-25 12:31:56 5736 [Note] InnoDB: Using generic crc32 instructions
2017-02-25 12:31:56 5736 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M
2017-02-25 12:31:56 5736 [Note] InnoDB: Completed initialization of buffer pool
2017-02-25 12:31:56 5736 [Note] InnoDB: Highest supported file format is Barracuda.
2017-02-25 12:31:56 5736 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-02-25 12:31:56 5736 [Note] InnoDB: Setting file 'F:\xamp\mysql\data\ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-02-25 12:31:57 5736 [Note] InnoDB: File 'F:\xamp\mysql\data\ibtmp1' size is now 12 MB.
2017-02-25 12:31:57 5736 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-02-25 12:31:57 5736 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-02-25 12:31:57 5736 [Note] InnoDB: Waiting for purge to start
2017-02-25 12:31:57 5736 [Note] InnoDB: 5.7.14 started; log sequence number 2361919
2017-02-25 12:31:57 11468 [Note] InnoDB: Loading buffer pool(s) from F:\xamp\mysql\data\ib_buffer_pool
2017-02-25 12:31:57 11468 [Note] InnoDB: Buffer pool(s) load completed at 170225 12:31:57
2017-02-25 12:31:57 5736 [Note] Plugin 'FEEDBACK' is disabled.
2017-02-25 12:31:57 5736 [ERROR] f:\xamp\mysql\bin\mysqld.exe: unknown variable 'innodb_additional_mem_pool_size=2M'
2017-02-25 12:31:57 5736 [ERROR] Aborting

Now following is my my.ini file:

# Example MySQL config file for small systems.
#
# This is for a system with little memory (<= 64M) where MySQL is only used
# from time to time and it's important that the mysqld daemon
# doesn't use much resources.
#
# You can copy this file to
# F:/xamp/mysql/bin/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is F:/xamp/mysql/data) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.
# The following options will be passed to all MySQL clients [client] # password = your_password port = 3306 socket = "F:/xamp/mysql/mysql.sock" # Here follows entries for some specific programs # The MySQL server [mysqld] port= 3306 socket = "F:/xamp/mysql/mysql.sock" basedir = "F:/xamp/mysql" tmpdir = "F:/xamp/tmp" datadir = "F:/xamp/mysql/data" pid_file = "mysql.pid" # enable-named-pipe key_buffer = 16M max_allowed_packet = 1M sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M log_error = "mysql_error.log" # Change here for bind listening # bind-address="127.0.0.1" # bind-address = ::1 # for ipv6 # Where do all the plugins live plugin_dir = "F:/xamp/mysql/lib/plugin/" # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # # commented in by lampp security #skip-networking #skip-federated # Replication Master Server (default) # binary logging is required for replication # log-bin deactivated by default since XAMPP 1.4.11 #log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 # Replication Slave (comment out master section to use this) # # To configure this host as a replication slave, you can choose between # two methods : # # 1) Use the CHANGE MASTER TO command (fully described in our manual) - # the syntax is: # # CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>, # MASTER_USER=<user>, MASTER_PASSWORD=<password> ; # # where you replace <host>, <user>, <password> by quoted strings and # <port> by the master's port number (3306 by default). 
# # Example: # # CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306, # MASTER_USER='joe', MASTER_PASSWORD='secret'; # # OR # # 2) Set the variables below. However, in case you choose this method, then # start replication for the first time (even unsuccessfully, for example # if you mistyped the password in master-password and the slave fails to # connect), the slave will create a master.info file, and any later # change in this file to the variables' values below will be ignored and # overridden by the content of the master.info file, unless you shutdown # the slave server, delete master.info and restart the slaver server. # For that reason, you may want to leave the lines below untouched # (commented) and instead use CHANGE MASTER TO (see above) # # required unique id between 2 and 2^32 - 1 # (and different from the master) # defaults to 2 if master-host is set # but will not function as a slave if omitted #server-id = 2 # # The replication master for this slave - required #master-host = <hostname> # # The username the slave will use for authentication when connecting # to the master - required #master-user = <username> # # The password the slave will authenticate with when connecting to # the master - required #master-password = <password> # # The port the master is listening on. 
# optional - defaults to 3306 #master-port = <port> # # binary logging - not required for slaves, but recommended #log-bin=mysql-bin # Point the following paths to different dedicated disks #tmpdir = "F:/xamp/tmp" #log-update = /path-to-dedicated-directory/hostname # Uncomment the following if you are using BDB tables #bdb_cache_size = 4M #bdb_max_lock = 10000 # Comment the following if you are using InnoDB tables #skip-innodb innodb_data_home_dir = "F:/xamp/mysql/data" innodb_data_file_path = ibdata1:10M:autoextend innodb_log_group_home_dir = "F:/xamp/mysql/data" #innodb_log_arch_dir = "F:/xamp/mysql/data" ## You can set .._buffer_pool_size up to 50 - 80 % ## of RAM but beware of setting memory usage too high innodb_buffer_pool_size = 16M innodb_additional_mem_pool_size = 2M ## Set .._log_file_size to 25 % of buffer pool size innodb_log_file_size = 5M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 1 innodb_lock_wait_timeout = 50 ## UTF 8 Settings #init-connect=\'SET NAMES utf8\' #collation_server=utf8_unicode_ci #character_set_server=utf8 #skip-character-set-client-handshake #character_sets-dir="F:/xamp/mysql/share/charsets" [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [myisamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [mysqlhotcopy] interactive-timeout Please look if I'm doing something wrong.
Simply remove this line from my.ini:

innodb_additional_mem_pool_size = 2M

It was "deprecated in MySQL 5.6.3, removed in MySQL 5.7.4." (If my notes are correct, it has been "unused" since 4.1.0!)
MariaDB
42,453,124
11
With MariaDB 10.2 it's possible to define default values for Datetime columns, e.g. created and lastModified. How should I access these columns as read-only fields? These values should only be under the control of the database and should not be modified from code, but I want read access to these properties in code.
It's simple. Just set the insertable and updatable attributes to false.

@Column(
    name = "created_on",
    insertable = false,
    updatable = false
)
private Timestamp createdOn;
MariaDB
45,430,983
11
I'm very new to docker. Now I am trying to run django with mariadb in docker through docker-compose, but I always get this error:

django.db.utils.OperationalError: (2003, 'Can\'t connect to MySQL server on \'mariadb55\' (111 "Connection refused")')

I use Docker version 17.09.1-ce, build 19e2cf6, docker-compose version 1.18.0, build 8dd22a9.

I can connect to the db correctly after running docker-compose up db, locally or remotely, and I can even run python manage.py runserver 0.0.0.0:6001 correctly in an anaconda virtual environment to connect to the db service in docker by setting the parameters of the settings.py file like below:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test',
        'USER': 'belter',
        # 'HOST': 'mariadb55',
        'HOST': '127.0.0.1',
        'PORT': '3302',
        'PASSWORD': 'belter_2017',
        'default-character-set': 'utf8',
        'OPTIONS': {
            'sql_mode': 'traditional',
        }
    }
}

This is my docker-compose.yml file:

version: '3'
services:
  db:
    image: mariadb:5.5
    restart: always
    environment:
      - MYSQL_HOST=localhost
      - MYSQL_PORT=3306
      - MYSQL_ROOT_HOST=%
      - MYSQL_DATABASE=test
      - MYSQL_USER=belter
      - MYSQL_PASSWORD=belter_2017
      - MYSQL_ROOT_PASSWORD=123456_abc
    volumes:
      - /home/belter/mdbdata/mdb55:/var/lib/mysql
    ports:
      - "3302:3306"
  web:
    image: onlybelter/django_py35
    command: python3 manage.py runserver 0.0.0.0:6001
    volumes:
      - /mnt/data/www/mysite:/djcode
    ports:
      - "6001:6001"
    depends_on:
      - db
    links:
      - db:mariadb55

I almost tried everything I can find, but still cannot figure it out. Any help would be nice!

What I have tried:

- Docker compose mysql connection failing
- Linking django and mysql containers using docker-compose
- Django connection to postgres by docker-compose
Finally, I figured it out! The key point is, just as @SangminKim said, I need to use 3306 not 3302 in settings.py, and use db as HOST not 127.0.0.1.

So this is my docker-compose.yml file now:

version: '3'
services:
  db:
    image: mariadb:5.5
    restart: always
    environment:
      - MYSQL_HOST=localhost
      - MYSQL_PORT=3306  # cannot change this port to other number
      - MYSQL_ROOT_HOST=%
      - MYSQL_DATABASE=test
      - MYSQL_USER=belter
      - MYSQL_PASSWORD=belter_2017
      - MYSQL_ROOT_PASSWORD=123456_abc
    volumes:
      - /home/belter/mdbdata/mdb55:/var/lib/mysql
    ports:
      - "3302:3306"
  web:
    image: onlybelter/django_py35
    command: python3 manage.py runserver 0.0.0.0:6001
    volumes:
      - .:/djcode
    ports:
      - "6001:6001"
    depends_on:
      - db

So now we can connect to this docker-mysql by mysql -h 127.0.0.1 -P 3302 -u root -p in a shell directly, but we have to use db and 3306 in the django settings.py file:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test',
        'USER': 'belter',
        # 'HOST': 'mariadb55',
        'HOST': 'db',  #<---
        'PORT': '3306',  #<---
        'PASSWORD': 'belter_2017',
        'default-character-set': 'utf8',
        'OPTIONS': {
            'sql_mode': 'traditional',
        }
    }
}

And we can still check if this port is open, by running an extra command in the docker-compose.yml file:

...
web:
  image: onlybelter/django_py35
  command: /bin/sh -c "python check_db.py --service-name mysql --ip db --port 3306"
  volumes:
    - .:/djcode
...
Here is the check_db.py file:

# check_db.py
import socket
import time
import argparse

""" Check if port is open, avoid docker-compose race condition """

parser = argparse.ArgumentParser(description='Check if port is open, avoid\
    docker-compose race condition')
parser.add_argument('--service-name', required=True)
parser.add_argument('--ip', required=True)
parser.add_argument('--port', required=True)
args = parser.parse_args()

# Get arguments
service_name = str(args.service_name)
port = int(args.port)
ip = str(args.ip)

# Infinite loop
while True:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex((ip, port))
    if result == 0:
        print("{0} port is open! Bye!".format(service_name))
        break
    else:
        print("{0} port is not open! I'll check it soon!".format(service_name))
        time.sleep(3)

By the way, this is my Dockerfile for building django-py35:

FROM python:3.5-alpine
MAINTAINER Xin Xiong "[email protected]"

ENV PYTHONUNBUFFERED 1
RUN set -e; \
    apk add --no-cache --virtual .build-deps \
        gcc \
        libc-dev \
        linux-headers \
        mariadb-dev \
        python3-dev \
        postgresql-dev \
        freetype-dev \
        libpng-dev \
        g++ \
    ;
RUN mkdir /djcode
WORKDIR /djcode

ENV REFRESHED_AT 2017-12-25
ADD requirements.txt /djcode/
RUN pip install --no-cache-dir -r /djcode/requirements.txt
RUN pip install uwsgi
ADD . /djcode/  # copy . to /djcode/

EXPOSE 6001

See more details from here: https://github.com/OnlyBelter/django-compose
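The same wait-for-port idea can also be written with a deadline instead of an infinite loop, so a misconfigured hostname fails fast instead of polling forever. This is just an alternative sketch (the function name and timeout values are my own; socket.create_connection is standard library):

```python
import socket
import time


def wait_for_port(host, port, deadline_seconds=30, poll_interval=0.5):
    """Return True once (host, port) accepts TCP connections,
    or False if the deadline passes first."""
    deadline = time.monotonic() + deadline_seconds
    while time.monotonic() < deadline:
        try:
            # create_connection does the connect for us and raises
            # OSError (e.g. ConnectionRefusedError) while the service
            # is not up yet
            with socket.create_connection((host, port), timeout=poll_interval):
                return True
        except OSError:
            time.sleep(poll_interval)
    return False
```

Used in place of check_db.py's loop, the container command could exit non-zero when the database never comes up, which docker-compose surfaces instead of hanging.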
MariaDB
47,979,270
11
Question

Is there any alternative to MySQL's RANDOM_BYTES(len) or MSSQL's CRYPT_GEN_RANDOM(len) available in MariaDB? I read through their documentation, but all I found is rand(len), which is not cryptographically secure for generating random bytes.

Issues

The currently released version of MariaDB (10.3) does not support RANDOM_BYTES(len), according to this documentation: https://mariadb.com/kb/en/library/function-differences-between-mariadb-103-and-mysql-57/

Limitation

If possible, keep the code within MariaDB; I don't want to rely on PHP or any other external functions, for security reasons.
It seems RANDOM_BYTES(length) is available since 10.10.0 (preview release). Check the documentation.
MariaDB
53,149,125
11
I want to query something with SQL's like query:

SELECT * FROM users WHERE name LIKE '%m%'

How can I achieve the same in MongoDB? I can't find an operator for like in the documentation.
That would have to be:

db.users.find({"name": /.*m.*/})

Or, similar:

db.users.find({"name": /m/})

You're looking for something that contains "m" somewhere (SQL's '%' operator is equivalent to regular expressions' '.*'), not something that has "m" anchored to the beginning of the string.

Note: MongoDB uses regular expressions (see docs) which are more powerful than "LIKE" in SQL. With regular expressions you can create any pattern that you imagine.

For more information on regular expressions, refer to Regular expressions (MDN).
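The LIKE-to-regex correspondence this relies on is easy to sanity-check outside any database. Here is a small Python sketch (the helper name like_to_regex is mine, and it only handles the % and _ wildcards):

```python
import re


def like_to_regex(pattern):
    """Translate a SQL LIKE pattern into an anchored regex string.

    % matches any run of characters, _ matches exactly one; everything
    else is escaped so regex metacharacters are taken literally."""
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")
        elif ch == "_":
            out.append(".")
        else:
            out.append(re.escape(ch))
    # LIKE matches the whole string, so anchor both ends
    return "^" + "".join(out) + "$"
```

So LIKE '%m%' becomes the regex ^.*m.*$, which (unanchored) is the /.*m.*/ shown above.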
MongoDB
3,305,561
1,948
I have tried to puzzle out an answer to this question for many months while learning pandas. I use SAS for my day-to-day work and it is great for its out-of-core support. However, SAS is horrible as a piece of software for numerous other reasons.

One day I hope to replace my use of SAS with python and pandas, but I currently lack an out-of-core workflow for large datasets. I'm not talking about "big data" that requires a distributed network, but rather files too large to fit in memory but small enough to fit on a hard-drive.

My first thought is to use HDFStore to hold large datasets on disk and pull only the pieces I need into dataframes for analysis. Others have mentioned MongoDB as an easier to use alternative. My question is this:

What are some best-practice workflows for accomplishing the following:

1. Loading flat files into a permanent, on-disk database structure
2. Querying that database to retrieve data to feed into a pandas data structure
3. Updating the database after manipulating pieces in pandas

Real-world examples would be much appreciated, especially from anyone who uses pandas on "large data".

Edit -- an example of how I would like this to work:

1. Iteratively import a large flat-file and store it in a permanent, on-disk database structure. These files are typically too large to fit in memory.
2. In order to use Pandas, I would like to read subsets of this data (usually just a few columns at a time) that can fit in memory.
3. I would create new columns by performing various operations on the selected columns.
4. I would then have to append these new columns into the database structure.

I am trying to find a best-practice way of performing these steps. Reading links about pandas and pytables it seems that appending a new column could be a problem.

Edit -- Responding to Jeff's questions specifically:
The kinds of data include phone, SSN and address characteristics; property values; derogatory information like criminal records, bankruptcies, etc... The datasets I use every day have nearly 1,000 to 2,000 fields on average of mixed data types: continuous, nominal and ordinal variables of both numeric and character data. I rarely append rows, but I do perform many operations that create new columns. Typical operations involve combining several columns using conditional logic into a new, compound column. For example, if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'. The result of these operations is a new column for every record in my dataset. Finally, I would like to append these new columns into the on-disk data structure. I would repeat step 2, exploring the data with crosstabs and descriptive statistics trying to find interesting, intuitive relationships to model. A typical project file is usually about 1GB. Files are organized into such a manner where a row consists of a record of consumer data. Each row has the same number of columns for every record. This will always be the case. It's pretty rare that I would subset by rows when creating a new column. However, it's pretty common for me to subset on rows when creating reports or generating descriptive statistics. For example, I might want to create a simple frequency for a specific line of business, say Retail credit cards. To do this, I would select only those records where the line of business = retail in addition to whichever columns I want to report on. When creating new columns, however, I would pull all rows of data and only the columns I need for the operations. The modeling process requires that I analyze every column, look for interesting relationships with some outcome variable, and create new compound columns that describe those relationships. The columns that I explore are usually done in small sets. 
For example, I will focus on a set of say 20 columns just dealing with property values and observe how they relate to defaulting on a loan. Once those are explored and new columns are created, I then move on to another group of columns, say college education, and repeat the process. What I'm doing is creating candidate variables that explain the relationship between my data and some outcome. At the very end of this process, I apply some learning techniques that create an equation out of those compound columns. It is rare that I would ever add rows to the dataset. I will nearly always be creating new columns (variables or features in statistics/machine learning parlance).
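To make that concrete, here is a toy pandas sketch of the kind of conditional-logic column derivation described above (the column names are invented; numpy.select is one way to express the if/elif chain):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"var1": [3, 0, 1], "var2": [0, 4, 0]})

# if var1 > 2 then newvar = 'A' elif var2 = 4 then newvar = 'B'
df["newvar"] = np.select(
    [df["var1"] > 2, df["var2"] == 4],  # conditions, checked in order
    ["A", "B"],                         # corresponding values
    default="",                         # rows matching neither condition
)
```

The result is one new value per record, exactly the shape of operation I then want to append back to the on-disk structure.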
I routinely use tens of gigabytes of data in just this fashion, e.g. I have tables on disk that I read via queries, create data and append back. It's worth reading the docs and late in this thread for several suggestions for how to store your data.

Details which will affect how you store your data, like the following. Give as much detail as you can, and I can help you develop a structure.

- Size of data, # of rows, columns, types of columns; are you appending rows, or just columns?
- What will typical operations look like. E.g. do a query on columns to select a bunch of rows and specific columns, then do an operation (in-memory), create new columns, save these. (Giving a toy example could enable us to offer more specific recommendations.)
- After that processing, then what do you do? Is step 2 ad hoc, or repeatable?
- Input flat files: how many, rough total size in GB. How are these organized, e.g. by records? Does each one contain different fields, or do they have some records per file with all of the fields in each file?
- Do you ever select subsets of rows (records) based on criteria (e.g. select the rows with field A > 5)? and then do something, or do you just select fields A, B, C with all of the records (and then do something)?
- Do you 'work on' all of your columns (in groups), or are there a good proportion that you may only use for reports (e.g. you want to keep the data around, but don't need to pull in that column explicitly until final results time)?

Solution

Ensure you have pandas at least 0.10.1 installed. Read iterating files chunk-by-chunk and multiple table queries.

Since pytables is optimized to operate row-wise (which is what you query on), we will create a table for each group of fields. This way it's easy to select a small group of fields (which will work with a big table, but it's more efficient to do it this way... I think I may be able to fix this limitation in the future... this is more intuitive anyhow):

(The following is pseudocode.)
import numpy as np
import pandas as pd

# create a store
store = pd.HDFStore('mystore.h5')

# this is the key to your storage:
#   this maps your fields to a specific group, and defines
#   what you want to have as data_columns.
#   you might want to create a nice class wrapping this
#   (as you will want to have this map and its inversion)
group_map = dict(
    A = dict(fields = ['field_1','field_2',.....], dc = ['field_1',....,'field_5']),
    B = dict(fields = ['field_10',...... ], dc = ['field_10']),
    .....
    REPORTING_ONLY = dict(fields = ['field_1000','field_1001',...], dc = []),
)

group_map_inverted = dict()
for g, v in group_map.items():
    group_map_inverted.update(dict([ (f,g) for f in v['fields'] ]))

Reading in the files and creating the storage (essentially doing what append_to_multiple does):

for f in files:
    # read in the file, additional options may be necessary here
    # the chunksize is not strictly necessary, you may be able to slurp each
    # file into memory in which case just eliminate this part of the loop
    # (you can also change chunksize if necessary)
    for chunk in pd.read_table(f, chunksize=50000):

        # we are going to append to each table by group
        # we are not going to create indexes at this time
        # but we *ARE* going to create (some) data_columns

        # figure out the field groupings
        for g, v in group_map.items():

            # create the frame for this group
            frame = chunk.reindex(columns = v['fields'], copy = False)

            # append it
            store.append(g, frame, index=False, data_columns = v['dc'])

Now you have all of the tables in the file (actually you could store them in separate files if you wish, you would prob have to add the filename to the group_map, but probably this isn't necessary).
This is how you get columns and create new ones:

frame = store.select(group_that_I_want)
# you can optionally specify:
# columns = a list of the columns IN THAT GROUP (if you wanted to
#     select only say 3 out of the 20 columns in this sub-table)
# and a where clause if you want a subset of the rows

# do calculations on this frame
new_frame = cool_function_on_frame(frame)

# to 'add columns', create a new group (you probably want to
# limit the columns in this new_group to be only NEW ones
# (e.g. so you don't overlap from the other tables)
# add this info to the group_map
store.append(new_group,
             new_frame.reindex(columns = new_columns_created, copy = False),
             data_columns = new_columns_created)

When you are ready for post_processing:

# This may be a bit tricky; and depends what you are actually doing.
# I may need to modify this function to be a bit more general:
report_data = store.select_as_multiple([groups_1, groups_2, .....],
                                       where = ['field_1>0', 'field_1000=foo'],
                                       selector = group_1)

About data_columns, you don't actually need to define ANY data_columns; they allow you to sub-select rows based on the column. E.g. something like:

store.select(group, where = ['field_1000=foo', 'field_1001>0'])

They may be most interesting to you in the final report generation stage (essentially a data column is segregated from other columns, which might impact efficiency somewhat if you define a lot).

You also might want to:

- create a function which takes a list of fields, looks up the groups in the group_map, then selects these and concatenates the results so you get the resulting frame (this is essentially what select_as_multiple does). This way the structure would be pretty transparent to you.
- indexes on certain data columns (makes row-subsetting much faster).
- enable compression.

Let me know when you have questions!
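The helper suggested in that last list can be sketched with plain dicts (the toy group_map and the function name here are hypothetical; a real version would call store.select per group and concatenate the frames, essentially what select_as_multiple does):

```python
# toy version of the field-to-group mapping described above
group_map = {
    "A": {"fields": ["field_1", "field_2"], "dc": ["field_1"]},
    "B": {"fields": ["field_10", "field_11"], "dc": ["field_10"]},
}

# inverted map: field -> group, as in the answer's group_map_inverted
group_map_inverted = {f: g for g, v in group_map.items() for f in v["fields"]}

def groups_for(fields):
    # which sub-tables must be selected to cover the requested fields?
    return sorted({group_map_inverted[f] for f in fields})

print(groups_for(["field_2", "field_10"]))  # ['A', 'B']
```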
MongoDB
14,262,433
1,183
I have a MongoDB collection with documents in the following format:

{ "_id" : ObjectId("4e8ae86d08101908e1000001"), "name" : ["Name"], "zipcode" : ["2223"] }
{ "_id" : ObjectId("4e8ae86d08101908e1000002"), "name" : ["Another ", "Name"], "zipcode" : ["2224"] }

I can currently get documents that match a specific array size:

db.accommodations.find({ name : { $size : 2 }})

This correctly returns the documents with 2 elements in the name array. However, I can't do a $gt command to return all documents where the name field has an array size of greater than 2:

db.accommodations.find({ name : { $size: { $gt : 1 } }})

How can I select all documents with a name array of a size greater than one (preferably without having to modify the current data structure)?
There's a more efficient way to do this in MongoDB 2.2+ now that you can use numeric array indexes (0 based) in query object keys.

// Find all docs that have at least two name array elements.
db.accommodations.find({'name.1': {$exists: true}})

You can support this query with an index that uses a partial filter expression (requires 3.2+):

// index for at least two name array elements
db.accommodations.createIndex(
    {'name.1': 1},
    {partialFilterExpression: {'name.1': {$exists: true}}}
);
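For intuition only, here is the same predicate translated to plain Python (this is not MongoDB code; a document matches {'name.1': {$exists: true}} exactly when its name array has an element at index 1):

```python
def has_at_least_two(doc, field="name"):
    # mirrors {'<field>.1': {$exists: true}}: index 1 exists
    # only when the array holds two or more elements
    value = doc.get(field)
    return isinstance(value, list) and len(value) >= 2

docs = [
    {"name": ["Name"], "zipcode": ["2223"]},
    {"name": ["Another ", "Name"], "zipcode": ["2224"]},
]
print([has_at_least_two(d) for d in docs])  # [False, True]
```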
MongoDB
7,811,163
977
What's the easiest way to do this from my bash prompt?
Like this:

mongo <dbname> --eval "db.dropDatabase()"

More info on scripting the shell from the command line here: https://docs.mongodb.com/manual/tutorial/write-scripts-for-the-mongo-shell/#scripting

Edit: In Mongo 6.0, mongo was removed and replaced with mongosh, which has to be installed separately. More info here: https://www.mongodb.com/docs/mongodb-shell/#mongodb-binary-bin.mongosh

The equivalent in mongosh would be:

mongosh --eval "use <dbname>" --eval "db.dropDatabase()"
MongoDB
8,857,276
939
In the MongoDB shell, how do I list all collections for the current database that I'm using?
You can do...

JavaScript (shell):

db.getCollectionNames()

Node.js:

db.listCollections()

Non-JavaScript (shell only):

show collections

The reason I call that non-JavaScript is because:

$ mongo prodmongo/app --eval "show collections"
MongoDB shell version: 3.2.10
connecting to: prodmongo/app
2016-10-26T19:34:34.886-0400 E QUERY    [thread1] SyntaxError: missing ; before statement @(shell eval):1:5

$ mongo prodmongo/app --eval "db.getCollectionNames()"
MongoDB shell version: 3.2.10
connecting to: prodmongo/app
[
    "Profiles",
    "Unit_Info"
]

If you really want that sweet, sweet show collections output, you can:

$ mongo prodmongo/app --eval "db.getCollectionNames().join('\n')"
MongoDB shell version: 3.2.10
connecting to: prodmongo/app
Profiles
Unit_Info
MongoDB
8,866,041
914
All of my records have a field called "pictures". This field is an array of strings. I now want the newest 10 records where this array IS NOT empty.

I've googled around, but strangely enough I haven't found much on this. I've read into the $where option, but I was wondering how slow that is compared to native operators, and whether there is a better solution. And even then, that does not work:

ME.find({$where: 'this.pictures.length > 0'}).sort('-created').limit(10).execFind()

Returns nothing. Leaving this.pictures without the length bit does work, but then it also returns empty records, of course.
If you also have documents that don't have the key, you can use:

ME.find({ pictures: { $exists: true, $not: {$size: 0} } })

MongoDB doesn't use indexes if $size is involved, so here is a better solution:

ME.find({ pictures: { $exists: true, $ne: [] } })

If your property can have invalid values (like null, a boolean, or others), then you can add an additional check using the $type operator, as proposed in this answer:

With mongo >= 3.2:

ME.find({ pictures: { $exists: true, $type: 'array', $ne: [] } })

With mongo < 3.2:

ME.find({ pictures: { $exists: true, $type: 4, $ne: [] } })

Since the MongoDB 2.6 release, you can compare with the operator $gt, but this could lead to unexpected results (you can find a detailed explanation in this answer):

ME.find({ pictures: { $gt: [] } })
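For intuition, the $exists/$ne combination above can be mimicked in plain Python (not MongoDB code; note that, like $ne: [], it also lets non-array values such as null through, which is why the extra $type check is suggested):

```python
def pictures_nonempty(doc):
    # mirrors { pictures: { $exists: true, $ne: [] } }:
    # the key must be present and must not equal the empty array
    return "pictures" in doc and doc["pictures"] != []

docs = [
    {"pictures": ["a.jpg"]},  # matches
    {"pictures": []},         # empty array: filtered out
    {},                       # key missing: filtered out
    {"pictures": None},       # invalid value still matches -> add a $type check
]
print([pictures_nonempty(d) for d in docs])  # [True, False, False, True]
```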
MongoDB
14,789,684
778