question | answer | tag | question_id | score
---|---|---|---|---|
I would like to know if there is support for OFFSET in AWS Athena. In MySQL the following query runs, but in Athena it gives me an error. Any example would be helpful.
select * from employee where empSal >3000 LIMIT 300 OFFSET 20
| Athena is basically managed Presto. Since Presto 311 you can use OFFSET m LIMIT n syntax or ANSI SQL equivalent: OFFSET m ROWS FETCH NEXT n ROWS ONLY.
You can read more in Beyond LIMIT, Presto meets OFFSET and TIES.
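For illustration, the ANSI form applied to the query from the question would look like this (a sketch; it only works on engines based on Presto 311 or later, so not on Athena at the time of this answer):
SELECT * FROM employee WHERE empSal > 3000 OFFSET 20 ROWS FETCH NEXT 300 ROWS ONLY;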
For older versions (and this includes AWS Athena as of this writing), you can use row_number() window function to implement OFFSET + LIMIT.
For example, instead of
SELECT * FROM elb_logs
OFFSET 5 LIMIT 5 -- this doesn't work, obviously
You can execute
SELECT * FROM (
    SELECT row_number() over() AS rn, * FROM elb_logs)
WHERE rn BETWEEN 6 AND 10;
Note: the execution engine will still need to read offset+limit rows from the underlying table, but this is still much better than sending all these rows back to the client and taking a sublist there.
Warning: see https://stackoverflow.com/a/45114359/65458 for explanation why avoiding OFFSET in queries is generally a good idea.
| Presto | 51,298,622 | 12 |
I have 3 tables I would like to work on using the date, however one of the tables includes the date in unix epoch format. Here is an example of the 3 fields:
Table1: 2017-02-01T07:58:40.756031Z
Table2: 2017-02-07T10:16:46Z
Table3: 1489236559
I would like to convert the date in table 3 to match the format of table 2 as closely as possible. This is what I have right now:
SELECT cast(from_unixtime(tstart) as timestamp) as ISODATE from scada_logs
This gives me something tantalizingly close, but not quite there. Here is what I get:
ISODATE
2017-03-20 14:31:06.000
2017-03-20 14:31:06.000
I've played around with this for a couple of hours but I'm not getting any closer. Does anyone have any ideas?
Thank you!
| Option 1: date_format
presto> select date_format(from_unixtime(1489236559),'%Y-%m-%dT%H:%i:%sZ');
_col0
----------------------
2017-03-11T12:49:19Z
Option 2: to_iso8601
presto> select to_iso8601(from_unixtime(1489236559));
_col0
--------------------------
2017-03-11T12:49:19.000Z
| Presto | 42,927,726 | 11 |
I am trying to search a column having the data type map(varchar,varchar). Now one way to access the column is to use this structure, name_of_column['key'], which will give the value for that key. But I want to know what are possible keys and then apply group by and other operations based on those keys.
I tried searching on the internet but was not able to find an appropriate solution.
| Presto offers map_keys function for that:
presto> select map_keys(map(array['alice'], array['has a cat']));
_col0
---------
[alice]
(1 row)
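To then group by those keys, one common pattern is to unnest them. A minimal sketch, assuming a table my_table with the map column name_of_column from the question:
SELECT map_key, count(*) AS occurrences
FROM my_table
CROSS JOIN UNNEST(map_keys(name_of_column)) AS t(map_key)
GROUP BY map_key;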
| Presto | 45,096,736 | 11 |
I have a query that produces arrays of strings using the array_agg() function
SELECT
array_agg(message) as sequence
from mytable
group by id
which produces a table that looks like this:
sequence
1 foo foo bar baz bar baz
2 foo bar bar bar baz
3 foo foo foo bar bar baz
but I aim to condense the array of strings so that none can repeat more than once in a row, for example, the desired output would look like this:
sequence
1 foo bar baz bar baz
2 foo bar baz
3 foo bar baz
How would one go about doing this with Presto SQL?
| You can do this in one of two ways:
Remove duplicates from the resulting arrays using the array_distinct function:
WITH mytable(id, message) AS (VALUES
(1, 'foo'), (1, 'foo'), (1, 'bar'), (1, 'bar'), (1, 'baz'), (1, 'baz'),
(2, 'foo'), (2, 'bar'), (2, 'bar'), (2, 'bar'), (2, 'baz'),
(3, 'foo'), (3, 'foo'), (3, 'foo'), (3, 'bar'), (3, 'bar'), (3, 'baz')
)
SELECT array_distinct(array_agg(message)) AS sequence
FROM mytable
GROUP BY id
Use the DISTINCT qualifier in the aggregation to remove the duplicate values before they are passed into array_agg.
WITH mytable(id, message) AS (VALUES
(1, 'foo'), (1, 'foo'), (1, 'bar'), (1, 'bar'), (1, 'baz'), (1, 'baz'),
(2, 'foo'), (2, 'bar'), (2, 'bar'), (2, 'bar'), (2, 'baz'), (3, 'foo'),
(3, 'foo'), (3, 'foo'), (3, 'bar'), (3, 'bar'), (3, 'baz')
)
SELECT array_agg(DISTINCT message) AS sequence
FROM mytable
GROUP BY id
Both alternatives produce the same result:
sequence
-----------------
[foo, bar, baz]
[foo, bar, baz]
[foo, bar, baz]
(3 rows)
UPDATE: You can remove repeated sequences of elements with the recently introduced MATCH_RECOGNIZE feature:
WITH mytable(id, message) AS (VALUES
(1, 'foo'), (1, 'foo'), (1, 'bar'), (1, 'baz'), (1, 'bar'), (1, 'baz'),
(2, 'foo'), (2, 'bar'), (2, 'bar'), (2, 'bar'), (2, 'baz'),
(3, 'foo'), (3, 'foo'), (3, 'foo'), (3, 'bar'), (3, 'bar'), (3, 'baz')
)
SELECT array_agg(value) AS sequence
FROM mytable
MATCH_RECOGNIZE(
PARTITION BY id
MEASURES A.message AS value
PATTERN (A B*)
DEFINE B AS message = PREV(message)
)
GROUP BY id
| Presto | 56,349,907 | 11 |
I suspect the answer is "it depends", but is there any general guidance about what kind of hardware to plan to use for Presto?
Since Presto uses a coordinator and a set of workers, and workers run with the data, I imagine the main issues will be having sufficient RAM for the coordinator, sufficient network bandwidth for partial results sent from workers to the coordinator, etc.
If you can supply some general thoughts on how to size for this appropriately, I'd love to hear them.
| Most people are running Trino (formerly PrestoSQL) on the Hadoop nodes they already have. At Facebook we typically run Presto on a few nodes within the Hadoop cluster to spread out the network load.
Generally, I'd go with the industry standard ratios for a new cluster: 2 cores and 2-4 gig of memory for each disk, with 10 gigabit networking if you can afford it. After you have a few machines (4+), benchmark using your queries on your data. It should be obvious if you need to adjust the ratios.
In terms of sizing the hardware for a cluster from scratch some things to consider:
Total data size will determine the number of disks you will need. HDFS has a large overhead so you will need lots of disks.
The ratio of CPU speed to disks depends on the ratio between hot data (the data you are working with) and cold data (archive data). If you are just starting your data warehouse, you will need lots of CPUs since all the data will be new and hot. On the other hand, most physical disks can only deliver data so fast, so at some point more CPUs don't help.
The ratio of CPU speed to memory depends on the size of aggregations and joins you want to perform and the amount of (hot) data you want to cache. Currently, Presto requires the final aggregation results and the hash table for a join to fit in memory on a single machine (we're actively working on removing these restrictions). If you have larger amounts of memory, the OS will cache disk pages which will significantly improve the performance of queries.
In 2013 at Facebook we ran our Presto processes as follows:
We ran our JVMs with a 16 GB heap to leave most memory available for OS buffers
On the machines we ran Presto we didn't run MapReduce tasks.
Most of the Presto machines had 16 real cores and used processor affinity (eventually cgroups) to limit Presto to 12 cores (so the Hadoop data node process and other things could run easily).
Most of the servers were on a 10 gigabit networks, but we did have one large old crufty cluster using 1 gigabit (which worked fine).
We used the same configuration for the coordinator and the workers.
In recent times, we ran the following:
The machines had 256 GB of memory and we ran a 200 GB Java heap
Most of the machines had 24-32 real cores and Presto was allocated all cores.
The machines had only minimal local storage for logs, with all table data remote (in a proprietary distributed file system).
Most servers had a 25 gigabit network connection to a fabric network.
The coordinators and workers had similar configurations.
| Presto | 19,863,857 | 10 |
I'm using Presto(0.163) to query data and am trying to extract fields from a json.
I have a json like the one given below, which is present in the column 'style_attributes':
"attributes": {
"Brand Fit Name": "Regular Fit",
"Fabric": "Cotton",
"Fit": "Regular",
"Neck or Collar": "Round Neck",
"Occasion": "Casual",
"Pattern": "Striped",
"Sleeve Length": "Short Sleeves",
"Tshirt Type": "T-shirt"
}
I'm unable to extract field 'Short Sleeves'.
Below is the query i'm using:
Select JSON_EXTRACT(style_attributes,'$.attributes.Sleeve Length') as length from table;
The query fails with the following error- Invalid JSON path: '$.attributes.Sleeve Length'
For fields without ' '(space), query is running fine.
I tried to find the resolution in the Presto documentation, but with no success.
| presto:default> select json_extract_scalar('{"attributes":{"Sleeve Length": "Short Sleeves"}}','$.attributes["Sleeve Length"]');
_col0
---------------
Short Sleeves
or
presto:default> select json_extract_scalar('{"attributes":{"Sleeve Length": "Short Sleeves"}}','$["attributes"]["Sleeve Length"]');
_col0
---------------
Short Sleeves
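Applied to the column from the question (assuming style_attributes holds the JSON shown above), that becomes:
SELECT json_extract_scalar(style_attributes, '$.attributes["Sleeve Length"]') AS length
FROM table;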
JSON Function Changes
The json_extract and json_extract_scalar functions now support the square bracket syntax:
SELECT json_extract(json, '$.store[book]');
SELECT json_extract(json, '$.store["book name"]');
As part of this change, the set of characters allowed in a non-bracketed path segment has been restricted to alphanumeric, underscores and colons. Additionally, colons cannot be used in an un-quoted bracketed path segment. Use the new bracket syntax with quotes to match elements that contain special characters.
https://github.com/prestodb/presto/blob/c73359fe2173e01140b7d5f102b286e81c1ae4a8/presto-docs/src/main/sphinx/release/release-0.75.rst
| Presto | 43,271,898 | 10 |
I am working with some course data in a Presto database. The data in the table looks like:
student_id period score completed
1 2016_Q1 3 Y
1 2016_Q3 4 Y
3 2017_Q1 4 Y
4 2018_Q1 2 N
I would like to format the data so that it looks like:
student_id 2018_Q1_score 2018_Q1_completed 2017_Q3_score
1 0 N 5
3 4 Y 4
4 2 N 2
I know that I could do this by joining to the table for each time period, but I wanted to ask here to see if any gurus had a recommendation for a more scalable solution (e.g. perhaps not having to manually create a new join for each period). Any suggestions?
| You can just use conditional aggregation:
select student_id,
max(case when period = '2018_Q1' then score else 0 end) as score_2018q1,
max(case when period = '2018_Q1' then completed else 'N' end) as completed_2018q1,
max(case when period = '2017_Q3' then score else 0 end) as score_2017q3
from t
group by student_id
| Presto | 51,372,566 | 10 |
I'm querying some tables on Athena (Presto SAS) and then downloading the generated CSV file to use locally. Opening the file, I realised the data contains newline characters that don't appear in the AWS interface, only in the CSV, and I need to get rid of them. I tried using the function replace(string, search, replace) → varchar to escape the newline char by replacing \n with \\n, without success:
SELECT
p.recvepoch, replace(p.description, '\n', '\\n') AS description
FROM
product p
LIMIT 1000
How can I achieve that?
The problem was that the underlying table data doesn't actually contain \n anywhere; instead, it contains the actual newline character, which is produced by chr(10). I was able to achieve the expected behaviour using the replace function, passing it as a parameter:
SELECT
p.recvepoch, replace(p.description, chr(10), '\n') AS description
FROM
product p
LIMIT 1000
| Presto | 56,001,003 | 10 |
SELECT sales_invoice_date,
MONTH( DATE_TRUNC('month',
CASE
WHEN TRIM(sales_invoice_date) = '' THEN
DATE('1999-12-31')
ELSE
DATE_PARSE(sales_invoice_date, '%m/%d/%Y')
END) ) AS DT
FROM testdata_parquet
I used the query above to convert the string into date and was able to get the month number on AWS athena but I wasn't able to get the corresponding month name
I have already tried monthname and datename('month', ...) but they gave the following error messages respectively:
SYNTAX_ERROR: line 2:1: Function monthname not registered
SYNTAX_ERROR: line 2:1: Function datename not registered
Athena is currently based on Presto 0.172, so you should refer to https://trino.io/docs/0.172/functions/datetime.html for available functions on date/time values.
You can get month name with date_format():
date_format(value, '%M')
or similarly format_datetime().
format_datetime(value, 'MMM')
Example:
presto:default> SELECT date_format(current_date, '%M');
_col0
----------
December
(1 row)
(verified on Presto 327, but will work in Athena too)
| Presto | 59,485,913 | 10 |
I have a very basic group by query in Athena where I would like to use an alias. One can make the example work by putting the same expression in the group by, but that's not really handy when there are complex column modifications going on and the logic needs to be copied in two places. Also, I did that in the past and now I have a statement that doesn't work when copied over.
Problem:
SELECT
substr(accountDescriptor, 5) as account,
sum(revenue) as grossRevenue
FROM sales
GROUP BY account
This will throw an error:
alias Column 'account' cannot be resolved
The following works, so it's about the alias handling.
SELECT
substr(accountDescriptor, 5) as account,
sum(revenue) as grossRevenue
FROM sales
GROUP BY substr(accountDescriptor, 5)
That is because SQL is evaluated in a certain order: table scan, filter, aggregation, projection, sort. You tried to use the result of the projection as input to the aggregation. In many cases this could be possible (where the projection is trivial, like in your case), but such behaviour is not defined in ANSI SQL (which Presto, and so Athena, follows).
We see that in many cases it would be very useful, so support for this might be added in the future (extending ANSI SQL).
Currently, there are several ways to overcome this:
SELECT account, sum(revenue) as grossRevenue
FROM (SELECT substr(accountDescriptor, 5) as account, revenue FROM sales)
GROUP BY account
or
WITH better_sales AS (SELECT substr(accountDescriptor, 5) as account, revenue FROM sales)
SELECT account, sum(revenue) as grossRevenue
FROM better_sales
GROUP BY account
or
SELECT account, sum(revenue) as grossRevenue
FROM sales
CROSS JOIN LATERAL (SELECT substr(accountDescriptor, 5) as account)
GROUP BY account
or
SELECT substr(accountDescriptor, 5) as account, sum(revenue) as grossRevenue
FROM sales
GROUP BY 1;
| Presto | 60,142,810 | 10 |
I have the following query:
select id, table1.date1, table2.date2, table1.name
from table1
join table2 using (id)
I want also to have another column with MAX(table1.date1, table2.date2), but I can't find the proper syntax for that. I don't want MAX to go over all rows in the table and take the MAX(); I want it to select the max of the two values specified in each row.
Example:
id date1 date2 name max
1 2020-01-01 2020-04-01 A 2020-04-01
2 2019-02-01 2020-01-03 B 2020-01-03
3 2019-02-01 null c 2019-02-01
I also can't do GROUP BY because I don't want to group anything here.
It's more like coalesce: give the function a list of values and have it choose the max value from them.
| Use greatest():
select id, t1.date1, t2.date2, t1.name,
greatest(t1.date1, t2.date2)
from table1 t1 join
table2 t2
using (id);
Note that greatest() returns NULL if any argument is NULL. So, if you have NULL values, you will need special care.
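For example, a hedged sketch of one common workaround for the NULL row in the question, treating a missing date as simply losing to the other one:
select id, t1.date1, t2.date2, t1.name,
       greatest(coalesce(t1.date1, t2.date2), coalesce(t2.date2, t1.date1)) as max_date
from table1 t1 join
     table2 t2
     using (id);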
| Presto | 61,919,230 | 10 |
I'm looking for a function in Presto to concat two columns with a separator like underline.
You are looking here for the array_join function; see the docs.
array_join(x, delimiter, null_replacement) → varchar
Concatenates the elements of the given array using the
delimiter and an optional string to replace nulls.
Example:
columns are c1,c2 you can add more of course:
WITH demo_table (c1,c2) AS
(SELECT * FROM (VALUES (1,2),(3,4),(5,null),(7,8) ))
SELECT array_join(array[c1,c2], '_', 'NA')
FROM demo_table
Results will be:
1_2
3_4
5_NA
7_8
| Presto | 62,868,039 | 10 |
I want to query a column in my table using a LIKE condition and this works fine-
select * from my_table where my_column LIKE '%hello%';
But, how do I query this column with multiple strings in my LIKE condition? Looking for something like-
select * from my_table where my_column LIKE ['%hello%'|'example%'|'%random%'|'%demo'];
| Use regexp_like():
select *
from my_table
where regexp_like(my_column, 'hello|example|random|demo');
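Note that regexp_like() matches anywhere in the string, while two of the original patterns carried anchors ('example%' was a prefix match and '%demo' a suffix match). If those anchors matter, a sketch with regex anchors:
select *
from my_table
where regexp_like(my_column, 'hello|^example|random|demo$');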
| Presto | 64,780,028 | 10 |
After upgrading MySQL to 5.7.8-rc
and logging in to the server I got the error:
Table 'performance_schema.session_variables' doesn't exist
I can't find any solution for this. Can you help ?
| I was able to log on to the mysql server after running the command @robregonm suggested:
mysql_upgrade -u root -p --force
A MySQL server restart is required.
| MySQL | 31,967,527 | 333 |
I'm working with map data, and the Latitude/Longitude extends to 8 decimal places. For example:
Latitude 40.71727401
Longitude -74.00898606
I saw in the Google document
which uses:
lat FLOAT( 10, 6 ) NOT NULL,
lng FLOAT( 10, 6 ) NOT NULL
however, their decimal places only go to 6.
Should I use FLOAT(10, 8) or is there another method to consider for storing this data so it's precise. It will be used with map calculations. Thanks!
| MySQL supports Spatial data types and Point is a single-value type which can be used. Example:
CREATE TABLE `buildings` (
`coordinate` POINT NOT NULL,
/* Even from v5.7.5 you can define an index for it */
SPATIAL INDEX `SPATIAL` (`coordinate`)
) ENGINE=InnoDB;
/* then for insertion you can */
INSERT INTO `buildings`
(`coordinate`)
VALUES
(POINT(40.71727401, -74.00898606));
| MySQL | 12,504,208 | 331 |
I have two tables, both looking like
id name value
===================
1 Joe 22
2 Derk 30
I need to copy the value of value from tableA to tableB by matching name in each table.
Any tips for this UPDATE statement?
| In addition to this answer if you need to change tableB.value according to tableA.value dynamically you can do for example:
UPDATE tableB
INNER JOIN tableA ON tableB.name = tableA.name
SET tableB.value = IF(tableA.value > 0, tableA.value, tableB.value)
WHERE tableA.name = 'Joe'
| MySQL | 11,709,043 | 330 |
This is how my connection is set:
Connection conn = DriverManager.getConnection(url + dbName + "?useUnicode=true&characterEncoding=utf-8", userName, password);
And I'm getting the following error when tyring to add a row to a table:
Incorrect string value: '\xF0\x90\x8D\x83\xF0\x90...' for column 'content' at row 1
I'm inserting thousands of records, and I always get this error when the text contains \xF0 (i.e. the incorrect string value always starts with \xF0).
The column's collation is utf8_general_ci.
What could be the problem?
| MySQL's utf8 permits only the Unicode characters that can be represented with 3 bytes in UTF-8. Here you have a character that needs 4 bytes: \xF0\x90\x8D\x83 (U+10343 GOTHIC LETTER SAUIL).
If you have MySQL 5.5 or later you can change the column encoding from utf8 to utf8mb4. This encoding allows storage of characters that occupy 4 bytes in UTF-8.
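As an illustrative sketch (the table name is a placeholder and the column type is an assumption; adjust both to your actual schema), the column change could look like:
ALTER TABLE your_table MODIFY content TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;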
You may also have to set the server property character_set_server to utf8mb4 in the MySQL configuration file. It seems that Connector/J defaults to 3-byte Unicode otherwise:
For example, to use 4-byte UTF-8 character sets with Connector/J, configure the MySQL server with character_set_server=utf8mb4, and leave characterEncoding out of the Connector/J connection string. Connector/J will then autodetect the UTF-8 setting.
| MySQL | 10,957,238 | 329 |
Is there any way to later output the name of the database that is currently selected?
| Just use mysql_query (or mysqli_query, even better, or use PDO, best of all) with:
SELECT DATABASE();
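For example, a minimal mysqli sketch (assuming an existing connection in $mysqli):
$result = mysqli_query($mysqli, 'SELECT DATABASE()');
$row = mysqli_fetch_row($result);
echo $row[0]; // name of the currently selected database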
Addendum:
There is much discussion over whether or not FROM DUAL should be included in this or not. On a technical level, it is a holdover from Oracle and can safely be removed. If you are inclined, you can use it as follows:
SELECT DATABASE() FROM DUAL;
That said, it is perhaps important to note, that while FROM DUAL does not actually do anything, it is valid MySQL syntax. From a strict perspective, including braces in a single line conditional in JavaScript also does not do anything, but it is still a valid practice.
| MySQL | 8,096,550 | 329 |
I'm building a Django site and I am looking for a search engine.
A few candidates:
Lucene/Lucene with Compass/Solr
Sphinx
Postgresql built-in full text search
MySQl built-in full text search
Selection criteria:
result relevance and ranking
searching and indexing speed
ease of use and ease of integration with Django
resource requirements - site will be hosted on a VPS, so ideally the search engine wouldn't require a lot of RAM and CPU
scalability
extra features such as "did you mean?", related searches, etc
Anyone who has had experience with the search engines above, or other engines not in the list -- I would love to hear your opinions.
EDIT: As for indexing needs, as users keep entering data into the site, those data would need to be indexed continuously. It doesn't have to be real time, but ideally new data would show up in index with no more than 15 - 30 minutes delay
| Good to see someone's chimed in about Lucene - because I've no idea about that.
Sphinx, on the other hand, I know quite well, so let's see if I can be of some help.
Result relevance ranking is the default. You can set up your own sorting should you wish, and give specific fields higher weightings.
Indexing speed is super-fast, because it talks directly to the database. Any slowness will come from complex SQL queries and un-indexed foreign keys and other such problems. I've never noticed any slowness in searching either.
I'm a Rails guy, so I've no idea how easy it is to implement with Django. There is a Python API that comes with the Sphinx source though.
The search service daemon (searchd) is pretty low on memory usage - and you can set limits on how much memory the indexer process uses too.
Scalability is where my knowledge is more sketchy - but it's easy enough to copy index files to multiple machines and run several searchd daemons. The general impression I get from others though is that it's pretty damn good under high load, so scaling it out across multiple machines isn't something that needs to be dealt with.
There's no support for 'did-you-mean', etc - although these can be done with other tools easily enough. Sphinx does stem words though using dictionaries, so 'driving' and 'drive' (for example) would be considered the same in searches.
Sphinx doesn't allow partial index updates for field data though. The common approach to this is to maintain a delta index with all the recent changes, and re-index this after every change (and those new results appear within a second or two). Because of the small amount of data, this can take a matter of seconds. You will still need to re-index the main dataset regularly though (although how regularly depends on the volatility of your data - every day? every hour?). The fast indexing speeds keep this all pretty painless though.
I've no idea how applicable to your situation this is, but Evan Weaver compared a few of the common Rails search options (Sphinx, Ferret (a port of Lucene for Ruby) and Solr), running some benchmarks. Could be useful, I guess.
I've not plumbed the depths of MySQL's full-text search, but I know it doesn't compete speed-wise nor feature-wise with Sphinx, Lucene or Solr.
| MySQL | 737,275 | 329 |
How do I use the json_encode() function with MySQL query results? Do I need to iterate through the rows or can I just apply it to the entire results object?
| $sth = mysqli_query($conn, "SELECT ...");
$rows = array();
while($r = mysqli_fetch_assoc($sth)) {
$rows[] = $r;
}
print json_encode($rows);
The function json_encode needs PHP >= 5.2 and the php-json package - as mentioned here
Modern PHP versions support mysqli_fetch_all() function that will get your array in one go
$result = mysqli_query($conn, "SELECT ...");
$rows = mysqli_fetch_all($result); // list arrays with values only in rows
// or
$rows = mysqli_fetch_all($result, MYSQLI_ASSOC); // assoc arrays in rows
print json_encode($rows);
| MySQL | 383,631 | 329 |
In MySQL I am trying to copy a row with an autoincrement column ID=1 and insert the data into same table as a new row with column ID=2.
How can I do this in a single query?
| Use INSERT ... SELECT:
insert into your_table (c1, c2, ...)
select c1, c2, ...
from your_table
where id = 1
where c1, c2, ... are all the columns except id. If you want to explicitly insert with an id of 2 then include that in your INSERT column list and your SELECT:
insert into your_table (id, c1, c2, ...)
select 2, c1, c2, ...
from your_table
where id = 1
You'll have to take care of a possible duplicate id of 2 in the second case of course.
| MySQL | 9,156,340 | 328 |
I'm working on getting my database to talk to my Java programs.
Can someone give me a quick and dirty sample program using the JDBC?
I'm getting a rather stupendous error:
Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2260)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:787)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:357)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:207)
at SqlTest.main(SqlTest.java:22)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1122)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:344)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2181)
... 12 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:432)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:218)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:256)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:293)
... 13 more
Contents of the test file:
import com.mysql.jdbc.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
public class SqlTest {
public static void main(String [] args) throws Exception {
// Class.forName( "com.mysql.jdbc.Driver" ); // do this in init
// // edit the jdbc url
Connection conn = DriverManager.getConnection(
"jdbc:mysql://localhost:3306/projects?user=user1&password=123");
// Statement st = conn.createStatement();
// ResultSet rs = st.executeQuery( "select * from table" );
System.out.println("Connected?");
}
}
| So, you have a
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
java.net.ConnectException: Connection refused
I'm quoting from this answer which also contains a step-by-step MySQL+JDBC tutorial:
If you get a SQLException: Connection refused or Connection timed out or a MySQL specific CommunicationsException:
Communications link failure, then it means that the DB isn't reachable at all. This can have one or more of the following causes:
1. IP address or hostname in JDBC URL is wrong.
2. Hostname in JDBC URL is not recognized by local DNS server.
3. Port number is missing or wrong in JDBC URL.
4. DB server is down.
5. DB server doesn't accept TCP/IP connections.
6. DB server has run out of connections.
7. Something in between Java and DB is blocking connections, e.g. a firewall or proxy.
To solve the one or the other, follow this advice (each item corresponds to the cause with the same number):
1. Verify and test them with ping.
2. Refresh DNS or use IP address in JDBC URL instead.
3. Verify it based on my.cnf of MySQL DB.
4. Start the DB.
5. Verify if mysqld is started without the --skip-networking option.
6. Restart the DB and fix your code accordingly so that it closes connections in finally.
7. Disable firewall and/or configure firewall/proxy to allow/forward the port.
See also:
How should I connect to JDBC database / datasource in a servlet based application?
Is it safe to use a static java.sql.Connection instance in a multithreaded system?
| MySQL | 2,983,248 | 326 |
I would like to know the command to perform a mysqldump of a database without the prompt for the password.
REASON:
I would like to run a cron job, which takes a mysqldump of the database once everyday. Therefore, I won't be able to insert the password when prompted.
How could I solve this?
Since you are using Ubuntu, all you need to do is add a file in your home directory and it will disable the mysqldump password prompting. This is done by creating the file ~/.my.cnf (permissions need to be 600).
Add this to the .my.cnf file
[mysqldump]
user=mysqluser
password=secret
This lets you connect as a MySQL user who requires a password without having to actually enter the password. You don't even need the -p or --password.
Very handy for scripting mysql & mysqldump commands.
The steps to achieve this can be found in this link.
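With that file in place, a crontab entry needs no password at all. A hedged sketch (the schedule and paths are placeholders, and note that % must be escaped as \% inside crontab):
0 2 * * * mysqldump database_name > /backups/database_name-$(date +\%F).sql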
Alternatively, you could use the following command:
mysqldump -u [user name] -p[password] [database name] > [dump file]
but be aware that it is inherently insecure, as the entire command (including password) can be viewed by any other user on the system while the dump is running, with a simple ps ax command.
| MySQL | 9,293,042 | 325 |
If I have a table
CREATE TABLE users (
id int(10) unsigned NOT NULL auto_increment,
name varchar(255) NOT NULL,
profession varchar(255) NOT NULL,
employer varchar(255) NOT NULL,
PRIMARY KEY (id)
)
and I want to get all unique values of profession field, what would be faster (or recommended):
SELECT DISTINCT u.profession FROM users u
or
SELECT u.profession FROM users u GROUP BY u.profession
?
| They are essentially equivalent to each other (in fact this is how some databases implement DISTINCT under the hood).
If one of them is faster, it's going to be DISTINCT. This is because, although the two are the same, a query optimizer would have to catch the fact that your GROUP BY is not taking advantage of any group members, just their keys. DISTINCT makes this explicit, so you can get away with a slightly dumber optimizer.
When in doubt, test!
| MySQL | 581,521 | 325 |
I have a simple AJAX call, and the server will return either a JSON string with useful data or an error message string produced by the PHP function mysql_error(). How can I test whether this data is a JSON string or the error message.
It would be nice to use a function called isJSON just like you can use the function instanceof to test if something is an Array.
This is what I want:
if (isJSON(data)){
//do some data stuff
}else{
//report the error
alert(data);
}
| Use JSON.parse
function isJson(str) {
try {
JSON.parse(str);
} catch (e) {
return false;
}
return true;
}
| MySQL | 9,804,777 | 324 |
I'm having a problem where when I try to select the rows that have a NULL for a certain column, it returns an empty set. However, when I look at the table in phpMyAdmin, it says null for most of the rows.
My query looks something like this:
SELECT pid FROM planets WHERE userid = NULL
Empty set every time.
A lot of places said to make sure it's not stored as "NULL" or "null" instead of an actual value, and one said to try looking for just a space (userid = ' ') but none of these have worked. There was a suggestion to not use MyISAM and use innoDB because MyISAM has trouble storing null. I switched the table to innoDB but now I feel like the problem may be that it still isn't actually null because of the way it might convert it. I'd like to do this without having to recreate the table as innoDB or anything else, but if I have to, I can certainly try that.
In SQL, NULL is special: you have to use WHERE field IS NULL, as NULL cannot compare equal to anything, including itself (i.e. NULL = NULL is never true).
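Applied to the query from the question:
SELECT pid FROM planets WHERE userid IS NULL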
See Rule 3 https://en.wikipedia.org/wiki/Codd%27s_12_rules
| MySQL | 3,536,670 | 324 |
I'm not sure how password hashing works (will be implementing it later), but need to create database schema now.
I'm thinking of limiting passwords to 4-20 characters, but as I understand it, the hash string will have a different length after hashing.
So, how to store these passwords in the database?
| Update: Simply using a hash function is not strong enough for storing passwords. You should read the answer from Gilles on this thread for a more detailed explanation.
For passwords, use a key-strengthening hash algorithm like Bcrypt or Argon2i. For example, in PHP, use the password_hash() function, which uses Bcrypt by default.
$hash = password_hash("rasmuslerdorf", PASSWORD_DEFAULT);
The result is a 60-character string similar to the following (but the digits will vary, because it generates a unique salt).
$2y$10$.vGA1O9wmRjrwAVXD98HNOgsNpDczlqm3Jq7KnEd1rVAGv3Fykk1a
Use the SQL data type CHAR(60) to store this encoding of a Bcrypt hash. Note this function doesn't encode as a string of hexadecimal digits, so we can't as easily unhex it to store in binary.
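For completeness, a later login attempt is checked against the stored hash with PHP's built-in password_verify(), which reads the salt and cost back out of the stored string itself (the column and variable names below are placeholders):
$stored = $row['password_hash']; // the CHAR(60) value loaded from the database
if (password_verify($inputPassword, $stored)) {
    // password is correct
}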
Other hash functions still have uses, but not for storing passwords, so I'll keep the original answer below, written in 2008.
It depends on the hashing algorithm you use. Hashing always produces a result of the same length, regardless of the input. It is typical to represent the binary hash result in text, as a series of hexadecimal digits. Or you can use the UNHEX() function to reduce a string of hex digits by half.
MD5 generates a 128-bit hash value. You can use CHAR(32) or BINARY(16)
SHA-1 generates a 160-bit hash value. You can use CHAR(40) or BINARY(20)
SHA-224 generates a 224-bit hash value. You can use CHAR(56) or BINARY(28)
SHA-256 generates a 256-bit hash value. You can use CHAR(64) or BINARY(32)
SHA-384 generates a 384-bit hash value. You can use CHAR(96) or BINARY(48)
SHA-512 generates a 512-bit hash value. You can use CHAR(128) or BINARY(64)
BCrypt generates an implementation-dependent 448-bit hash value. You might need CHAR(56), CHAR(60), CHAR(76), BINARY(56) or BINARY(60)
As of 2015, NIST recommends using SHA-256 or higher for any applications of hash functions requiring interoperability. But NIST does not recommend using these simple hash functions for storing passwords securely.
Lesser hashing algorithms have their uses (like internal to an application, not for interchange), but they are known to be crackable.
| MySQL | 247,304 | 324 |
Is it possible to do a select statement that takes only NOT NULL values?
Right now I am using this:
SELECT * FROM table
And then I have to filter out the null values with a php loop.
Is there a way to do:
SELECT * (that are NOT NULL) FROM table
?
Right now when I select * I get val1,val2,val3,null,val4,val5,null,null etc.... but I just want to get the values that are not null in my result. Is this possible without filtering with a loop?
| You should use IS NOT NULL. (The comparison operators = and <> both give UNKNOWN with NULL on either side of the expression.)
SELECT *
FROM table
WHERE YourColumn IS NOT NULL;
Just for completeness I'll mention that in MySQL you can also negate the null safe equality operator but this is not standard SQL.
SELECT *
FROM table
WHERE NOT (YourColumn <=> NULL);
Edited to reflect comments. It sounds like your table may not be in first normal form in which case changing the structure may make your task easier. A couple of other ways of doing it though...
SELECT val1 AS val
FROM your_table
WHERE val1 IS NOT NULL
UNION ALL
SELECT val2
FROM your_table
WHERE val2 IS NOT NULL
/*And so on for all your columns*/
The disadvantage of the above is that it scans the table multiple times once for each column. That may possibly be avoided by the below but I haven't tested this in MySQL.
SELECT CASE idx
WHEN 1 THEN val1
WHEN 2 THEN val2
END AS val
FROM your_table
/*CROSS JOIN*/
JOIN (SELECT 1 AS idx
UNION ALL
SELECT 2) t
HAVING val IS NOT NULL /*Can reference alias in Having in MySQL*/
| MySQL | 5,285,448 | 323 |
I have created a table and accidentally put varchar length as 300 instead of 65353. How can I fix that?
An example would be appreciated.
| Have you tried this?
ALTER TABLE <table_name> MODIFY <col_name> VARCHAR(65353);
This will change the col_name's type to VARCHAR(65353)
| MySQL | 1,279,568 | 322 |
Does MySQL index foreign key columns automatically?
| Yes, but only on innodb. Innodb is currently the only shipped table format that has foreign keys implemented.
| MySQL | 304,317 | 322 |
I'm going to run SHA256 on a password + salt, but I don't know how long to make my VARCHAR when setting up the MySQL database. What is a good length?
| A sha256 is 256 bits long -- as its name indicates.
Since sha256 returns a hexadecimal representation, 4 bits are enough to encode each character (instead of 8, like for ASCII), so 256 bits would represent 64 hex characters, therefore you need a varchar(64), or even a char(64), as the length is always the same, not varying at all.
And the demo :
$hash = hash('sha256', 'hello, world!');
var_dump($hash);
Will give you :
$ php temp.php
string(64) "68e656b251e67e8358bef8483ab0d51c6619f3e7a1a9f0e75838d41ff368f728"
i.e. a string with 64 characters.
| MySQL | 2,240,973 | 321 |
How do I enable the MySQL feature that logs each SQL query statement received from clients, along with the time at which the statement was submitted?
Can I do that in phpmyadmin or NaviCat?
How do I analyse the log?
First, remember that this log file can grow very large on a busy server.
For mysql < 5.1.29:
To enable the query log, put this in /etc/my.cnf in the [mysqld] section
log = /path/to/query.log #works for mysql < 5.1.29
Also, to enable it from MySQL console
SET general_log = 1;
See http://dev.mysql.com/doc/refman/5.1/en/query-log.html
For mysql 5.1.29+
With mysql 5.1.29+ , the log option is deprecated. To specify the logfile and enable logging, use this in my.cnf in the [mysqld] section:
general_log_file = /path/to/query.log
general_log = 1
Alternately, to turn on logging from MySQL console (must also specify log file location somehow, or find the default location):
SET global general_log = 1;
Also note that there are additional options to log only slow queries, or those which do not use indexes.
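For example, a minimal sketch of those options in my.cnf (the path and threshold are placeholders; for MySQL 5.1+):
[mysqld]
slow_query_log = 1
slow_query_log_file = /path/to/slow-query.log
long_query_time = 2 # log queries taking longer than 2 seconds
log_queries_not_using_indexes = 1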
| MySQL | 6,479,107 | 319 |
I know I can issue an alter table individually to change the table storage from MyISAM to InnoDB.
I am wondering if there is a way to quickly change all of them to InnoDB?
| Run this SQL statement (in the MySQL client, phpMyAdmin, or wherever) to retrieve all the MyISAM tables in your database.
Replace value of the name_of_your_db variable with your database name.
SET @DATABASE_NAME = 'name_of_your_db';
SELECT CONCAT('ALTER TABLE `', table_name, '` ENGINE=InnoDB;') AS sql_statements
FROM information_schema.tables AS tb
WHERE table_schema = @DATABASE_NAME
AND `ENGINE` = 'MyISAM'
AND `TABLE_TYPE` = 'BASE TABLE'
ORDER BY table_name DESC;
Then, copy the output and run as a new SQL query.
| MySQL | 3,856,435 | 317 |
In MySQL, I have a table, and I want to set the auto_increment value to 5 instead of 1. Is this possible and what query statement does this?
| You can use ALTER TABLE to change the auto_increment initial value:
ALTER TABLE tbl AUTO_INCREMENT = 5;
See the MySQL reference for more details.
| MySQL | 970,597 | 317 |
I am getting the following error when I try to connect to mysql:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
Is there a solution for this error? What might be the reason behind it?
| Are you connecting to "localhost" or "127.0.0.1" ? I noticed that when you connect to "localhost" the socket connector is used, but when you connect to "127.0.0.1" the TCP/IP connector is used. You could try using "127.0.0.1" if the socket connector is not enabled/working.
| MySQL | 4,448,467 | 315 |
How do I write a script to install MySQL server on Ubuntu?
sudo apt-get install mysql will install, but it will also ask for a password to be entered in the console.
How do I do this in a non-interactive way? That is, write a script that can provide the password?
#!/bin/bash
sudo apt-get install mysql # To install MySQL server
# How to write script for assigning password to MySQL root user
# End
| sudo debconf-set-selections <<< 'mysql-server mysql-server/root_password password your_password'
sudo debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password your_password'
sudo apt-get -y install mysql-server
For specific versions, such as mysql-server-5.6, you'll need to specify the version in like this:
sudo debconf-set-selections <<< 'mysql-server-5.6 mysql-server/root_password password your_password'
sudo debconf-set-selections <<< 'mysql-server-5.6 mysql-server/root_password_again password your_password'
sudo apt-get -y install mysql-server-5.6
For mysql-community-server, the keys are slightly different:
sudo debconf-set-selections <<< 'mysql-community-server mysql-community-server/root-pass password your_password'
sudo debconf-set-selections <<< 'mysql-community-server mysql-community-server/re-root-pass password your_password'
sudo apt-get -y install mysql-community-server
Replace your_password with the desired root password. (it seems your_password can also be left blank for a blank root password.)
If you are writing a script, your shell may not be bash but dash/ash or some other basic Unix shell. These shells, unlike zsh, ksh93 and bash, don't support here-strings (<<<). So you should use:
echo ... | sudo debconf-set-selections
Or a more readable print of multiline strings:
cat << EOF | sudo debconf-set-selections
mysql-server mysql-server/root_password password your_password
mysql-server mysql-server/root_password_again password your_password
EOF
You can check with debconf-get-selections command:
$ sudo debconf-get-selections | grep ^mysql
mysql-server mysql-server/root_password_again password your_password
mysql-server mysql-server/root_password password your_password
| MySQL | 7,739,645 | 314 |
What does "unsigned" mean in MySQL and when should I use it?
| MySQL says:
All integer types can have an optional
(nonstandard) attribute UNSIGNED.
Unsigned type can be used to permit
only nonnegative numbers in a column
or when you need a larger upper
numeric range for the column. For
example, if an INT column is UNSIGNED,
the size of the column's range is the
same but its endpoints shift from
-2147483648 and 2147483647 up to 0 and 4294967295.
When do I use it ?
Ask yourself this question: Will this field ever contain a negative value?
If the answer is no, then you want an UNSIGNED data type.
A common mistake is to use a primary key that is an auto-increment INT starting at zero, yet the type is SIGNED; in that case you'll never touch any of the negative numbers and you are cutting the range of possible ids in half.
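For example, a typical auto-increment key declared the recommended way (the table name is a placeholder):
CREATE TABLE users (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (id)
);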
| MySQL | 3,895,692 | 314 |
How do I sort a MySQL table by two columns?
What I want are articles sorted by highest ratings first, then most recent date. As an example, this would be a sample output (left # is the rating, then the article title, then the article date)
+================+=============================+==============+
| article_rating | article | article_time |
+================+=============================+==============+
| 50 | This article rocks | Feb 4, 2009 |
+----------------+-----------------------------+--------------+
| 35 | This article is pretty good | Feb 1, 2009 |
+----------------+-----------------------------+--------------+
| 5 | This Article isn't so hot | Jan 25, 2009 |
+================+=============================+==============+
The relevant SQL I'm using is:
ORDER BY article_rating, article_time DESC
I can sort by one or the other, but not both.
| Default sorting is ascending, you need to add the keyword DESC to both your orders:
ORDER BY article_rating DESC, article_time DESC
| MySQL | 514,943 | 314 |
I want to create a new user in MySQL with the syntax:
create user 'demo'@'localhost' identified by 'password';
But it returns an error:
Your password does not satisfy the current policy requirements.
I have tried many passwords but they don't work. How can I fix this?
| Because of your password. You can see password validate configuration metrics using the following query in MySQL client:
SHOW VARIABLES LIKE 'validate_password%';
The output should be something like that :
+--------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------+-------+
| validate_password.check_user_name | ON |
| validate_password.dictionary_file | |
| validate_password.length | 6 |
| validate_password.mixed_case_count | 1 |
| validate_password.number_count | 1 |
| validate_password.policy | LOW |
| validate_password.special_char_count | 1 |
+--------------------------------------+-------+
then you can set the password policy level lower, for example:
SET GLOBAL validate_password.length = 6;
SET GLOBAL validate_password.number_count = 0;
Check the MySQL Documentation.
| MySQL | 43,094,726 | 313 |
I really haven't found a normal example of a PHP file where MySQL transactions are used. Can you show me a simple example of that?
And one more question. I've already done a lot of programming and didn't use transactions. Can I put a PHP function or something in header.php that if one mysql_query fails, then the others fail too?
I think I have figured it out, is it right?:
mysql_query("SET AUTOCOMMIT=0");
mysql_query("START TRANSACTION");
$a1 = mysql_query("INSERT INTO rarara (l_id) VALUES('1')");
$a2 = mysql_query("INSERT INTO rarara (l_id) VALUES('2')");
if ($a1 and $a2) {
mysql_query("COMMIT");
} else {
mysql_query("ROLLBACK");
}
| The idea I generally use when working with transactions looks like this (semi-pseudo-code):
try {
// First of all, let's begin a transaction
$db->beginTransaction();
// A set of queries; if one fails, an exception should be thrown
$db->query('first query');
$db->query('second query');
$db->query('third query');
// If we arrive here, it means that no exception was thrown
// i.e. no query has failed, and we can commit the transaction
$db->commit();
} catch (\Throwable $e) {
// An exception has been thrown
// We must rollback the transaction
$db->rollback();
throw $e; // but the error must be handled anyway
}
Note that, with this idea, if a query fails, an Exception must be thrown:
PDO can do that, depending on how you configure it
See PDO::setAttribute
and PDO::ATTR_ERRMODE and PDO::ERRMODE_EXCEPTION
else, with some other API, you might have to test the result of the function used to execute a query, and throw an exception yourself.
Unfortunately, there is no magic involved. You cannot just put an instruction somewhere and have transactions done automatically: you still have to specific which group of queries must be executed in a transaction.
For example, quite often you'll have a couple of queries before the transaction (before the begin) and another couple of queries after the transaction (after either commit or rollback) and you'll want those queries executed no matter what happened (or not) in the transaction.
| MySQL | 2,708,237 | 313 |
I have a MySQL database configured with the default collation utf8mb4_general_ci. When I try to insert a row containing an emoji character in the text using the following query
insert into tablename
(column1,column2,column3,column4,column5,column6,column7)
values
('273','3','Hdhdhdh😜😀😊😃hzhzhzzhjzj 我爱你 ❌',49,1,'2016-09-13 08:02:29','2016-09-13 08:02:29');
MySQL is raising the following error
1366 Incorrect string value: '\xF0\x9F\x98\x83\xF0\x9F...' for column
'comment' at row 1
| 1) Database: Change Database default collation as utf8mb4.
2) Table: Change table collation as CHARACTER SET utf8mb4 COLLATE utf8mb4_bin.
Query:
ALTER TABLE Tablename CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_bin
3) Code:
INSERT INTO tablename (column1, column2, column3, column4, column5, column6, column7)
VALUES ('273', '3', 'Hdhdhdh😜😀😊😃hzhzhzzhjzj 我爱你 ❌', 49, 1, '2016-09-13 08:02:29', '2016-09-13 08:02:29')
4) Set utf8mb4 in database connection:
$database_connection = new mysqli($server, $user, $password, $database_name);
$database_connection->set_charset('utf8mb4');
| MySQL | 39,463,134 | 312 |
I connect to mysql from my Linux shell. Every now and then I run a SELECT query that is too big. It prints and prints and I already know this is not what I meant. I would like to stop the query.
Hitting Ctrl+C (a couple of times) kills mysql completely and takes me back to shell, so I have to reconnect.
Is it possible to stop a query without killing mysql itself?
| mysql> show processlist;
+----+------+-----------+-----+---------+------+---------------------+------------------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+------+-----------+-----+---------+------+---------------------+------------------------------+----------+
| 14 | usr1 | localhost | db1 | Query | 0 | starting | show processlist | 0.000 |
| 16 | usr1 | localhost | db1 | Query | 94 | Creating sort index | SELECT `tbl1`.* FROM `tbl1` | 0.000 |
+----+------+-----------+-----+---------+------+---------------------+------------------------------+----------+
2 rows in set (0.000 sec)
mysql> kill 16;
Query OK, 0 rows affected (0.004 sec)
mysql> show processlist;
+----+------+-----------+-----+---------+------+----------+------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+------+-----------+-----+---------+------+----------+------------------+----------+
| 14 | usr1 | localhost | db1 | Query | 0 | starting | show processlist | 0.000 |
+----+------+-----------+-----+---------+------+----------+------------------+----------+
1 row in set (0.000 sec)
| MySQL | 3,787,651 | 312 |
Does anyone know if there is a function to obtain the server's time zone setting in MySQL?
UPDATE
This doesn't output any valid info:
mysql> SELECT @@global.time_zone, @@session.time_zone;
+--------------------+---------------------+
| @@global.time_zone | @@session.time_zone |
+--------------------+---------------------+
| SYSTEM | SYSTEM |
+--------------------+---------------------+
If MySQL can't know the exact time_zone being used, that's fine - we can also involve PHP as long as I can get valid info (not like SYSTEM)
| From the manual (section 9.6):
The current values of the global and client-specific time zones can be retrieved like this:
mysql> SELECT @@global.time_zone, @@session.time_zone;
Edit The above returns SYSTEM if MySQL is set to use the system's timezone, which is less than helpful. Since you're using PHP, if the answer from MySQL is SYSTEM, you can then ask the system what timezone it's using via date_default_timezone_get. (Of course, as VolkerK pointed out, PHP may be running on a different server, but as assumptions go, assuming the web server and the DB server it's talking to are set to [if not actually in] the same timezone isn't a huge leap.) But beware that (as with MySQL), you can set the timezone that PHP uses (date_default_timezone_set), which means it may report a different value than the OS is using. If you're in control of the PHP code, you should know whether you're doing that and be okay.
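Putting the two together, a hedged PHP sketch (assuming a PDO connection in $pdo):
$tz = $pdo->query('SELECT @@session.time_zone')->fetchColumn();
if ($tz === 'SYSTEM') {
    $tz = date_default_timezone_get(); // fall back to what PHP thinks the system uses
}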
But the whole question of what timezone the MySQL server is using may be a tangent, because asking the server what timezone it's in tells you absolutely nothing about the data in the database. Read on for details:
Further discussion:
If you're in control of the server, of course you can ensure that the timezone is a known quantity. If you're not in control of the server, you can set the timezone used by your connection like this:
set time_zone = '+00:00';
That sets the timezone to GMT, so that any further operations (like now()) will use GMT.
Note, though, that time and date values are not stored with timezone information in MySQL:
mysql> create table foo (tstamp datetime) Engine=MyISAM;
Query OK, 0 rows affected (0.06 sec)
mysql> insert into foo (tstamp) values (now());
Query OK, 1 row affected (0.00 sec)
mysql> set time_zone = '+01:00';
Query OK, 0 rows affected (0.00 sec)
mysql> select tstamp from foo;
+---------------------+
| tstamp |
+---------------------+
| 2010-05-29 08:31:59 |
+---------------------+
1 row in set (0.00 sec)
mysql> set time_zone = '+02:00';
Query OK, 0 rows affected (0.00 sec)
mysql> select tstamp from foo;
+---------------------+
| tstamp |
+---------------------+
| 2010-05-29 08:31:59 | <== Note, no change!
+---------------------+
1 row in set (0.00 sec)
mysql> select now();
+---------------------+
| now() |
+---------------------+
| 2010-05-29 10:32:32 |
+---------------------+
1 row in set (0.00 sec)
mysql> set time_zone = '+00:00';
Query OK, 0 rows affected (0.00 sec)
mysql> select now();
+---------------------+
| now() |
+---------------------+
| 2010-05-29 08:32:38 | <== Note, it changed!
+---------------------+
1 row in set (0.00 sec)
So knowing the timezone of the server is only important in terms of functions that get the time right now, such as now(), unix_timestamp(), etc.; it doesn't tell you anything about what timezone the dates in the database data are using. You might choose to assume they were written using the server's timezone, but that assumption may well be flawed. To know the timezone of any dates or times stored in the data, you have to ensure that they're stored with timezone information or (as I do) ensure they're always in GMT.
Why is assuming the data was written using the server's timezone flawed? Well, for one thing, the data may have been written using a connection that set a different timezone. The database may have been moved from one server to another, where the servers were in different timezones (I ran into that when I inherited a database that had moved from Texas to California). But even if the data is written on the server, with its current time zone, it's still ambiguous. Last year, in the United States, Daylight Savings Time was turned off at 2:00 a.m. on November 1st. Suppose my server is in California using the Pacific timezone and I have the value 2009-11-01 01:30:00 in the database. When was it? Was that 1:30 a.m. November 1st PDT, or 1:30 a.m. November 1st PST (an hour later)? You have absolutely no way of knowing. Moral: Always store dates/times in GMT (which doesn't do DST) and convert to the desired timezone as/when necessary.
| MySQL | 2,934,258 | 308 |
I want to remove constraints from my table. My query is:
ALTER TABLE `tbl_magazine_issue`
DROP CONSTRAINT `FK_tbl_magazine_issue_mst_users`
But I got an error:
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'constraint FK_tbl_magazine_issue_mst_users' at line 1
| Mysql has a special syntax for dropping foreign key constraints:
ALTER TABLE tbl_magazine_issue
DROP FOREIGN KEY FK_tbl_magazine_issue_mst_users
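If you don't know the constraint name, it appears in the table's DDL, which you can inspect first:
SHOW CREATE TABLE tbl_magazine_issue;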
| MySQL | 14,122,031 | 305 |
I have a column of type "datetime" with values like 2009-10-20 10:00:00
I would like to extract date from datetime and write a query like:
SELECT * FROM
data
WHERE datetime = '2009-10-20'
ORDER BY datetime DESC
Is the following the best way to do it?
SELECT * FROM
data
WHERE datetime BETWEEN('2009-10-20 00:00:00' AND '2009-10-20 23:59:59')
ORDER BY datetime DESC
This however returns an empty resultset. Any suggestions?
| You can use MySQL's DATE() function:
WHERE DATE(datetime) = '2009-10-20'
You could also try this:
WHERE datetime LIKE '2009-10-20%'
See this answer for info on the performance implications of using LIKE.
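Note that wrapping the column in DATE() prevents MySQL from using an index on datetime. If performance matters, a half-open range does the same job and stays index-friendly:
SELECT * FROM data
WHERE datetime >= '2009-10-20' AND datetime < '2009-10-21'
ORDER BY datetime DESC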
| MySQL | 1,754,411 | 305 |
I tried to import a large sql file through phpMyAdmin...But it kept showing error
'MySql server has gone away'
What to do?
| As stated here:
Two most common reasons (and fixes) for the MySQL server has gone away
(error 2006) are:
Server timed out and closed the connection. How to fix:
check that wait_timeout variable in your mysqld’s my.cnf configuration file is large enough. On Debian: sudo nano
/etc/mysql/my.cnf, set wait_timeout = 600 seconds (you can
tweak/decrease this value when error 2006 is gone), then sudo
/etc/init.d/mysql restart. I didn't check, but the default value for
wait_timeout might be around 28800 seconds (8 hours).
Server dropped an incorrect or too large packet. If mysqld gets a packet that is too large or incorrect, it assumes that something has
gone wrong with the client and closes the connection. You can increase
the maximal packet size limit by increasing the value of
max_allowed_packet in my.cnf file. On Debian: sudo nano
/etc/mysql/my.cnf, set max_allowed_packet = 64M (you can
tweak/decrease this value when error 2006 is gone), then sudo
/etc/init.d/mysql restart.
Edit:
Notice that MySQL option files do not come with their settings already listed as comments (like php.ini does, for instance). So you must type any change/tweak into my.cnf or my.ini yourself and place the file in the mysql/data directory (or in any of the other valid paths), under the proper group of options such as [client], [mysqld], etc. For example:
[mysqld]
wait_timeout = 600
max_allowed_packet = 64M
Then restart the server. To get their values, type in the mysql client:
> select @@wait_timeout;
> select @@max_allowed_packet;
| MySQL | 12,425,287 | 304 |
table1 is the parent table with a column ID and table2 has a column IDFromTable1.
Why when I put a FK on IDFromTable1 to ID in table1 do I get Foreign key constraint is incorrectly formed error?
(I would like to delete the table2 record if the table1 record gets deleted.)
ALTER TABLE `table2`
ADD CONSTRAINT `FK1`
FOREIGN KEY (`IDFromTable1`) REFERENCES `table1` (`ID`)
ON UPDATE CASCADE
ON DELETE CASCADE;
Both tables' engines are InnoDB. Both colmnns are type char. ID is the primary key in table1.
| I ran into this same cryptic error. My problem was that the foreign key column and the referencing column were not of the same type or length.
The foreign key column was SMALLINT(5) UNSIGNED
The referenced column was INT(10) UNSIGNED
Once I made them both the same exact type, the foreign key creation worked perfectly.
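A quick way to compare the two definitions, using the table and column names from the question:
SHOW COLUMNS FROM table1 LIKE 'ID';
SHOW COLUMNS FROM table2 LIKE 'IDFromTable1';
The Type values shown must match exactly (including UNSIGNED) for the foreign key to be created.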
| MySQL | 8,434,518 | 304 |
I was wondering what the difference between BigInt, MediumInt, and Int are... it would seem obvious that they would allow for larger numbers; however, I can make an Int(20) or a BigInt(20) and that would make seem that it is not necessarily about size.
Some insight would be awesome, just kind of curious. I have been using MySQL for a while and trying to apply business needs when choosing types, but I never understood this aspect.
| See http://dev.mysql.com/doc/refman/8.0/en/numeric-types.html
INT is a four-byte signed integer.
BIGINT is an eight-byte signed integer.
They each accept no more and no fewer values than can be stored in their respective number of bytes. That means 2^32 values in an INT and 2^64 values in a BIGINT.
The 20 in INT(20) and BIGINT(20) means almost nothing. It's a hint for display width. It has nothing to do with storage, nor the range of values that column will accept.
Practically, it affects only the ZEROFILL option:
CREATE TABLE foo ( bar INT(20) ZEROFILL );
INSERT INTO foo (bar) VALUES (1234);
SELECT bar from foo;
+----------------------+
| bar |
+----------------------+
| 00000000000000001234 |
+----------------------+
It's a common source of confusion for MySQL users to see INT(20) and assume it's a size limit, something analogous to CHAR(20). This is not the case.
| MySQL | 3,135,804 | 304 |
I'm using a MySQL database.
In which situations should I create a unique key or a primary key?
| Primary Key:
There can only be one primary key constraint in a table
In some DBMS it cannot be NULL - e.g. MySQL adds NOT NULL
Primary Key is a unique key identifier of the record
Unique Key:
Can be more than one unique key in one table
Unique key can have NULL values
It can be a candidate key
Multiple rows can have NULL in a unique key column, because NULL values are not considered equal to each other
| MySQL | 9,565,996 | 303 |
What MySQL query will do a text search and replace in one particular field in a table?
I.e. search for foo and replace with bar so a record with a field with the value hello foo becomes hello bar.
| Change table_name and field to match your table name and field in question:
UPDATE table_name SET field = REPLACE(field, 'foo', 'bar') WHERE INSTR(field, 'foo') > 0;
REPLACE (string functions)
INSTR (string functions)
| MySQL | 125,230 | 303 |
There are plenty of similar questions to be found on here but I don't think that any answer the question adequately.
I'll continue from the current most popular question and use their example if that's alright.
The task in this instance is to get the latest post for each author in the database.
The example query produces unusable results, as it's not always the latest post that is returned.
SELECT wp_posts.* FROM wp_posts
WHERE wp_posts.post_status='publish'
AND wp_posts.post_type='post'
GROUP BY wp_posts.post_author
ORDER BY wp_posts.post_date DESC
The current accepted answer is
SELECT
wp_posts.*
FROM wp_posts
WHERE
wp_posts.post_status='publish'
AND wp_posts.post_type='post'
GROUP BY wp_posts.post_author
HAVING wp_posts.post_date = MAX(wp_posts.post_date) <- ONLY THE LAST POST FOR EACH AUTHOR
ORDER BY wp_posts.post_date DESC
Unfortunately this answer is plain and simple wrong and in many cases produces less stable results than the original query.
My best solution is to use a subquery of the form
SELECT wp_posts.* FROM
(
SELECT *
FROM wp_posts
ORDER BY wp_posts.post_date DESC
) AS wp_posts
WHERE wp_posts.post_status='publish'
AND wp_posts.post_type='post'
GROUP BY wp_posts.post_author
My question is a simple one then:
Is there any way to order rows before grouping without resorting to a subquery?
Edit: This question was a continuation from another question and the specifics of my situation are slightly different. You can (and should) assume that there is also a wp_posts.id that is a unique identifier for that particular post.
| Using an ORDER BY in a subquery is not the best solution to this problem.
The best solution to get the max(post_date) by author is to use a subquery to return the max date and then join that to your table on both the post_author and the max date.
The solution should be:
SELECT p1.*
FROM wp_posts p1
INNER JOIN
(
SELECT max(post_date) MaxPostDate, post_author
FROM wp_posts
WHERE post_status='publish'
AND post_type='post'
GROUP BY post_author
) p2
ON p1.post_author = p2.post_author
AND p1.post_date = p2.MaxPostDate
WHERE p1.post_status='publish'
AND p1.post_type='post'
order by p1.post_date desc
If you have the following sample data:
CREATE TABLE wp_posts
(`id` int, `title` varchar(6), `post_date` datetime, `post_author` varchar(3))
;
INSERT INTO wp_posts
(`id`, `title`, `post_date`, `post_author`)
VALUES
(1, 'Title1', '2013-01-01 00:00:00', 'Jim'),
(2, 'Title2', '2013-02-01 00:00:00', 'Jim')
;
The subquery is going to return the max date and author of:
MaxPostDate | Author
2/1/2013 | Jim
Then since you are joining that back to the table, on both values you will return the full details of that post.
See SQL Fiddle with Demo.
To expand on my comments about using a subquery to accurately return this data:
MySQL does not force you to GROUP BY every column that you include in the SELECT list. As a result, if you only GROUP BY one column but return 10 columns in total, there is no guarantee that the other column values belong to the post_author that is returned. If a column is not in the GROUP BY, MySQL chooses which value should be returned.
Using the subquery with the aggregate function will guarantee that the correct author and post are returned every time.
As a side note, while MySQL allows you to use an ORDER BY in a subquery and to apply a GROUP BY that does not cover every column in the SELECT list, this behavior is not allowed in other databases, including SQL Server.
| MySQL | 14,770,671 | 301 |
Usually I use manual find to replace text in a MySQL database using phpMyAdmin. I'm tired of it now, how can I run a query to find and replace a text with new text in the entire table in phpMyAdmin?
Example: find keyword domain.example, replace with www.domain.example.
| For a single table update
UPDATE `table_name`
SET `field_name` = REPLACE(`field_name`, 'unwanted_text', 'wanted_text')
From multiple tables-
If you want to edit from all tables, best way is to take the dump and then find/replace and upload it back.
| MySQL | 11,839,060 | 299 |
MySQL has an OPTIMIZE TABLE command which can be used to reclaim unused space in a MySQL install. Is there a way (built-in command or common stored procedure) to run this optimization for every table in the database and/or server install, or is this something you'd have to script up yourself?
| You can use mysqlcheck to do this at the command line.
One database:
mysqlcheck -o <db_schema_name>
All databases:
mysqlcheck -o --all-databases
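Depending on your setup you may need to pass credentials as well, for example:
mysqlcheck -o --all-databases -u root -p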
| MySQL | 5,474,662 | 299 |
I am looking for the syntax to add a column to a MySQL database with a default value of 0
Reference
| Try this:
ALTER TABLE table1 ADD COLUMN foo INT DEFAULT 0;
From the documentation that you linked to:
ALTER [ONLINE | OFFLINE] [IGNORE] TABLE tbl_name
alter_specification [, alter_specification] ...
alter_specification:
...
ADD [COLUMN] (col_name column_definition,...)
...
To find the syntax for column_definition search a bit further down the page:
column_definition clauses use the same syntax for ADD and CHANGE as for CREATE TABLE. See Section 12.1.17, “CREATE TABLE Syntax”.
And from the linked page:
column_definition:
data_type [NOT NULL | NULL] [DEFAULT default_value]
[AUTO_INCREMENT] [UNIQUE [KEY] | [PRIMARY] KEY]
[COMMENT 'string']
[COLUMN_FORMAT {FIXED|DYNAMIC|DEFAULT}]
[STORAGE {DISK|MEMORY|DEFAULT}]
[reference_definition]
Notice the word DEFAULT there.
| MySQL | 3,569,347 | 299 |
I'm trying to figure out what collation I should be using for various types of data. 100% of the content I will be storing is user-submitted.
My understanding is that I should be using UTF-8 General CI (Case-Insensitive) instead of UTF-8 Binary. However, I can't find a clear a distinction between UTF-8 General CI and UTF-8 Unicode CI.
Should I be storing user-submitted content in UTF-8 General or UTF-8 Unicode CI columns?
What type of data would UTF-8 Binary be applicable to?
| In general, utf8_general_ci is faster than utf8_unicode_ci, but less correct.
Here is the difference:
For any Unicode character set, operations performed using the _general_ci collation are faster than those for the _unicode_ci collation. For example, comparisons for the utf8_general_ci collation are faster, but slightly less correct, than comparisons for utf8_unicode_ci. The reason for this is that utf8_unicode_ci supports mappings such as expansions; that is, when one character compares as equal to combinations of other characters. For example, in German and some other languages “ß” is equal to “ss”. utf8_unicode_ci also supports contractions and ignorable characters. utf8_general_ci is a legacy collation that does not support expansions, contractions, or ignorable characters. It can make only one-to-one comparisons between characters.
Quoted from:
http://dev.mysql.com/doc/refman/5.0/en/charset-unicode-sets.html
For more detailed explanation, please read the following post from MySQL forums:
http://forums.mysql.com/read.php?103,187048,188748
As for utf8_bin:
Both utf8_general_ci and utf8_unicode_ci perform case-insensitive comparison. In constrast, utf8_bin is case-sensitive (among other differences), because it compares the binary values of the characters.
| MySQL | 2,344,118 | 297 |
I want to convert a timestamp in MySQL to a date.
I would like to format the user.registration field into the text file as a yyyy-mm-dd.
Here is my SQL:
$sql = requestSQL("SELECT user.email,
info.name,
FROM_UNIXTIME(user.registration),
info.news
FROM user, info
WHERE user.id = info.id ", "export members");
I also tried the date conversion with:
DATE_FORMAT(user.registration, '%d/%m/%Y')
DATE(user.registration)
I echo the result before to write the text file and I get :
email1;name1;DATE_FORMAT(user.registration, '%d/%m/%Y');news1
email2;name2;news2
How can I convert that field to date?
| DATE_FORMAT(FROM_UNIXTIME(`user.registration`), '%e %b %Y') AS 'date_formatted'
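Since the question asked for yyyy-mm-dd, the full query could look like this sketch based on the question's tables:
SELECT user.email,
       info.name,
       DATE_FORMAT(FROM_UNIXTIME(user.registration), '%Y-%m-%d') AS registration,
       info.news
FROM user, info
WHERE user.id = info.id;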
| MySQL | 9,251,561 | 296 |
How do I subtract 30 days from the current datetime in mysql?
SELECT * FROM table
WHERE exec_datetime BETWEEN DATEDIFF(NOW() - 30 days) AND NOW();
| SELECT * FROM table
WHERE exec_datetime BETWEEN DATE_SUB(NOW(), INTERVAL 30 DAY) AND NOW();
http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-add
| MySQL | 10,763,031 | 294 |
The database is latin1_general_ci now and I want to change collation to utf8mb4_general_ci.
Is there any setting in PhpMyAdmin to change collation of database, table, column? Rather than changing one by one?
| Changing for database:
ALTER DATABASE <database_name> CHARACTER SET utf8mb4
COLLATE utf8mb4_unicode_ci;
Note that this only sets a new default, which will be used for tables created afterwards; it does not change existing tables.
Changing it per table:
ALTER TABLE <table_name> CONVERT TO CHARACTER SET utf8mb4
COLLATE utf8mb4_unicode_ci;
Good practice is to change it at table level as it'll change it for columns as well. Changing for specific column is for any specific case.
Changing collation for a specific column:
ALTER TABLE <table_name> MODIFY <column_name> VARCHAR(255) CHARACTER SET utf8mb4
COLLATE utf8mb4_unicode_ci;
| MySQL | 1,294,117 | 294 |
Is a MySQL SELECT query case sensitive or case insensitive by default? And if not, what query would I have to send so that I can do something like the following?
SELECT * FROM `table` WHERE `Value` = "iaresavage"
Where in actuality, the real value of Value is IAreSavage.
| They are case insensitive, unless you do a binary comparison.
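For example, to force a case-sensitive match on the query from the question, cast one side with the BINARY operator:
SELECT * FROM `table` WHERE BINARY `Value` = 'iaresavage';
This returns nothing when the stored value is 'IAreSavage', whereas the plain comparison matches it.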
| MySQL | 3,936,967 | 293 |
I have a query that looks like this:
SELECT article FROM table1 ORDER BY publish_date LIMIT 20
How does ORDER BY work? Will it order all records, then get the first 20, or will it get 20 records and order them by the publish_date field?
If it's the last one, you're not guaranteed to really get the most recent 20 articles.
| It will order first, then get the first 20. A database will also process anything in the WHERE clause before ORDER BY.
| MySQL | 4,708,708 | 291 |
My iPhone application connects to my PHP web service to retrieve data from a MySQL database, a request can return up to 500 results.
What is the best way to implement paging and retrieve 20 items at a time?
Let's say I receive the first 20 entries from my database, how can I now request the next 20 entries?
| From the MySQL documentation:
The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants (except when using prepared statements).
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95,18446744073709551615;
With one argument, the value specifies the number of rows to return from the beginning of the result set:
SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows
In other words, LIMIT row_count is equivalent to LIMIT 0, row_count.
| MySQL | 3,799,193 | 291 |
I am trying to find a MySQL query that will find DISTINCT values in a particular field, count the number of occurrences of that value and then order the results by the count.
example db
id name
----- ------
1 Mark
2 Mike
3 Paul
4 Mike
5 Mike
6 John
7 Mark
expected result
name count
----- -----
Mike 3
Mark 2
Paul 1
John 1
| SELECT name,COUNT(*) as count
FROM tablename
GROUP BY name
ORDER BY count DESC;
| MySQL | 1,346,345 | 291 |
In a [member] table, some rows have the same value for the email column.
login_id | email
---------|---------------------
john | [email protected]
peter | [email protected]
johnny | [email protected]
...
Some people used a different login_id but the same email address, no unique constraint was set on this column. Now I need to find these rows and see if they should be removed.
What SQL statement should I use to find these rows? (MySQL 5)
| This query will give you a list of email addresses and how many times they're used, with the most used addresses first.
SELECT email,
count(*) AS c
FROM TABLE
GROUP BY email
HAVING c > 1
ORDER BY c DESC
If you want the full rows:
select * from table where email in (
select email from table
group by email having count(*) > 1
)
| MySQL | 1,786,533 | 290 |
I am working on a moderately complex schema in MySQL Workbench, and the single page of the EER diagram is now full up. Does anyone know how to enlarge it to two or more pages?
| On the Model pull-down there is an option Diagram Properties and Size, which allows the size of the diagram to be changed.
| MySQL | 4,933,000 | 289 |
Is there a MySQL function which can be used to convert a Unix timestamp into a human readable date? I have one field where I save Unix times and now I want to add another field for human readable dates.
| Use FROM_UNIXTIME():
SELECT
FROM_UNIXTIME(timestamp)
FROM
your_table;
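FROM_UNIXTIME() also takes an optional format string if you want something other than the default rendering, for example:
SELECT FROM_UNIXTIME(timestamp, '%Y-%m-%d %H:%i:%s') FROM your_table;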
See also: MySQL documentation on FROM_UNIXTIME().
| MySQL | 6,267,564 | 287 |
In a nutshell
I want to run mysql in a docker container and connect to it from my host. So far, the best I have achieved is:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
More details
I'm using the following Dockerfile:
FROM ubuntu:14.04.3
RUN apt-get update && apt-get install -y mysql-server
# Ensure we won't bind to localhost only
RUN grep -v bind-address /etc/mysql/my.cnf > temp.txt \
&& mv temp.txt /etc/mysql/my.cnf
# It doesn't seem needed since I'll use -p, but it can't hurt
EXPOSE 3306
CMD /etc/init.d/mysql start && tail -F /var/log/mysql.log
In the directory where there is this file, I can succesfully build the image and run it with:
> docker build -t my-image .
> docker run -d -p 12345:3306 my-image
When I attach to the image, it seems to work just fine:
# from the host
> docker exec -it <my_image_name> bash
#inside of the container now
$ mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
[...]
However I don't have that much success from the host:
> mysql -P 12345 -uroot
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Even more details
I've seen that there's a question which looks like mine. However, it isn't the same (and it doesn't have any answers anyway)
I've seen that there are images dedicated to mysql, but I didn't have more success with them
My grep -v may feel weird. Admittedly, there may be a cleaner way to do it. But when I attach to the image, I can observe it actually worked as expected (i.e., removed the bind-address). And I can see in the container /var/log/mysql/error.log:
Server hostname (bind-address): '0.0.0.0'; port: 3306
- '0.0.0.0' resolves to '0.0.0.0';
Server socket created on IP: '0.0.0.0'.
| If your Docker MySQL host is running correctly you can connect to it from local machine, but you should specify host, port and protocol like this:
mysql -h localhost -P 3306 --protocol=tcp -u root
Change 3306 to port number you have forwarded from Docker container (in your case it will be 12345).
Because you are running MySQL inside Docker container, socket is not available and you need to connect through TCP. Setting "--protocol" in the mysql command will change that.
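With the port mapping from the question (12345 on the host), that gives:
mysql -h 127.0.0.1 -P 12345 --protocol=tcp -u root
Using 127.0.0.1 instead of localhost also forces TCP, because the client only tries the Unix socket for the hostname localhost.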
| MySQL | 33,001,750 | 286 |
What is main difference between INSERT INTO table VALUES .. and INSERT INTO table SET?
Example:
INSERT INTO table (a, b, c) VALUES (1,2,3)
INSERT INTO table SET a=1, b=2, c=3
And what about performance of these two?
| As far as I can tell, both syntaxes are equivalent. The first is SQL standard, the second is MySQL's extension.
So they should be exactly equivalent performance wise.
http://dev.mysql.com/doc/refman/5.6/en/insert.html says:
INSERT inserts new rows into an existing table. The INSERT ... VALUES and INSERT ... SET forms of the statement insert rows based on explicitly specified values. The INSERT ... SELECT form inserts rows selected from another table or tables.
| MySQL | 861,722 | 286 |
I understand that this question has been asked before, but most of the time it is asked in relation to a specific database or table. I cannot find an answer on this site that describes the two engines and their differences without respect to someone's specific database.
I want to be able to make more informed decisions in the future with respect to designing a table or database, so am looking for a comprehensive answer on the differences between the two storage engines.
What's the difference between MyISAM and InnoDB, and what should I be looking for when trying to decide between one or the other?
| The main differences between InnoDB and MyISAM ("with respect to designing a table or database" you asked about) are support for "referential integrity" and "transactions".
We choose InnoDB if we need the database to enforce foreign key constraints or support transactions (i.e. changes made by two or more DML operations handled as single unit of work, with all of the changes either applied, or all the changes reverted). These features are not supported by the MyISAM engine.
Those are the two biggest differences. Another big difference is concurrency. With MyISAM, a DML statement will obtain an exclusive lock on the table, and while that lock is held, no other session can perform a SELECT or a DML operation on the table.
Those two specific engines you asked about (InnoDB and MyISAM) have different design goals. MySQL also has other storage engines, with their own design goals.
In choosing between InnoDB and MyISAM, the first step is to determine if we need the features provided by InnoDB. If not, then MyISAM is up for consideration.
A more detailed discussion of differences is rather impractical (in this forum) absent a more detailed discussion of the problem space... how the application will use the database, how many tables, size of the tables, the transaction load, volumes of select, insert, updates, concurrency requirements, replication features, etc.
The logical design of the database should be centered around data analysis and user requirements; the choice to use a relational database would come later, and even later would the choice of MySQL as a relational database management system, and then the selection of a storage engine for each table.
| MySQL | 12,614,541 | 281 |
I have been following a manual to install a software suite on Ubuntu. I have no knowledge of MySQL at all. I have done the following installations on my Ubuntu.
sudo apt-get update
sudo apt-get install mysql-server-5.5
sudo apt-get install mysql-client-5.5
sudo apt-get install mysql-common
sudo apt-get install glade
sudo apt-get install ntp
Then I do
cd ~/Desktop/iPDC-v1.3.1/DBServer-1.1
mysql -uroot -proot <"Db.sql"
I ended up with the following error message.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
How may I fix it and continue?
| Note: For MySQL 5.7+, please see the answer from Lahiru to this question. That contains more current information.
For MySQL < 5.7:
The default root password is blank (i.e., an empty string), not root. So you can just log in as:
mysql -u root
You should obviously change your root password after installation:
mysqladmin -u root password [newpassword]
In most cases you should also set up individual user accounts before working extensively with the database as well.
| MySQL | 21,944,936 | 278 |
Why do you need to place columns you create yourself (for example select 1 as "number") after HAVING and not WHERE in MySQL?
And are there any downsides instead of doing WHERE 1 (writing the whole definition instead of a column name)?
| All other answers on this question didn't hit upon the key point.
Assume we have a table:
CREATE TABLE `table` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`value` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`),
KEY `value` (`value`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
And have 10 rows with both id and value from 1 to 10:
INSERT INTO `table`(`id`, `value`) VALUES (1, 1),(2, 2),(3, 3),(4, 4),(5, 5),(6, 6),(7, 7),(8, 8),(9, 9),(10, 10);
Try the following 2 queries:
SELECT `value` v FROM `table` WHERE `value`>5; -- Get 5 rows
SELECT `value` v FROM `table` HAVING `value`>5; -- Get 5 rows
You will get exactly the same results, you can see the HAVING clause can work without GROUP BY clause.
Here's the difference:
SELECT `value` v FROM `table` WHERE `v`>5;
The above query will raise error: Error #1054 - Unknown column 'v' in 'where clause'
SELECT `value` v FROM `table` HAVING `v`>5; -- Get 5 rows
WHERE clause allows a condition to use any table column, but it cannot use aliases or aggregate functions.
HAVING clause allows a condition to use a selected (!) column, alias or an aggregate function.
This is because WHERE clause filters data before select, but HAVING clause filters resulting data after select.
So putting the conditions in the WHERE clause will be more efficient if you have many, many rows in a table.
Try EXPLAIN to see the key difference:
EXPLAIN SELECT `value` v FROM `table` WHERE `value`>5;
+----+-------------+-------+-------+---------------+-------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+-------+---------+------+------+--------------------------+
| 1 | SIMPLE | table | range | value | value | 4 | NULL | 5 | Using where; Using index |
+----+-------------+-------+-------+---------------+-------+---------+------+------+--------------------------+
EXPLAIN SELECT `value` v FROM `table` having `value`>5;
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
| 1 | SIMPLE | table | index | NULL | value | 4 | NULL | 10 | Using index |
+----+-------------+-------+-------+---------------+-------+---------+------+------+-------------+
You can see either WHERE or HAVING uses index, but the rows are different.
| MySQL | 2,905,292 | 277 |
Do you need to explicitly create an index, or is it implicit when defining the primary key? Is the answer the same for MyISAM and InnoDB?
| The primary key is always indexed. This is the same for MyISAM and InnoDB, and is generally true for all storage engines that at all supports indices.
| MySQL | 1,071,180 | 276 |
I have a WordPress production website.
I've exported the database by the following commands: select database > export > custom > select all tables > select .zip compression > 'Go'
I've downloaded the file which is example.sql.zip but when I upload to my localhost I get this error: phpMyAdmin - Error > Incorrect format parameter
I've tried to export with other formats and I get the same error.
I've tried with other SQL Databases and it exports/ imports just fine.
What could it be? A corrupt database or other?
Thanks
| This issue is not because of a corrupt database but rather the PHP upload size limit. It is suggested to increase the values of the following variables in php.ini:
upload_max_filesize=64M
post_max_size=64M
You may also want to increase max_execution_time to a longer value for larger databases so it does not time out while uploading.
Save your changes to the file and restart your PHP server.
| MySQL | 50,690,076 | 275 |
The query I'm running is as follows, however I'm getting this error:
#1054 - Unknown column 'guaranteed_postcode' in 'IN/ALL/ANY subquery'
SELECT `users`.`first_name`, `users`.`last_name`, `users`.`email`,
SUBSTRING(`locations`.`raw`,-6,4) AS `guaranteed_postcode`
FROM `users` LEFT OUTER JOIN `locations`
ON `users`.`id` = `locations`.`user_id`
WHERE `guaranteed_postcode` NOT IN #this is where the fake col is being used
(
SELECT `postcode` FROM `postcodes` WHERE `region` IN
(
'australia'
)
)
My question is: why am I unable to use a fake column in the where clause of the same DB query?
| You can only use column aliases in GROUP BY, ORDER BY, or HAVING clauses.
Standard SQL doesn't allow you to
refer to a column alias in a WHERE
clause. This restriction is imposed
because when the WHERE code is
executed, the column value may not yet
be determined.
Copied from MySQL documentation
As pointed out in the comments, using HAVING instead may do the work. Make sure to give this question a read too: WHERE vs HAVING.
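Applied to the question's query, that means moving the condition on the alias into a HAVING clause (MySQL accepts HAVING without GROUP BY as an extension; this is a sketch, untested against the asker's schema):
SELECT `users`.`first_name`, `users`.`last_name`, `users`.`email`,
SUBSTRING(`locations`.`raw`,-6,4) AS `guaranteed_postcode`
FROM `users` LEFT OUTER JOIN `locations`
ON `users`.`id` = `locations`.`user_id`
HAVING `guaranteed_postcode` NOT IN
(
SELECT `postcode` FROM `postcodes` WHERE `region` IN ('australia')
)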
| MySQL | 942,571 | 274 |
I knew boolean in mysql as tinyint (1).
Today I see a table with defined an integer like tinyint(2), and also others like int(4), int(6) ...
What does the size means in field of type integer and tinyint ?
| The (m) indicates the column display width; applications such as the MySQL client make use of this when showing the query results.
For example:
| v | a | b | c |
+-----+-----+-----+-----+
| 1 | 1 | 1 | 1 |
| 10 | 10 | 10 | 10 |
| 100 | 100 | 100 | 100 |
Here a, b and c are using TINYINT(1), TINYINT(2) and TINYINT(3) respectively. As you can see, it pads the values on the left side using the display width.
It's important to note that it does not affect the accepted range of values for that particular type, i.e. TINYINT(1) still accepts [-128 .. 127].
| MySQL | 12,839,927 | 272 |
When doing:
DELETE FROM `jobs` WHERE `job_id` =1 LIMIT 1
It errors:
#1451 - Cannot delete or update a parent row: a foreign key constraint fails
(paymesomething.advertisers, CONSTRAINT advertisers_ibfk_1 FOREIGN KEY
(advertiser_id) REFERENCES jobs (advertiser_id))
Here are my tables:
CREATE TABLE IF NOT EXISTS `advertisers` (
`advertiser_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`password` char(32) NOT NULL,
`email` varchar(128) NOT NULL,
`address` varchar(255) NOT NULL,
`phone` varchar(255) NOT NULL,
`fax` varchar(255) NOT NULL,
`session_token` char(30) NOT NULL,
PRIMARY KEY (`advertiser_id`),
UNIQUE KEY `email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=2 ;
INSERT INTO `advertisers` (`advertiser_id`, `name`, `password`, `email`, `address`, `phone`, `fax`, `session_token`) VALUES
(1, 'TEST COMPANY', '', '', '', '', '', '');
CREATE TABLE IF NOT EXISTS `jobs` (
`job_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`advertiser_id` int(11) unsigned NOT NULL,
`name` varchar(255) NOT NULL,
`shortdesc` varchar(255) NOT NULL,
`longdesc` text NOT NULL,
`address` varchar(255) NOT NULL,
`time_added` int(11) NOT NULL,
`active` tinyint(1) NOT NULL,
`moderated` tinyint(1) NOT NULL,
PRIMARY KEY (`job_id`),
KEY `advertiser_id` (`advertiser_id`,`active`,`moderated`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=2 ;
INSERT INTO `jobs` (`job_id`, `advertiser_id`, `name`, `shortdesc`, `longdesc`, `address`, `active`, `moderated`) VALUES
(1, 1, 'TEST', 'TESTTEST', 'TESTTESTES', '', 0, 0);
ALTER TABLE `advertisers`
ADD CONSTRAINT `advertisers_ibfk_1` FOREIGN KEY (`advertiser_id`) REFERENCES `jobs` (`advertiser_id`);
| The simple way would be to disable the foreign key check, make the changes, then re-enable the foreign key check.
SET FOREIGN_KEY_CHECKS=0; -- to disable them
SET FOREIGN_KEY_CHECKS=1; -- to re-enable them
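Applied to the question, you would wrap the delete between the two statements:
SET FOREIGN_KEY_CHECKS=0;
DELETE FROM `jobs` WHERE `job_id` = 1 LIMIT 1;
SET FOREIGN_KEY_CHECKS=1;
Keep in mind this bypasses the constraint rather than satisfying it, so any rows that referenced the deleted job are left as orphans.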
| MySQL | 1,905,470 | 271 |
What's the simplest (and hopefully not too slow) way to calculate the median with MySQL? I've used AVG(x) for finding the mean, but I'm having a hard time finding a simple way of calculating the median. For now, I'm returning all the rows to PHP, doing a sort, and then picking the middle row, but surely there must be some simple way of doing it in a single MySQL query.
Example data:
id | val
--------
1 4
2 7
3 2
4 2
5 9
6 8
7 3
Sorting on val gives 2 2 3 4 7 8 9, so the median should be 4, versus SELECT AVG(val) which == 5.
| In MariaDB / MySQL:
SELECT AVG(dd.val) as median_val
FROM (
SELECT d.val, @rownum:=@rownum+1 as `row_number`, @total_rows:=@rownum
FROM data d, (SELECT @rownum:=0) r
WHERE d.val is NOT NULL
-- put some where clause here
ORDER BY d.val
) as dd
WHERE dd.row_number IN ( FLOOR((@total_rows+1)/2), FLOOR((@total_rows+2)/2) );
Steve Cohen points out that after the first pass, @rownum will contain the total number of rows. This can be used to determine the median, so no second pass or join is needed.
Also, AVG(dd.val) and dd.row_number IN(...) are used to correctly produce a median when there is an even number of records. Reasoning:
SELECT FLOOR((3+1)/2),FLOOR((3+2)/2); -- when total_rows is 3, avg rows 2 and 2
SELECT FLOOR((4+1)/2),FLOOR((4+2)/2); -- when total_rows is 4, avg rows 2 and 3
Finally, MariaDB 10.3.3+ contains a MEDIAN function
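It is implemented as a window function there, so a minimal usage sketch (assuming MariaDB 10.3.3+) looks like:
SELECT MEDIAN(val) OVER () AS median_val FROM data LIMIT 1;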
| MySQL | 1,291,152 | 271 |
I got the following error while trying to alter a column's data type and setting a new default value:
ALTER TABLE foobar_data ALTER COLUMN col VARCHAR(255) NOT NULL SET DEFAULT '{}';
ERROR 1064 (42000): You have an error in your SQL syntax; check the
manual that corresponds to your MySQL server version for the right
syntax to use near 'VARCHAR(255) NOT NULL SET DEFAULT '{}'' at line 1
| ALTER TABLE foobar_data MODIFY COLUMN col VARCHAR(255) NOT NULL DEFAULT '{}';
A second possibility which does the same (thanks to juergen_d):
ALTER TABLE foobar_data CHANGE COLUMN col col VARCHAR(255) NOT NULL DEFAULT '{}';
| MySQL | 11,312,433 | 267 |
So I try to import sql file into rds (1G MEM, 1 CPU). The sql file is like 1.4G
mysql -h xxxx.rds.amazonaws.com -u user -ppass --max-allowed-packet=33554432 db < db.sql
It got stuck at:
ERROR 1227 (42000) at line 374: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
The actual sql content is:
/*!50003 CREATE*/ /*!50017 DEFINER=`another_user`@`1.2.3.4`*/ /*!50003 TRIGGER `change_log_BINS` BEFORE INSERT ON `change_log` FOR EACH ROW
IF (NEW.created_at IS NULL OR NEW.created_at = '00-00-00 00:00:00' OR NEW.created_at = '') THEN
SET NEW.created_at = NOW();
END IF */;;
another_user is not existed in rds, so I do:
GRANT ALL PRIVILEGES ON db.* TO another_user@'localhost';
Still no luck.
| Either remove the DEFINER=.. statement from your sqldump file, or replace the user values with CURRENT_USER.
The MySQL server provided by RDS does not allow a DEFINER syntax for another user (in my experience).
You can use a sed script to remove them from the file:
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i oldfile.sql
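If you'd rather not modify the dump in place, you can also pipe it through sed during the import (assuming the dump file is db.sql as in the question):
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' db.sql | mysql -h xxxx.rds.amazonaws.com -u user -p db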
| MySQL | 44,015,692 | 266 |
Being new to MySQL, I have installed the latest version of the MySQL Workbench (5.2.33). I would like to know how you can create a database with this application. In the Overview tab of the SQL editor there are few "MySQL Schema" displayed, are these schemas the existing databases?
|
Launch MySQL Workbench.
On the left pane of the welcome window, choose a database to connect to under "Open Connection to Start Querying".
The query window will open. On its left pane, there is a section titled "Object Browser", which shows the list of databases. (Side note: The terms "schema" and "database" are synonymous in this program.)
Right-click on one of the existing databases and click "Create Schema...". This will launch a wizard that will help you create a database.
If you'd prefer to do it in SQL, enter this query into the query window:
CREATE SCHEMA Test
Press CTRL + Enter to submit it, and you should see confirmation in the output pane underneath the query window. You'll have to right-click on an existing schema in the Object panel and click "Refresh All" to see it show up, though.
| MySQL | 5,515,745 | 266 |
Is it possible to do mysqldump by single SQL query?
I mean to dump the whole database, like phpmyadmin does when you do export to SQL
| not mysqldump, but mysql cli...
mysql -e "select * from myTable" -u myuser -pxxxxxxxxx mydatabase
you can redirect it out to a file if you want :
mysql -e "select * from myTable" -u myuser -pxxxxxxxx mydatabase > mydumpfile.txt
Update:
Original post asked if he could dump from the database by query. What he asked and what he meant were different. He really wanted to just mysqldump all tables.
mysqldump --tables myTable --where="id < 1000"
| MySQL | 935,556 | 266 |
While starting mysql server 5.7.17 using mysqld_safe, following error occcours.
2017-02-10T17:05:44.870970Z mysqld_safe Logging to '/var/log/mysql/error.log'.
2017-02-10T17:05:44.872874Z mysqld_safe Logging to '/var/log/mysql/error.log'.
2017-02-10T17:05:44.874547Z mysqld_safe Directory '/var/run/mysqld' for UNIX socket file don't exists.
How to fix it ?
| It seems odd that this directory was not created at install - have you manually changed the path of the socket file in my.cnf?
Have you tried simply creating this directory yourself, and restarting the service?
mkdir -p /var/run/mysqld
chown mysql:mysql /var/run/mysqld
| MySQL | 42,153,059 | 265 |
Anyone know a quick easy way to migrate a SQLite3 database to MySQL?
| Everyone seems to start off with a few greps and perl expressions, and you sorta kinda get something that works for your particular dataset, but you have no idea if it's imported the data correctly or not. I'm seriously surprised nobody's built a solid library that can convert between the two.
Here is a list of ALL the differences in SQL syntax that I know about between the two file formats:
The lines starting with:
BEGIN TRANSACTION
COMMIT
sqlite_sequence
CREATE UNIQUE INDEX
are not used in MySQL
SQLite uses CREATE TABLE/INSERT INTO "table_name" and MySQL uses CREATE TABLE/INSERT INTO table_name
MySQL doesn't use quotes inside the schema definition
MySQL uses single quotes for strings inside the INSERT INTO clauses
SQLite and MySQL have different ways of escaping strings inside INSERT INTO clauses
SQLite uses 't' and 'f' for booleans, MySQL uses 1 and 0 (a simple regex for this can fail when you have a string like: 'I do, you don't' inside your INSERT INTO)
SQLite uses AUTOINCREMENT, MySQL uses AUTO_INCREMENT
Here is a very basic hacked-up perl script which works for my dataset and checks for many more of these conditions than other perl scripts I found on the web. No guarantees that it will work for your data, but feel free to modify and post back here.
#! /usr/bin/perl
while ($line = <>){
if (($line !~ /BEGIN TRANSACTION/) && ($line !~ /COMMIT/) && ($line !~ /sqlite_sequence/) && ($line !~ /CREATE UNIQUE INDEX/)){
if ($line =~ /CREATE TABLE \"([a-z_]*)\"(.*)/i){
$name = $1;
$sub = $2;
$sub =~ s/\"//g;
$line = "DROP TABLE IF EXISTS $name;\nCREATE TABLE IF NOT EXISTS $name$sub\n";
}
elsif ($line =~ /INSERT INTO \"([a-z_]*)\"(.*)/i){
$line = "INSERT INTO $1$2\n";
$line =~ s/\"/\\\"/g;
$line =~ s/\"/\'/g;
}else{
$line =~ s/\'\'/\\\'/g;
}
$line =~ s/([^\\'])\'t\'(.)/$1THIS_IS_TRUE$2/g;
$line =~ s/THIS_IS_TRUE/1/g;
$line =~ s/([^\\'])\'f\'(.)/$1THIS_IS_FALSE$2/g;
$line =~ s/THIS_IS_FALSE/0/g;
$line =~ s/AUTOINCREMENT/AUTO_INCREMENT/g;
print $line;
}
}
| MySQL | 18,671 | 264 |
I'm designing my database schema using MySQL Workbench, which is pretty cool because you can do diagrams and it converts them :P
Anyways, I've decided to use InnoDB because of it's Foreign Key support. One thing I noticed though is that it allows you to set On Update and on Delete options for foreign keys. Can someone explain where "Restrict", "Cascade" and set null could be used in a simple example?
For example, say I have a user table which includes a userID. And say I have a message table message which is a many-to-many which has 2 foreign keys (which reference the same primary key, userID in the user table). Is setting the On Update and On Delete options any useful in this case? If so, which one do I choose? If this isn't a good example, could you please come up with a good example to illustrate how these could be useful?
Thanks
| Do not hesitate to put constraints on the database. You'll be sure to have a consistent database, and that's one of the good reasons to use a database. Especially if you have several applications requesting it (or just one application but with a direct mode and a batch mode using different sources).
With MySQL you do not have advanced constraints like you would have in postgreSQL but at least the foreign key constraints are quite advanced.
We'll take an example, a company table with a user table containing people from theses company
CREATE TABLE COMPANY (
company_id INT NOT NULL,
company_name VARCHAR(50),
PRIMARY KEY (company_id)
) ENGINE=INNODB;
CREATE TABLE USER (
user_id INT,
user_name VARCHAR(50),
company_id INT,
INDEX company_id_idx (company_id),
FOREIGN KEY (company_id) REFERENCES COMPANY (company_id) ON...
) ENGINE=INNODB;
Let's look at the ON UPDATE clause:
ON UPDATE RESTRICT : the default : if you try to update a company_id in table COMPANY, the engine will reject the operation if at least one USER links to this company.
ON UPDATE NO ACTION : same as RESTRICT.
ON UPDATE CASCADE : the best one usually : if you update a company_id in a row of table COMPANY the engine will update it accordingly on all USER rows referencing this COMPANY (but no triggers activated on USER table, warning). The engine will track the changes for you, it's good.
ON UPDATE SET NULL : if you update a company_id in a row of table COMPANY, the engine will set the related USERs' company_id to NULL (the company_id field in USER must allow NULL). I cannot see any interesting thing to do with that on an update, but I may be wrong.
And now on the ON DELETE side:
ON DELETE RESTRICT : the default : if you try to delete a company_id in table COMPANY, the engine will reject the operation if at least one USER links to this company; this can save your life.
ON DELETE NO ACTION : same as RESTRICT
ON DELETE CASCADE : dangerous : if you delete a company row in table COMPANY, the engine will delete the related USERs as well. This is dangerous but can be used to make automatic cleanups on secondary tables (so it can be something you want, but quite certainly not for a COMPANY<->USER example)
ON DELETE SET NULL : handy : if you delete a COMPANY row, the related USERs will automatically have the relationship set to NULL. If NULL is your value for users with no company, this can be a good behavior; for example, maybe you need to keep the users in your application, as authors of some content, but removing the company is not a problem for you.
Usually my default is ON DELETE RESTRICT ON UPDATE CASCADE, with some ON DELETE CASCADE for tracking tables (logs--not all logs--things like that) and ON DELETE SET NULL when the master table is a 'simple attribute' for the table containing the foreign key, like a JOB table for the USER table.
Edit
It's been a long time since I wrote that. Now I think I should add one important warning. MySQL has one big documented limitation with cascades: cascades do not fire triggers. So if you were confident enough in that engine to use triggers, you should avoid cascade constraints.
http://dev.mysql.com/doc/refman/5.6/en/triggers.html
MySQL triggers activate only for changes made to tables by SQL statements. They do not activate for changes in views, nor by changes to tables made by APIs that do not transmit SQL statements to the MySQL Server
http://dev.mysql.com/doc/refman/5.6/en/stored-program-restrictions.html#stored-routines-trigger-restrictions
==> See below the last edit, things are moving on this domain
Triggers are not activated by foreign key actions.
And I do not think this will ever get fixed. Foreign key constraints are managed by the InnoDB storage engine and triggers are managed by the MySQL SQL layer; the two are separate. InnoDB is the only storage engine with constraint management; maybe they'll add triggers directly in the storage engine one day, maybe not.
But I have my own opinion on which to choose between the poor trigger implementation and the very useful foreign key constraint support. And once you get used to database consistency, you'll love PostgreSQL.
12/2017-Updating this Edit about MySQL:
as stated by @IstiaqueAhmed in the comments, the situation has changed on this subject. So follow the link and check the real up-to-date situation (which may change again in the future).
| MySQL | 6,720,050 | 263 |
I would like to write a script which copies my current database sitedb1 to sitedb2 on the same mysql database instance. I know I can dump the sitedb1 to a sql script:
mysqldump -u root -p sitedb1 >~/db_name.sql
and then import it to sitedb2.
Is there an easier way, without dumping the first database to a sql file?
| As the manual says in Copying Databases you can pipe the dump directly into the mysql client:
mysqldump db_name | mysql new_db_name
If you're using MyISAM you could copy the files, but I wouldn't recommend it. It's a bit dodgy.
Integrated from various good other answers
Both mysqldump and mysql commands accept options for setting connection details (and much more), like:
mysqldump -u <user name> --password=<pwd> <original db> | mysql -u <user name> -p <new db>
Also, if the new database is not existing yet, you have to create it beforehand (e.g. with echo "create database new_db_name" | mysql -u <dbuser> -p).
| MySQL | 675,289 | 263 |
I've recently taken over an old project that was created 10 years ago. It uses MySQL 5.1.
Among other things, I need to change the default character set from latin1 to utf8.
As an example, I have tables such as this:
CREATE TABLE `users` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`first_name` varchar(45) CHARACTER SET latin1 COLLATE latin1_general_ci DEFAULT NULL,
`last_name` varchar(45) CHARACTER SET latin1 COLLATE latin1_general_ci DEFAULT NULL,
`username` varchar(127) CHARACTER SET latin1 COLLATE latin1_general_ci NOT NULL,
`email` varchar(127) CHARACTER SET latin1 COLLATE latin1_general_ci NOT NULL,
`pass` varchar(20) CHARACTER SET latin1 COLLATE latin1_general_ci NOT NULL,
`active` char(1) CHARACTER SET latin1 COLLATE latin1_general_ci NOT NULL DEFAULT 'Y',
`created` datetime NOT NULL,
`last_login` datetime DEFAULT NULL,
`author` varchar(1) CHARACTER SET latin1 COLLATE latin1_general_ci DEFAULT 'N',
`locked_at` datetime DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`ripple_token` varchar(36) CHARACTER SET latin1 COLLATE latin1_general_ci DEFAULT NULL,
`ripple_token_expires` datetime DEFAULT '2014-10-31 08:03:55',
`authentication_token` varchar(255) CHARACTER SET latin1 COLLATE latin1_general_ci DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_users_on_reset_password_token` (`reset_password_token`),
UNIQUE KEY `index_users_on_confirmation_token` (`confirmation_token`),
UNIQUE KEY `index_users_on_unlock_token` (`unlock_token`),
KEY `users_active` (`active`),
KEY `users_username` (`username`),
KEY `index_users_on_email` (`email`)
) ENGINE=InnoDB AUTO_INCREMENT=1677 DEFAULT CHARSET=utf8 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC
I set up my own Mac to work on this. Without thinking too much about it, I ran "brew install mysql" which installed MySQL 5.7. So I have some version conflicts.
I downloaded a copy of this database and imported it.
If I try to run a query like this:
ALTER TABLE users MODIFY first_name varchar(45) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL
I get this error:
ERROR 1292 (22007): Incorrect datetime value: '0000-00-00 00:00:00' for column 'created' at row 1
I thought I could fix this with:
ALTER TABLE users MODIFY created datetime NULL DEFAULT '1970-01-01 00:00:00';
Query OK, 0 rows affected (0.06 sec)
Records: 0 Duplicates: 0 Warnings: 0
but I get:
ALTER TABLE users MODIFY first_name varchar(45) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ;
ERROR 1292 (22007): Incorrect datetime value: '0000-00-00 00:00:00' for column 'created' at row 1
Do I have to update every value?
| I wasn't able to do this:
UPDATE users SET created = NULL WHERE created = '0000-00-00 00:00:00'
(on MySQL 5.7.13).
I kept getting the Incorrect datetime value: '0000-00-00 00:00:00' error.
Strangely, this worked: SELECT * FROM users WHERE created = '0000-00-00 00:00:00'. I have no idea why the former fails and the latter works... maybe a MySQL bug?
In any case, this UPDATE query worked:
UPDATE users SET created = NULL WHERE CAST(created AS CHAR(20)) = '0000-00-00 00:00:00'
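Another workaround, assuming you're allowed to change the session's SQL mode: the NO_ZERO_DATE / NO_ZERO_IN_DATE modes that MySQL 5.7 enables by default are what reject the zero value, so relaxing them for the session lets the plain comparison through:
SET SESSION sql_mode = '';
UPDATE users SET created = NULL WHERE created = '0000-00-00 00:00:00';
Note that clearing sql_mode also disables other strict checks for the rest of the session, so re-set it (or reconnect) afterwards.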
| MySQL | 35,565,128 | 262 |
I need to ALTER my existing database to add a column. Consequently I also want to update the UNIQUE field to encompass that new column. I'm trying to remove the current index but keep getting the error MySQL Cannot drop index needed in a foreign key constraint
CREATE TABLE mytable_a (
ID TINYINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
Name VARCHAR(255) NOT NULL,
UNIQUE(Name)
) ENGINE=InnoDB;
CREATE TABLE mytable_b (
ID TINYINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
Name VARCHAR(255) NOT NULL,
UNIQUE(Name)
) ENGINE=InnoDB;
CREATE TABLE mytable_c (
ID TINYINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
Name VARCHAR(255) NOT NULL,
UNIQUE(Name)
) ENGINE=InnoDB;
CREATE TABLE `mytable` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`AID` tinyint(5) NOT NULL,
`BID` tinyint(5) NOT NULL,
`CID` tinyint(5) NOT NULL,
PRIMARY KEY (`ID`),
UNIQUE KEY `AID` (`AID`,`BID`,`CID`),
KEY `BID` (`BID`),
KEY `CID` (`CID`),
CONSTRAINT `mytable_ibfk_1` FOREIGN KEY (`AID`) REFERENCES `mytable_a` (`ID`) ON DELETE CASCADE,
CONSTRAINT `mytable_ibfk_2` FOREIGN KEY (`BID`) REFERENCES `mytable_b` (`ID`) ON DELETE CASCADE,
CONSTRAINT `mytable_ibfk_3` FOREIGN KEY (`CID`) REFERENCES `mytable_c` (`ID`) ON DELETE CASCADE
) ENGINE=InnoDB;
mysql> ALTER TABLE mytable DROP INDEX AID;
ERROR 1553 (HY000): Cannot drop index 'AID': needed in a foreign key constraint
| You have to drop the foreign key. Foreign keys in MySQL automatically create an index on the table (There was a SO Question on the topic).
ALTER TABLE mytable DROP FOREIGN KEY mytable_ibfk_1 ;
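From there, the whole change from the question (drop the old unique index, add the new column, recreate the unique key over it, and restore the foreign key) could look like this sketch; DID is a hypothetical name for the new column:
ALTER TABLE mytable DROP FOREIGN KEY mytable_ibfk_1;
ALTER TABLE mytable DROP INDEX AID;
ALTER TABLE mytable ADD COLUMN DID tinyint(5) NOT NULL;
ALTER TABLE mytable ADD UNIQUE KEY AID (AID, BID, CID, DID);
ALTER TABLE mytable ADD CONSTRAINT mytable_ibfk_1
  FOREIGN KEY (AID) REFERENCES mytable_a (ID) ON DELETE CASCADE;
The recreated unique key still starts with AID, so it can serve as the index the foreign key needs.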
| MySQL | 8,482,346 | 262 |
ID FirstName LastName
1 John Doe
2 Bugs Bunny
3 John Johnson
I want to select DISTINCT results from the FirstName column, but I need the corresponding ID and LastName.
The result set needs to show only one John, but with an ID of 1 and a LastName of Doe.
| try this query
SELECT ID, FirstName, LastName FROM table GROUP BY(FirstName)
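Be aware that on MySQL 5.7+ with the default ONLY_FULL_GROUP_BY mode this query is rejected, and even where it runs, the non-grouped columns come from an arbitrary row of each group. A deterministic alternative, keeping the row with the smallest ID per name, is:
SELECT t.ID, t.FirstName, t.LastName
FROM `table` t
JOIN (SELECT FirstName, MIN(ID) AS ID
      FROM `table`
      GROUP BY FirstName) m ON m.ID = t.ID;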
| MySQL | 5,967,130 | 261 |
I have a table with a unique key for two columns:
CREATE TABLE `xpo`.`user_permanent_gift` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
`fb_user_id` INT UNSIGNED NOT NULL ,
`gift_id` INT UNSIGNED NOT NULL ,
`purchase_timestamp` TIMESTAMP NULL DEFAULT now() ,
PRIMARY KEY (`id`) ,
UNIQUE INDEX `user_gift_UNIQUE` (`fb_user_id` ASC, `gift_id` ASC) );
I want to insert a row into that table, but if the key exists, to do nothing! I don't want an error to be generated because the keys exist.
I know that there is the following syntax:
INSERT ... ON DUPLICATE KEY UPDATE ...
but is there something like:
INSERT ... ON DUPLICATE KEY DO NOTHING
?
| Yes, use INSERT ... ON DUPLICATE KEY UPDATE id=id (it won't trigger row update even though id is assigned to itself).
If you don't care about errors (conversion errors, foreign key errors) and autoincrement field exhaustion (it's incremented even if the row is not inserted due to duplicate key), then use INSERT IGNORE like this:
INSERT IGNORE INTO <table_name> (...) VALUES (...)
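Applied to the question's table, the first form would look like this (the values are just placeholders):
INSERT INTO user_permanent_gift (fb_user_id, gift_id)
VALUES (123, 456)
ON DUPLICATE KEY UPDATE id = id;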
| MySQL | 4,596,390 | 261 |
Are JOIN queries faster than several queries? (You run your main query, and then you run many other SELECTs based on the results from your main query)
I'm asking because JOINing them would complicate A LOT the design of my application
If they are faster, can anyone approximate very roughly by how much? If it's 1.5x I don't care, but if it's 10x I guess I do.
| For inner joins, a single query makes sense, since you only get matching rows.
For left joins, multiple queries are much better... look at the following benchmark I did:
Single query with 5 Joins
query: 8.074508 seconds
result size: 2268000
5 queries in a row
combined query time: 0.00262 seconds
result size: 165 (6 + 50 + 7 + 12 + 90)
Note that we get the same results in both cases (6 x 50 x 7 x 12 x 90 = 2268000)
left joins use exponentially more memory with redundant data.
The memory limit might not be as bad if you only join two tables, but generally with three or more tables it becomes worth using separate queries.
As a side note, my MySQL server is right beside my application server... so connection time is negligible. If your connection time is in the seconds, then maybe there is a benefit.
Frank
| MySQL | 1,067,016 | 261 |
This is a much discussed issue for OSX 10.6 users, but I haven't been able to find a solution that works. Here's my setup:
Python 2.6.1 64bit
Django 1.2.1
MySQL 5.1.47 osx10.6 64bit
I create a virtualenvwrapper with --no-site-packages, then installed Django. When I activate the virtualenv and run python manage.py syncdb, I get this error:
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_manager(settings)
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/core/management/__init__.py", line 257, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/core/management/__init__.py", line 67, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/core/management/commands/syncdb.py", line 7, in <module>
from django.core.management.sql import custom_sql_for_model, emit_post_sync_signal
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/core/management/sql.py", line 5, in <module>
from django.contrib.contenttypes import generic
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/contrib/contenttypes/generic.py", line 6, in <module>
from django.db import connection
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/db/__init__.py", line 75, in <module>
connection = connections[DEFAULT_DB_ALIAS]
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/db/utils.py", line 91, in __getitem__
backend = load_backend(db['ENGINE'])
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/db/utils.py", line 32, in load_backend
return import_module('.base', backend_name)
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/joerobinson/.virtualenvs/dj_tut/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 14, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
I've also installed the MySQL for Python adapter, but to no avail (maybe I installed it improperly?).
Anyone dealt with this before?
| I had the same error and pip install MySQL-python solved it for me.
Alternate installs:
If you don't have pip, easy_install MySQL-python should work.
If your python is managed by a packaging system, you might have to use that system (e.g. sudo apt-get install ...)
Below, Soli notes that if you receive the following error:
EnvironmentError: mysql_config not found
... then you have a further system dependency issue. Solving this will vary from system to system, but for Debian-derived systems:
sudo apt-get install python-mysqldb
| MySQL | 2,952,187 | 260 |
I'm getting a 1022 error about duplicate keys on a CREATE TABLE command. Having looked at the query, I can't understand where the duplication is taking place. Can anyone else see it?
SQL query:
-- -----------------------------------------------------
-- Table `apptwo`.`usercircle`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `apptwo`.`usercircle` (
`idUserCircle` MEDIUMINT NOT NULL ,
`userId` MEDIUMINT NULL ,
`circleId` MEDIUMINT NULL ,
`authUser` BINARY NULL ,
`authOwner` BINARY NULL ,
`startDate` DATETIME NULL ,
`endDate` DATETIME NULL ,
PRIMARY KEY ( `idUserCircle` ) ,
INDEX `iduser_idx` ( `userId` ASC ) ,
INDEX `idcategory_idx` ( `circleId` ASC ) ,
CONSTRAINT `iduser` FOREIGN KEY ( `userId` ) REFERENCES `apptwo`.`user` (
`idUser`
) ON DELETE NO ACTION ON UPDATE NO ACTION ,
CONSTRAINT `idcategory` FOREIGN KEY ( `circleId` ) REFERENCES `apptwo`.`circle` (
`idCircle`
) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE = INNODB;
MySQL said: Documentation
#1022 - Can't write; duplicate key in table 'usercircle'
| Most likely you already have a constraint with the name iduser or idcategory in your database. Just rename the constraints if so.
Constraints must be unique for the entire database, not just for the specific table you are creating/altering.
To find out where the constraints are currently in use you can use the following query:
SELECT `TABLE_SCHEMA`, `TABLE_NAME`
FROM `information_schema`.`KEY_COLUMN_USAGE`
WHERE `CONSTRAINT_NAME` IN ('iduser', 'idcategory');
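Applied to the statement in the question, the fix is just to pick constraint names not used anywhere else in the schema — fk_usercircle_user and fk_usercircle_circle below are hypothetical names:
CREATE TABLE IF NOT EXISTS `apptwo`.`usercircle` (
  `idUserCircle` MEDIUMINT NOT NULL,
  `userId` MEDIUMINT NULL,
  `circleId` MEDIUMINT NULL,
  `authUser` BINARY NULL,
  `authOwner` BINARY NULL,
  `startDate` DATETIME NULL,
  `endDate` DATETIME NULL,
  PRIMARY KEY (`idUserCircle`),
  INDEX `iduser_idx` (`userId` ASC),
  INDEX `idcategory_idx` (`circleId` ASC),
  -- same definitions as before; only the constraint names changed
  CONSTRAINT `fk_usercircle_user` FOREIGN KEY (`userId`)
    REFERENCES `apptwo`.`user` (`idUser`)
    ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT `fk_usercircle_circle` FOREIGN KEY (`circleId`)
    REFERENCES `apptwo`.`circle` (`idCircle`)
    ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE = INNODB;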
| MySQL | 18,056,786 | 259 |
I have a large number of rows that I would like to copy, but I need to change one field.
I can select the rows that I want to copy:
select * from Table where Event_ID = "120"
Now I want to copy all those rows and create new rows while setting the Event_ID to 155. How can I accomplish this?
| INSERT INTO Table
( Event_ID
, col2
...
)
SELECT "155"
, col2
...
FROM Table WHERE Event_ID = "120"
Here, the col2, ... represent the remaining columns (the ones other than Event_ID) in your table.
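A concrete sketch, assuming hypothetical columns Title and Venue alongside Event_ID (if Event_ID is numeric, the quotes in the template above are unnecessary):
INSERT INTO Table (Event_ID, Title, Venue)
SELECT 155, Title, Venue
FROM Table
WHERE Event_ID = 120;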
| MySQL | 2,783,150 | 259 |
I have a SQL query where I want to insert multiple rows in a single query, so I used something like:
$sql = "INSERT INTO beautiful (name, age)
VALUES
('Helen', 24),
('Katrina', 21),
('Samia', 22),
('Hui Ling', 25),
('Yumie', 29)";
mysql_query( $sql, $conn );
The problem is that when I execute this query, I want a UNIQUE key (which is not the PRIMARY KEY), e.g. 'name' above, to be checked: if such a 'name' already exists, the corresponding whole row should be updated; otherwise it should be inserted.
For instance, in the example below, if 'Katrina' is already present in the database, the whole row, irrespective of the number of fields, should be updated. Again if 'Samia' is not present, the row should be inserted.
I thought of using:
INSERT INTO beautiful (name, age)
VALUES
('Helen', 24),
('Katrina', 21),
('Samia', 22),
('Hui Ling', 25),
('Yumie', 29) ON DUPLICATE KEY UPDATE
Here is where I got stuck and confused about how to proceed. I have multiple rows to insert/update at a time. Please point me in the right direction. Thanks.
| Beginning with MySQL 8.0.19 you can use an alias for that row (see reference).
INSERT INTO beautiful (name, age)
VALUES
('Helen', 24),
('Katrina', 21),
('Samia', 22),
('Hui Ling', 25),
('Yumie', 29)
AS new
ON DUPLICATE KEY UPDATE
age = new.age
...
For earlier versions use the keyword VALUES (see reference, deprecated with MySQL 8.0.20).
INSERT INTO beautiful (name, age)
VALUES
('Helen', 24),
('Katrina', 21),
('Samia', 22),
('Hui Ling', 25),
('Yumie', 29)
ON DUPLICATE KEY UPDATE
age = VALUES(age),
...
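For the two-column table in the question (assuming name carries the UNIQUE index), the complete older-style statement would be:
INSERT INTO beautiful (name, age)
VALUES
    ('Helen', 24),
    ('Katrina', 21),
    ('Samia', 22),
    ('Hui Ling', 25),
    ('Yumie', 29)
ON DUPLICATE KEY UPDATE
    age = VALUES(age);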
| MySQL | 2,714,587 | 259 |
I'm currently creating a Login form and have this code:
string connectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
try
{
using (OdbcConnection connect = new OdbcConnection(connectionString))
{
connect.Open();
OdbcCommand cmd = new OdbcCommand("SELECT username, password FROM receptionist", connect);
OdbcDataReader reader = cmd.ExecuteReader();
if (username_login.Text == username && password_login.Text == password)
{
this.Hide();
MessageBox.Show("Invalid User", "Login Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
this.Close();
}
else
MessageBox.Show("Invalid User", "Login Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
connect.Close();
}
}
catch (OdbcException ex)
{
MessageBox.Show(ex.Message, "error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
But whenever I try to type in the username and password there is an error called "Configuration system failed to initialize". What kind of problem is this, and how could I solve this?
| Make sure that your config file (web.config if web, or app.config if windows) in your project starts as:
<?xml version="1.0"?>
<configuration>
<configSections>
<sectionGroup name="applicationSettings"
type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
<section name="YourProjectName.Properties.Settings"
type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
requirePermission="false" />
</sectionGroup>
</configSections>
</configuration>
Note that inside the configuration element, the first child must be the configSections element.
In the name property of the section element, make sure you replace YourProjectName with your actual project's name.
It happened to me that I created a webservice in a class library project, then I copied (overwriting) the config file (in order to bring the endpoints configuration) to my windows app and I started to have the same problem. I had inadvertently removed configSections.
| MySQL | 6,436,157 | 258 |
I have an SQL table called "posts" that looks like this:
id | category
-----------------------
1 | 3
2 | 1
3 | 4
4 | 2
5 | 1
6 | 1
7 | 2
Each category number corresponds to a category. How would I go about counting the number of times each category appears on a post all in one SQL query?
As an example, such a query might return a symbolic array such as this: (1:3, 2:2, 3:1, 4:1)
My current method is to use queries for each possible category, such as: SELECT COUNT(*) AS num FROM posts WHERE category=#, and then combine the return values into a final array. However, I'm looking for a solution that uses only one query.
| SELECT
category,
COUNT(*) AS `num`
FROM
posts
GROUP BY
category
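On the sample data this returns (1, 3), (2, 2), (3, 1) and (4, 1). To list the most frequent categories first, add an ORDER BY:
SELECT
    category,
    COUNT(*) AS `num`
FROM
    posts
GROUP BY
    category
ORDER BY
    `num` DESC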
| MySQL | 7,053,902 | 257 |
Currently I am having the following MySQL table: Employees (empID, empName, department);
I want to change the table to the following: Employees (empID, department, empName);
How can this be done using ALTER statements?
Note: I want to change only column positions.
| If empName is a VARCHAR(50) column:
ALTER TABLE Employees MODIFY COLUMN empName VARCHAR(50) AFTER department;
EDIT
Per the comments, you can also do this:
ALTER TABLE Employees CHANGE COLUMN empName empName VARCHAR(50) AFTER department;
Note that the repetition of empName is deliberate. You have to tell MySQL that you want to keep the same column name.
You should be aware that both syntax versions are specific to MySQL. They won't work, for example, in PostgreSQL or many other DBMSs.
Another edit: As pointed out by @Luis Rossi in a comment, you need to completely specify the altered column definition just before the AFTER modifier. The above examples just have VARCHAR(50), but if you need other characteristics (such as NOT NULL or a default value) you need to include those as well. Consult the docs on ALTER TABLE for more info.
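For instance, if empName were (hypothetically) defined as NOT NULL with a default, the full definition must be repeated along with the new position:
ALTER TABLE Employees
    MODIFY COLUMN empName VARCHAR(50) NOT NULL DEFAULT '' AFTER department;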
| MySQL | 6,805,426 | 257 |
I installed MySQL via MacPorts. What is the command I need to stop the server (I need to test how my application behaves when MySQL is dead)?
| There are different cases depending on whether you installed MySQL with the official binary installer, using MacPorts, or using Homebrew:
Homebrew
brew services start mysql
brew services stop mysql
brew services restart mysql
MacPorts
sudo port load mysql57-server
sudo port unload mysql57-server
Note: this is persistent after a reboot.
Binary installer
sudo /Library/StartupItems/MySQLCOM/MySQLCOM stop
sudo /Library/StartupItems/MySQLCOM/MySQLCOM start
sudo /Library/StartupItems/MySQLCOM/MySQLCOM restart
| MySQL | 100,948 | 256 |
Is there a fast way of getting all column names from all tables in MySQL, without having to list all the tables?
| select column_name from information_schema.columns
where table_schema = 'your_db'
order by table_name,ordinal_position
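To also see each column's type, or to restrict the listing to a single table, filter the same view — for example:
select table_name, column_name, data_type
from information_schema.columns
where table_schema = 'your_db'
  and table_name = 'your_table'
order by ordinal_position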
| MySQL | 5,648,420 | 255 |
I am getting this error in WordPress's phpMyAdmin:
#145 - Table './DB_NAME/wp_posts' is marked as crashed and should be repaired
When I login to phpMyadmin, it says wp_posts is "in use"
My website is currently down because of this.
I googled this problem, but I don't see the "repair" button in phpMyAdmin. Please let me know how to fix this. I am not sure where to issue a PHP command. Please advise; my proficiency with PHP is very basic.
| Here is where the repair button is (screenshot): in phpMyAdmin, tick the checkbox next to the wp_posts table in the database's table list, then pick "Repair table" from the "With selected:" dropdown beneath the list.
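Alternatively, the same repair can be issued as plain SQL from phpMyAdmin's SQL tab; REPAIR TABLE works on MyISAM tables, which WordPress tables such as wp_posts typically are on older installs:
REPAIR TABLE wp_posts;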
| MySQL | 4,357,270 | 254 |
I have the following data in my table "devices":
affiliate_name affiliate_location model ip os_type os_version
cs1 inter Dell 10.125.103.25 Linux Fedora
cs2 inter Dell 10.125.103.26 Linux Fedora
cs3 inter Dell 10.125.103.27 NULL NULL
cs4 inter Dell 10.125.103.28 NULL NULL
I executed below query
SELECT CONCAT(`affiliate_name`,'-',`model`,'-',`ip`,'-',`os_type`,'-',`os_version`) AS device_name
FROM devices
It returns the result given below:
cs1-Dell-10.125.103.25-Linux-Fedora
cs2-Dell-10.125.103.26-Linux-Fedora
(NULL)
(NULL)
How can I make it ignore the NULLs so that the result is:
cs1-Dell-10.125.103.25-Linux-Fedora
cs2-Dell-10.125.103.26-Linux-Fedora
cs3-Dell-10.125.103.27-
cs4-Dell-10.125.103.28-
| Convert the NULL values to empty strings by wrapping each nullable column in COALESCE:
SELECT CONCAT(COALESCE(`affiliate_name`, ''), '-',
              COALESCE(`model`, ''), '-',
              COALESCE(`ip`, ''), '-',
              COALESCE(`os_type`, ''), '-',
              COALESCE(`os_version`, '')) AS device_name
FROM devices
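As an aside, MySQL's CONCAT_WS is a shorter alternative; note that it skips NULL arguments entirely, so a row with trailing NULLs comes out as cs3-Dell-10.125.103.27 with no trailing separator, unlike the desired output above:
SELECT CONCAT_WS('-', `affiliate_name`, `model`, `ip`, `os_type`, `os_version`) AS device_name
FROM devices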
| MySQL | 15,741,314 | 254 |