Columns: question (string, length 11 to 28.2k), answer (string, length 26 to 27.7k), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
I hope someone can help me suggest a suitable data model to be implemented using the NoSQL database Apache Cassandra. Above all, I need it to work under high loads and large amounts of data. Simplified, I have 3 types of objects: Product, Tag, ProductTag. Product: key - string key, name - string, .... - some other fields. Tag: key - string key, name - unique tag words. ProductTag: product_key - foreign key referring to product, tag_key - foreign key referring to tag, rating - the rating of the tag for this product. Each product may have 0 or many tags. A tag may be assigned to 1 or many products. This means the relation between products and tags is many-to-many in relational-database terms. The value of "rating" is updated "very" often. I need to run the following queries: select objects by keys; select tags for a product ordered by rating; select products by tag ordered by rating; update rating by product_key and tag_key. The most important thing is to make these queries really fast on large amounts of data, considering that the rating is constantly updated.
Something like this:

Products : {                              // Column Family
    productA : {                          // Row key
        name  : 'The name of the product' // column
        price : 33.55                     // column
        tags  : 'fun, toy'                // column
    }
}

ProductTag : {                            // Column Family
    fun : {                               // Row key
        timeuuid_1 : productA             // column
        timeuuid_2 : productB             // column
    },
    toy : {                               // Row key
        timeuuid_3 : productA             // column
    }
}

UPDATE: Check this Model to store biggest score
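If you are on a CQL3-era Cassandra rather than raw column families, a minimal sketch of the same idea as denormalized query tables (table and column names here are assumptions, not from the question; because rating is part of the clustering key, changing a rating means deleting the old row and inserting a new one, e.g. in a batch):

-- Tags for a product, ordered by rating (illustrative CQL; names are assumed)
CREATE TABLE tags_by_product (
    product_key text,
    rating      int,
    tag_key     text,
    PRIMARY KEY (product_key, rating, tag_key)
) WITH CLUSTERING ORDER BY (rating DESC, tag_key ASC);

-- Products for a tag, ordered by rating
CREATE TABLE products_by_tag (
    tag_key     text,
    rating      int,
    product_key text,
    PRIMARY KEY (tag_key, rating, product_key)
) WITH CLUSTERING ORDER BY (rating DESC, product_key ASC);

-- "Select tags for product ordered by rating" then becomes:
-- SELECT tag_key, rating FROM tags_by_product WHERE product_key = ?;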
Cassandra
2,479,589
10
I'm evaluating a storage platform for an upcoming project and keep coming back to Cassandra. For this project losing any amount of data is unacceptable. So far we've used a relational database (Microsoft SQL Server), but the data is so varied and large that it has become an issue to store and query. Is Cassandra robust enough to use as a primary data store? Or should it only be used to mirror existing data to speed up access?
Anecdotally: yes, Twitter, Digg, Ooyala, SimpleGeo, Mahalo, and others are using or moving to Cassandra for a primary data store (http://n2.nabble.com/Cassandra-users-survey-td4040068.html). Technically: yes; besides supporting replication (including to multiple datacenters), each Cassandra node has an fsync'd commit log to make sure writes are durable; from there writes are turned into SSTables which are immutable until compaction (which combines multiple SSTables to GC old versions). Snapshotting is supported at any time, including automatic snapshot-before-compaction.
Cassandra
1,849,204
10
From the MySQL console, what command displays the schema of any given table?
For formatted output: describe [db_name.]table_name; For an SQL statement that can be used to create a table: show create table [db_name.]table_name;
MySQL
1,498,777
467
I would like to know the following: how to get data from multiple tables in my database? what types of methods are there to do this? what are joins and unions and how are they different from one another? When should I use each one compared to the others? I am planning to use this in my (for example - PHP) application, but don't want to run multiple queries against the database, what options do I have to get data from multiple tables in a single query? Note: I am writing this as I would like to be able to link to a well written guide on the numerous questions that I constantly come across in the PHP queue, so I can link to this for further detail when I post an answer. The answers cover off the following: Part 1 - Joins and Unions Part 2 - Subqueries Part 3 - Tricks and Efficient Code Part 4 - Subqueries in the From Clause Part 5 - Mixed Bag of John's Tricks
Part 1 - Joins and Unions This answer covers: Part 1 Joining two or more tables using an inner join (See the wikipedia entry for additional info) How to use a union query Left and Right Outer Joins (this stackOverflow answer is excellent to describe types of joins) Intersect queries (and how to reproduce them if your database doesn't support them) - this is a function of SQL-Server (see info) and part of the reason I wrote this whole thing in the first place. Part 2 Subqueries - what they are, where they can be used and what to watch out for Cartesian joins AKA - Oh, the misery! There are a number of ways to retrieve data from multiple tables in a database. In this answer, I will be using ANSI-92 join syntax. This may be different to a number of other tutorials out there which use the older ANSI-89 syntax (and if you are used to 89, it may seem much less intuitive - but all I can say is to try it) as it is much easier to understand when the queries start getting more complex. Why use it? Is there a performance gain? The short answer is no, but it is easier to read once you get used to it. It is easier to read queries written by other folks using this syntax. I am also going to use the concept of a small caryard which has a database to keep track of what cars it has available. The owner has hired you as his IT Computer guy and expects you to be able to drop him the data that he asks for at the drop of a hat. I have made a number of lookup tables that will be used by the final table. This will give us a reasonable model to work from. To start off, I will be running my queries against an example database that has the following structure. I will try to think of common mistakes that are made when starting out and explain what goes wrong with them - as well as of course showing how to correct them. The first table is simply a color listing so that we know what colors we have in the car yard. mysql> create table colors(id int(3) not null auto_increment primary key, -> color varchar(15), paint varchar(10)); Query OK, 0 rows affected (0.01 sec) mysql> show columns from colors; +-------+-------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------+-------------+------+-----+---------+----------------+ | id | int(3) | NO | PRI | NULL | auto_increment | | color | varchar(15) | YES | | NULL | | | paint | varchar(10) | YES | | NULL | | +-------+-------------+------+-----+---------+----------------+ 3 rows in set (0.01 sec) mysql> insert into colors (color, paint) values ('Red', 'Metallic'), -> ('Green', 'Gloss'), ('Blue', 'Metallic'), -> ('White', 'Gloss'), ('Black', 'Gloss'); Query OK, 5 rows affected (0.00 sec) Records: 5 Duplicates: 0 Warnings: 0 mysql> select * from colors; +----+-------+----------+ | id | color | paint | +----+-------+----------+ | 1 | Red | Metallic | | 2 | Green | Gloss | | 3 | Blue | Metallic | | 4 | White | Gloss | | 5 | Black | Gloss | +----+-------+----------+ 5 rows in set (0.00 sec) The brands table identifies the different brands of the cars our caryard could possibly sell.
mysql> create table brands (id int(3) not null auto_increment primary key, -> brand varchar(15)); Query OK, 0 rows affected (0.01 sec) mysql> show columns from brands; +-------+-------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------+-------------+------+-----+---------+----------------+ | id | int(3) | NO | PRI | NULL | auto_increment | | brand | varchar(15) | YES | | NULL | | +-------+-------------+------+-----+---------+----------------+ 2 rows in set (0.01 sec) mysql> insert into brands (brand) values ('Ford'), ('Toyota'), -> ('Nissan'), ('Smart'), ('BMW'); Query OK, 5 rows affected (0.00 sec) Records: 5 Duplicates: 0 Warnings: 0 mysql> select * from brands; +----+--------+ | id | brand | +----+--------+ | 1 | Ford | | 2 | Toyota | | 3 | Nissan | | 4 | Smart | | 5 | BMW | +----+--------+ 5 rows in set (0.00 sec) The model table will cover off different types of cars, it is going to be simpler for this to use different car types rather than actual car models. mysql> create table models (id int(3) not null auto_increment primary key, -> model varchar(15)); Query OK, 0 rows affected (0.01 sec) mysql> show columns from models; +-------+-------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------+-------------+------+-----+---------+----------------+ | id | int(3) | NO | PRI | NULL | auto_increment | | model | varchar(15) | YES | | NULL | | +-------+-------------+------+-----+---------+----------------+ 2 rows in set (0.00 sec) mysql> insert into models (model) values ('Sports'), ('Sedan'), ('4WD'), ('Luxury'); Query OK, 4 rows affected (0.00 sec) Records: 4 Duplicates: 0 Warnings: 0 mysql> select * from models; +----+--------+ | id | model | +----+--------+ | 1 | Sports | | 2 | Sedan | | 3 | 4WD | | 4 | Luxury | +----+--------+ 4 rows in set (0.00 sec) And finally, to tie up all these other tables, the table that ties everything together. The ID field is actually the unique lot number used to identify cars. mysql> create table cars (id int(3) not null auto_increment primary key, -> color int(3), brand int(3), model int(3)); Query OK, 0 rows affected (0.01 sec) mysql> show columns from cars; +-------+--------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------+--------+------+-----+---------+----------------+ | id | int(3) | NO | PRI | NULL | auto_increment | | color | int(3) | YES | | NULL | | | brand | int(3) | YES | | NULL | | | model | int(3) | YES | | NULL | | +-------+--------+------+-----+---------+----------------+ 4 rows in set (0.00 sec) mysql> insert into cars (color, brand, model) values (1,2,1), (3,1,2), (5,3,1), -> (4,4,2), (2,2,3), (3,5,4), (4,1,3), (2,2,1), (5,2,3), (4,5,1); Query OK, 10 rows affected (0.00 sec) Records: 10 Duplicates: 0 Warnings: 0 mysql> select * from cars; +----+-------+-------+-------+ | id | color | brand | model | +----+-------+-------+-------+ | 1 | 1 | 2 | 1 | | 2 | 3 | 1 | 2 | | 3 | 5 | 3 | 1 | | 4 | 4 | 4 | 2 | | 5 | 2 | 2 | 3 | | 6 | 3 | 5 | 4 | | 7 | 4 | 1 | 3 | | 8 | 2 | 2 | 1 | | 9 | 5 | 2 | 3 | | 10 | 4 | 5 | 1 | +----+-------+-------+-------+ 10 rows in set (0.00 sec) This will give us enough data (I hope) to cover off the examples below of different types of joins and also give enough data to make them worthwhile. So getting into the grit of it, the boss wants to know The IDs of all the sports cars he has. This is a simple two table join. 
We have a table that identifies the model and the table with the available stock in it. As you can see, the data in the model column of the cars table relates to the models column of the cars table we have. Now, we know that the models table has an ID of 1 for Sports so lets write the join. select ID, model from cars join models on model=ID So this query looks good right? We have identified the two tables and contain the information we need and use a join that correctly identifies what columns to join on. ERROR 1052 (23000): Column 'ID' in field list is ambiguous Oh noes! An error in our first query! Yes, and it is a plum. You see, the query has indeed got the right columns, but some of them exist in both tables, so the database gets confused about what actual column we mean and where. There are two solutions to solve this. The first is nice and simple, we can use tableName.columnName to tell the database exactly what we mean, like this: select cars.ID, models.model from cars join models on cars.model=models.ID +----+--------+ | ID | model | +----+--------+ | 1 | Sports | | 3 | Sports | | 8 | Sports | | 10 | Sports | | 2 | Sedan | | 4 | Sedan | | 5 | 4WD | | 7 | 4WD | | 9 | 4WD | | 6 | Luxury | +----+--------+ 10 rows in set (0.00 sec) The other is probably more often used and is called table aliasing. The tables in this example have nice and short simple names, but typing out something like KPI_DAILY_SALES_BY_DEPARTMENT would probably get old quickly, so a simple way is to nickname the table like this: select a.ID, b.model from cars a join models b on a.model=b.ID Now, back to the request. As you can see we have the information we need, but we also have information that wasn't asked for, so we need to include a where clause in the statement to only get the Sports cars as was asked. As I prefer the table alias method rather than using the table names over and over, I will stick to it from this point onwards. Clearly, we need to add a where clause to our query. We can identify Sports cars either by ID=1 or model='Sports'. As the ID is indexed and the primary key (and it happens to be less typing), lets use that in our query. select a.ID, b.model from cars a join models b on a.model=b.ID where b.ID=1 +----+--------+ | ID | model | +----+--------+ | 1 | Sports | | 3 | Sports | | 8 | Sports | | 10 | Sports | +----+--------+ 4 rows in set (0.00 sec) Bingo! The boss is happy. Of course, being a boss and never being happy with what he asked for, he looks at the information, then says I want the colors as well. Okay, so we have a good part of our query already written, but we need to use a third table which is colors. Now, our main information table cars stores the car color ID and this links back to the colors ID column. So, in a similar manner to the original, we can join a third table: select a.ID, b.model from cars a join models b on a.model=b.ID join colors c on a.color=c.ID where b.ID=1 +----+--------+ | ID | model | +----+--------+ | 1 | Sports | | 3 | Sports | | 8 | Sports | | 10 | Sports | +----+--------+ 4 rows in set (0.00 sec) Damn, although the table was correctly joined and the related columns were linked, we forgot to pull in the actual information from the new table that we just linked. 
select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID where b.ID=1 +----+--------+-------+ | ID | model | color | +----+--------+-------+ | 1 | Sports | Red | | 8 | Sports | Green | | 10 | Sports | White | | 3 | Sports | Black | +----+--------+-------+ 4 rows in set (0.00 sec) Right, that's the boss off our back for a moment. Now, to explain some of this in a little more detail. As you can see, the from clause in our statement links our main table (I often use a table that contains information rather than a lookup or dimension table. The query would work just as well with the tables all switched around, but make less sense when we come back to this query to read it in a few months time, so it is often best to try to write a query that will be nice and easy to understand - lay it out intuitively, use nice indenting so that everything is as clear as it can be. If you go on to teach others, try to instill these characteristics in their queries - especially if you will be troubleshooting them. It is entirely possible to keep linking more and more tables in this manner. select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID join brands d on a.brand=d.ID where b.ID=1 While I forgot to include a table where we might want to join more than one column in the join statement, here is an example. If the models table had brand-specific models and therefore also had a column called brand which linked back to the brands table on the ID field, it could be done as this: select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID join brands d on a.brand=d.ID and b.brand=d.ID where b.ID=1 You can see, the query above not only links the joined tables to the main cars table, but also specifies joins between the already joined tables. If this wasn't done, the result is called a cartesian join - which is dba speak for bad. A cartesian join is one where rows are returned because the information doesn't tell the database how to limit the results, so the query returns all the rows that fit the criteria. So, to give an example of a cartesian join, lets run the following query: select a.ID, b.model from cars a join models b +----+--------+ | ID | model | +----+--------+ | 1 | Sports | | 1 | Sedan | | 1 | 4WD | | 1 | Luxury | | 2 | Sports | | 2 | Sedan | | 2 | 4WD | | 2 | Luxury | | 3 | Sports | | 3 | Sedan | | 3 | 4WD | | 3 | Luxury | | 4 | Sports | | 4 | Sedan | | 4 | 4WD | | 4 | Luxury | | 5 | Sports | | 5 | Sedan | | 5 | 4WD | | 5 | Luxury | | 6 | Sports | | 6 | Sedan | | 6 | 4WD | | 6 | Luxury | | 7 | Sports | | 7 | Sedan | | 7 | 4WD | | 7 | Luxury | | 8 | Sports | | 8 | Sedan | | 8 | 4WD | | 8 | Luxury | | 9 | Sports | | 9 | Sedan | | 9 | 4WD | | 9 | Luxury | | 10 | Sports | | 10 | Sedan | | 10 | 4WD | | 10 | Luxury | +----+--------+ 40 rows in set (0.00 sec) Good god, that's ugly. However, as far as the database is concerned, it is exactly what was asked for. In the query, we asked for for the ID from cars and the model from models. However, because we didn't specify how to join the tables, the database has matched every row from the first table with every row from the second table. Okay, so the boss is back, and he wants more information again. I want the same list, but also include 4WDs in it. This however, gives us a great excuse to look at two different ways to accomplish this. 
We could add another condition to the where clause like this: select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID join brands d on a.brand=d.ID where b.ID=1 or b.ID=3 While the above will work perfectly well, lets look at it differently, this is a great excuse to show how a union query will work. We know that the following will return all the Sports cars: select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID join brands d on a.brand=d.ID where b.ID=1 And the following would return all the 4WDs: select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID join brands d on a.brand=d.ID where b.ID=3 So by adding a union all clause between them, the results of the second query will be appended to the results of the first query. select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID join brands d on a.brand=d.ID where b.ID=1 union all select a.ID, b.model, c.color from cars a join models b on a.model=b.ID join colors c on a.color=c.ID join brands d on a.brand=d.ID where b.ID=3 +----+--------+-------+ | ID | model | color | +----+--------+-------+ | 1 | Sports | Red | | 8 | Sports | Green | | 10 | Sports | White | | 3 | Sports | Black | | 5 | 4WD | Green | | 7 | 4WD | White | | 9 | 4WD | Black | +----+--------+-------+ 7 rows in set (0.00 sec) As you can see, the results of the first query are returned first, followed by the results of the second query. In this example, it would of course have been much easier to simply use the first query, but union queries can be great for specific cases. They are a great way to return specific results from tables from tables that aren't easily joined together - or for that matter completely unrelated tables. There are a few rules to follow however. The column types from the first query must match the column types from every other query below. The names of the columns from the first query will be used to identify the entire set of results. The number of columns in each query must be the same. Now, you might be wondering what the difference is between using union and union all. A union query will remove duplicates, while a union all will not. This does mean that there is a small performance hit when using union over union all but the results may be worth it - I won't speculate on that sort of thing in this though. On this note, it might be worth noting some additional notes here. If we wanted to order the results, we can use an order by but you can't use the alias anymore. In the query above, appending an order by a.ID would result in an error - as far as the results are concerned, the column is called ID rather than a.ID - even though the same alias has been used in both queries. We can only have one order by statement, and it must be as the last statement. For the next examples, I am adding a few extra rows to our tables. I have added Holden to the brands table. I have also added a row into cars that has the color value of 12 - which has no reference in the colors table. Okay, the boss is back again, barking requests out - *I want a count of each brand we carry and the number of cars in it!` - Typical, we just get to an interesting section of our discussion and the boss wants more work. Rightyo, so the first thing we need to do is get a complete listing of possible brands. 
select a.brand from brands a +--------+ | brand | +--------+ | Ford | | Toyota | | Nissan | | Smart | | BMW | | Holden | +--------+ 6 rows in set (0.00 sec) Now, when we join this to our cars table we get the following result: select a.brand from brands a join cars b on a.ID=b.brand group by a.brand +--------+ | brand | +--------+ | BMW | | Ford | | Nissan | | Smart | | Toyota | +--------+ 5 rows in set (0.00 sec) Which is of course a problem - we aren't seeing any mention of the lovely Holden brand I added. This is because a join looks for matching rows in both tables. As there is no data in cars that is of type Holden it isn't returned. This is where we can use an outer join. This will return all the results from one table whether they are matched in the other table or not: select a.brand from brands a left outer join cars b on a.ID=b.brand group by a.brand +--------+ | brand | +--------+ | BMW | | Ford | | Holden | | Nissan | | Smart | | Toyota | +--------+ 6 rows in set (0.00 sec) Now that we have that, we can add a lovely aggregate function to get a count and get the boss off our backs for a moment. select a.brand, count(b.id) as countOfBrand from brands a left outer join cars b on a.ID=b.brand group by a.brand +--------+--------------+ | brand | countOfBrand | +--------+--------------+ | BMW | 2 | | Ford | 2 | | Holden | 0 | | Nissan | 1 | | Smart | 1 | | Toyota | 5 | +--------+--------------+ 6 rows in set (0.00 sec) And with that, away the boss skulks. Now, to explain this in some more detail, outer joins can be of the left or right type. The Left or Right defines which table is fully included. A left outer join will include all the rows from the table on the left, while (you guessed it) a right outer join brings all the results from the table on the right into the results. Some databases will allow a full outer join which will bring back results (whether matched or not) from both tables, but this isn't supported in all databases. Now, I probably figure at this point in time, you are wondering whether or not you can merge join types in a query - and the answer is yes, you absolutely can. select b.brand, c.color, count(a.id) as countOfBrand from cars a right outer join brands b on b.ID=a.brand join colors c on a.color=c.ID group by a.brand, c.color +--------+-------+--------------+ | brand | color | countOfBrand | +--------+-------+--------------+ | Ford | Blue | 1 | | Ford | White | 1 | | Toyota | Black | 1 | | Toyota | Green | 2 | | Toyota | Red | 1 | | Nissan | Black | 1 | | Smart | White | 1 | | BMW | Blue | 1 | | BMW | White | 1 | +--------+-------+--------------+ 9 rows in set (0.00 sec) So, why is that not the results that were expected? It is because although we have selected the outer join from cars to brands, it wasn't specified in the join to colors - so that particular join will only bring back results that match in both tables. 
Here is the query that would work to get the results that we expected: select a.brand, c.color, count(b.id) as countOfBrand from brands a left outer join cars b on a.ID=b.brand left outer join colors c on b.color=c.ID group by a.brand, c.color +--------+-------+--------------+ | brand | color | countOfBrand | +--------+-------+--------------+ | BMW | Blue | 1 | | BMW | White | 1 | | Ford | Blue | 1 | | Ford | White | 1 | | Holden | NULL | 0 | | Nissan | Black | 1 | | Smart | White | 1 | | Toyota | NULL | 1 | | Toyota | Black | 1 | | Toyota | Green | 2 | | Toyota | Red | 1 | +--------+-------+--------------+ 11 rows in set (0.00 sec) As we can see, we have two outer joins in the query and the results are coming through as expected. Now, how about those other types of joins you ask? What about Intersections? Well, not all databases support the intersection but pretty much all databases will allow you to create an intersection through a join (or a well structured where statement at the least). An Intersection is a type of join somewhat similar to a union as described above - but the difference is that it only returns rows of data that are identical (and I do mean identical) between the various individual queries joined by the union. Only rows that are identical in every regard will be returned. A simple example would be as such: select * from colors where ID>2 intersect select * from colors where id<4 While a normal union query would return all the rows of the table (the first query returning anything over ID>2 and the second anything having ID<4) which would result in a full set, an intersect query would only return the row matching id=3 as it meets both criteria. Now, if your database doesn't support an intersect query, the above can be easily accomlished with the following query: select a.ID, a.color, a.paint from colors a join colors b on a.ID=b.ID where a.ID>2 and b.ID<4 +----+-------+----------+ | ID | color | paint | +----+-------+----------+ | 3 | Blue | Metallic | +----+-------+----------+ 1 row in set (0.00 sec) If you wish to perform an intersection across two different tables using a database that doesn't inherently support an intersection query, you will need to create a join on every column of the tables.
MySQL
12,475,850
466
I'm trying to use a select statement to get all of the columns from a certain MySQL table except one. Is there a simple way to do this? EDIT: There are 53 columns in this table (NOT MY DESIGN)
Actually there is a way; you need to have the required permissions for doing this, of course: SET @sql = CONCAT('SELECT ', (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), '<columns_to_omit>,', '') FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<table>' AND TABLE_SCHEMA = '<database>'), ' FROM <table>'); PREPARE stmt1 FROM @sql; EXECUTE stmt1; Replace <table>, <database> and <columns_to_omit> with your own values.
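As a concrete illustration, a sketch with hypothetical names (database mydb, table employees, omitting the column salary; none of these names come from the question). Note that the REPLACE trick as written only strips the column when it is followed by a comma, i.e. when it is not the last column in the list:

SET @sql = CONCAT('SELECT ',
    (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), 'salary,', '')
     FROM INFORMATION_SCHEMA.COLUMNS
     WHERE TABLE_NAME = 'employees' AND TABLE_SCHEMA = 'mydb'),
    ' FROM employees');
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
DEALLOCATE PREPARE stmt1;  -- clean up the prepared statement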
MySQL
9,122
466
I am having a big problem trying to connect to mysql. When I run: /usr/local/mysql/bin/mysql start I have the following error : Can't connect to local MySQL server through socket '/var/mysql/mysql.sock' (38) I do have mysql.sock under the /var/mysql directory. In /etc/my.cnf I have: [client] port=3306 socket=/var/mysql/mysql.sock [mysqld] port=3306 socket=/var/mysql/mysql.sock key_buffer_size=16M max_allowed_packet=8M and in /etc/php.ini I have : ; Default socket name for local MySQL connects. If empty, uses the built-in ; MySQL defaults. mysql.default_socket = /var/mysql/mysql.sock I have restarted apache using sudo /opt/local/apache2/bin/apachectl restart But I still have the error. Otherwise, I don't know if that's relevant but when I do mysql_config --sockets I get --socket [/tmp/mysql.sock]
If your my.cnf file (usually in the /etc/mysql/ folder) is correctly configured with: socket=/var/lib/mysql/mysql.sock you can check whether mysql is running with the following command: mysqladmin -u root -p status Try changing the permissions on the mysql folder. If you are working locally, you can try: sudo chmod -R 755 /var/lib/mysql/ That solved it for me.
MySQL
5,376,427
463
I believe that I've successfully deployed my (very basic) site to fortrabbit, but as soon as I connect to SSH to run some commands (such as php artisan migrate or php artisan db:seed) I get an error message: [PDOException] SQLSTATE[HY000] [2002] No such file or directory At some point the migration must have worked, because my tables are there - but this doesn't explain why it isn't working for me now.
One of the simplest reasons for this error is that a MySQL server is not running, so verify that first. If it's up, proceed to the other recommendations: Laravel 4: change "host" in the app/config/database.php file from "localhost" to "127.0.0.1". Laravel 5+: change "DB_HOST" in the .env file from "localhost" to "127.0.0.1". I had the exact same problem. None of the above solutions worked for me. I solved the problem by changing the "host" in the /app/config/database.php file from "localhost" to "127.0.0.1". Not sure why "localhost" doesn't work by default, but I found this answer in a similar question solved in a symfony2 post. https://stackoverflow.com/a/9251924 Update: some people have asked why this fix works, so I have done a little bit of research into the topic. It seems as though they use different connection types, as explained in this post: https://stackoverflow.com/a/9715164 The issue that arose here is that "localhost" uses a UNIX socket and cannot find the database in the standard directory. However, "127.0.0.1" uses TCP (Transmission Control Protocol), which essentially means it runs through the "local internet" on your computer, which is much more reliable than the UNIX socket in this case.
MySQL
20,723,803
460
Basically the question is how to get from this: foo_id foo_name 1 A 1 B 2 C to this: foo_id foo_name 1 A B 2 C
SELECT id, GROUP_CONCAT(name SEPARATOR ' ') FROM table GROUP BY id; https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_group-concat From the link above, GROUP_CONCAT: This function returns a string result with the concatenated non-NULL values from a group. It returns NULL if there are no non-NULL values.
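If the order of the concatenated values matters, GROUP_CONCAT also accepts DISTINCT and an ORDER BY inside the call; a small sketch reusing the placeholder names from the answer above:

SELECT id,
       GROUP_CONCAT(DISTINCT name ORDER BY name SEPARATOR ' ') AS names
FROM table          -- substitute your real table name
GROUP BY id;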
MySQL
149,772
457
I have a simple mysql table: CREATE TABLE IF NOT EXISTS `pers` ( `persID` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(35) NOT NULL, `gehalt` int(11) NOT NULL, `chefID` int(11) DEFAULT NULL, PRIMARY KEY (`persID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; INSERT INTO `pers` (`persID`, `name`, `gehalt`, `chefID`) VALUES (1, 'blb', 1000, 3), (2, 'as', 1000, 3), (3, 'chef', 1040, NULL); I tried to run following update, but I get only the error 1093: UPDATE pers P SET P.gehalt = P.gehalt * 1.05 WHERE (P.chefID IS NOT NULL OR gehalt < (SELECT ( SELECT MAX(gehalt * 1.05) FROM pers MA WHERE MA.chefID = MA.chefID) AS _pers )) I searched for the error and found from mysql following page http://dev.mysql.com/doc/refman/5.1/en/subquery-restrictions.html, but it doesn't help me. What shall I do to correct the sql query?
The problem is that MySQL, for whatever inane reason, doesn't allow you to write queries like this: UPDATE myTable SET myTable.A = ( SELECT B FROM myTable INNER JOIN ... ) That is, if you're doing an UPDATE/INSERT/DELETE on a table, you can't reference that table in an inner query (you can however reference a field from that outer table...) The solution is to replace the instance of myTable in the sub-query with (SELECT * FROM myTable), like this UPDATE myTable SET myTable.A = ( SELECT B FROM (SELECT * FROM myTable) AS something INNER JOIN ... ) This apparently causes the necessary fields to be implicitly copied into a temporary table, so it's allowed. I found this solution here. A note from that article: You don’t want to just SELECT * FROM table in the subquery in real life; I just wanted to keep the examples simple. In reality, you should only be selecting the columns you need in that innermost query, and adding a good WHERE clause to limit the results, too.
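Applied to the asker's pers table, a hedged sketch of the same workaround (the derived table (SELECT ... FROM pers) is materialized, so error 1093 is avoided; the WHERE condition only mirrors the shape of the original query and may need adjusting to the real business rule):

UPDATE pers P
SET P.gehalt = P.gehalt * 1.05
WHERE P.chefID IS NOT NULL
   OR P.gehalt < (
         SELECT max_gehalt
         FROM (SELECT MAX(gehalt * 1.05) AS max_gehalt FROM pers) AS tmp
      );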
MySQL
4,429,319
454
I am trying to execute the following query: INSERT INTO table_listnames (name, address, tele) VALUES ('Rupert', 'Somewhere', '022') WHERE NOT EXISTS ( SELECT name FROM table_listnames WHERE name='value' ); But this returns an error. Basically I don't want to insert a record if the 'name' field of the record already exists in another record - how to check if the new name is unique?
I'm not actually suggesting that you do this, as the UNIQUE index as suggested by Piskvor and others is a far better way to do it, but you can actually do what you were attempting: CREATE TABLE `table_listnames` ( `id` int(11) NOT NULL auto_increment, `name` varchar(255) NOT NULL, `address` varchar(255) NOT NULL, `tele` varchar(255) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB; Insert a record: INSERT INTO table_listnames (name, address, tele) SELECT * FROM (SELECT 'Rupert', 'Somewhere', '022') AS tmp WHERE NOT EXISTS ( SELECT name FROM table_listnames WHERE name = 'Rupert' ) LIMIT 1; Query OK, 1 row affected (0.00 sec) Records: 1 Duplicates: 0 Warnings: 0 SELECT * FROM `table_listnames`; +----+--------+-----------+------+ | id | name | address | tele | +----+--------+-----------+------+ | 1 | Rupert | Somewhere | 022 | +----+--------+-----------+------+ Try to insert the same record again: INSERT INTO table_listnames (name, address, tele) SELECT * FROM (SELECT 'Rupert', 'Somewhere', '022') AS tmp WHERE NOT EXISTS ( SELECT name FROM table_listnames WHERE name = 'Rupert' ) LIMIT 1; Query OK, 0 rows affected (0.00 sec) Records: 0 Duplicates: 0 Warnings: 0 +----+--------+-----------+------+ | id | name | address | tele | +----+--------+-----------+------+ | 1 | Rupert | Somewhere | 022 | +----+--------+-----------+------+ Insert a different record: INSERT INTO table_listnames (name, address, tele) SELECT * FROM (SELECT 'John', 'Doe', '022') AS tmp WHERE NOT EXISTS ( SELECT name FROM table_listnames WHERE name = 'John' ) LIMIT 1; Query OK, 1 row affected (0.00 sec) Records: 1 Duplicates: 0 Warnings: 0 SELECT * FROM `table_listnames`; +----+--------+-----------+------+ | id | name | address | tele | +----+--------+-----------+------+ | 1 | Rupert | Somewhere | 022 | | 2 | John | Doe | 022 | +----+--------+-----------+------+ And so on... Update: To prevent #1060 - Duplicate column name error in case two values may equal, you must name the columns of the inner SELECT: INSERT INTO table_listnames (name, address, tele) SELECT * FROM (SELECT 'Unknown' AS name, 'Unknown' AS address, '022' AS tele) AS tmp WHERE NOT EXISTS ( SELECT name FROM table_listnames WHERE name = 'Rupert' ) LIMIT 1; Query OK, 1 row affected (0.00 sec) Records: 1 Duplicates: 0 Warnings: 0 SELECT * FROM `table_listnames`; +----+---------+-----------+------+ | id | name | address | tele | +----+---------+-----------+------+ | 1 | Rupert | Somewhere | 022 | | 2 | John | Doe | 022 | | 3 | Unknown | Unknown | 022 | +----+---------+-----------+------+
MySQL
3,164,505
452
I know that you can insert multiple rows at once, is there a way to update multiple rows at once (as in, in one query) in MySQL? Edit: For example I have the following Name id Col1 Col2 Row1 1 6 1 Row2 2 2 3 Row3 3 9 5 Row4 4 16 8 I want to combine all the following Updates into one query UPDATE table SET Col1 = 1 WHERE id = 1; UPDATE table SET Col1 = 2 WHERE id = 2; UPDATE table SET Col2 = 3 WHERE id = 3; UPDATE table SET Col1 = 10 WHERE id = 4; UPDATE table SET Col2 = 12 WHERE id = 4;
Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE. Using your example: INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12) ON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);
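If some rows should only have one of the columns touched (as in the question, where id 3 changes Col2 only), an alternative sketch uses CASE expressions so that columns not listed for a given id keep their current value:

UPDATE `table`
SET Col1 = CASE id WHEN 1 THEN 1 WHEN 2 THEN 2 WHEN 4 THEN 10 ELSE Col1 END,
    Col2 = CASE id WHEN 3 THEN 3 WHEN 4 THEN 12 ELSE Col2 END
WHERE id IN (1, 2, 3, 4);  -- limit the update to the affected rows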
MySQL
3,432
450
I'm running the following MySQL UPDATE statement: mysql> update customer set account_import_id = 1; ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction I'm not using a transaction, so why would I be getting this error? I even tried restarting my MySQL server and it didn't help. The table has 406,733 rows.
HOW TO FORCE UNLOCK locked tables in MySQL: Breaking locks like this means atomicity may no longer be enforced for the SQL statements that caused the lock. This is hackish, and the proper solution is to fix the application that caused the locks. However, when dollars are on the line, a swift kick will get things moving again. 1) Enter MySQL: mysql -u your_user -p 2) See the list of locked tables: mysql> show open tables where in_use>0; 3) See the list of current processes; one of them is locking your table(s): mysql> show processlist; 4) Kill one of these processes: mysql> kill <put_process_id_here>;
MySQL
5,836,623
449
Is there a way to get the count of rows in all tables in a MySQL database without running a SELECT count() on each table?
SELECT SUM(TABLE_ROWS) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '{your_db}'; Note from the docs though: For InnoDB tables, the row count is only a rough estimate used in SQL optimization. You'll need to use COUNT(*) for exact counts (which is more expensive).
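If you want the estimate broken down per table rather than as a single total, a sketch against the same INFORMATION_SCHEMA view (same caveat: InnoDB row counts here are approximate):

SELECT TABLE_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = '{your_db}'
ORDER BY TABLE_ROWS DESC;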
MySQL
286,039
449
I have installed MySQL Community Edition 5.5 on my local machine and I want to allow remote connections so that I can connect from external source. How can I do that?
That is allowed by default in MySQL. What is disabled by default is remote root access. If you want to enable that, run this SQL command locally: GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION; FLUSH PRIVILEGES; Then find the following line and comment it out in your my.cnf file, which usually lives at /etc/mysql/my.cnf on Unix/OSX systems (in some cases the file is at /etc/mysql/mysql.conf.d/mysqld.cnf). If it's a Windows system, you can find it in the MySQL installation directory, usually something like C:\Program Files\MySQL\MySQL Server 5.5\ and the filename will be my.ini. Change the line bind-address = 127.0.0.1 to #bind-address = 127.0.0.1 and restart the MySQL server (Unix/OSX and Windows) for the changes to take effect.
MySQL
14,779,104
448
I have a table with the following columns in a MySQL database [id, url] And the URLs are like: http://domain1.example/images/img1.jpg I want to update all the URLs to another domain http://domain2.example/otherfolder/img1.jpg keeping the name of the file as is. What's the query must I run?
UPDATE urls SET url = REPLACE(url, 'domain1.example/images/', 'domain2.example/otherfolder/')
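If you want to preview the rewrite before changing any data, a sketch assuming the same urls table and column as above:

SELECT url,
       REPLACE(url, 'domain1.example/images/', 'domain2.example/otherfolder/') AS new_url
FROM urls
WHERE url LIKE '%domain1.example/images/%';  -- only rows that would actually change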
MySQL
10,177,208
447
I have a table with the following fields: id (Unique) url (Unique) title company site_id Now, I need to remove rows having same title, company and site_id. One way to do it will be using the following SQL along with a script (PHP): SELECT title, site_id, location, id, count( * ) FROM jobs GROUP BY site_id, company, title, location HAVING count( * ) >1 After running this query, I can remove duplicates using a server side script. But, I want to know if this can be done only using SQL query.
A really easy way to do this is to add a UNIQUE index on the 3 columns. When you write the ALTER statement, include the IGNORE keyword, like so: ALTER IGNORE TABLE jobs ADD UNIQUE INDEX idx_name (site_id, title, company); This will drop all the duplicate rows. As an added benefit, future INSERTs that are duplicates will error out. As always, you may want to take a backup before running something like this... Edit: this no longer works in MySQL 5.7+. The IGNORE clause for ALTER TABLE was deprecated in MySQL 5.6 and removed in MySQL 5.7.
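For MySQL 5.7+, where ALTER IGNORE is no longer available, one common approach (sketched against the asker's columns; take a backup first) is a self-join DELETE that keeps the row with the smallest id in each duplicate group, after which you can still add the plain UNIQUE index to prevent future duplicates:

DELETE t1
FROM jobs t1
INNER JOIN jobs t2
    ON  t1.site_id = t2.site_id
    AND t1.title   = t2.title
    AND t1.company = t2.company
    AND t1.id      > t2.id;   -- t1 duplicates an earlier row, so remove it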
MySQL
3,311,903
447
What's the difference between VARCHAR and CHAR in MySQL? I am trying to store MD5 hashes.
VARCHAR is variable-length. CHAR is fixed length. If your content is a fixed size, you'll get better performance with CHAR. See the MySQL page on CHAR and VARCHAR Types for a detailed explanation (be sure to also read the comments).
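For the MD5 case specifically: an MD5 digest written as hex is always 32 characters, so a fixed-width column fits naturally. A minimal sketch (table and column names are only illustrative):

CREATE TABLE file_hashes (
    id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    md5 CHAR(32) NOT NULL        -- 32 hex characters, fixed length
);

INSERT INTO file_hashes (md5) VALUES (MD5('some content'));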
MySQL
1,885,630
447
How do I rename a column in table xyz? The columns are: Manufacurerid, name, status, AI, PK, int I want to rename to manufacturerid I tried using PHPMyAdmin panel, but I get this error: MySQL said: Documentation #1025 - Error on rename of '.\shopping\#sql-c98_26' to '.\shopping\tblmanufacturer' (errno: 150)
Lone Ranger is very close... in fact, you also need to specify the datatype of the renamed column. For example: ALTER TABLE `xyz` CHANGE `manufacurerid` `manufacturerid` INT; Remember: replace INT with whatever your column's data type is (REQUIRED); the backtick (`) quoting is optional.
MySQL
4,002,340
443
I am really interested in how MySQL indexes work, more specifically, how can they return the data requested without scanning the entire table? It's off-topic, I know, but if there is someone who could explain this to me in detail, I would be very, very thankful.
Basically an index on a table works like an index in a book (that's where the name came from): Let's say you have a book about databases and you want to find some information about, say, storage. Without an index (assuming no other aid, such as a table of contents) you'd have to go through the pages one by one, until you found the topic (that's a full table scan). On the other hand, an index has a list of keywords, so you'd consult the index and see that storage is mentioned on pages 113-120,231 and 354. Then you could flip to those pages directly, without searching (that's a search with an index, somewhat faster). Of course, how useful the index will be, depends on many things - a few examples, using the simile above: if you had a book on databases and indexed the word "database", you'd see that it's mentioned on pages 1-59,61-290, and 292 to 400. In such case, the index is not much help and it might be faster to go through the pages one by one (in a database, this is "poor selectivity"). For a 10-page book, it makes no sense to make an index, as you may end up with a 10-page book prefixed by a 5-page index, which is just silly - just scan the 10 pages and be done with it. The index also needs to be useful - there's generally no point to index e.g. the frequency of the letter "L" per page.
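To see the effect in practice, a small sketch (table, column and index names are made up for illustration): run EXPLAIN before and after creating an index on the filtered column and compare the access plan:

-- Hypothetical table
CREATE TABLE books (
    id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    topic VARCHAR(100)
);

-- Without an index on topic this is a full table scan
EXPLAIN SELECT * FROM books WHERE topic = 'storage';

CREATE INDEX idx_topic ON books (topic);

-- With the index in place, the key column of the EXPLAIN output
-- should normally show idx_topic instead of a full scan
EXPLAIN SELECT * FROM books WHERE topic = 'storage';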
MySQL
3,567,981
443
I got the Error Code: 2013. Lost connection to MySQL server during query error when I tried to add an index to a table using MySQL Workbench. I also noticed that it appears whenever I run long query. Is there a way to increase the timeout value?
New versions of MySQL WorkBench have an option to change specific timeouts. For me it was under Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600 Changed the value to 6000. Also unchecked limit rows as putting a limit in every time I want to search the whole data set gets tiresome.
MySQL
10,563,619
442
I'm using MySQL 5.7.13 on my windows PC with WAMP Server. My problem is while executing this query SELECT * FROM `tbl_customer_pod_uploads` WHERE `load_id` = '78' AND `status` = 'Active' GROUP BY `proof_type` I'm getting always error like this. Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'returntr_prod.tbl_customer_pod_uploads.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by Can you please tell me the best solution? My result should be like below: +----+---------+---------+---------+----------+-----------+------------+---------------+--------------+------------+--------+---------------------+---------------------+ | id | user_id | load_id | bill_id | latitude | langitude | proof_type | document_type | file_name | is_private | status | createdon | updatedon | +----+---------+---------+---------+----------+-----------+------------+---------------+--------------+------------+--------+---------------------+---------------------+ | 1 | 1 | 78 | 1 | 21.1212 | 21.5454 | 1 | 1 | id_Card.docx | 0 | Active | 2017-01-27 11:30:11 | 2017-01-27 11:30:14 | +----+---------+---------+---------+----------+-----------+------------+---------------+--------------+------------+--------+---------------------+---------------------+
This error (Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'returntr_prod.tbl_customer_pod_uploads.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by) can be solved simply by changing the SQL mode in MySQL with this command: SET GLOBAL sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY','')); This works for me too. I used it because there are many queries like this in my project, so I just removed ONLY_FULL_GROUP_BY from the SQL mode. Alternatively, simply include in the GROUP BY clause all the columns specified by the SELECT statement; then sql_mode can be left enabled. Thank you. :-) Updated: 14 Jul 2023. Changing the SQL mode is a solution, but the better practice is still to avoid selecting all columns (SELECT * ...) and instead use aggregate functions over the non-grouped columns, as mentioned in the answer below: https://stackoverflow.com/a/41887524/3602846
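If you would rather keep only_full_group_by enabled, a hedged sketch against the asker's table: either aggregate the non-grouped columns, or (MySQL 5.7+) mark them explicitly with ANY_VALUE() when you don't care which row's value is returned per group:

-- Aggregate the non-grouped columns
SELECT proof_type, COUNT(*) AS uploads, MAX(createdon) AS last_upload
FROM tbl_customer_pod_uploads
WHERE load_id = '78' AND status = 'Active'
GROUP BY proof_type;

-- Or explicitly accept an arbitrary row's value per group
SELECT proof_type, ANY_VALUE(file_name) AS file_name
FROM tbl_customer_pod_uploads
WHERE load_id = '78' AND status = 'Active'
GROUP BY proof_type;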
MySQL
41,887,460
440
I looked around some and didn't find what I was after so here goes. SELECT * FROM trees WHERE trees.`title` LIKE '%elm%' This works fine, but not if the tree is named Elm or ELM etc... How do I make SQL case insensitive for this wild-card search? I'm using MySQL 5 and Apache.
I've always solved this using lower: SELECT * FROM trees WHERE LOWER( trees.title ) LIKE '%elm%'
MySQL
2,876,789
440
How can I import a database with mysql from terminal? I cannot find the exact syntax.
Assuming you're on a Linux or Windows console: Prompt for password: mysql -u <username> -p <databasename> < <filename.sql> Enter password directly (not secure): mysql -u <username> -p<PlainPassword> <databasename> < <filename.sql> Example: mysql -u root -p wp_users < wp_users.sql mysql -u root -pPassword123 wp_users < wp_users.sql See also: 4.5.1.5. Executing SQL Statements from a Text File Note: If you are on windows then you will have to cd (change directory) to your MySQL/bin directory inside the CMD before executing the command.
MySQL
4,546,778
439
I have a database that is quite large so I want to export it using Command Prompt but I don't know how to. I am using WAMP.
First check whether your command line recognizes the mysql command. If not, go to the command prompt and type: set path=c:\wamp\bin\mysql\mysql5.1.36\bin Then use this command to export your database: mysqldump -u YourUser -p YourDatabaseName > wantedsqlfile.sql You will then be prompted for the database password. This exports the database to the directory you are in while executing the command. Note: here are some detailed instructions regarding both import and export
MySQL
3,031,412
435
I want to keep a backup of all my MySQL databases. I have more than 100 MySQL databases. I want to export all of them at the same time and again import all of them into my MySQL server at once. How can I do that?
Export: mysqldump -u root -p --all-databases > alldb.sql Look up the documentation for mysqldump. You may want to use some of the options mentioned in comments: mysqldump -u root -p --opt --all-databases > alldb.sql mysqldump -u root -p --all-databases --skip-lock-tables > alldb.sql Import: mysql -u root -p < alldb.sql
MySQL
9,497,869
433
I have an email column that I want to be unique. But I also want it to accept null values. Can my database have 2 null emails that way?
Yes, MySQL allows multiple NULLs in a column with a unique constraint. CREATE TABLE table1 (x INT NULL UNIQUE); INSERT table1 VALUES (1); INSERT table1 VALUES (1); -- Duplicate entry '1' for key 'x' INSERT table1 VALUES (NULL); INSERT table1 VALUES (NULL); SELECT * FROM table1; Result: x NULL NULL 1 This is not true for all databases. SQL Server 2005 and older, for example, only allows a single NULL value in a column that has a unique constraint.
MySQL
3,712,222
430
I want to add a Foreign Key to a table called "katalog". ALTER TABLE katalog ADD CONSTRAINT `fk_katalog_sprache` FOREIGN KEY (`Sprache`) REFERENCES `Sprache` (`ID`) ON DELETE SET NULL ON UPDATE SET NULL; When I try to do this, I get this error message: Error Code: 1005. Can't create table 'mytable.#sql-7fb1_7d3a' (errno: 150) Error in INNODB Status: 120405 14:02:57 Error in foreign key constraint of table mytable.#sql-7fb1_7d3a: FOREIGN KEY (`Sprache`) REFERENCES `Sprache` (`ID`) ON DELETE SET NULL ON UPDATE SET NULL: Cannot resolve table name close to: (`ID`) ON DELETE SET NULL ON UPDATE SET NULL When i use this query it works, but with wrong "on delete" action: ALTER TABLE `katalog` ADD FOREIGN KEY (`Sprache` ) REFERENCES `sprache` (`ID` ) Both tables are InnoDB and both fields are "INT(11) not null". I'm using MySQL 5.1.61. Trying to fire this ALTER Query with MySQL Workbench (newest) on a MacBook Pro. Table Create Statements: CREATE TABLE `katalog` ( `ID` int(11) unsigned NOT NULL AUTO_INCREMENT, `Name` varchar(50) COLLATE utf8_unicode_ci NOT NULL, `AnzahlSeiten` int(4) unsigned NOT NULL, `Sprache` int(11) NOT NULL, PRIMARY KEY (`ID`), UNIQUE KEY `katalogname_uq` (`Name`) ) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=DYNAMIC$$ CREATE TABLE `sprache` ( `ID` int(11) NOT NULL AUTO_INCREMENT, `Bezeichnung` varchar(45) NOT NULL, PRIMARY KEY (`ID`), UNIQUE KEY `Bezeichnung_UNIQUE` (`Bezeichnung`), KEY `ix_sprache_id` (`ID`) ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
To add a foreign key (grade_id) to an existing table (users), follow the following steps: ALTER TABLE users ADD grade_id SMALLINT UNSIGNED NOT NULL DEFAULT 0; ALTER TABLE users ADD CONSTRAINT fk_grade_id FOREIGN KEY (grade_id) REFERENCES grades(id);
MySQL
10,028,214
428
I need to store a url in a MySQL table. What's the best practice for defining a field that will hold a URL with an undetermined length?
Lowest common denominator max URL length among popular web browsers: 2,083 (Internet Explorer) http://dev.mysql.com/doc/refman/5.0/en/char.html Values in VARCHAR columns are variable-length strings. The length can be specified as a value from 0 to 255 before MySQL 5.0.3, and 0 to 65,535 in 5.0.3 and later versions. The effective maximum length of a VARCHAR in MySQL 5.0.3 and later is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used. So ... < MySQL 5.0.3 use TEXT or >= MySQL 5.0.3 use VARCHAR(2083)
MySQL
219,569
426
Is there a way to grab the columns name of a table in MySQL using PHP?
You can use DESCRIBE: DESCRIBE my_table; Or in newer versions you can use INFORMATION_SCHEMA: SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'my_database' AND TABLE_NAME = 'my_table'; Or you can use SHOW COLUMNS: SHOW COLUMNS FROM my_table; Or to get column names with comma in a line: SELECT group_concat(COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'my_database' AND TABLE_NAME = 'my_table';
MySQL
1,526,688
424
What is the difference between tinyint, smallint, mediumint, bigint and int in MySQL? In what cases should these be used?
They take up different amounts of space and they have different ranges of acceptable values. Here are the sizes and ranges of values for SQL Server; other RDBMSes have similar documentation: MySQL, Postgres, Oracle (they just have a NUMBER datatype really), DB2. Turns out they all use the same specification (with a few minor exceptions noted below) but support various combinations of those types (Oracle not included because it has just a NUMBER datatype, see the above link):

            | SQL Server | MySQL | Postgres | DB2
------------+------------+-------+----------+-----
tinyint     |     X      |   X   |          |
smallint    |     X      |   X   |    X     |  X
mediumint   |            |   X   |          |
int/integer |     X      |   X   |    X     |  X
bigint      |     X      |   X   |    X     |  X

And they support the same value ranges (with one exception below) and all have the same storage requirements:

            | Bytes   | Range (signed)                              | Range (unsigned)
------------+---------+---------------------------------------------+---------------------------
tinyint     | 1 byte  | -128 to 127                                 | 0 to 255
smallint    | 2 bytes | -32768 to 32767                             | 0 to 65535
mediumint   | 3 bytes | -8388608 to 8388607                         | 0 to 16777215
int/integer | 4 bytes | -2147483648 to 2147483647                   | 0 to 4294967295
bigint      | 8 bytes | -9223372036854775808 to 9223372036854775807 | 0 to 18446744073709551615

The "unsigned" types are only available in MySQL, and the rest just use the signed ranges, with one notable exception: tinyint in SQL Server is unsigned and has a value range of 0 to 255
MySQL
2,991,405
423
I seem to be unable to re-create a simple user I've deleted, even as root in MySQL. My case: user 'jack' existed before, but I deleted it from mysql.user in order to recreate it. I see no vestiges of this in that table. If I execute this command for some other, random username, say 'jimmy', it works fine (just as it originally did for 'jack'). What have I done to corrupt user 'jack' and how can I undo that corruption in order to re-create 'jack' as a valid user for this installation of MySQL? See example below. (Of course, originally, there was much time between the creation of 'jack' and his removal.) mysql> CREATE USER 'jack'@'localhost' IDENTIFIED BY 'test123'; Query OK, 0 rows affected (0.00 sec) mysql> select user,host from user; +------------------+-----------------+ | user | host | +------------------+-----------------+ | root | 127.0.0.1 | | debian-sys-maint | localhost | | jack | localhost | | root | localhost | | root | russ-elite-book | +------------------+-----------------+ 5 rows in set (0.00 sec) mysql> delete from user where user = 'jack'; Query OK, 1 row affected (0.00 sec) mysql> select user,host from user; +------------------+-----------------+ | user | host | +------------------+-----------------+ | root | 127.0.0.1 | | debian-sys-maint | localhost | | root | localhost | | root | russ-elite-book | +------------------+-----------------+ 4 rows in set (0.00 sec) mysql> CREATE USER 'jack'@'localhost' IDENTIFIED BY 'test123'; ERROR 1396 (HY000): Operation CREATE USER failed for 'jack'@'localhost' mysql> CREATE USER 'jimmy'@'localhost' IDENTIFIED BY 'test123'; Query OK, 0 rows affected (0.00 sec) mysql> select user,host from user; +------------------+-----------------+ | user | host | +------------------+-----------------+ | root | 127.0.0.1 | | debian-sys-maint | localhost | | jimmy | localhost | | root | localhost | | root | russ-elite-book | +------------------+-----------------+ 5 rows in set (0.00 sec)
Yes, this bug exists. However, I found a small workaround. Assume the user is there, so drop the user. After deleting the user, you need to flush the MySQL privileges. Now create the user again. That should solve it. Assuming we want to create the user admin @ localhost, these would be the commands: drop user admin@localhost; flush privileges; create user admin@localhost identified by 'admins_password';
MySQL
5,555,328
422
I'm trying to find out if a row exists in a table. Using MySQL, is it better to do a query like this: SELECT COUNT(*) AS total FROM table1 WHERE ... and check to see if the total is non-zero or is it better to do a query like this: SELECT * FROM table1 WHERE ... LIMIT 1 and check to see if any rows were returned? In both queries, the WHERE clause uses an index.
You could also try EXISTS: SELECT EXISTS(SELECT * FROM table1 WHERE ...) and per the documentation, you can SELECT anything. Traditionally, an EXISTS subquery starts with SELECT *, but it could begin with SELECT 5 or SELECT column1 or anything at all. MySQL ignores the SELECT list in such a subquery, so it makes no difference.
MySQL
1,676,551
422
Let's say I am doing a MySQL INSERT into one of my tables and the table has the column item_id which is set to autoincrement and primary key. How do I get the query to output the value of the newly generated primary key item_id in the same query? Currently I am running a second query to retrieve the id but this hardly seems like good practice considering this might produce the wrong result... If this is not possible then what is the best practice to ensure I retrieve the correct id?
You need to use the LAST_INSERT_ID() function: http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_last-insert-id Eg: INSERT INTO table_name (col1, col2,...) VALUES ('val1', 'val2'...); SELECT LAST_INSERT_ID(); This will get you back the PRIMARY KEY value of the last row that you inserted: The ID that was generated is maintained in the server on a per-connection basis. This means that the value returned by the function to a given client is the first AUTO_INCREMENT value generated for most recent statement affecting an AUTO_INCREMENT column by that client. So the value returned by LAST_INSERT_ID() is per user and is unaffected by other queries that might be running on the server from other users.
MySQL
17,112,852
418
I've been trying to figure out how I can make a query with MySQL that checks if the value (string $haystack ) in a certain column contains certain data (string $needle), like this: SELECT * FROM `table` WHERE `column`.contains('{$needle}') In PHP, the function is called substr($haystack, $needle), so maybe: WHERE substr(`column`, '{$needle}')=1
Quite simple actually: SELECT * FROM `table` WHERE `column` LIKE '%{$needle}%' The % is a wildcard for any set of characters (none, one or many). Do note that this can get slow on very large datasets, so if your database grows you'll need to use fulltext indices.
MySQL
2,602,252
417
I'm trying to setup up MySQL on mac os 10.6 using Homebrew by brew install mysql 5.1.52. Everything goes well and I am also successful with the mysql_install_db. However when I try to connect to the server using: /usr/local/Cellar/mysql/5.1.52/bin/mysqladmin -u root password 'mypass' I get: /usr/local/Cellar/mysql/5.1.52/bin/mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'root'@'localhost' (using password: NO)' I've tried to access mysqladmin or mysql using -u root -proot as well, but it doesn't work with or without password. This is a brand new installation on a brand new machine and as far as I know the new installation must be accessible without a root password. I also tried: /usr/local/Cellar/mysql/5.1.52/bin/mysql_secure_installation but I also get ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
I think one can end up in this position with older versions of mysql already installed. I had the same problem and none of the above solutions worked for me. I fixed it thus: I used brew's remove & cleanup commands, unloaded the launchctl script, then deleted the mysql directory in /usr/local/var, deleted my existing /etc/my.cnf (leave that one up to you, should it apply) and the launchctl plist, and updated the string for the plist. Note also that your alternate security script directory will be based on which version of MySQL you are installing. Step-by-step: brew remove mysql brew cleanup launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist rm ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist sudo rm -rf /usr/local/var/mysql I then started from scratch: installed mysql with brew install mysql ran the commands brew suggested: (see note below) unset TMPDIR mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp Started mysql with the mysql.server start command, to be able to log on to it Used the alternate security script: /usr/local/Cellar/mysql/5.5.10/bin/mysql_secure_installation Followed the launchctl section from the brew package script output, such as: #start launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist #stop launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist Note: the --force bit on brew cleanup will also clean up outdated kegs; I think it's a new-ish Homebrew feature. A second note: a commenter says step 2 is not required. I don't want to test it, so YMMV!
MySQL
4,359,131
416
What is the correct format to pass to the date() function in PHP if I want to insert the result into a MySQL datetime type column? I've been trying date('Y-M-D G:i:s') but that just inserts "0000-00-00 00:00:00" every time.
The problem is that you're using 'M' and 'D', which are textual representations; MySQL is expecting a numeric representation of the format 2010-02-06 19:30:13 Try: date('Y-m-d H:i:s') which uses the numeric equivalents. Edit: switched G to H; though it may not have an impact, you probably want to use the 24-hour format with leading zeros.
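If the value you are inserting is simply "now", you can also skip the PHP formatting entirely and let MySQL fill it in (the events table and its columns are hypothetical):
INSERT INTO events (title, created_at) VALUES ('something happened', NOW());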
MySQL
2,215,354
413
I have a function that returns five characters with mixed case. If I do a query on this string it will return the value regardless of case. How can I make MySQL string queries case sensitive?
Use this to make a case-sensitive query: SELECT * FROM `table` WHERE BINARY `column` = 'value'
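An equivalent sketch using an explicit binary collation, assuming the column uses the utf8 character set (the VARCHAR(255) length below is only a placeholder):
SELECT * FROM `table` WHERE `column` COLLATE utf8_bin = 'value';
-- or make the column itself case sensitive for all queries:
ALTER TABLE `table` MODIFY `column` VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_bin;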
MySQL
5,629,111
412
I have just installed Debian Lenny with Apache, MySQL, and PHP and I am receiving a PDOException could not find driver. This is the specific line of code it is referring to: $dbh = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB_NAME, DB_USER, DB_PASS) DB_HOST, DB_NAME, DB_USER, and DB_PASS are constants that I have defined. It works fine on the production server (and on my previous Ubuntu Server setup). Is this something to do with my PHP installation? Searching the internet has not helped, all I get is experts-exchange and examples, but no solutions.
You need to have a module called pdo_mysql. Look for the following in phpinfo(): pdo_mysql PDO Driver for MySQL, client library version => 5.1.44
MySQL
2,852,748
407
I am having a problem with BLOB fields in my MySQL database - when uploading files larger than approx 1MB I get an error Packets larger than max_allowed_packet are not allowed. Here is what I've tried: In MySQL Query Browser I ran a show variables like 'max_allowed_packet' which gave me 1048576. Then I executed the query set global max_allowed_packet=33554432 followed by show variables like 'max_allowed_packet' - it gives me 33554432 as expected. But when I restart the MySQL server it magically goes back to 1048576. What am I doing wrong here? Bonus question: is it possible to compress a BLOB field?
Make the change in the my.ini or ~/.my.cnf file by including this single line under the [mysqld] or [client] section: max_allowed_packet=500M Then restart the MySQL service and you are done. See the documentation for further information.
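After the restart you can confirm from any client session that the new value took effect:
SHOW VARIABLES LIKE 'max_allowed_packet';  -- should now report 524288000, i.e. 500*1024*1024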
MySQL
8,062,496
406
I've got the following two tables (in MySQL): Phone_book +----+------+--------------+ | id | name | phone_number | +----+------+--------------+ | 1 | John | 111111111111 | +----+------+--------------+ | 2 | Jane | 222222222222 | +----+------+--------------+ Call +----+------+--------------+ | id | date | phone_number | +----+------+--------------+ | 1 | 0945 | 111111111111 | +----+------+--------------+ | 2 | 0950 | 222222222222 | +----+------+--------------+ | 3 | 1045 | 333333333333 | +----+------+--------------+ How do I find out which calls were made by people whose phone_number is not in the Phone_book? The desired output would be: Call +----+------+--------------+ | id | date | phone_number | +----+------+--------------+ | 3 | 1045 | 333333333333 | +----+------+--------------+
There are several different ways of doing this, with varying efficiency, depending on how good your query optimiser is, and the relative size of your two tables: This is the shortest statement, and may be quickest if your phone book is very short: SELECT * FROM Call WHERE phone_number NOT IN (SELECT phone_number FROM Phone_book) alternatively (thanks to Alterlife) SELECT * FROM Call WHERE NOT EXISTS (SELECT * FROM Phone_book WHERE Phone_book.phone_number = Call.phone_number) or (thanks to WOPR) SELECT * FROM Call LEFT OUTER JOIN Phone_Book ON (Call.phone_number = Phone_book.phone_number) WHERE Phone_book.phone_number IS NULL (ignoring that, as others have said, it's normally best to select just the columns you want, not '*')
MySQL
367,863
405
Someone sent me a SQL query where the GROUP BY clause consisted of the statement: GROUP BY 1. This must be a typo right? No column is given the alias 1. What could this mean? Am I right to assume that this must be a typo?
It means to group by the first column of your result set regardless of what it's called. You can do the same with ORDER BY.
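A small illustration with a hypothetical calls table: the two queries below are equivalent, the second simply refers to the select-list positions.
SELECT company, COUNT(*) AS cnt FROM calls GROUP BY company ORDER BY cnt;
SELECT company, COUNT(*) AS cnt FROM calls GROUP BY 1 ORDER BY 2;  -- 1 = company, 2 = cnt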
MySQL
7,392,730
401
I am trying to forward engineer my new schema onto my database server, but I can't figure out why I am getting this error. I've tried to search for the answer here, but everything I've found has said to either set the database engine to InnoDB or to make sure the keys I'm trying to use as a foreign key are primary keys in their own tables. I have done both of these things, if I'm not mistaken. What else can I do? Executing SQL script in server ERROR: Error 1215: Cannot add foreign key constraint -- ----------------------------------------------------- -- Table `Alternative_Pathways`.`Clients_has_Staff` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `Alternative_Pathways`.`Clients_has_Staff` ( `Clients_Case_Number` INT NOT NULL , `Staff_Emp_ID` INT NOT NULL , PRIMARY KEY (`Clients_Case_Number`, `Staff_Emp_ID`) , INDEX `fk_Clients_has_Staff_Staff1_idx` (`Staff_Emp_ID` ASC) , INDEX `fk_Clients_has_Staff_Clients_idx` (`Clients_Case_Number` ASC) , CONSTRAINT `fk_Clients_has_Staff_Clients` FOREIGN KEY (`Clients_Case_Number` ) REFERENCES `Alternative_Pathways`.`Clients` (`Case_Number` ) ON DELETE NO ACTION ON UPDATE NO ACTION, CONSTRAINT `fk_Clients_has_Staff_Staff1` FOREIGN KEY (`Staff_Emp_ID` ) REFERENCES `Alternative_Pathways`.`Staff` (`Emp_ID` ) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB SQL script execution finished: statements: 7 succeeded, 1 failed Here is the SQL for the parent tables. CREATE TABLE IF NOT EXISTS `Alternative_Pathways`.`Clients` ( `Case_Number` INT NOT NULL , `First_Name` CHAR(10) NULL , `Middle_Name` CHAR(10) NULL , `Last_Name` CHAR(10) NULL , `Address` CHAR(50) NULL , `Phone_Number` INT(10) NULL , PRIMARY KEY (`Case_Number`) ) ENGINE = InnoDB CREATE TABLE IF NOT EXISTS `Alternative_Pathways`.`Staff` ( `Emp_ID` INT NOT NULL , `First_Name` CHAR(10) NULL , `Middle_Name` CHAR(10) NULL , `Last_Name` CHAR(10) NULL , PRIMARY KEY (`Emp_ID`) ) ENGINE = InnoDB
I'm guessing that Clients.Case_Number and/or Staff.Emp_ID are not exactly the same data type as Clients_has_Staff.Clients_Case_Number and Clients_has_Staff.Staff_Emp_ID. Perhaps the columns in the parent tables are INT UNSIGNED? They need to be exactly the same data type in both tables.
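To verify, compare the column definitions of the parent and child tables and, if they differ, align them before re-adding the constraint. A rough sketch (the INT UNSIGNED below is only an example of a possible mismatch):
SHOW CREATE TABLE `Alternative_Pathways`.`Clients`;            -- check the exact type of Case_Number
SHOW CREATE TABLE `Alternative_Pathways`.`Clients_has_Staff`;  -- compare with Clients_Case_Number
-- if, say, the parent column is INT UNSIGNED, make the child column match:
ALTER TABLE `Alternative_Pathways`.`Clients_has_Staff`
    MODIFY `Clients_Case_Number` INT UNSIGNED NOT NULL;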
MySQL
16,969,060
400
How do you select all the columns from one table and just some columns from another table using JOIN? In MySQL.
Just use the table name: SELECT myTable.*, otherTable.foo, otherTable.bar... That would select all columns from myTable and columns foo and bar from otherTable.
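Put together as a complete statement it could look like this (the join condition is just an assumed example):
SELECT myTable.*, otherTable.foo, otherTable.bar
FROM myTable
JOIN otherTable ON otherTable.myTable_id = myTable.id;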
MySQL
3,492,904
397
How do I set the initial value for an "id" column in a MySQL table so that it starts from 1001? I want to do an insert "INSERT INTO users (name, email) VALUES ('{$name}', '{$email}')"; without specifying the initial value for the id column.
Use this: ALTER TABLE users AUTO_INCREMENT=1001; or if you haven't already added an id column, also add it ALTER TABLE users ADD id INT UNSIGNED NOT NULL AUTO_INCREMENT, ADD INDEX (id);
MySQL
1,485,668
394
My Current Data for SELECT PROD_CODE FROM `PRODUCT` is PROD_CODE 2 5 7 8 22 10 9 11 I have tried all the four queries and none work. (Ref) SELECT CAST(PROD_CODE) AS INT FROM PRODUCT; SELECT CAST(PROD_CODE AS INT) FROM PRODUCT; SELECT CAST(PROD_CODE) AS INTEGER FROM PRODUCT; SELECT CAST(PROD_CODE AS INTEGER) FROM PRODUCT; All throw syntax errors such as below: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') AS INT FROM PRODUCT LIMIT 0, 30' at line 1 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INTEGER) FROM PRODUCT LIMIT 0, 30' at line 1 What is the right syntax to cast varchar to integer in MySQL? MySQL Version: 5.5.16
As described in Cast Functions and Operators: The type for the result can be one of the following values: BINARY[(N)] CHAR[(N)] DATE DATETIME DECIMAL[(M[,D])] SIGNED [INTEGER] TIME UNSIGNED [INTEGER] Therefore, you should use: SELECT CAST(PROD_CODE AS UNSIGNED) FROM PRODUCT
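The same cast is also handy for sorting the varchar codes numerically rather than lexically (so that, for example, 22 no longer sorts before 5):
SELECT PROD_CODE FROM PRODUCT ORDER BY CAST(PROD_CODE AS UNSIGNED);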
MySQL
12,126,991
393
I have changed all the php.ini parameters I know: upload_max_filesize, post_max_size. Why am I still seeing 2MB? I'm using Zend Server CE, on an Ubuntu VirtualBox over a Windows 7 host.
Find the file called php.ini on your server and follow the steps below. With apache2 and php5 installed you need to make three changes in the php.ini file. First open the file for editing, e.g.: sudo gedit /etc/php5/apache2/php.ini OR sudo gedit /etc/php/7.0/apache2/php.ini Next, search for the post_max_size entry, and enter a larger number than the size of your upload (15M in this case), for example: post_max_size = 25M Next edit the entry for memory_limit and give it a larger value than the one given to post_max_size. Then ensure the value of upload_max_filesize is smaller than post_max_size. The order from biggest to smallest should be: memory_limit post_max_size upload_max_filesize After saving the file, restart Apache (e.g. sudo /etc/init.d/apache2 restart) for the changes to be applied.
MySQL
3,958,615
391
When I tried running the following command on MySQL from within Terminal: mysql -u $user -p$password -e "statement" The execution works as expected, but it always issues a warning: Warning: Using a password on the command line interface can be insecure. However, I have to conduct the statement above using an environment variable ($password) that stores my password, because I want to run the command iteratively in bash script from within Terminal, and I definitely don't like the idea of waiting a prompt showing up and forcing me to input my password 50 or 100 times in a single script. So here's my question: Is it feasible to suppress the warning? The command works properly as I stated, but the window becomes pretty messy when I loop over and run the command 50 or 100 times. Should I obey the warning message and do NOT write my password in my script? If that's the case, then do I have to type in my password every time the prompt forces me to do so? Running man mysql doesn't help, saying only --show-warnings Cause warnings to be shown after each statement if there are any. This option applies to interactive and batch mode. and mentions nothing about how to turn off the functionality, if I'm not missing something. I'm on OS X 10.9.1 Mavericks and use MySQL 5.6 from homebrew.
I use something like: mysql --defaults-extra-file=/path/to/config.cnf or mysqldump --defaults-extra-file=/path/to/config.cnf Where config.cnf contains: [client] user = "whatever" password = "whatever" host = "whatever" This allows you to have multiple config files - for different servers/roles/databases. Using ~/.my.cnf will only allow you to have one set of configuration (although it may be a useful set of defaults). If you're on a Debian based distro, and running as root, you could skip the above and just use /etc/mysql/debian.cnf to get in ... : mysql --defaults-extra-file=/etc/mysql/debian.cnf
MySQL
20,751,352
390
If I have a MySQL table looking something like this: company_name action pagecount Company A PRINT 3 Company A PRINT 2 Company A PRINT 3 Company B EMAIL Company B PRINT 2 Company B PRINT 2 Company B PRINT 1 Company A PRINT 3 Is it possible to run a MySQL query to get output like this: company_name EMAIL PRINT 1 pages PRINT 2 pages PRINT 3 pages CompanyA 0 0 1 3 CompanyB 1 1 2 0 The idea is that pagecount can vary so the output column amount should reflect that, one column for each action/pagecount pair and then number of hits per company_name. I'm not sure if this is called a pivot table but someone suggested that?
This basically is a pivot table. A nice tutorial on how to achieve this can be found here: http://www.artfulsoftware.com/infotree/qrytip.php?id=78 I advise reading this post and adapting this solution to your needs. Update As the link above is currently no longer available, I feel obliged to provide some additional information for all of you searching for mysql pivot answers in here. It really had a vast amount of information, and I won't put everything from there in here (even more since I just don't want to copy their vast knowledge), but I'll give some advice on how to deal with pivot tables the sql way generally with the example from peku who asked the question in the first place. Maybe the link will come back soon; I'll keep an eye out for it. The spreadsheet way... Many people just use a tool like MSExcel, OpenOffice or other spreadsheet-tools for this purpose. This is a valid solution, just copy the data over there and use the tools the GUI offers to solve this. But... this wasn't the question, and it might even lead to some disadvantages, like how to get the data into the spreadsheet, problematic scaling and so on. The SQL way... Given that his table looks something like this: CREATE TABLE `test_pivot` ( `pid` bigint(20) NOT NULL AUTO_INCREMENT, `company_name` varchar(32) DEFAULT NULL, `action` varchar(16) DEFAULT NULL, `pagecount` bigint(20) DEFAULT NULL, PRIMARY KEY (`pid`) ) ENGINE=MyISAM; Now look into his/her desired table: company_name EMAIL PRINT 1 pages PRINT 2 pages PRINT 3 pages ------------------------------------------------------------- CompanyA 0 0 1 3 CompanyB 1 1 2 0 The columns (EMAIL, PRINT x pages) resemble conditions. The main grouping is by company_name. In order to set up the conditions this rather shouts for using the CASE-statement. In order to group by something, well, use ... GROUP BY. The basic SQL providing this pivot can look something like this: SELECT P.`company_name`, COUNT( CASE WHEN P.`action`='EMAIL' THEN 1 ELSE NULL END ) AS 'EMAIL', COUNT( CASE WHEN P.`action`='PRINT' AND P.`pagecount` = '1' THEN P.`pagecount` ELSE NULL END ) AS 'PRINT 1 pages', COUNT( CASE WHEN P.`action`='PRINT' AND P.`pagecount` = '2' THEN P.`pagecount` ELSE NULL END ) AS 'PRINT 2 pages', COUNT( CASE WHEN P.`action`='PRINT' AND P.`pagecount` = '3' THEN P.`pagecount` ELSE NULL END ) AS 'PRINT 3 pages' FROM test_pivot P GROUP BY P.`company_name`; This should provide the desired result very fast. The major downside of this approach is that the more columns you want in your pivot table, the more conditions you need to define in your SQL statement. This can be dealt with too; hence people tend to use prepared statements, routines, counters and such. Some additional links about this topic: http://anothermysqldba.blogspot.de/2013/06/pivot-tables-example-in-mysql.html http://www.codeproject.com/Articles/363339/Cross-Tabulation-Pivot-Tables-with-MySQL http://datacharmer.org/downloads/pivot_tables_mysql_5.pdf https://codingsight.com/pivot-tables-in-mysql/
MySQL
7,674,786
388
I am looking for the syntax for dumping all data in my mysql database. I don't want any table information.
mysqldump --no-create-info ... Also you may use: --skip-triggers: if you are using triggers --no-create-db: if you are using --databases ... option --compact: if you want to get rid of extra comments
MySQL
5,109,993
388
I have a innoDB table which records online users. It gets updated on every page refresh by a user to keep track of which pages they are on and their last access date to the site. I then have a cron that runs every 15 minutes to DELETE old records. I got a 'Deadlock found when trying to get lock; try restarting transaction' for about 5 minutes last night and it appears to be when running INSERTs into this table. Can someone suggest how to avoid this error? === EDIT === Here are the queries that are running: First Visit to site: INSERT INTO onlineusers SET ip = 123.456.789.123, datetime = now(), userid = 321, page = '/thispage', area = 'thisarea', type = 3 On each page refresh: UPDATE onlineusers SET ips = 123.456.789.123, datetime = now(), userid = 321, page = '/thispage', area = 'thisarea', type = 3 WHERE id = 888 Cron every 15 minutes: DELETE FROM onlineusers WHERE datetime <= now() - INTERVAL 900 SECOND It then does some counts to log some stats (ie: members online, visitors online).
One easy trick that can help with most deadlocks is sorting the operations in a specific order. You get a deadlock when two transactions are trying to lock two locks in opposite orders, i.e.: connection 1: locks key(1), locks key(2); connection 2: locks key(2), locks key(1); If both run at the same time, connection 1 will lock key(1), connection 2 will lock key(2) and each connection will wait for the other to release the key -> deadlock. Now, if you changed your queries such that the connections would lock the keys in the same order, i.e.: connection 1: locks key(1), locks key(2); connection 2: locks key(1), locks key(2); it will be impossible to get a deadlock. So this is what I suggest: Make sure you have no other queries that lock more than one key at a time except for the delete statement. If you do (and I suspect you do), order their WHERE in (k1,k2,..kn) in ascending order. Fix your delete statement to work in ascending order: Change DELETE FROM onlineusers WHERE datetime <= now() - INTERVAL 900 SECOND To DELETE FROM onlineusers WHERE id IN ( SELECT id FROM ( SELECT id FROM onlineusers WHERE datetime <= now() - INTERVAL 900 SECOND ORDER BY id ) u ); The extra derived table (aliased u here) is needed because MySQL does not let you select from the table you are deleting from in a plain subquery. Another thing to keep in mind is that the MySQL documentation suggests that in case of a deadlock the client should retry automatically. You can add this logic to your client code. (Say, 3 retries on this particular error before giving up).
MySQL
2,332,768
388
Is there a difference between a schema and a database in MySQL? In SQL Server, a database is a higher level container in relation to a schema. I read that Create Schema and Create Database do essentially the same thing in MySQL, which leads me to believe that schemas and databases are different words for the same objects.
As defined in the MySQL Glossary: In MySQL, physically, a schema is synonymous with a database. You can substitute the keyword SCHEMA instead of DATABASE in MySQL SQL syntax, for example using CREATE SCHEMA instead of CREATE DATABASE. Some other database products draw a distinction. For example, in the Oracle Database product, a schema represents only a part of a database: the tables and other objects owned by a single user.
MySQL
11,618,277
387
How do I get the current AUTO_INCREMENT value for a table in MySQL?
You can get all of the table data by using this query: SHOW TABLE STATUS FROM `DatabaseName` WHERE `name` LIKE 'TableName' ; You can get exactly this information by using this query: SELECT `AUTO_INCREMENT` FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'DatabaseName' AND TABLE_NAME = 'TableName';
MySQL
15,821,532
385
I am moving away from Linode because I don't have the Linux sysadmin skills necessary; before I complete the transition to a more noob-friendly service, I need to export the contents of a MySQL database. Is there a way I can do this from the command line?
You can accomplish this using the mysqldump command-line function. For example: If it's an entire DB, then: $ mysqldump -u [uname] -p db_name > db_backup.sql If it's all DBs, then: $ mysqldump -u [uname] -p --all-databases > all_db_backup.sql If it's specific tables within a DB, then: $ mysqldump -u [uname] -p db_name table1 table2 > table_backup.sql You can even go as far as auto-compressing the output using gzip (if your DB is very big): $ mysqldump -u [uname] -p db_name | gzip > db_backup.sql.gz If you want to do this remotely and you have the access to the server in question, then the following would work (presuming the MySQL server is on port 3306): $ mysqldump -P 3306 -h [ip_address] -u [uname] -p db_name > db_backup.sql It should drop the .sql file in the folder you run the command-line from. EDIT: Updated to avoid inclusion of passwords in CLI commands, use the -p option without the password. It will prompt you for it and not record it.
MySQL
13,484,667
385
I've just started getting into Node.js. I come from a PHP background, so I'm fairly used to using MySQL for all my database needs. How can I use MySQL with Node.js?
Check out the node.js module list node-mysql — A node.js module implementing the MySQL protocol node-mysql2 — Yet another pure JS async driver. Pipelining, prepared statements. node-mysql-libmysqlclient — MySQL asynchronous bindings based on libmysqlclient node-mysql looks simple enough: var mysql = require('mysql'); var connection = mysql.createConnection({ host : 'example.org', user : 'bob', password : 'secret', }); connection.connect(function(err) { // connected! (unless `err` is set) }); Queries: var post = {id: 1, title: 'Hello MySQL'}; var query = connection.query('INSERT INTO posts SET ?', post, function(err, result) { // Neat! }); console.log(query.sql); // INSERT INTO posts SET `id` = 1, `title` = 'Hello MySQL'
MySQL
5,818,312
384
How can you connect to MySQL from the command line on a Mac? (i.e. show me the code) I'm doing a PHP/SQL tutorial, but it starts by assuming you're already in MySQL.
See here http://dev.mysql.com/doc/refman/5.0/en/connecting.html mysql -u USERNAME -pPASSWORD -h HOSTNAMEORIP DATABASENAME The options above mean: -u: username -p: password (**no space between -p and the password text**) -h: host The last one is the name of the database you want to connect to. Look into the link, it's detailed there! As already mentioned by Rick, you can avoid passing the password as part of the command by not passing the password at all, like this: mysql -u USERNAME -h HOSTNAMEORIP DATABASENAME -p Note: there must be no space between -p and the password.
MySQL
5,131,931
384
When I issue a SHOW PROCESSLIST query, only the first 100 characters of the running SQL query are returned in the info column. Is it possible to change the MySQL config or issue a different kind of request to see the complete query (the queries I'm looking at are longer than 100 characters)?
SHOW FULL PROCESSLIST If you don't use FULL, "only the first 100 characters of each statement are shown in the Info field". When using phpMyAdmin, you should also click on the "Full texts" option ("← T →" on top left corner of a results table) to see untruncated results.
MySQL
3,638,689
383
I have been very excited about MongoDb and have been testing it lately. I had a table called posts in MySQL with about 20 million records indexed only on a field called 'id'. I wanted to compare speed with MongoDB and I ran a test which would get and print 15 records randomly from our huge databases. I ran the query about 1,000 times each for mysql and MongoDB and I am suprised that I do not notice a lot of difference in speed. Maybe MongoDB is 1.1 times faster. That's very disappointing. Is there something I am doing wrong? I know that my tests are not perfect but is MySQL on par with MongoDb when it comes to read intensive chores. Note: I have dual core + ( 2 threads ) i7 cpu and 4GB ram I have 20 partitions on MySQL each of 1 million records Sample Code Used For Testing MongoDB <?php function microtime_float() { list($usec, $sec) = explode(" ", microtime()); return ((float)$usec + (float)$sec); } $time_taken = 0; $tries = 100; // connect $time_start = microtime_float(); for($i=1;$i<=$tries;$i++) { $m = new Mongo(); $db = $m->swalif; $cursor = $db->posts->find(array('id' => array('$in' => get_15_random_numbers()))); foreach ($cursor as $obj) { //echo $obj["thread_title"] . "<br><Br>"; } } $time_end = microtime_float(); $time_taken = $time_taken + ($time_end - $time_start); echo $time_taken; function get_15_random_numbers() { $numbers = array(); for($i=1;$i<=15;$i++) { $numbers[] = mt_rand(1, 20000000) ; } return $numbers; } ?> Sample Code For Testing MySQL <?php function microtime_float() { list($usec, $sec) = explode(" ", microtime()); return ((float)$usec + (float)$sec); } $BASE_PATH = "../src/"; include_once($BASE_PATH . "classes/forumdb.php"); $time_taken = 0; $tries = 100; $time_start = microtime_float(); for($i=1;$i<=$tries;$i++) { $db = new AQLDatabase(); $sql = "select * from posts_really_big where id in (".implode(',',get_15_random_numbers()).")"; $result = $db->executeSQL($sql); while ($row = mysql_fetch_array($result) ) { //echo $row["thread_title"] . "<br><Br>"; } } $time_end = microtime_float(); $time_taken = $time_taken + ($time_end - $time_start); echo $time_taken; function get_15_random_numbers() { $numbers = array(); for($i=1;$i<=15;$i++) { $numbers[] = mt_rand(1, 20000000); } return $numbers; } ?>
MongoDB is not magically faster. If you store the same data, organised in basically the same fashion, and access it exactly the same way, then you really shouldn't expect your results to be wildly different. After all, MySQL and MongoDB are both GPL, so if Mongo had some magically better IO code in it, then the MySQL team could just incorporate it into their codebase. People are seeing real world MongoDB performance largely because MongoDB allows you to query in a different manner that is more sensible to your workload. For example, consider a design that persisted a lot of information about a complicated entity in a normalised fashion. This could easily use dozens of tables in MySQL (or any relational db) to store the data in normal form, with many indexes needed to ensure relational integrity between tables. Now consider the same design with a document store. If all of those related tables are subordinate to the main table (and they often are), then you might be able to model the data such that the entire entity is stored in a single document. In MongoDB you can store this as a single document, in a single collection. This is where MongoDB starts enabling superior performance. In MongoDB, to retrieve the whole entity, you have to perform: One index lookup on the collection (assuming the entity is fetched by id) Retrieve the contents of one database page (the actual binary json document) So a b-tree lookup, and a binary page read. Log(n) + 1 IOs. If the indexes can reside entirely in memory, then 1 IO. In MySQL with 20 tables, you have to perform: One index lookup on the root table (again, assuming the entity is fetched by id) With a clustered index, we can assume that the values for the root row are in the index 20+ range lookups (hopefully on an index) for the entity's pk value These probably aren't clustered indexes, so the same 20+ data lookups once we figure out what the appropriate child rows are. So the total for mysql, even assuming that all indexes are in memory (which is harder since there are 20 times more of them) is about 20 range lookups. These range lookups are likely comprised of random IO — different tables will definitely reside in different spots on disk, and it's possible that different rows in the same range in the same table for an entity might not be contiguous (depending on how the entity has been updated, etc). So for this example, the final tally is about 20 times more IO with MySQL per logical access, compared to MongoDB. This is how MongoDB can boost performance in some use cases.
MySQL
9,702,643
382
I'm using PuTTY to run: mysql> SELECT * FROM sometable; sometable has many fields and this results in many columns trying to be displayed in the terminal. The fields wrap onto the next line so it is very hard to line up column titles with field values. What solutions are there for viewing such data in terminal? I don't have nor want access to phpMyAdmin - or any other GUI interfaces. I'm looking for command-line solutions such as this one: Save MySQL Query results into text or CVS file
Terminate the query with \G in place of ;. For example: SELECT * FROM sometable\G This query displays the rows vertically, like this: *************************** 1. row *************************** Host: localhost Db: mydatabase1 User: myuser1 Select_priv: Y Insert_priv: Y Update_priv: Y ... *************************** 2. row *************************** Host: localhost Db: mydatabase2 User: myuser2 Select_priv: Y Insert_priv: Y Update_priv: Y ...
MySQL
924,729
380
I have an unnormalized events-diary CSV from a client that I'm trying to load into a MySQL table so that I can refactor it into a sane format. I created a table called 'CSVImport' that has one field for every column of the CSV file. The CSV contains 99 columns, so this was a hard enough task in itself: CREATE TABLE 'CSVImport' (id INT); ALTER TABLE CSVImport ADD COLUMN Title VARCHAR(256); ALTER TABLE CSVImport ADD COLUMN Company VARCHAR(256); ALTER TABLE CSVImport ADD COLUMN NumTickets VARCHAR(256); ... ALTER TABLE CSVImport Date49 ADD COLUMN Date49 VARCHAR(256); ALTER TABLE CSVImport Date50 ADD COLUMN Date50 VARCHAR(256); No constraints are on the table, and all the fields hold VARCHAR(256) values, except the columns which contain counts (represented by INT), yes/no (represented by BIT), prices (represented by DECIMAL), and text blurbs (represented by TEXT). I tried to load the data into the table: LOAD DATA INFILE '/home/paul/clientdata.csv' INTO TABLE CSVImport; Query OK, 2023 rows affected, 65535 warnings (0.08 sec) Records: 2023 Deleted: 0 Skipped: 0 Warnings: 198256 SELECT * FROM CSVImport; | NULL | NULL | NULL | NULL | NULL | ... The whole table is filled with NULL. I think the problem is that the text blurbs contain more than one line, and MySQL is parsing the file as if each new line would correspond to one database row. I can load the file into OpenOffice without a problem. The clientdata.csv file contains 2593 lines, and 570 records. The first line contains column names. I think it is comma delimited, and text is apparently delimited with double quotes. UPDATE: When in doubt, read the manual: http://dev.mysql.com/doc/refman/5.0/en/load-data.html I added some information to the LOAD DATA statement that OpenOffice was smart enough to infer, and now it loads the correct number of records: LOAD DATA INFILE "/home/paul/clientdata.csv" INTO TABLE CSVImport COLUMNS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"' LINES TERMINATED BY '\n' IGNORE 1 LINES; But still there are lots of completely NULL records, and none of the data that got loaded seems to be in the right place.
Use mysqlimport to load a table into the database: mysqlimport --ignore-lines=1 \ --fields-terminated-by=, \ --local -u root \ -p Database \ TableName.csv I found it at http://chriseiffel.com/everything-linux/how-to-import-a-large-csv-file-to-mysql/ To make the delimiter a tab, use --fields-terminated-by='\t'
MySQL
3,635,166
379
I need to get UTF-8 working in my Java webapp (servlets + JSP, no framework used) to support äöå etc. for regular Finnish text and Cyrillic alphabets like ЦжФ for special cases. My setup is the following: Development environment: Windows XP Production environment: Debian Database used: MySQL 5.x Users mainly use Firefox2 but also Opera 9.x, FF3, IE7 and Google Chrome are used to access the site. How to achieve this?
Answering myself as the FAQ of this site encourages it. This works for me: Mostly, characters äåö are not problematic, as the default character set used by browsers and tomcat/java for webapps is latin1, i.e. ISO-8859-1, which "understands" those characters. To get UTF-8 working under Java+Tomcat+Linux/Windows+Mysql requires the following: Configuring Tomcat's server.xml It's necessary to configure the connector to use UTF-8 to encode url (GET request) parameters: <Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" compressionMinSize="128" noCompressionUserAgents="gozilla, traviata" compressableMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/x-javascript,application/javascript" URIEncoding="UTF-8" /> The key part being URIEncoding="UTF-8" in the above example. This guarantees that Tomcat handles all incoming GET parameters as UTF-8 encoded. As a result, when the user writes the following to the address bar of the browser: https://localhost:8443/ID/Users?action=search&name=*ж* the character ж is handled as UTF-8 and is encoded (usually by the browser before even getting to the server) as %D0%B6. POST requests are not affected by this. CharsetFilter Then it's time to force the Java webapp to handle all requests and responses as UTF-8 encoded. This requires that we define a character set filter like the following: package fi.foo.filters; import javax.servlet.*; import java.io.IOException; public class CharsetFilter implements Filter { private String encoding; public void init(FilterConfig config) throws ServletException { encoding = config.getInitParameter("requestEncoding"); if (encoding == null) encoding = "UTF-8"; } public void doFilter(ServletRequest request, ServletResponse response, FilterChain next) throws IOException, ServletException { // Respect the client-specified character encoding // (see HTTP specification section 3.4.1) if (null == request.getCharacterEncoding()) { request.setCharacterEncoding(encoding); } // Set the default response content type and encoding response.setContentType("text/html; charset=UTF-8"); response.setCharacterEncoding("UTF-8"); next.doFilter(request, response); } public void destroy() { } } This filter makes sure that if the browser hasn't set the encoding used in the request, it's set to UTF-8. The other thing done by this filter is to set the default response encoding, i.e. the encoding in which the returned HTML (or whatever) is sent. The alternative is to set the response encoding etc. in each controller of the application.
This filter has to be added to the web.xml or the deployment descriptor of the webapp: <!--CharsetFilter start--> <filter> <filter-name>CharsetFilter</filter-name> <filter-class>fi.foo.filters.CharsetFilter</filter-class> <init-param> <param-name>requestEncoding</param-name> <param-value>UTF-8</param-value> </init-param> </filter> <filter-mapping> <filter-name>CharsetFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> The instructions for making this filter are found at the tomcat wiki (http://wiki.apache.org/tomcat/Tomcat/UTF-8) JSP page encoding In your web.xml, add the following: <jsp-config> <jsp-property-group> <url-pattern>*.jsp</url-pattern> <page-encoding>UTF-8</page-encoding> </jsp-property-group> </jsp-config> Alternatively, all JSP-pages of the webapp would need to have the following at the top of them: <%@page pageEncoding="UTF-8" contentType="text/html; charset=UTF-8"%> If some kind of a layout with different JSP-fragments is used, then this is needed in all of them. HTML-meta tags JSP page encoding tells the JVM to handle the characters in the JSP page in the correct encoding. Then it's time to tell the browser in which encoding the html page is: This is done with the following at the top of each xhtml page produced by the webapp: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="fi"> <head> <meta http-equiv='Content-Type' content='text/html; charset=UTF-8' /> ... JDBC-connection When using a db, it has to be defined that the connection uses UTF-8 encoding. This is done in context.xml or wherever the JDBC connection is defined as follows: <Resource name="jdbc/AppDB" auth="Container" type="javax.sql.DataSource" maxActive="20" maxIdle="10" maxWait="10000" username="foo" password="bar" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/ID_development?useUnicode=true&amp;characterEncoding=UTF-8" /> MySQL database and tables The used database must use UTF-8 encoding. This is achieved by creating the database with the following: CREATE DATABASE `ID_development` /*!40100 DEFAULT CHARACTER SET utf8 COLLATE utf8_swedish_ci */; Then, all of the tables need to be in UTF-8 also: CREATE TABLE `Users` ( `id` int(10) unsigned NOT NULL auto_increment, `name` varchar(30) collate utf8_swedish_ci default NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci ROW_FORMAT=DYNAMIC; The key part being CHARSET=utf8. MySQL server configuration The MySQL server has to be configured also. Typically this is done in Windows by modifying the my.ini file and in Linux by configuring the my.cnf file. In those files it should be defined that all clients connected to the server use utf8 as the default character set and that the default charset used by the server is also utf8. [client] port=3306 default-character-set=utf8 [mysql] default-character-set=utf8 MySQL procedures and functions These also need to have the character set defined. For example: DELIMITER $$ DROP FUNCTION IF EXISTS `pathToNode` $$ CREATE FUNCTION `pathToNode` (ryhma_id INT) RETURNS TEXT CHARACTER SET utf8 READS SQL DATA BEGIN DECLARE path VARCHAR(255) CHARACTER SET utf8; SET path = NULL; ...
RETURN path; END $$ DELIMITER ; GET requests: latin1 and UTF-8 If and when it's defined in tomcat's server.xml that GET request parameters are encoded in UTF-8, the following GET requests are handled properly: https://localhost:8443/ID/Users?action=search&name=Petteri https://localhost:8443/ID/Users?action=search&name=ж Because ASCII-characters are encoded in the same way both with latin1 and UTF-8, the string "Petteri" is handled correctly. The Cyrillic character ж is not understood at all in latin1. Because Tomcat is instructed to handle request parameters as UTF-8, it encodes that character correctly as %D0%B6. If and when browsers are instructed to read the pages in UTF-8 encoding (with request headers and html meta-tag), at least Firefox 2/3 and other browsers from this period all encode the character themselves as %D0%B6. The end result is that all users with name "Petteri" are found and also all users with the name "ж" are found. But what about äåö? The HTTP specification defines that by default URLs are encoded as latin1. This results in firefox2, firefox3 etc. encoding the following https://localhost:8443/ID/Users?action=search&name=*Päivi* into the encoded version https://localhost:8443/ID/Users?action=search&name=*P%E4ivi* In latin1 the character ä is encoded as %E4. Even though the page/request/everything is defined to use UTF-8. The UTF-8 encoded version of ä is %C3%A4 The result of this is that it's quite impossible for the webapp to correctly handle the request parameters from GET requests as some characters are encoded in latin1 and others in UTF-8. Notice: POST requests do work as browsers encode all request parameters from forms completely in UTF-8 if the page is defined as being UTF-8. Stuff to read A very big thank you to the writers of the following for giving the answers to my problem: http://tagunov.tripod.com/i18n/i18n.html http://wiki.apache.org/tomcat/Tomcat/UTF-8 http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/ http://dev.mysql.com/doc/refman/5.0/en/charset-syntax.html http://cagan327.blogspot.com/2006/05/utf-8-encoding-fix-tomcat-jsp-etc.html http://cagan327.blogspot.com/2006/05/utf-8-encoding-fix-for-mysql-tomcat.html http://jeppesn.dk/utf-8.html http://www.nabble.com/request-parameters-mishandle-utf-8-encoding-td18720039.html http://www.utoronto.ca/webdocs/HTMLdocs/NewHTML/iso_table.html http://www.utf8-chartable.de/ Important Note mysql supports the Basic Multilingual Plane using 3-byte UTF-8 characters. If you need to go outside of that (certain alphabets require more than 3 bytes of UTF-8), then you either need to use a flavor of VARBINARY column type or use the utf8mb4 character set (which requires MySQL 5.5.3 or later). Just be aware that using the utf8 character set in MySQL won't work 100% of the time. Tomcat with Apache One more thing If you are using Apache + Tomcat + mod_JK connector then you also need to make the following changes: Add URIEncoding="UTF-8" to the Tomcat server.xml file for the 8009 connector; it is used by the mod_JK connector. <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8"/> Go to your Apache folder, i.e. /etc/httpd/conf, and add AddDefaultCharset utf-8 in the httpd.conf file. Note: first check whether it exists or not. If it exists, you may update it with this line. You can also add this line at the bottom.
MySQL
138,948
374
I want to store a hashed password (using BCrypt) in a database. What would be a good type for this, and what would be the correct length? Are passwords hashed with BCrypt always of the same length? EDIT Example hash: $2a$10$KssILxWNR6k62B7yiX0GAe2Q7wwHlrzhF3LqtVvpyvHZf0MwvNfVu After hashing some passwords, it seems that BCrypt always generates 60-character hashes. EDIT 2 Sorry for not mentioning the implementation. I am using jBCrypt.
The modular crypt format for bcrypt consists of $2$, $2a$ or $2y$ identifying the hashing algorithm and format, a two-digit value denoting the cost parameter, followed by $, and a 53-character base-64-encoded value (they use the alphabet ., /, 0–9, A–Z, a–z, which is different from the standard Base 64 Encoding alphabet) consisting of: 22 characters of salt (effectively only 128 bits of the 132 decoded bits) 31 characters of encrypted output (effectively only 184 bits of the 186 decoded bits) Thus the total length is 59 or 60 bytes respectively. As you use the 2a format, you'll need 60 bytes. And thus for MySQL I recommend using CHAR(60) BINARY or BINARY(60) (see The _bin and binary Collations for information about the difference). CHAR is not binary safe and equality does not depend solely on the byte value but on the actual collation; in the worst case A is treated as equal to a. See The _bin and binary Collations for more information.
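A minimal sketch of such a table definition (the table and column names are only illustrative):
CREATE TABLE users (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    password_hash CHAR(60) BINARY NOT NULL  -- bcrypt output is 60 bytes for the $2a$ format
) ENGINE=InnoDB DEFAULT CHARSET=utf8;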
MySQL
5,881,169
373
How do you connect to a MySQL database in Java? When I try, I get java.sql.SQLException: No suitable driver found for jdbc:mysql://database/table at java.sql.DriverManager.getConnection(DriverManager.java:689) at java.sql.DriverManager.getConnection(DriverManager.java:247) Or java.lang.ClassNotFoundException: com.mysql.jdbc.Driver Or java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
Here's a step by step explanation how to install MySQL and JDBC and how to use it: Download and install the MySQL server. Just do it the usual way. Remember the port number whenever you've changed it. It's by default 3306. Download the JDBC driver and put in classpath, extract the ZIP file and put the containing JAR file in the classpath. The vendor-specific JDBC driver is a concrete implementation of the JDBC API (tutorial here). If you're using an IDE like Eclipse or Netbeans, then you can add it to the classpath by adding the JAR file as Library to the Build Path in project's properties. If you're doing it "plain vanilla" in the command console, then you need to specify the path to the JAR file in the -cp or -classpath argument when executing your Java application. java -cp .;/path/to/mysql-connector.jar com.example.YourClass The . is just there to add the current directory to the classpath as well so that it can locate com.example.YourClass and the ; is the classpath separator as it is in Windows. In Unix and clones : should be used. If you're developing a servlet based WAR application and wish to manually manage connections (poor practice, actually), then you need to ensure that the JAR ends up in /WEB-INF/lib of the build. See also How to add JAR libraries to WAR project without facing java.lang.ClassNotFoundException? Classpath vs Build Path vs /WEB-INF/lib. The better practice is to install the physical JDBC driver JAR file in the server itself and configure the server to create a JDBC connection pool. Here's an example for Tomcat: How should I connect to JDBC database / datasource in a servlet based application? Create a database in MySQL. Let's create a database javabase. You of course want World Domination, so let's use UTF-8 as well. CREATE DATABASE javabase DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci; Create a user for Java and grant it access. Simply because using root is a bad practice. CREATE USER 'java'@'localhost' IDENTIFIED BY 'password'; GRANT ALL ON javabase.* TO 'java'@'localhost' IDENTIFIED BY 'password'; Yes, java is the username and password is the password here. Determine the JDBC URL. To connect the MySQL database using Java you need an JDBC URL in the following syntax: jdbc:mysql://hostname:port/databasename hostname: The hostname where MySQL server is installed. If it's installed at the same machine where you run the Java code, then you can just use localhost. It can also be an IP address like 127.0.0.1. If you encounter connectivity problems and using 127.0.0.1 instead of localhost solved it, then you've a problem in your network/DNS/hosts config. port: The TCP/IP port where MySQL server listens on. This is by default 3306. databasename: The name of the database you'd like to connect to. That's javabase. So the final URL should look like: jdbc:mysql://localhost:3306/javabase Test the connection to MySQL using Java. Create a simple Java class with a main() method to test the connection. String url = "jdbc:mysql://localhost:3306/javabase"; String username = "java"; String password = "password"; System.out.println("Connecting database ..."); try (Connection connection = DriverManager.getConnection(url, username, password)) { System.out.println("Database connected!"); } catch (SQLException e) { throw new IllegalStateException("Cannot connect the database!", e); } If you get a SQLException: No suitable driver, then it means that either the JDBC driver wasn't autoloaded at all or that the JDBC URL is wrong (i.e. 
it wasn't recognized by any of the loaded drivers). See also The infamous java.sql.SQLException: No suitable driver found. Normally, a JDBC 4.0 driver should be autoloaded when you just drop it in the runtime classpath. To rule out the one or the other, you can always manually load it as below: System.out.println("Loading driver ..."); try { Class.forName("com.mysql.cj.jdbc.Driver"); // Use com.mysql.jdbc.Driver if you're not on MySQL 8+ yet. System.out.println("Driver loaded!"); } catch (ClassNotFoundException e) { throw new IllegalStateException("Cannot find the driver in the classpath!", e); } Note that the newInstance() call is not needed here. In the case of MySQL it's just there to fix the old and buggy org.gjt.mm.mysql.Driver. Explanation here. If this line throws ClassNotFoundException, then the JAR file containing the JDBC driver class has simply not been placed in the classpath. Please also note that it's very important to throw an exception so that the code execution is immediately blocked, instead of suppressing it or merely printing the stack trace and then continuing the rest of the code. Also note that you don't need to load the driver every time before connecting. Once during application startup is enough. If you get a SQLException: Connection refused or Connection timed out or a MySQL specific CommunicationsException: Communications link failure, then it means that the DB isn't reachable at all. This can have one or more of the following causes: IP address or hostname in JDBC URL is wrong. Hostname in JDBC URL is not recognized by local DNS server. Port number is missing or wrong in JDBC URL. DB server is down. DB server doesn't accept TCP/IP connections. DB server has run out of connections. Something in between Java and DB is blocking connections, e.g. a firewall or proxy. To solve the one or the other, follow this advice: Verify and test them with ping. Refresh DNS or use IP address in JDBC URL instead. Verify it based on my.cnf of MySQL DB. Start the DB. Verify if mysqld is started without the --skip-networking option. Restart the DB and fix your code so that it closes connections in finally. Disable firewall and/or configure firewall/proxy to allow/forward the port. Note that closing the Connection is extremely important. If you don't close connections and keep getting a lot of them in a short time, then the database may run out of connections and your application may break. Always acquire the Connection in a try-with-resources statement. This also applies to Statement, PreparedStatement and ResultSet. See also How often should Connection, Statement and ResultSet be closed in JDBC? That was it as far as connectivity is concerned. You can find here a more advanced tutorial on how to load and store full-fledged Java model objects in a database with the help of a basic DAO class. Using a Singleton Pattern and/or a static variable for the DB Connection is a bad practice. See among others Is it safe to use a static java.sql.Connection instance in a multithreaded system? This is a #1 starter's mistake. Make sure you don't fall into this trap.
MySQL
2,839,321
372
I have a column in a table which might contain null or empty values. How do I check if a column is empty or null in the rows present in a table? (e.g. null or '' or ' ' or ' ' and ...)
This will select all rows where some_col is NULL or '' (empty string) SELECT * FROM table WHERE some_col IS NULL OR some_col = '';
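Since the question also mentions values consisting only of spaces, a variant that additionally treats whitespace-only strings as empty could be (a sketch using the same placeholder names):
SELECT * FROM table WHERE some_col IS NULL OR TRIM(some_col) = '';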
MySQL
8,470,813
369
I'd like to get all of a MySQL table's column names into an array in PHP. Is there a query for this?
The best way is to use the INFORMATION_SCHEMA metadata virtual database. Specifically the INFORMATION_SCHEMA.COLUMNS table... SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='yourdatabasename' AND `TABLE_NAME`='yourtablename'; It's VERY powerful, and can give you TONS of information without the need to parse text (such as column type, whether the column is nullable, max column size, character set, etc)... Oh, and it's standard SQL (whereas SHOW ... is a MySQL-specific extension)... For more information about the difference between SHOW... and using the INFORMATION_SCHEMA tables, check out the MySQL Documentation on INFORMATION_SCHEMA in general...
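For example, pulling a few of those extra attributes in one go (replace the schema and table names with your own):
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'yourdatabasename' AND TABLE_NAME = 'yourtablename'
ORDER BY ORDINAL_POSITION;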
MySQL
4,165,195
368
I'm currently developing an application using a MySQL database. The database-structure is still in flux and changes while development progresses (I change my local copy, leaving the one on the test-server alone). Is there a way to compare the two instances of the database to see if there were any changes? While currently simply discarding the previous test server database is fine, as testing starts entering test data it could get a bit tricky. The same though more so will happen again later in production... Is there an easy way to incrementally make changes to the production database, preferably by automatically creating a script to modify it? Tools mentioned in the answers: Red-Gate's MySQL Schema & Data Compare (Commercial) Maatkit (now Percona) liquibase Toad Nob Hill Database Compare (Commercial) MySQL Diff SQL EDT (Commercial)
If you're working with small databases I've found running mysqldump on both databases with the --skip-comments and --skip-extended-insert options to generate SQL scripts, then running diff on the SQL scripts works pretty well. By skipping comments you avoid meaningless differences such as the time you ran the mysqldump command. By using the --skip-extended-insert command you ensure each row is inserted with its own insert statement. This eliminates the situation where a single new or modified record can cause a chain reaction in all future insert statements. Running with these options produces larger dumps with no comments so this is probably not something you want to do in production use but for development it should be fine. I've put examples of the commands I use below: mysqldump --skip-comments --skip-extended-insert -u root -p dbName1>file1.sql mysqldump --skip-comments --skip-extended-insert -u root -p dbName2>file2.sql diff file1.sql file2.sql
MySQL
225,772
368
I get this error when I try to source a large SQL file (a big INSERT query). mysql> source file.sql ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... Connection id: 2 Current database: *** NONE *** ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... Connection id: 3 Current database: *** NONE *** Nothing in the table is updated. I've tried deleting and undeleting the table/database, as well as restarting MySQL. None of these things resolve the problem. Here is my max-packet size: +--------------------+---------+ | Variable_name | Value | +--------------------+---------+ | max_allowed_packet | 1048576 | +--------------------+---------+ Here is the file size: $ ls -s file.sql 79512 file.sql When I try the other method... $ ./mysql -u root -p my_db < file.sql Enter password: ERROR 2006 (HY000) at line 1: MySQL server has gone away
max_allowed_packet=64M Adding this line to the my.cnf file solves the problem. This is useful when the columns have large values, which cause the issues; you can find the explanation here. On Windows this file is located at: "C:\ProgramData\MySQL\MySQL Server 5.6" On Linux (Ubuntu): /etc/mysql
MySQL
10,474,922
366
I want to combine multiple databases in my system. Most of the time the database is MySQL, but it may differ in the future, i.e. an admin should be able to generate reports that use heterogeneous database systems as their source. So my question is: does Laravel provide any facade to deal with such situations? Or does any other framework have more suitable capabilities for this problem?
From Laravel Docs: You may access each connection via the connection method on the DB facade when using multiple connections. The name passed to the connection method should correspond to one of the connections listed in your config/database.php configuration file: $users = DB::connection('foo')->select(...); Define Connections Using .env >= 5.0 (or higher) DB_CONNECTION=mysql DB_HOST=127.0.0.1 DB_PORT=3306 DB_DATABASE=mysql_database DB_USERNAME=root DB_PASSWORD=secret DB_CONNECTION_PGSQL=pgsql DB_HOST_PGSQL=127.0.0.1 DB_PORT_PGSQL=5432 DB_DATABASE_PGSQL=pgsql_database DB_USERNAME_PGSQL=root DB_PASSWORD_PGSQL=secret Using config/database.php 'mysql' => [ 'driver' => env('DB_CONNECTION'), 'host' => env('DB_HOST'), 'port' => env('DB_PORT'), 'database' => env('DB_DATABASE'), 'username' => env('DB_USERNAME'), 'password' => env('DB_PASSWORD'), ], 'pgsql' => [ 'driver' => env('DB_CONNECTION_PGSQL'), 'host' => env('DB_HOST_PGSQL'), 'port' => env('DB_PORT_PGSQL'), 'database' => env('DB_DATABASE_PGSQL'), 'username' => env('DB_USERNAME_PGSQL'), 'password' => env('DB_PASSWORD_PGSQL'), ], Note: In pgsql, if DB_username and DB_password are the same, then you can use env('DB_USERNAME'), which is mentioned in the first few lines of .env. Without .env <= 4.0 (or lower) app/config/database.php return array( 'default' => 'mysql', 'connections' => array( # Primary/Default database connection 'mysql' => array( 'driver' => 'mysql', 'host' => '127.0.0.1', 'database' => 'mysql_database', 'username' => 'root', 'password' => 'secret', 'charset' => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix' => '', ), # Secondary database connection 'pgsql' => [ 'driver' => 'pgsql', 'host' => 'localhost', 'port' => '5432', 'database' => 'pgsql_database', 'username' => 'root', 'password' => 'secret', 'charset' => 'utf8', 'prefix' => '', 'schema' => 'public', ] ), ); Schema / Migration Run the connection() method to specify which connection to use. Schema::connection('pgsql')->create('some_table', function($table) { $table->increments('id'); }); Or, at the top, define a connection.
protected $connection = 'pgsql'; Query Builder $users = DB::connection('pgsql')->select(...); Model (In Laravel >= 5.0 (or higher)) Set the $connection variable in your model class ModelName extends Model { // extend changed protected $connection = 'pgsql'; } Eloquent (In Laravel <= 4.0 (or lower)) Set the $connection variable in your model class SomeModel extends Eloquent { protected $connection = 'pgsql'; } Transaction Mode DB::transaction(function () { DB::connection('mysql')->table('users')->update(['name' => 'John']); DB::connection('pgsql')->table('orders')->update(['status' => 'shipped']); }); or DB::connection('mysql')->beginTransaction(); try { DB::connection('mysql')->table('users')->update(['name' => 'John']); DB::connection('pgsql')->beginTransaction(); DB::connection('pgsql')->table('orders')->update(['status' => 'shipped']); DB::connection('pgsql')->commit(); DB::connection('mysql')->commit(); } catch (\Exception $e) { DB::connection('mysql')->rollBack(); DB::connection('pgsql')->rollBack(); throw $e; } You can also define the connection at runtime via the setConnection method or the on static method: class SomeController extends BaseController { public function someMethod() { $someModel = new SomeModel; $someModel->setConnection('pgsql'); // non-static method $something = $someModel->find(1); $something = SomeModel::on('pgsql')->find(1); // static method return $something; } } Note: Be careful about building relationships with tables across databases! It is possible to do, but it can come with caveats depending on your database and settings. Tested versions (Updated) Version Tested (Yes/No) 4.2 No 5 Yes (5.5) 6 No 7 No 8 Yes (8.4) 9 Yes (9.2) Useful Links Laravel 5 multiple database connections FROM laracasts.com Connect multiple databases in Laravel FROM tutsnare.com Multiple DB Connections in Laravel FROM fideloper.com
MySQL
31,847,054
365
In MySQL, I know I can list the tables in a database with: SHOW TABLES However, I want to insert these table names into another table, for instance: INSERT INTO metadata(table_name) SHOW TABLES /* does not work */ Is there a way to get the table names using a standard SELECT statement, something like: INSERT INTO metadata(table_name) SELECT name FROM table_names /* what should table_names be? */
To get the names of all tables use: SELECT table_name FROM information_schema.tables; To get the names of the tables in a specific database use: SELECT table_name FROM information_schema.tables WHERE table_schema = 'your_database_name'; Now, to answer the original question, use this query: INSERT INTO metadata (table_name) SELECT table_name FROM information_schema.tables WHERE table_schema = 'your_database_name'; For more details see: http://dev.mysql.com/doc/refman/5.0/en/information-schema.html
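As a side note, information_schema.tables also lists views; if only base tables should end up in the metadata table from the question, a filter on table_type can be added. A sketch, reusing the hypothetical metadata table:
INSERT INTO metadata (table_name)
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'your_database_name'
  AND table_type = 'BASE TABLE';  -- excludes views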
MySQL
8,334,493
365
I want to store many records in a MySQL database. All of them contain money values. But I don't know how many digits will be inserted for each one. Which data type do I have to use for this purpose? VARCHAR or INT (or another numeric data type)?
Since money needs an exact representation, don't use data types that are only approximate, like float. You can use a fixed-point numeric data type for that, like decimal(15,2) 15 is the precision (total length of the value, including decimal places) 2 is the number of digits after the decimal point The maximum possible number in this example would be 9999999999999.99 See MySQL Numeric Types: These types are used when it is important to preserve exact precision, for example with monetary data.
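A minimal sketch of how this might look in practice (the table and column names here are made up for illustration):
CREATE TABLE payments (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  amount DECIMAL(15,2) NOT NULL   -- exact fixed-point value, e.g. 19999.99
);
INSERT INTO payments (amount) VALUES (19999.99), (0.01);
SELECT SUM(amount) FROM payments;  -- sums stay exact, unlike FLOAT/DOUBLE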
MySQL
13,030,368
363
I am using Fedora 14 and I have MySQL and MySQL server 5.1.42 installed and running. Now I tried to do this as root user: gem install mysql But I get this error: Building native extensions. This could take a while... ERROR: Error installing mysql: ERROR: Failed to build gem native extension. /usr/bin/ruby extconf.rb mkmf.rb can't find header files for ruby at /usr/lib/ruby/ruby.h Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1/ext/mysql_api/gem_make.out What's wrong here? I installed ruby 1.8.7 and the latest rubygems 1.3.7.
For those who may be confused by the accepted answer, as I was, you also need to have the ruby headers installed [ruby-devel]. The article that saved my hide is here. And this is the revised solution (note that I'm on Fedora 13): yum -y install gcc mysql-devel ruby-devel rubygems gem install -y mysql -- --with-mysql-config=/usr/bin/mysql_config For Debian, and other distributions using Debian-style packaging, the ruby development headers are installed by: sudo apt-get install ruby-dev For Ubuntu the ruby development headers are installed by: sudo apt-get install ruby-all-dev If you are using an earlier version of ruby (such as 2.2), then you will need to run: sudo apt-get install ruby2.2-dev (where 2.2 is your desired Ruby version)
MySQL
4,304,438
363
Is it possible for me to turn on audit logging on my mysql database? I basically want to monitor all queries for an hour, and dump the log to a file.
Besides what I came across here, running the following was the simplest way to dump queries to a log file without restarting SET global log_output = 'FILE'; SET global general_log_file='/Applications/MAMP/logs/mysql_general.log'; SET global general_log = 1; can be turned off with SET global general_log = 0;
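Alternatively, the general log can be written to a table instead of a file, which makes it queryable and avoids filesystem permission issues; a sketch of that variant:
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 1;
-- queries are now recorded in the mysql.general_log table
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 20;
SET GLOBAL general_log = 0;   -- turn it off again when done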
MySQL
303,994
363
So I'm trying to add Foreign Key constraints to my database as a project requirement and it worked the first time or two on different tables, but I have two tables on which I get an error when trying to add the Foreign Key Constraints. The error message that I get is: ERROR 1215 (HY000): Cannot add foreign key constraint This is the SQL I'm using to create the tables, the two offending tables are Patient and Appointment. SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0; SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=1; SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES'; CREATE SCHEMA IF NOT EXISTS `doctorsoffice` DEFAULT CHARACTER SET utf8 ; USE `doctorsoffice` ; -- ----------------------------------------------------- -- Table `doctorsoffice`.`doctor` -- ----------------------------------------------------- DROP TABLE IF EXISTS `doctorsoffice`.`doctor` ; CREATE TABLE IF NOT EXISTS `doctorsoffice`.`doctor` ( `DoctorID` INT(11) NOT NULL AUTO_INCREMENT , `FName` VARCHAR(20) NULL DEFAULT NULL , `LName` VARCHAR(20) NULL DEFAULT NULL , `Gender` VARCHAR(1) NULL DEFAULT NULL , `Specialty` VARCHAR(40) NOT NULL DEFAULT 'General Practitioner' , UNIQUE INDEX `DoctorID` (`DoctorID` ASC) , PRIMARY KEY (`DoctorID`) ) ENGINE = InnoDB DEFAULT CHARACTER SET = utf8; -- ----------------------------------------------------- -- Table `doctorsoffice`.`medicalhistory` -- ----------------------------------------------------- DROP TABLE IF EXISTS `doctorsoffice`.`medicalhistory` ; CREATE TABLE IF NOT EXISTS `doctorsoffice`.`medicalhistory` ( `MedicalHistoryID` INT(11) NOT NULL AUTO_INCREMENT , `Allergies` TEXT NULL DEFAULT NULL , `Medications` TEXT NULL DEFAULT NULL , `ExistingConditions` TEXT NULL DEFAULT NULL , `Misc` TEXT NULL DEFAULT NULL , UNIQUE INDEX `MedicalHistoryID` (`MedicalHistoryID` ASC) , PRIMARY KEY (`MedicalHistoryID`) ) ENGINE = InnoDB DEFAULT CHARACTER SET = utf8; -- ----------------------------------------------------- -- Table `doctorsoffice`.`Patient` -- ----------------------------------------------------- DROP TABLE IF EXISTS `doctorsoffice`.`Patient` ; CREATE TABLE IF NOT EXISTS `doctorsoffice`.`Patient` ( `PatientID` INT unsigned NOT NULL AUTO_INCREMENT , `FName` VARCHAR(30) NULL , `LName` VARCHAR(45) NULL , `Gender` CHAR NULL , `DOB` DATE NULL , `SSN` DOUBLE NULL , `MedicalHistory` smallint(5) unsigned NOT NULL, `PrimaryPhysician` smallint(5) unsigned NOT NULL, PRIMARY KEY (`PatientID`) , UNIQUE INDEX `PatientID_UNIQUE` (`PatientID` ASC) , CONSTRAINT `FK_MedicalHistory` FOREIGN KEY (`MEdicalHistory` ) REFERENCES `doctorsoffice`.`medicalhistory` (`MedicalHistoryID` ) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `FK_PrimaryPhysician` FOREIGN KEY (`PrimaryPhysician` ) REFERENCES `doctorsoffice`.`doctor` (`DoctorID` ) ON DELETE CASCADE ON UPDATE CASCADE) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `doctorsoffice`.`Appointment` -- ----------------------------------------------------- DROP TABLE IF EXISTS `doctorsoffice`.`Appointment` ; CREATE TABLE IF NOT EXISTS `doctorsoffice`.`Appointment` ( `AppointmentID` smallint(5) unsigned NOT NULL AUTO_INCREMENT , `Date` DATE NULL , `Time` TIME NULL , `Patient` smallint(5) unsigned NOT NULL, `Doctor` smallint(5) unsigned NOT NULL, PRIMARY KEY (`AppointmentID`) , UNIQUE INDEX `AppointmentID_UNIQUE` (`AppointmentID` ASC) , CONSTRAINT `FK_Patient` FOREIGN KEY (`Patient` ) REFERENCES `doctorsoffice`.`Patient` (`PatientID` ) ON DELETE CASCADE ON UPDATE CASCADE, 
CONSTRAINT `FK_Doctor` FOREIGN KEY (`Doctor` ) REFERENCES `doctorsoffice`.`doctor` (`DoctorID` ) ON DELETE CASCADE ON UPDATE CASCADE) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `doctorsoffice`.`InsuranceCompany` -- ----------------------------------------------------- DROP TABLE IF EXISTS `doctorsoffice`.`InsuranceCompany` ; CREATE TABLE IF NOT EXISTS `doctorsoffice`.`InsuranceCompany` ( `InsuranceID` smallint(5) NOT NULL AUTO_INCREMENT , `Name` VARCHAR(50) NULL , `Phone` DOUBLE NULL , PRIMARY KEY (`InsuranceID`) , UNIQUE INDEX `InsuranceID_UNIQUE` (`InsuranceID` ASC) ) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `doctorsoffice`.`PatientInsurance` -- ----------------------------------------------------- DROP TABLE IF EXISTS `doctorsoffice`.`PatientInsurance` ; CREATE TABLE IF NOT EXISTS `doctorsoffice`.`PatientInsurance` ( `PolicyHolder` smallint(5) NOT NULL , `InsuranceCompany` smallint(5) NOT NULL , `CoPay` INT NOT NULL DEFAULT 5 , `PolicyNumber` smallint(5) NOT NULL AUTO_INCREMENT , PRIMARY KEY (`PolicyNumber`) , UNIQUE INDEX `PolicyNumber_UNIQUE` (`PolicyNumber` ASC) , CONSTRAINT `FK_PolicyHolder` FOREIGN KEY (`PolicyHolder` ) REFERENCES `doctorsoffice`.`Patient` (`PatientID` ) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `FK_InsuranceCompany` FOREIGN KEY (`InsuranceCompany` ) REFERENCES `doctorsoffice`.`InsuranceCompany` (`InsuranceID` ) ON DELETE CASCADE ON UPDATE CASCADE) ENGINE = InnoDB; USE `doctorsoffice` ; SET SQL_MODE=@OLD_SQL_MODE; SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS; SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
To find the specific error run this: SHOW ENGINE INNODB STATUS; And look in the LATEST FOREIGN KEY ERROR section. The data type for the child column must match the parent column exactly. For example, since medicalhistory.MedicalHistoryID is an INT, Patient.MedicalHistory also needs to be an INT, not a SMALLINT. Also, you should run the query set foreign_key_checks=0 before running the DDL so you can create the tables in an arbitrary order rather than needing to create all parent tables before the relevant child tables.
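As an illustration of the type-matching rule, here is a stripped-down sketch (not the full schema from the question) where the child column mirrors the parent column's exact type, including signedness:
CREATE TABLE medicalhistory (
  MedicalHistoryID INT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;
CREATE TABLE Patient (
  PatientID INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  MedicalHistory INT(11) NOT NULL,   -- same type as the parent column
  CONSTRAINT FK_MedicalHistory FOREIGN KEY (MedicalHistory)
    REFERENCES medicalhistory (MedicalHistoryID)
    ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB;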
MySQL
15,534,977
362
In a MySQL JOIN, what is the difference between ON and USING()? As far as I can tell, USING() is just more convenient syntax, whereas ON allows a little more flexibility when the column names are not identical. However, that difference is so minor, you'd think they'd just do away with USING(). Is there more to this than meets the eye? If yes, which should I use in a given situation?
It is mostly syntactic sugar, but a couple differences are noteworthy: ON is the more general of the two. One can join tables ON a column, a set of columns and even a condition. For example: SELECT * FROM world.City JOIN world.Country ON (City.CountryCode = Country.Code) WHERE ... USING is useful when both tables share a column of the exact same name on which they join. In this case, one may say: SELECT ... FROM film JOIN film_actor USING (film_id) WHERE ... An additional nice treat is that one does not need to fully qualify the joining columns: SELECT film.title, film_id -- film_id is not prefixed FROM film JOIN film_actor USING (film_id) WHERE ... To illustrate, to do the above with ON, we would have to write: SELECT film.title, film.film_id -- film.film_id is required here FROM film JOIN film_actor ON (film.film_id = film_actor.film_id) WHERE ... Notice the film.film_id qualification in the SELECT clause. It would be invalid to just say film_id since that would make for an ambiguity: ERROR 1052 (23000): Column 'film_id' in field list is ambiguous As for select *, the joining column appears in the result set twice with ON while it appears only once with USING: mysql> create table t(i int);insert t select 1;create table t2 select*from t; Query OK, 0 rows affected (0.11 sec) Query OK, 1 row affected (0.00 sec) Records: 1 Duplicates: 0 Warnings: 0 Query OK, 1 row affected (0.19 sec) Records: 1 Duplicates: 0 Warnings: 0 mysql> select*from t join t2 on t.i=t2.i; +------+------+ | i | i | +------+------+ | 1 | 1 | +------+------+ 1 row in set (0.00 sec) mysql> select*from t join t2 using(i); +------+ | i | +------+ | 1 | +------+ 1 row in set (0.00 sec) mysql>
MySQL
11,366,006
362
I want to search in all fields from all tables of a MySQL database a given string, possibly using syntax as: SELECT * FROM * WHERE * LIKE '%stuff%' Is it possible to do something like this?
You could do an SQL dump of the database (and its data), then search that file.
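If dumping is impractical, another rough approach is to let information_schema generate one LIKE query per column and then run the generated statements by hand; a sketch (your_db and the search string are placeholders):
SELECT CONCAT('SELECT * FROM `', table_name, '` WHERE `', column_name, '` LIKE ''%stuff%'';') AS generated_query
FROM information_schema.columns
WHERE table_schema = 'your_db';
Each row of the result is itself a query that can be copied and executed against the database.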
MySQL
639,531
362
I have a table which has several ID columns referring to other tables. I want a foreign key to enforce integrity only if I put data in there. If I do an update at a later time to populate that column, then it should also check the constraint. (This is likely database-server dependent; I'm using MySQL with the InnoDB table type.) I believe this is a reasonable expectation, but correct me if I am wrong.
Yes, you can enforce the constraint only when the value is not NULL. This can be easily tested with the following example: CREATE DATABASE t; USE t; CREATE TABLE parent (id INT NOT NULL, PRIMARY KEY (id) ) ENGINE=INNODB; CREATE TABLE child (id INT NULL, parent_id INT NULL, FOREIGN KEY (parent_id) REFERENCES parent(id) ) ENGINE=INNODB; INSERT INTO child (id, parent_id) VALUES (1, NULL); -- Query OK, 1 row affected (0.01 sec) INSERT INTO child (id, parent_id) VALUES (2, 1); -- ERROR 1452 (23000): Cannot add or update a child row: a foreign key -- constraint fails (`t/child`, CONSTRAINT `child_ibfk_1` FOREIGN KEY -- (`parent_id`) REFERENCES `parent` (`id`)) The first insert will pass because we insert a NULL in the parent_id. The second insert fails because of the foreign key constraint, since we tried to insert a value that does not exist in the parent table.
MySQL
2,366,854
361
SELECT * FROM table ORDER BY string_length(column); Is there a MySQL function to do this (of course instead of string_length)?
You are looking for CHAR_LENGTH() to get the number of characters in a string. For multi-byte charsets LENGTH() will give you the number of bytes the string occupies, while CHAR_LENGTH() will return the number of characters.
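A quick sketch of the difference and of the ordering itself (assuming a utf8/utf8mb4 connection, where the euro sign takes three bytes but counts as one character):
SELECT LENGTH('abc'), CHAR_LENGTH('abc');   -- 3, 3 for plain ASCII
SELECT LENGTH('€'), CHAR_LENGTH('€');       -- 3, 1
SELECT * FROM `table` ORDER BY CHAR_LENGTH(`column`);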
MySQL
1,870,937
360
I'm tired of opening Dia and creating a database diagram at the beginning of every project. Is there a tool out there that will let me select specific tables and then create a database diagram for me based on a MySQL database? Preferably it would allow me to edit the diagram afterward since none of the foreign keys are set... Here is what I am picturing diagram-wise (please excuse the horrible data design, I didn't design it. Let's focus on the diagram concept and not on the actual data it represents for this example ;) ): see full size diagram
Try MySQL Workbench, formerly DBDesigner 4: http://dev.mysql.com/workbench/ This has a "Reverse Engineer Database" mode: Database -> Reverse Engineer
MySQL
2,488
360
Every time I set up a new SQL table or add a new varchar column to an existing table, I am wondering one thing: what is the best value for the length? So, let's say you have a column called name of type varchar. You have to choose the length. I cannot think of a name > 20 chars, but you never know. But instead of using 20, I always round up to the next 2^n number. In this case, I would choose 32 as the length. I do that because, from a computer scientist's point of view, a number 2^n looks more even to me than other numbers, and I'm just assuming that the architecture underneath can handle those numbers slightly better than others. On the other hand, MSSQL Server, for example, sets the default length value to 50 when you choose to create a varchar column. That makes me think about it. Why 50? Is it just a random number, or based on average column length, or what? It could also be - or probably is - that different SQL server implementations (like MySQL, MSSQL, Postgres, ...) have different best column length values.
No DBMS I know of has any "optimization" that will make a VARCHAR with a 2^n length perform better than one with a max length that is not a power of 2. I think early SQL Server versions actually treated a VARCHAR with length 255 differently than one with a higher maximum length. I don't know if this is still the case. For almost all DBMS, the actual storage that is required is only determined by the number of characters you put into it, not the max length you define. So from a storage point of view (and most probably a performance one as well), it does not make any difference whether you declare a column as VARCHAR(100) or VARCHAR(500). You should see the max length provided for a VARCHAR column as a kind of constraint (or business rule) rather than a technical/physical thing. For PostgreSQL the best setup is to use text without a length restriction and a CHECK CONSTRAINT that limits the number of characters to whatever your business requires. If that requirement changes, altering the check constraint is much faster than altering the table (because the table does not need to be re-written) The same can be applied for Oracle and others - in Oracle it would be VARCHAR(4000) instead of text though. I don't know if there is a physical storage difference between VARCHAR(max) and e.g. VARCHAR(500) in SQL Server. But apparently there is a performance impact when using varchar(max) as compared to varchar(8000). See this link (posted by Erwin Brandstetter as a comment) Edit 2013-09-22 Regarding bigown's comment: In Postgres versions before 9.2 (which was not available when I wrote the initial answer) a change to the column definition did rewrite the whole table, see e.g. here. Since 9.2 this is no longer the case and a quick test confirmed that increasing the column size for a table with 1.2 million rows indeed only took 0.5 seconds. For Oracle this seems to be true as well, judging by the time it takes to alter a big table's varchar column. But I could not find any reference for that. For MySQL the manual says "In most cases, ALTER TABLE makes a temporary copy of the original table". And my own tests confirm that: running an ALTER TABLE on a table with 1.2 million rows (the same as in my test with Postgres) to increase the size of a column took 1.5 minutes. In MySQL however you can not use the "workaround" to use a check constraint to limit the number of characters in a column. For SQL Server I could not find a clear statement on this but the execution time to increase the size of a varchar column (again the 1.2 million rows table from above) indicates that no rewrite takes place. Edit 2017-01-24 Seems I was (at least partially) wrong about SQL Server. See this answer from Aaron Bertrand that shows that the declared length of a nvarchar or varchar columns makes a huge difference for the performance.
MySQL
8,295,131
359
How do you get the rows that contain the max value for each grouped set? I've seen some overly-complicated variations on this question, and none with a good answer. I've tried to put together the simplest possible example: Given a table like that below, with person, group, and age columns, how would you get the oldest person in each group? (A tie within a group should give the first alphabetical result) Person | Group | Age --- Bob | 1 | 32 Jill | 1 | 34 Shawn| 1 | 42 Jake | 2 | 29 Paul | 2 | 36 Laura| 2 | 39 Desired result set: Shawn | 1 | 42 Laura | 2 | 39
The correct solution is: SELECT o.* FROM `Persons` o # 'o' from 'oldest person in group' LEFT JOIN `Persons` b # 'b' from 'bigger age' ON o.Group = b.Group AND o.Age < b.Age WHERE b.Age is NULL # bigger age not found How it works: It matches each row from o with all the rows from b having the same value in column Group and a bigger value in column Age. Any row from o not having the maximum value of its group in column Age will match one or more rows from b. The LEFT JOIN makes it match the oldest person in group (including the persons that are alone in their group) with a row full of NULLs from b ('no biggest age in the group'). Using INNER JOIN makes these rows not matching and they are ignored. The WHERE clause keeps only the rows having NULLs in the fields extracted from b. They are the oldest persons from each group. Further readings This solution and many others are explained in the book SQL Antipatterns Volume 1: Avoiding the Pitfalls of Database Programming
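On MySQL 8.0+ (or MariaDB 10.2+), the same result can also be obtained with a window function; a sketch assuming the same Persons table, with Person ASC as the alphabetical tie-breaker:
SELECT Person, `Group`, Age
FROM (
  SELECT Person, `Group`, Age,
         ROW_NUMBER() OVER (PARTITION BY `Group` ORDER BY Age DESC, Person ASC) AS rn
  FROM Persons
) ranked
WHERE rn = 1;
Note that Group is a reserved word, hence the backticks.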
MySQL
12,102,200
358
I tried but failed: mysql> select max(1,0); ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '0)' at line 1
Use GREATEST() E.g.: SELECT GREATEST(2,1); Note: whenever any single argument is NULL, this function returns NULL (thanks to user @sanghavi7)
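Given that NULL behaviour, a common workaround is to wrap each argument in COALESCE; the fallback value 0 below is just an assumption, pick whatever suits the data:
SELECT GREATEST(COALESCE(col_a, 0), COALESCE(col_b, 0)) AS biggest
FROM some_table;   -- col_a, col_b and some_table are placeholder names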
MySQL
1,565,688
356
Is it possible to check if a (MySQL) database exists after having made a connection. I know how to check if a table exists in a DB, but I need to check if the DB exists. If not I have to call another piece of code to create it and populate it. I know this all sounds somewhat inelegant - this is a quick and dirty app.
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'DBName' If you just need to know if a db exists so you won't get an error when you try to create it, simply use (From here): CREATE DATABASE IF NOT EXISTS DBName;
MySQL
838,978
355
I have a query that inserts using a SELECT statement: INSERT INTO courses (name, location, gid) SELECT name, location, gid FROM courses WHERE cid = $cid Is it possible to only select "name, location" for the insert, and set gid to something else in the query?
Yes, absolutely, but check your syntax. INSERT INTO courses (name, location, gid) SELECT name, location, 1 FROM courses WHERE cid = 2 You can put a constant of the same type as gid in its place, not just 1, of course. And, I just made up the cid value.
MySQL
5,391,344
352
My current query looks like this: SELECT * FROM fiberbox f WHERE f.fiberBox LIKE '%1740 %' OR f.fiberBox LIKE '%1938 %' OR f.fiberBox LIKE '%1940 %' I did some looking around and can't find anything similar to a LIKE IN() - I envision it working like this: SELECT * FROM fiberbox f WHERE f.fiberbox LIKE IN('%140 %', '%1938 %', '%1940 %') Any ideas? Am I just thinking of the problem the wrong way - some obscure command I've never seen. MySQL 5.0.77-community-log
A REGEXP might be more efficient, but you'd have to benchmark it to be sure, e.g. SELECT * from fiberbox where field REGEXP '1740|1938|1940';
MySQL
1,127,088
352
I'm setting up a database using phpMyAdmin. I have two tables (foo and bar), indexed on their primary keys. I am trying to create a relational table (foo_bar) between them, using their primary keys as foreign keys. I created these tables as MyISAM, but have since changed all three to InnoDB, because I read that MyISAM doesn't support foreign keys. All id fields are INT(11). When I choose the foo_bar table, click the "relation view" link, and try to set the FK columns to be database.foo.id and database.bar.id, it says "No index defined!" beside each column. What am I missing? Clarification/Update For the sake of simplicity, I want to keep using phpMyAdmin. I am currently using XAMPP, which is easy enough to let me focus on the PHP/CSS/Javascript, and it comes with phpMyAdmin. Also, although I haven't been able to set up explicit foreign keys yet, I do have a relational table and can perform joins like this: SELECT * FROM foo INNER JOIN foo_bar ON foo.id = foo_bar.foo_id INNER JOIN bar ON foo_bar.bar_id = bar.id; It just makes me uncomfortable not to have the FKs explicitly defined in the database.
If you want to use phpMyAdmin to set up relations, you have to do 2 things. First of all, you have to define an index on the foreign key column in the referring table (so foo_bar.foo_id, in your case). Then, go to relation view (in the referring table) and select the referred column (so in your case foo.id) and the on update and on delete actions. I think foreign keys are useful if you have multiple tables linked to one another, in particular, your delete scripts will become very short if you set the referencing options correctly. EDIT: Make sure both of the tables have the InnoDB engine selected.
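For reference, the same relationship can also be created with plain SQL instead of the phpMyAdmin relation view; a sketch for the foo_bar table from the question (the index and constraint names are arbitrary, and the CASCADE actions are just one possible choice):
ALTER TABLE foo_bar
  ADD INDEX idx_foo_id (foo_id),
  ADD INDEX idx_bar_id (bar_id);
ALTER TABLE foo_bar
  ADD CONSTRAINT fk_foo_bar_foo FOREIGN KEY (foo_id) REFERENCES foo (id)
    ON DELETE CASCADE ON UPDATE CASCADE,
  ADD CONSTRAINT fk_foo_bar_bar FOREIGN KEY (bar_id) REFERENCES bar (id)
    ON DELETE CASCADE ON UPDATE CASCADE;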
MySQL
459,312
352
Is the database query faster if I insert multiple rows at once, like INSERT .... UNION INSERT .... UNION? (I need to insert around 2-3000 rows.)
INSERT statements that use VALUES syntax can insert multiple rows. To do this, include multiple lists of column values, each enclosed within parentheses and separated by commas. Example: INSERT INTO tbl_name (a,b,c) VALUES (1,2,3), (4,5,6), (7,8,9); Source
MySQL
6,889,065
349
What is the difference between MUL, PRI and UNI in MySQL? I'm working on a MySQL query, using the command: desc mytable; One of the fields is shown as being a MUL key, others show up as UNI or PRI. I know that if a key is PRI, only one record per table can be associated with that key. If a key is MUL, does that mean that there could be more than one associated record? Here's the response of mytable. +-----------+---------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------+---------+------+-----+---------+-------+ | courseid | int(11) | YES | MUL | NULL | | | dept | char(3) | YES | | NULL | | | coursenum | char(4) | YES | | NULL | | +-----------+---------+------+-----+---------+-------+
DESCRIBE <table>; This is actually a shortcut for: SHOW COLUMNS FROM <table>; In any case, there are three possible values for the "Key" attribute: PRI UNI MUL The meanings of PRI and UNI are quite clear: PRI => primary key UNI => unique key The third possibility, MUL (which you asked about), is basically an index that is neither a primary key nor a unique key. The name comes from "multiple" because multiple occurrences of the same value are allowed. Straight from the MySQL documentation: If Key is MUL, the column is the first column of a nonunique index in which multiple occurrences of a given value are permitted within the column. There is also a final caveat: If more than one of the Key values applies to a given column of a table, Key displays the one with the highest priority, in the order PRI, UNI, MUL. As a general note, the MySQL documentation is quite good. When in doubt, check it out!
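A small sketch that produces all three values at once (the table and columns are invented for illustration):
CREATE TABLE key_demo (
  id INT NOT NULL AUTO_INCREMENT,
  email VARCHAR(100),
  city VARCHAR(50),
  PRIMARY KEY (id),     -- shows up as PRI
  UNIQUE KEY (email),   -- shows up as UNI
  KEY (city)            -- plain non-unique index, shows up as MUL
);
SHOW COLUMNS FROM key_demo;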
MySQL
5,317,889
349
I'm using GROUP_CONCAT() in a MySQL query to convert multiple rows into a single string. However, the maximum length of the result of this function is 1024 characters. I'm very well aware that I can change the param group_concat_max_len to increase this limit: SET SESSION group_concat_max_len = 1000000; However, on the server I'm using, I can't change any param. Not by using the preceding query and not by editing any configuration file. So my question is: Is there any other way to get the output of a multiple row query into a single string?
SET SESSION group_concat_max_len = 1000000; is a temporary, session-scoped setting. It only applies to the current session. You should use it like this: SET SESSION group_concat_max_len = 1000000; select group_concat(column) from table group by column You can do this even on shared hosting, but when you use another session, you need to repeat the SET SESSION command.
MySQL
2,567,000
349
I have been reading about scalable architectures recently. In that context, two words that keep on showing up with regards to databases are sharding and partitioning. I looked up descriptions but still ended up confused. Could the experts at stackoverflow help me get the basics right? What is the difference between sharding and partitioning ? Is it true that 'all sharded databases are essentially partitioned (over different nodes), but all partitioned databases are not necessarily sharded' ?
Partitioning is more a generic term for dividing data across tables or databases. Sharding is one specific type of partitioning, part of what is called horizontal partitioning. Here you replicate the schema across (typically) multiple instances or servers, using some kind of logic or identifier to know which instance or server to look for the data. An identifier of this kind is often called a "Shard Key". A common, key-less logic is to use the alphabet to divide the data. A-D is instance 1, E-G is instance 2 etc. Customer data is well suited for this, but will be somewhat misrepresented in size across instances if the partitioning does not take in to account that some letters are more common than others. Another common technique is to use a key-synchronization system or logic that ensures unique keys across the instances. A well known example you can study is how Instagram solved their partitioning in the early days (see link below). They started out partitioned on very few servers, using Postgres to divide the data from the get-go. I believe it was several thousand logical shards on those few physical shards. Read their awesome writeup from 2012 here: Instagram Engineering - Sharding & IDs See here as well: http://www.quora.com/Whats-the-difference-between-sharding-and-partition
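For contrast, MySQL's built-in table partitioning (the non-sharded kind, where every partition still lives inside a single server) looks roughly like this; a minimal sketch with an invented table:
CREATE TABLE customers (
  id INT NOT NULL,
  name VARCHAR(100),
  PRIMARY KEY (id)
)
PARTITION BY HASH(id)   -- the server decides which partition a row lands in
PARTITIONS 4;           -- four partitions, all on this one server
Sharding, by contrast, would put those pieces on separate servers and push the routing logic into the application or a middleware layer.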
MySQL
20,771,435
348
Currently I am doing a very basic OrderBy in my statement. SELECT * FROM tablename WHERE visible=1 ORDER BY position ASC, id DESC The problem with this is that NULL entries for 'position' are treated as 0. Therefore all entries with position as NULL appear before those with 1,2,3,4. eg: NULL, NULL, NULL, 1, 2, 3, 4 Is there a way to achieve the following ordering: 1, 2, 3, 4, NULL, NULL, NULL.
MySQL has an undocumented syntax to sort nulls last. Place a minus sign (-) before the column name and switch the ASC to DESC: SELECT * FROM tablename WHERE visible=1 ORDER BY -position DESC, id DESC It is essentially the inverse of position DESC placing the NULL values last but otherwise the same as position ASC. A good reference is here http://troels.arvin.dk/db/rdbms#select-order_by
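If relying on an undocumented trick feels risky, the same ordering can be written explicitly by sorting on the IS NULL flag first; a sketch against the table from the question:
SELECT * FROM tablename
WHERE visible = 1
ORDER BY position IS NULL, position ASC, id DESC;
-- position IS NULL evaluates to 0 for non-NULL rows and 1 for NULL rows,
-- so 1, 2, 3, 4 come first and the NULLs sort last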
MySQL
2,051,602
348
How can I install the MySQLdb module for Python using pip?
It's easy to do, but hard to remember the correct spelling: pip install mysqlclient If you need 1.2.x versions (legacy Python only), use pip install MySQL-python Note: Some dependencies might have to be in place when running the above command. Some hints on how to install these on various platforms: Ubuntu 14, Ubuntu 16, Debian 8.6 (jessie) sudo apt-get install python-pip python-dev libmysqlclient-dev Fedora 24: sudo dnf install python python-devel mysql-devel redhat-rpm-config gcc Mac OS brew install mysql-connector-c if that fails, try brew install mysql
MySQL
25,865,270
347
For homebrew mysql installs, where's my.cnf? Does it install one?
There is no my.cnf by default. As such, MySQL starts with all of the default settings. If you want to create your own my.cnf to override any defaults, place it at /etc/my.cnf. Also, you can run mysql --help and look through it for the conf locations listed. Default options are read from the following files in the given order: /etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf The following groups are read: mysql client The following options may be given as the first argument: --print-defaults Print the program argument list and exit. --no-defaults Don't read default options from any option file. --defaults-file=# Only read default options from the given file #. --defaults-extra-file=# Read this file after the global files are read. As you can see, there are also some options for bypassing the conf files, or specifying other files to read when you invoke mysql on the command line.
MySQL
7,973,927
346
The following query: SELECT * FROM `objects` WHERE (date_field BETWEEN '2010-09-29 10:15:55' AND '2010-01-30 14:15:55') returns nothing. I should have more than enough data to for the query to work though. What am I doing wrong?
Your second date is before your first date (i.e. you are querying between September 29 2010 and January 30 2010). Try reversing the order of the dates: SELECT * FROM `objects` WHERE (date_field BETWEEN '2010-01-30 14:15:55' AND '2010-09-29 10:15:55') Official Docs: https://dev.mysql.com/doc/refman/8.0/en/datetime.html
MySQL
3,822,648
344
How can I generate a create table script for an existing table in phpmyadmin?
Use one of the following queries in the SQL tab: SHOW CREATE TABLE tablename; SHOW CREATE TABLE database.tablename; To view the full query, use the hyperlink named +Options at the top left and select Full Texts there.
MySQL
11,739,014
341
I'm rewriting a project to use Node.js. I'd like to keep using MySQL as the DB (even though I don't mind rewriting the schema). I'm looking for a simple-to-use, reasonable-performance ORM, which supports caching, many-to-one and many-to-many relations. From the MySQL ORMs I could find, persistencejs and sequelize seem the most mature. Do you have experience with either? What are the relevant pros and cons I should be aware of in my decision?
I would choose Sequelize because of its excellent documentation. It's just an honest opinion (I never really used MySQL with Node that much).
MySQL
6,007,353
341
I can read the MySQL documentation and it's pretty clear. But, how does one decide which character set to use? On what data does collation have an effect? I'm asking for an explanation of the two and how to choose them.
From MySQL docs: A character set is a set of symbols and encodings. A collation is a set of rules for comparing characters in a character set. Let's make the distinction clear with an example of an imaginary character set. Suppose that we have an alphabet with four letters: 'A', 'B', 'a', 'b'. We give each letter a number: 'A' = 0, 'B' = 1, 'a' = 2, 'b' = 3. The letter 'A' is a symbol, the number 0 is the encoding for 'A', and the combination of all four letters and their encodings is a character set. Now, suppose that we want to compare two string values, 'A' and 'B'. The simplest way to do this is to look at the encodings: 0 for 'A' and 1 for 'B'. Because 0 is less than 1, we say 'A' is less than 'B'. Now, what we've just done is apply a collation to our character set. The collation is a set of rules (only one rule in this case): "compare the encodings." We call this simplest of all possible collations a binary collation. But what if we want to say that the lowercase and uppercase letters are equivalent? Then we would have at least two rules: (1) treat the lowercase letters 'a' and 'b' as equivalent to 'A' and 'B'; (2) then compare the encodings. We call this a case-insensitive collation. It's a little more complex than a binary collation. In real life, most character sets have many characters: not just 'A' and 'B' but whole alphabets, sometimes multiple alphabets or eastern writing systems with thousands of characters, along with many special symbols and punctuation marks. Also in real life, most collations have many rules: not just case insensitivity but also accent insensitivity (an "accent" is a mark attached to a character as in German 'ö') and multiple-character mappings (such as the rule that 'ö' = 'OE' in one of the two German collations).
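A quick way to see a collation rule in action from the client (assuming the session uses a case-insensitive _ci collation, which is the usual default):
SELECT 'a' = 'A';          -- 1: the case-insensitive collation treats them as equal
SELECT 'a' = BINARY 'A';   -- 0: compared byte by byte, so case matters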
MySQL
341,273
341
At what point does a MySQL database start to lose performance? Does physical database size matter? Does the number of records matter? Is any performance degradation linear or exponential? I have what I believe to be a large database, with roughly 15M records which take up almost 2GB. Based on these numbers, is there any incentive for me to clean the data out, or am I safe to allow it to continue scaling for a few more years?
The physical database size doesn't matter. The number of records doesn't matter. In my experience the biggest problem that you are going to run into is not size, but the number of queries you can handle at a time. Most likely you are going to have to move to a master/slave configuration so that the read queries can run against the slaves and the write queries run against the master. However, if you are not ready for this yet, you can always tweak your indexes for the queries you are running to speed up the response times. Also there is a lot of tweaking you can do to the network stack and kernel in Linux that will help. I have had mine get up to 10GB, with only a moderate number of connections, and it handled the requests just fine. I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time to implement a master/slave configuration.
MySQL
1,276
341
Given an array of ids $galleries = array(1,2,5) I want to have a SQL query that uses the values of the array in its WHERE clause like: SELECT * FROM galleries WHERE id = /* values of array $galleries... eg. (1 || 2 || 5) */ How can I generate this query string to use with MySQL?
BEWARE! This answer contains a severe SQL injection vulnerability. Do NOT use the code samples as presented here, without making sure that any external input is sanitized. $ids = join("','",$galleries); $sql = "SELECT * FROM galleries WHERE id IN ('$ids')";
MySQL
907,806
339
I installed MySQL on Mac OS X Mountain Lion with homebrew install mysql, but when I tried mysql -u root I got the following error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) What does this error mean? How can I fix it?
You'll need to start MySQL before you can use the mysql command on your terminal. To do this, run brew services start mysql. By default, brew installs the MySQL database without a root password. To secure it run: mysql_secure_installation. To connect run: mysql -uroot. root is the username name here.
MySQL
15,450,091
336
I have a WordPress website on my local WAMP server. But when I upload its database to the live server, I get error #1273 – Unknown collation: ‘utf8mb4_unicode_520_ci’
I believe this error is caused by the local server and live server running different versions of MySQL. To solve this: Open the SQL file in your text editor Find and replace all utf8mb4_unicode_520_ci with utf8mb4_unicode_ci Save and upload to a fresh MySQL db
MySQL
42,385,099
335
Migration error on Laravel 5.4 with php artisan make:auth [Illuminate\Database\QueryException] SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter tabl e users add unique users_email_unique(email)) [PDOException] SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes
According to the official Laravel 7.x documentation, you can solve this quite easily. Update your /app/Providers/AppServiceProvider.php to contain: use Illuminate\Support\Facades\Schema; /** * Bootstrap any application services. * * @return void */ public function boot() { Schema::defaultStringLength(191); } Alternatively, you may enable the innodb_large_prefix option for your database. Refer to your database's documentation for instructions on how to properly enable this option.
MySQL
42,244,541
335