How can I find the count of all the keys that have a matching pattern? For example, there are two keys, abc:random-text-1 and abc:random-text-2. The common pattern here is abc:, so the count is 2. How can I do this in Redis?
From here:

```
eval "return #redis.pcall('keys', 'abc:*')" 0
```

It's not O(1), but at least the count is done on the server side.
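If the keyspace is large, KEYS can block the server, so a non-blocking alternative is to count with the cursor-based SCAN command. A minimal sketch using the redis-py client (the client setup is an assumption for illustration):

```python
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)

# scan_iter walks the keyspace incrementally with SCAN,
# so the server is never blocked the way it is with KEYS.
count = sum(1 for _ in r.scan_iter(match="abc:*"))
print(count)
```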
Redis
20,418,529
107
I am using Redis and trying to open the Redis CLI using this: $ redis-cli -h 127.0.0.1 -p 6379 -a mysupersecretpassword, and getting an error: (error) NOAUTH Authentication required. Why so?
My solution is to put the password in single quotes, like: $ redis-cli -h 127.0.0.1 -p 6379 -a 'thisizmy!PASS'. This keeps the shell from interpreting special characters such as ! in the password.
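Alternatively, you can connect without -a and authenticate inside the CLI with the AUTH command:

```
$ redis-cli -h 127.0.0.1 -p 6379
127.0.0.1:6379> AUTH mysupersecretpassword
OK
```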
Redis
35,745,481
104
In many Redis tutorials (such as this one), data is stored in a set, but with multiple values combined together in a string (i.e. a user account might be stored in the set as two entries, "user:1000:username" and "user:1000:password"). However, Redis also has hashes. It seems that it would make more sense to have a "user:1000" hash, which contains a "username" entry and a "password" entry. Rather than concatenating strings to access a particular value, you just access them directly in the hash. So why isn't it used as much? Are these just old tutorials? Or do Redis hashes have performance issues?
Redis hashes are good for storing more complex data, like you suggest in your question. I use them for exactly that - to store objects with multiple attributes that need to be cached (specifically, inventory data for a particular product on an e-commerce site). Sure, I could use a concatenated string - but that adds unneeded complexity to my client code, and updating an individual field is not possible. You may be right - the tutorials may simply be from before Hashes were introduced. They were clearly designed for storing Object representations: http://oldblog.antirez.com/post/redis-weekly-update-1.html I suppose one concern would be the number of commands Redis must service when a new item is inserted (n number of commands, where n is the number of fields in the Hash) when compared to a simple String SET command. I haven't found this to be a problem yet on a service which hits Redis about 1 million times per day. Using the right data structure to me is more important than a negligible performance impact. (Also, please see my comment regarding Redis Sets vs. Redis Strings - I think your question is referring to Strings but correct me if I'm wrong!)
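For reference, a quick sketch of the two approaches discussed above (key and field names are illustrative):

```
SET user:1000:username antirez
SET user:1000:password secret

HSET user:1000 username antirez
HSET user:1000 password secret
HGET user:1000 username
HGETALL user:1000
```

With the hash, updating a single field (HSET) or fetching the whole object (HGETALL) is one command, with no string concatenation in client code.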
Redis
13,557,075
102
I've been trying to get a high-level understanding of what MurmurHash does. I've read a basic description but have yet to find a good explanation of when to use it and why. I know it's very fast, but want to know a bit more. I asked a related question about how I could fit a UUID into a Redis bitset, and someone suggested using MurmurHash. It works, but I'd like to understand the risks/benefits.
Murmur is a family of good general-purpose hashing functions, suitable for non-cryptographic usage. As stated by Austin Appleby, MurmurHash provides the following benefits:

- simple (in terms of number of generated assembly instructions).
- good distribution (passing chi-squared tests for practically all keysets and bucket sizes).
- good avalanche behavior (max bias of 0.5%).
- good collision resistance (passes Bob Jenkins' frog.c torture test; no collisions possible for 4-byte keys, no small (1- to 7-bit) differentials).
- great performance on Intel/AMD hardware, good tradeoff between hash quality and CPU consumption.

You can certainly use it to hash UUIDs (like any other advanced hashing function: CityHash, Jenkins, Paul Hsieh's, etc.). Now, a Redis bitset is limited to 2^32 bits (512 MB). So you need to reduce 128 bits of data (the UUID) to 32 bits (the hashed value). Whatever the quality of the hashing function, there will be collisions. Using an engineered hash function like Murmur will maximize the quality of the distribution and minimize the number of collisions, but it offers no other guarantee. Here are some links comparing the quality of general-purpose hash functions: http://www.azillionmonkeys.com/qed/hash.html http://www.strchr.com/hash_functions http://blog.aggregateknowledge.com/2011/12/05/choosing-a-good-hash-function-part-1/ http://blog.aggregateknowledge.com/2011/12/29/choosing-a-good-hash-function-part-2/ http://blog.aggregateknowledge.com/2012/02/02/choosing-a-good-hash-function-part-3/
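As an illustration of the UUID-to-bitset idea, here is a sketch in Python. It assumes the third-party mmh3 package (MurmurHash3 bindings) and the redis-py client; both are assumptions, not part of the answer above:

```python
import uuid
import mmh3
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)

uid = uuid.uuid4()
# mmh3.hash returns a signed 32-bit integer; masking maps it into
# the unsigned 0..2^32-1 range usable as a bitset offset.
offset = mmh3.hash(uid.bytes) & 0xFFFFFFFF
r.setbit("seen-uuids", offset, 1)  # collisions are possible, as noted above
```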
Redis
11,899,616
102
I'm writing an event-driven publish/subscribe application with NodeJS and Redis. I need an example of how to notify web clients when the data values in Redis change.
OLD: use only as a reference.

Dependencies: uses express, socket.io, node_redis and, last but not least, the sample code from MediaFire.

Install node.js + npm (as non-root). First you should (if you have not done this yet) install node.js + npm in 30 seconds (the right way, because you should NOT run npm as root):

```
echo 'export PATH=$HOME/local/bin:$PATH' >> ~/.bashrc
. ~/.bashrc
mkdir ~/local
mkdir ~/node-latest-install
cd ~/node-latest-install
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1
./configure --prefix=~/local
make install # ok, fine, this step probably takes more than 30 seconds...
curl http://npmjs.org/install.sh | sh
```

Install dependencies. After you have installed node + npm, install the dependencies by issuing:

```
npm install express
npm install socket.io
npm install hiredis redis # hiredis to use the C binding for redis => FAST :)
```

Download sample. You can download the complete sample from MediaFire.

Unzip package:

```
unzip pbsb.zip # can also do via graphical interface if you prefer.
```

What's inside the zip: ./app.js

```javascript
const PORT = 3000;
const HOST = 'localhost';

var express = require('express');
var app = module.exports = express.createServer();
app.use(express.staticProvider(__dirname + '/public'));

const redis = require('redis');
const client = redis.createClient();

const io = require('socket.io');

if (!module.parent) {
  app.listen(PORT, HOST);
  console.log("Express server listening on port %d", app.address().port);

  const socket = io.listen(app);

  socket.on('connection', function(client) {
    const subscribe = redis.createClient();
    subscribe.subscribe('pubsub'); // listen to messages from channel pubsub

    subscribe.on("message", function(channel, message) {
      client.send(message);
    });

    client.on('message', function(msg) {
    });

    client.on('disconnect', function() {
      subscribe.quit();
    });
  });
}
```

./public/index.html

```html
<html>
<head>
  <title>PubSub</title>
  <script src="/socket.io/socket.io.js"></script>
  <script src="/javascripts/jquery-1.4.3.min.js"></script>
</head>
<body>
  <div id="content"></div>
  <script>
    $(document).ready(function() {
      var socket = new io.Socket('localhost', {port: 3000, rememberTransport: false/*, transports: ['xhr-polling']*/});
      var content = $('#content');

      socket.on('connect', function() {
      });

      socket.on('message', function(message){
        content.prepend(message + '<br />');
      });

      socket.on('disconnect', function() {
        console.log('disconnected');
        content.html("<b>Disconnected!</b>");
      });

      socket.connect();
    });
  </script>
</body>
</html>
```

Start server:

```
cd pbsb
node app.js
```

Start browser. Best if you start Google Chrome (because of WebSockets support, though it's not necessary). Visit http://localhost:3000 to see the sample (in the beginning you don't see anything but PubSub as the title). But on publish to the channel pubsub you should see a message. Below we publish "Hello world!" to the browser:

```
./redis-cli publish pubsub "Hello world!"
```
Redis
4,441,798
102
Can anyone give example use cases of when you would benefit from using Redis and MongoDB in conjunction with each other?
Redis and MongoDB can be used together with good results. A company well known for running MongoDB and Redis (along with MySQL and Sphinx) is Craigslist. See this presentation from Jeremy Zawodny. MongoDB is interesting for persistent, document-oriented data indexed in various ways. Redis is more interesting for volatile data, or latency-sensitive semi-persistent data. Here are a few examples of concrete usage of Redis on top of MongoDB. Pre-2.2 MongoDB does not yet have an expiration mechanism. Capped collections cannot really be used to implement a real TTL. Redis has a TTL-based expiration mechanism, making it convenient to store volatile data. For instance, user sessions are commonly stored in Redis, while user data will be stored and indexed in MongoDB. Note that MongoDB 2.2 has introduced a low-accuracy expiration mechanism at the collection level (to be used for purging data, for instance). Redis provides a convenient set datatype and its associated operations (union, intersection, difference on multiple sets, etc.). It is quite easy to implement a basic faceted search or tagging engine on top of this feature, which is an interesting addition to MongoDB's more traditional indexing capabilities. Redis supports efficient blocking pop operations on lists. This can be used to implement an ad-hoc distributed queuing system. It is more flexible than MongoDB tailable cursors IMO, since a backend application can listen to several queues with a timeout, transfer items to another queue atomically, etc. If the application requires some queuing, it makes sense to store the queue in Redis, and keep the persistent functional data in MongoDB. Redis also offers a pub/sub mechanism. In a distributed application, an event propagation system may be useful. This is again an excellent use case for Redis, while the persistent data are kept in MongoDB. Because it is much easier to design a data model with MongoDB than with Redis (Redis is more low-level), it is interesting to benefit from the flexibility of MongoDB for the main persistent data, and from the extra features provided by Redis (low latency, item expiration, queues, pub/sub, atomic blocks, etc.). It is indeed a good combination. Please note you should never run a Redis and a MongoDB server on the same machine. MongoDB memory is designed to be swapped out; Redis is not. If MongoDB triggers some swapping activity, the performance of Redis will be catastrophic. They should be isolated on different nodes.
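For instance, the session pattern mentioned above can be sketched with two commands; the key name and the 3600-second TTL are illustrative, and the durable user document would live in MongoDB:

```
SETEX session:u42 3600 "{\"user_id\": 42, \"cart\": []}"
TTL session:u42
```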
Redis
10,696,463
101
I created a Visual Studio (Community 2019) project with C# using ServiceStack.Redis. Since it is C#, I use Windows 10 (there is a Redis version for Windows but it is really old and, as far as I know, it is unofficial, so I am afraid that might be the problem). Here is an excerpt from my code:

```csharp
public class PeopleStorage : IDisposable
{
    public PeopleStorage()
    {
        redisManager = new RedisManagerPool("localhost");
        redis = (RedisClient)redisManager.GetClient();
        facts = (RedisTypedClient<List<Fact>>)redis.As<List<Fact>>();
    }

    public List<Fact> GetFacts(int id)
    {
        string sid = id.ToString();
        if (facts.ContainsKey(sid))
            return facts[sid];
        return accessor.GetFacts(id);
    }

    private RedisTypedClient<List<Fact>> facts;
    private RedisClient redis;
    private RedisManagerPool redisManager;
}
```

In an attempt to connect to Redis in the line return facts[sid];, an exception occurs: System.IO.FileLoadException: "Could not load file or assembly "System.Runtime.CompilerServices.Unsafe, Version=4.0.4.1, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" or one of its dependencies. The found Assembly's manifest definition does not match the Assembly reference. (Exception from HRESULT: 0x80131040)" (May be inaccurate as I have translated it.) I have tried updating all the packages, starting with the ServiceStack packages and ending with System.Runtime.CompilerServices.Unsafe itself. Moreover, you can't choose the 4.0.4.1 version in NuGet; the closest one there is 4.0.0, while the relevant one is 4.0.7. I do not understand why it uses this version and how I can fix this problem. Even a clean reinstall of Visual Studio did not help.
Could not load file or assembly System.Runtime.CompilerServices.Unsafe

It seems that you have installed the System.Runtime.CompilerServices.Unsafe NuGet package version 4.5.3, which corresponds to System.Runtime.CompilerServices.Unsafe.dll assembly version 4.0.4.1.

Suggestions:

1) Please try to register System.Runtime.CompilerServices.Unsafe version 4.0.4.1 into the GAC so that the system can find it. Run Developer Command Prompt for VS2019 as Administrator and type:

```
cd xxxxx (the path of the System.Runtime.CompilerServices.Unsafe 4.0.4.1)
gacutil /i System.Runtime.CompilerServices.Unsafe.dll
```

2) If you use .NET Framework projects with an xxx.config file, you could use bindingRedirect. Add these in the app.config or web.config file:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Runtime.CompilerServices.Unsafe" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.0.4.1" newVersion="4.0.4.1"/>
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Besides, if you update the System.Runtime.CompilerServices.Unsafe NuGet package to a newer version, you should also change the bindingRedirect assembly version. You can refer to these assembly versions of System.Runtime.CompilerServices.Unsafe:

| NuGet package version | Assembly version |
| --- | --- |
| 4.5.0, 4.5.1, 4.5.2 | 4.0.4.0 |
| 4.5.3 | 4.0.4.1 |
| 4.6.0 | 4.0.5.0 |
| 4.7.0, 4.7.1 | 4.0.6.0 |
| 5.0.0 | 5.0.0.0 |
Redis
62,764,744
101
Suppose you have a LIST datatype in Redis. How do you delete all its entries? I've tried this already:

```
LTRIM key 0 0
LTRIM key -1 0
```

Both of those leave the first element. This will leave all the elements:

```
LTRIM key 0 -1
```

I don't see a separate command to completely empty a list.
Delete the key, and that will clear all items. Not having the list at all is similar to not having any items in it. Redis will not throw any exceptions when you try to access a non-existent key.

```
DEL key
```

Here are some console logs:

```
redis> KEYS *
(empty list or set)
redis> LPUSH names John
(integer) 1
redis> LPUSH names Mary
(integer) 2
redis> LPUSH names Alice
(integer) 3
redis> LLEN names
(integer) 3
redis> LRANGE names 0 2
1) "Alice"
2) "Mary"
3) "John"
redis> DEL names
(integer) 1
redis> LLEN names
(integer) 0
redis> LRANGE names 0 2
(empty list or set)
```
Redis
9,828,160
101
I'm learning how to use Redis for a project of mine. One thing I haven't got my head around is what exactly the colons are used for in the names of keys. I have seen names of keys such as these:

```
users:bob
color:blue
item:bag
```

Does the colon separate keys into categories and make finding the keys faster? If so, can you use multiple colons when naming keys to break them down into subcategories? Lastly, do they have anything to do with defining different databases within the Redis server? I have read through documentation and done numerous Google searches on the matter, but oddly I can't find anything discussing this.
The colons were used in earlier Redis versions as a concept for storing namespaced data. Early versions of Redis supported only strings; if you wanted to store the email and the age of 'bob', you had to store it all as strings, so colons were used:

```
SET user:bob:email [email protected]
SET user:bob:age 31
```

They had no special handling or performance characteristics in Redis; their only purpose was namespacing the data so you could find it again. Nowadays you can use hashes to store most of the colon-separated keys:

```
HSET user:bob email [email protected]
HSET user:bob age 31
```

You don't have to name the hash "user:bob"; we could name it "bob", but by namespacing it with the user prefix we instantly know which information this hash should/could have.
Redis
3,554,888
101
I am using Redis for session support in a Node.js app. I have installed the Redis server and it works when I run redis-server, but when I close the terminal, Redis stops and does not work. How do I keep the Redis server running after closing the terminal?
And, if you'd like a quick option, run: redis-server --daemonize yes.
Redis
14,816,892
100
I am new to Redis and I haven't figured out how to create and switch to another Redis database. How do I do this?
By default there are 16 databases (indexed from 0 to 15), and you can navigate between them using the select command. The number of databases can be changed in the Redis config file with the databases setting. By default, it selects database 0. To select a specific one, use redis-cli -n 2 (selects db 2).
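For example, inside an already-open redis-cli session (the prompt suffix shows the selected database):

```
127.0.0.1:6379> SELECT 2
OK
127.0.0.1:6379[2]>
```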
Redis
13,386,053
97
I have a simple Redis list under the key "supplier_id". Now all I want is to retrieve all the values of the list without iterating over it or popping values from it. Currently I iterate over the list length:

```ruby
element = []
0.upto(redis.llen("supplier_id") - 1) do |index|
  element << redis.lindex("supplier_id", index)
end
```

Can this be done without the iteration, perhaps with better Redis modelling? Can anyone suggest?
To retrieve all the items of a list with Redis, you do not need to iterate and fetch each individual item. That would be really inefficient. You just have to use the LRANGE command to retrieve all the items in one shot:

```ruby
elements = redis.lrange("supplier_id", 0, -1)
```

This returns all the items of the list without altering the list itself.
Redis
10,703,019
96
The hmset function can set the value of each field, but I found that if the value itself is a complex structured object, the value returned from hget is a serialized string, not the original object. E.g.:

```python
images = [{'type': 'big', 'url': '....'},
          {'type': 'big', 'url': '....'},
          {'type': 'big', 'url': '....'}]

redis = Redis()
redis.hset('photo:1', 'images', images)
i = redis.hget('photo:1', 'images')
print type(i)
```

The type of i is a string, not a Python object. Is there any way to solve this problem besides manually parsing each field?
Actually, you can store Python objects in Redis using the built-in pickle module. Here is an example:

```python
import pickle
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

obj = ExampleObject()
pickled_object = pickle.dumps(obj)
r.set('some_key', pickled_object)

unpacked_object = pickle.loads(r.get('some_key'))
obj == unpacked_object
```
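If the data only needs to round-trip JSON-serializable structures, like the list of dicts in the question, the built-in json module is a reasonable alternative to pickle: it stays readable from other languages and avoids unpickling untrusted bytes. A minimal sketch:

```python
import json
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

images = [{'type': 'big', 'url': '....'}]
r.hset('photo:1', 'images', json.dumps(images))     # store as a JSON string
restored = json.loads(r.hget('photo:1', 'images'))  # back to a Python list
```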
Redis
15,219,858
95
Hi, I'm trying to call the function/method of one struct, but I'm using an interface as the parameter and through this interface I'm trying to access the function of the struct. To demonstrate what I want, below is my code:

```go
// Here I'm trying to use "GetValue", a function of RedisConnection, but since "c" is an
// interface it doesn't know that I'm trying to access the RedisConnection function.
// How do I fix this?
func GetRedisValue(c Connection, key string) (string, error) {
	value, err := c.GetValue(key)
	return value, err
}

// Connection ...
type Connection interface {
	GetClient() (*redis.Client, error)
}

// RedisConnection ...
type RedisConnection struct{}

// NewRedisConnection ...
func NewRedisConnection() Connection {
	return RedisConnection{}
}

// GetClient ...
func (r RedisConnection) GetClient() (*redis.Client, error) {
	redisHost := "localhost"
	redisPort := "6379"

	if os.Getenv("REDIS_HOST") != "" {
		redisHost = os.Getenv("REDIS_HOST")
	}

	if os.Getenv("REDIS_PORT") != "" {
		redisPort = os.Getenv("REDIS_PORT")
	}

	client := redis.NewClient(&redis.Options{
		Addr:     redisHost + ":" + redisPort,
		Password: "", // no password set
		DB:       0,  // use default DB
	})

	return client, nil
}

// GetValue ...
func (r RedisConnection) GetValue(key string) (string, error) {
	client, e := r.GetClient()
	result, err := client.Ping().Result()
	return result, nil
}
```
To answer the question directly, i.e., to cast an interface into a concrete type, you do:

```go
v = i.(T)
```

where i is the interface and T is the concrete type. This will panic if the underlying type is not T. To have a safe cast, you use:

```go
v, ok = i.(T)
```

and if the underlying type is not T, ok is set to false; otherwise it is true. Note that T can also be an interface type, and if it is, the code casts i into a new interface instead of a concrete type.

And please note: casting an interface is likely a symptom of bad design. As in your code, you should ask yourself: does your custom interface Connection solely require GetClient, or does it always require a GetValue as well? Does your GetRedisValue function require a Connection, or does it always want a concrete struct? Change your code accordingly.
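Applied to the code in the question, you can either assert down to the concrete type or, usually better, widen the interface. A sketch of both options (imports elided, for illustration only):

```go
// Option 1: type-assert the interface to the concrete struct.
func GetRedisValue(c Connection, key string) (string, error) {
	rc, ok := c.(RedisConnection)
	if !ok {
		return "", errors.New("connection is not a RedisConnection")
	}
	return rc.GetValue(key)
}

// Option 2: declare GetValue on the interface itself, so no assertion is needed.
type Connection interface {
	GetClient() (*redis.Client, error)
	GetValue(key string) (string, error)
}
```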
Redis
50,939,497
94
I see lots of people struggling with this; I sort of feel like maybe there is a bug in the redis container image, and others seem to be chasing a similar problem. I'm using the standard redis image on DockerHub (https://github.com/dockerfile/redis), running it like this:

```
docker run -it -p 6379:6379 redis bash
```

Once I'm in I can start the server, and do a redis ping from the container image. Unfortunately, I cannot connect to the redis container from my host. I have tried settings such as:

```
bind 127.0.0.1
```

and removed the bind from the configuration, and tried turning off protected mode:

```
protected-mode no
```

I know it is reading the configuration file, since I changed ports just to test, and I was able to do that. I'm running Windows 10, so maybe it is a Windows networking issue. I never have a problem with Docker normally. I'm puzzled.
The problem is with your bind. You should set the following:

```
bind 0.0.0.0
```

This will set Redis to bind to all available interfaces; in a containerized environment with one interface (eth0) and a loopback (lo), Redis will bind to both of the above. You should consider adding security measures via other directives in the config file, or using external tools like firewalls, because with this approach everyone can connect to your Redis server. The default setting is bind 127.0.0.1, which will cause Redis to only listen on the loopback interface, so it will be accessible only from inside the container (for security).

To run Redis with a custom configuration file:

```
sudo docker run -d --name redis-test -p 6379:6379 \
-v /path/to/redisconf/redis.conf:/redis.conf \
redis redis-server /redis.conf
```

Now to verify on the Docker host with redis-tools installed:

```
redis-cli
127.0.0.1:6379>
127.0.0.1:6379> set farhad likes:stackoverflow
OK
127.0.0.1:6379> get farhad
"likes:stackoverflow"
127.0.0.1:6379>
```

You can also connect to your redis container from an external host via:

```
redis-cli -h 'IP-address-of-dockerhost-running-redis-container'
```
Redis
41,371,402
93
I have a sorted set and want to get all members of the set. How do I identify the max/min score for the command zrange key min max?
You're in luck, as zrange does not take scores, but indices. 0 is the first index, and -1 will be interpreted as the last index:

```
zrange key 0 -1
```

To get a range by score, you would call zrangebyscore instead, where -inf and +inf can be used to denote negative and positive infinity, respectively, as Didier Spezia notes in his comment:

```
zrangebyscore key -inf +inf
```
Redis
11,504,154
93
In my application I'm using a Redis database. I have gone through their documentation, but I couldn't find the difference between HSET and HMSET.
HSET used to be able to set only one key-value pair. And if you needed to set several at once, you would have to use HMSET (M for multi). That was changed a few years ago, to allow both commands to accept multiple pairs. And now HMSET is redundant. From official documentation: As per Redis 4.0.0, HMSET is considered deprecated. Please use HSET in new code.
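For example, since Redis 4.0.0 a single HSET call can set several fields at once, so there is no longer a reason to reach for HMSET (key and field names are illustrative):

```
HSET user:1 name Alice age 30
HMSET user:1 name Alice age 30
```

The first form requires Redis 4.0.0 or later; the second is the deprecated equivalent.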
Redis
15,264,480
92
I'm using memcached for some caching in my Rails 3 app through the simple Rails.cache interface and now I'd like to do some background job processing with redis and resque. I think they're different enough to warrant using both. On heroku though, there are separate fees to use both memcached and redis. Does it make sense to use both or should I migrate to just using redis? I like using memcached for caching because least recently used keys automatically get pushed out of the cache and I don't need the cache data to persist. Redis is mostly new to me, but I understand that it's persistent by default and that keys do not expire out of the cache automatically. EDIT: Just wanted to be more clear with my question. I know it's feasible to use only Redis instead of both. I guess I just want to know if there are any specific disadvantages in doing so? Considering both implementation and infrastructure, are there any reasons why I shouldn't just use Redis? (I.e., is memcached faster for simple caching?) I haven't found anything definitive either way.
Assuming that migrating from memcached to redis for the caching you already do is easy enough, I'd go with redis only to keep things simple. In redis persistence is optional, so you can use it much like memcached if that is what you want. You may even find that making your cache persistent is useful to avoid lots of cache misses after a restart. Expiry is available also - the algorithm is a bit different from memcached, but not enough to matter for most purposes - see http://redis.io/commands/expire for details.
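If you want Redis to approximate memcached's LRU-eviction behavior for caching, the relevant redis.conf directives look like this (the 100mb budget is illustrative):

```
maxmemory 100mb
maxmemory-policy allkeys-lru
```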
Redis
4,188,620
92
I'm reading the Redis documentation on persistence here - https://redis.io/topics/persistence - and am wondering what the acronyms AOF and RDB stand for. Thanks! :)
AOF stands for Append Only File. It's the changelog-style persistence format. RDB stands for Redis Database File. It's the snapshot-style persistence format.
Redis
45,040,666
91
Is it possible in Redis to set a TTL (time to live) not for a specific key, but for a member of a set? I am using a structure for tags proposed by the Redis documentation: the data are simple key-value pairs, and the tags are sets containing keys corresponding to each tag, e.g.:

```
> SETEX id:id_1 100 'Lorem ipsum'
OK
> SADD tag:tag_1 id:id_1
(integer) 1
```

The key id:id_1 will expire as expected, but I don't see an efficient way to remove the corresponding member from the tag:tag_1 set. One way I came up with is using a cron job containing a script which would periodically remove expired keys from sets: adding all the tag names to another set, then iterating through all the tags, then through all the ids corresponding to each tag, and checking whether the corresponding key exists; if not, calling SREM. I don't think that would be efficient, and I would like to keep the tags as clean as possible, because the size of the sets will probably affect the performance of searching by multiple tags (SINTER). Is there a more "internal" way?
No, this isn’t possible (and not planned either). The recommended approach is to use an ordered set with score set to timestamp and then manually removing expired keys. To query for non-expired keys, you can use ZRANGEBYSCORE $now +inf, to delete expired keys, ZREMRANGEBYSCORE -inf $now will do the trick. In my application, I simply issue both commands every time I query the set. I also combine this with (long) expiration time on the set itself to eventually purge unused sets. This article walks through it in more detail.
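A sketch of that sorted-set approach applied to the tags from the question; the scores are illustrative Unix timestamps, where the first command registers a member with its expiry time, the second purges everything that expired before "now", and the third fetches only live members:

```
ZADD tag:tag_1 1582200000 id:id_1
ZREMRANGEBYSCORE tag:tag_1 -inf 1582199000
ZRANGEBYSCORE tag:tag_1 1582199000 +inf
```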
Redis
17,060,672
91
Redis can do everything that Memcached provides (LRU cache, item expiry, and now clustering in version 3.x+, currently in beta), or the same can be achieved with tools like twemproxy. The performance is similar too. Moreover, Redis adds persistence, thanks to which you need not do cache warming in case of a server restart. Reference to some old answers which compare Redis and Memcached, some of which favor Redis as a replacement for Memcached (if it is already present in the stack): Memcached vs. Redis? Is memcached a dinosaur in comparison to Redis? Redis and Memcache or just Redis? In spite of this, on studying the stacks of large web-scale companies like Instagram, Pinterest, Twitter etc., I found that they use both Memcached and Redis for different purposes, not using Redis for primary caching. The primary cache is still Memcached, and Redis is used for its data-structure-based logical caching. As of 2014, why is memcached still worth the pain of being added as an additional component into your stack, when you already have a Redis component which can do everything that memcached can? What are the favorable points that incline architects/engineers to still include memcached apart from the already existing Redis?

Update: For our platforms, we have completely discarded Memcached and use Redis for plain as well as logical caching requirements. Highly performant, flexible and reliable. Some example scenarios:

- Listing all cached keys matching a specific pattern, and reading or deleting their values: very easy in Redis, not (easily) doable in memcached.
- Storing a payload of more than 1 MB: easy to do in Redis, requires slab-size tweaks in memcached, which has performance side effects of its own.
- Easy snapshots of the current cache content.
- Redis Cluster is production-ready as well, along with language drivers, hence clustered deployment is easy too.
The main reason I see today as a use case for memcached over Redis is the superior memory efficiency you should be able to get with plain HTML fragment caching (or similar applications). If you need to store different fields of your objects in different memcached keys, then Redis hashes are going to be more memory efficient, but when you have a large number of key -> simple_string pairs, memcached should be able to give you more items per megabyte. Other things which are good points about memcached: it is a very simple piece of code, so if you just need the functionality it provides, it is a reasonable alternative, I guess, but I never used it in production. It is multi-threaded, so if you need to scale in a single-box setup, it is a good thing, and you need to talk with just one instance. I believe that Redis as a cache makes more and more sense as people move towards intelligent caching, or when they try to preserve the structure of the cached data via Redis data structures.

Comparison between Redis LRU and memcached LRU. Both memcached and Redis don't perform real LRU evictions, but only an approximation of that. Memcached eviction is per size class and depends on the implementation details of its slab allocator. For example, if you want to add an item which fits in a given size class, memcached will try to remove expired / not-recently-used items in that class, instead of making a global attempt to find the best candidate object regardless of its size. Redis instead tries to pick a good object as a candidate for eviction when the maxmemory limit is reached, looking at all the objects regardless of the size class, but it is only able to provide an approximately good object, not the best object with the greatest idle time. The way Redis does this is by sampling a few objects and picking the one which was idle (not accessed) for the longest time. Since Redis 3.0 (currently in beta) the algorithm was improved and also keeps a pool of good candidates across evictions, so the approximation was improved. In the Redis documentation you can find a description and graphs with details about how it works.

Why memcached has a better memory footprint than Redis for simple string -> string maps. Redis is a more complex piece of software, so values in Redis are stored in a way more similar to objects in a high-level programming language: they have an associated type, encoding, and reference counting for memory management. This makes Redis's internal structure good and manageable, but has an overhead compared to memcached, which only deals with strings.

When Redis starts to be more memory efficient. Redis is able to store small aggregate data types in a special memory-saving way. For example, a small Redis hash representing an object is stored internally not with a hash table, but as a unique binary blob. So setting multiple fields per object into a hash is more efficient than storing N separate keys in memcached. You can, actually, store an object in memcached as a single JSON (or binary-encoded) blob, but contrary to Redis, this will not allow you to fetch or update independent fields.

The advantage of Redis in the context of intelligent caching. Because of Redis data structures, the usual pattern used with memcached of destroying objects when the cache is invalidated, to recreate them from the DB later, is a primitive way of using Redis. For example, imagine you need to cache the latest N news posted to Hacker News in order to populate the "Newest" section of the site.
What you do with Redis is to take a list (capped to M items) with the newest news inserted. If you use another store for your data, and Redis as a cache, what you do is to populate both the views (Redis and the DB) when a new item is posted. There is no cache invalidation. However the application can always have logic so that if the Redis list is found to be empty, for example after a startup, the initial view can be re-created from the DB. By using intelligent caching it is possible to perform caching with Redis in a more efficient way compared to memcached, but not all the problems are suitable for this pattern. For example HTML fragments caching may not benefit from this technique.
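Going back to the Hacker News example, a sketch of that capped-list pattern in plain Redis commands (the key name and the 100-item cap are illustrative): LPUSH adds the newest story at the head, LTRIM keeps the list capped, and LRANGE serves a page without touching the database.

```
LPUSH news:latest "story 1234"
LTRIM news:latest 0 99
LRANGE news:latest 0 9
```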
Redis
23,601,622
90
Maybe I'm just blind, but I don't see an explicit set command in Redis for emptying an existing set (without emptying the entire database). For the time being, I'm doing a set difference on the set with itself and storing it back into itself:

```
redis> SMEMBERS metasyn
1) "foo"
2) "bar"
redis> SDIFFSTORE metasyn metasyn metasyn
(integer) 0
redis> SMEMBERS metasyn
(empty list or set)
```

But that looks a little silly... is there a better way to do this?
You could delete the set altogether with DEL:

```
DEL metasyn
```

From the redis console:

```
redis> SMEMBERS metasyn
1) "foo"
2) "bar"
redis> DEL metasyn
(integer) 1
redis> SMEMBERS metasyn
(empty list or set)
```
Redis
6,301,399
90
I've been looking at Redis. It looks very interesting. But from a practical perspective, in what cases would it be better to use Redis over MySQL?
Ignoring the whole NoSQL vs SQL debate, I think the best approach is to combine them. In other words, use MySQL for some parts of the system (complex lookups, transactions) and Redis for others (performance, counters etc.). In my experience, performance issues related to scalability (lots of users...) eventually force you to add some kind of cache to remove load from the MySQL server, and Redis/memcached is very good at that.
Redis
3,966,689
89
Does anyone know what the maximum value size you can store in redis? I want to use redis as a message queue with celery to store some small documents that need to be processed by a worker on another server, and I want to make sure the documents aren't going to be too big. I found one page with a reference to 1GB, but when I followed the link on the page for where they got that answer the link wasn't valid anymore. Here is the link: http://news.ycombinator.com/item?id=1182005
All string values are limited to 512 MiB. This is the size limit you probably care most about. EDIT: Because keys in Redis are strings, the maximum key size is 512 MiB. The maximum number of keys is 2^32 - 1 = 4,294,967,295. Values, on the other hand, can vary in size depending on their type. For aggregate data types (i.e. hash, list, set, and sorted set), the maximum value size is 512 MiB for each element, although the data structure itself can have up to 2^32 - 1 elements. https://redis.io/topics/data-types https://redis.io/topics/faq#what-is-the-maximum-number-of-keys-a-single-redis-instance-can-hold-and-what-is-the-max-number-of-elements-in-a-hash-list-set-sorted-set http://groups.google.com/group/redis-db/browse_thread/thread/1c7e33fbc98734b3?fwc=2
Redis
5,606,106
86
My overall question is: using Redis for Pub/Sub, what happens to messages when publishers push messages into a channel faster than subscribers are able to read them? For example, let's say I have: a simple publisher publishing messages at the rate of 2 msg/sec, and a simple subscriber reading messages at the rate of 1 msg/sec. My naive assumption would be that the subscriber would only see 50% of the messages published onto Redis. To test this theory, I wrote two scripts:

pub.py

```python
queue = redis.StrictRedis(host='localhost', port=6379, db=0)
channel = queue.pubsub()

for i in range(10):
    queue.publish("test", i)
    time.sleep(0.5)
```

sub.py

```python
r = redis.StrictRedis(host='localhost', port=6379, db=0)
p = r.pubsub()
p.subscribe('test')

while True:
    message = p.get_message()
    if message:
        print "Subscriber: %s" % message['data']
    time.sleep(1)
```

Results: When I ran sub.py first, immediately followed by pub.py, I found that sub.py actually displayed all the messages (1-10), one after another, with a delay of 1 second in between. My initial assumption was wrong; Redis is queuing messages. More tests needed. When I ran pub.py first, then waited 5 seconds before running sub.py, I found that sub.py only displayed the second half of the messages (5-10). I would have assumed this originally, but given my previous results, I would have thought messages were queued, which led me to the following conclusions...

Conclusions: The Redis server appears to queue messages for each client, for each channel. As long as a client is listening, it doesn't matter how fast it reads messages. As long as it's connected, messages will remain queued for that client, for that channel.

Remaining questions: Are these conclusions valid? If so, how long will client/channel messages remain queued? If so, is there a redis-cli info command to see how many messages are queued (for each client/channel)?
The tests are valid, but the conclusions are partially wrong. Redis does not queue anything on pub/sub channels. On the contrary, it tends to read the item from the publisher socket and write the item to all the subscriber sockets, ideally in the same iteration of the event loop. Nothing is kept in Redis data structures. Now, as you demonstrated, there is still some kind of buffering. It is due to the usage of TCP/IP sockets and Redis communication buffers. Sockets have buffers, and of course TCP comes with some flow-control mechanisms. This avoids the loss of data when buffers are full. If a subscriber is not fast enough, data will accumulate in its socket buffer. When it is full, TCP will block the communication and prevent Redis from pushing more information into the socket. Redis also manages output communication buffers (on top of the ones of the sockets) to generate data formatted with the Redis protocol. So when the output buffer of the socket is full, the event loop will mark the socket as non-writable, and data will remain in Redis's output buffers. Provided the TCP connection is still valid, data can remain in the buffers for a very long time. Now, both the socket and the Redis output buffer are bounded. If the subscribers are really too slow and a lot of data accumulates, Redis will ultimately close the connection with the subscribers (as a safety mechanism). By default, for pub/sub, Redis has a soft limit at 8 MB and a hard limit at 32 MB per connection buffer. If the output buffer reaches the hard limit, or if it remains between the soft and hard limits for more than 60 seconds, the connection with the slow subscriber will be closed. Knowing the number of pending messages is not easy. It can be evaluated by looking at the size of the pending information in the socket buffers and the Redis output buffers. For Redis output buffers, you can use the CLIENT LIST command (from redis-cli). The size of the output buffer is returned in the obl and oll fields (in bytes). For socket buffers, there is no Redis command. However, on Linux, it is possible to build a script to interpret the content of the /proc/net/tcp file. See an example here. This script probably needs to be adapted to your system.
Redis
27,745,842
85
I have a long text file of Redis commands that I need to execute using the Redis command line interface, e.g.:

```
DEL 9012012
DEL 1212
DEL 12214314
```

etc. I can't seem to figure out a way to enter the commands faster than one at a time. There are several hundred thousand lines, so I don't want to just pile them all into one DEL command; they also don't need to all run at once.
The following works for me with redis 2.4.7 on Mac:

```
./redis-cli < temp.redisCmds
```

Does that satisfy your requirements? Or are you looking to see if there's a way to programmatically do it faster?
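For several hundred thousand commands, redis-cli also has a mass-insertion mode that pipelines the whole file in one shot and reports errors at the end:

```
cat temp.redisCmds | redis-cli --pipe
```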
Redis
10,822,877
84
I have Redis set up with Ruby on an Ubuntu server, but I can't figure out how to access its log file. The tutorial says it should be here: /var/log/redis_6379.log. But I can't even find it under the /var/ folder.
Found it with:

```
sudo tail /var/log/redis/redis-server.log -n 100
```

So if the setup was more standard, that should be:

```
sudo tail /var/log/redis_6379.log -n 100
```

This outputs the last 100 lines of the file. Where your log file is located is in your configs, which you can access with:

```
redis-cli CONFIG GET *
```

The log file may not always be shown using the above. In that case use:

```
tail -f `less /etc/redis/redis.conf | grep logfile|cut -d\  -f2`
```
Redis
16,337,107
83
I have downloaded the redis-2.6.16.tar.gz file and installed it successfully. After installing, I ran src/redis-server and it worked fine. But I don't want to manually run src/redis-server every time; rather, I want redis-server running continuously as a background process. So far, after installing, I did the following: I opened redis.conf in vim and changed it to:

```
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
```

But I found the same result. What mistake did I make? After Redis runs in the background, I will also run Juggernaut as a background process with the following command:

```
nohup node server.js
```

But I am not able to make Redis run in the background. Please provide some solution.
Since Redis 2.6 it is possible to pass Redis configuration parameters using the command line directly. This is very useful for testing purposes:

```
redis-server --daemonize yes
```

Check whether the process started or not:

```
ps aux | grep redis-server
```
Redis
24,221,449
80
I am trying to build a small site with server-push functionality on the Flask micro web framework, but I do not know if there is a framework I can use directly. I used Juggernaut, but it seems not to work with redis-py in its current version, and Juggernaut has been deprecated recently. Does anyone have a suggestion for my case?
Have a look at Server-Sent Events. Server-Sent Events is a browser API that lets you keep open a socket to your server, subscribing to a stream of updates. For more information, read Alex MacCaw's (author of Juggernaut) post on why he killed Juggernaut and why the simpler Server-Sent Events are in many cases a better tool for the job than WebSockets.

The protocol is really easy. Just add the mimetype text/event-stream to your response. The browser will keep the connection open and listen for updates. An event sent from the server is a line of text starting with data: and a following newline:

```
data: this is a simple message
<blank line>
```

If you want to exchange structured data, just dump your data as JSON and send the JSON over the wire. An advantage is that you can use SSE in Flask without the need for an extra server. There is a simple chat application example on GitHub which uses Redis as a pub/sub backend:

```python
def event_stream():
    pubsub = red.pubsub()
    pubsub.subscribe('chat')
    for message in pubsub.listen():
        print message
        yield 'data: %s\n\n' % message['data']

@app.route('/post', methods=['POST'])
def post():
    message = flask.request.form['message']
    user = flask.session.get('user', 'anonymous')
    now = datetime.datetime.now().replace(microsecond=0).time()
    red.publish('chat', u'[%s] %s: %s' % (now.isoformat(), user, message))

@app.route('/stream')
def stream():
    return flask.Response(event_stream(), mimetype="text/event-stream")
```

You do not need to use gunicorn to run the example app. Just make sure to use threading when running the app, because otherwise the SSE connection will block your development server:

```python
if __name__ == '__main__':
    app.debug = True
    app.run(threaded=True)
```

On the client side you just need a JavaScript handler function which will be called when a new message is pushed from the server:

```javascript
var source = new EventSource('/stream');
source.onmessage = function (event) {
    alert(event.data);
};
```

Server-Sent Events are supported by recent Firefox, Chrome and Safari browsers. Internet Explorer does not yet support Server-Sent Events, but is expected to support them in version 10. There are two recommended polyfills to support older browsers: EventSource.js and jquery.eventsource.
Redis
12,232,304
80
My rough understanding is that Redis is better if you need the in-memory key-value store feature, however I am not sure how that has anything to do with distributing tasks? Does that mean we should use Redis as a message broker IF we are already using it for something else?
I've used both recently (2017-2018), and they are both super stable with Celery 4, so your choice can be based on the details of your hosting setup. If you must use Celery version 2 or version 3, go with RabbitMQ. Otherwise:

- If you are using Redis for any other reason, go with Redis
- If you are hosting at AWS, go with Redis so that you can use a managed Redis as a service
- If you hate complicated installs, go with Redis
- If you already have RabbitMQ installed, stay with RabbitMQ

In the past, I would have recommended RabbitMQ because it was more stable and easier to set up with Celery than Redis, but I don't believe that's true any more.

Update 2019: AWS now has a managed service that is equivalent to RabbitMQ called Amazon MQ, which could reduce the headache of running this as a service in production. Please comment below if you have any experience with this and Celery.
Redis
43,264,838
77
What possible reasons can prevent Sidekiq from processing jobs in the queue? The queue is full. The log file sidekiq.log indicates no activity at all. Thus the queue is full but the log is empty, and Sidekiq does not seem to process items. There seems to be no worker processing jobs. Restarting Redis or flushing it with FLUSHALL or FLUSHDB has no effect. Sidekiq has been started with bundle exec sidekiq -L log/sidekiq.log and produces the following log file:

```
2013-05-30..Booting Sidekiq 2.12.0 using redis://localhost:6379/0 with options {}
2013-05-30..Running in ruby 1.9.3p374 (2013-01-15 revision 38858) [i686-linux]
2013-05-30..See LICENSE and the LGPL-3.0 for licensing details.
2013-05-30..Starting processing, hit Ctrl-C to stop
```

How can you find out what went wrong? Are there any hidden log files?
The reason in our case: Sidekiq may be looking at the wrong queue. By default Sidekiq uses a queue named "default". We used two different queue names and defined them in config/sidekiq.yml:

```yaml
# configuration file for Sidekiq
:queues:
  - queue_name_1
  - queue_name_2
```

The problem is that this config file is not automatically loaded by default in your development environment (unlike database.yml or thinking_sphinx.yml, for instance) by a simple bundle exec sidekiq command. Thus we wrote our jobs into two specific queues, and Sidekiq was waiting for jobs in a third queue (the default one). You have to pass the path to the config file as a parameter through the -C or --config option:

```
bundle exec sidekiq -C ./config/sidekiq.yml
```

or you can pass the queue names directly (no spaces allowed after the comma):

```
bundle exec sidekiq -q queue_name_1,queue_name_2
```

To find out the problem, it is helpful to pass the option -v or --verbose at the command line, too, or to use :verbose: true in the sidekiq.yml file. Everything which is defined in a config file is of course useless if the config file is not loaded, so make sure you are using the right config file first.
Redis
16,835,963
77
Suppose I do this in Redis at 13:30 on 20 Feb 2020:

```
> set foo "bar spam"
OK
```

I want to get the time of creation of foo. Is there something like:

```
> gettime foo
13:30 20 Feb 2020
```

?
Redis doesn't store this information. You could use a separate key:

```
MULTI
SET foo "bar spam"
SET foo:time "13:30 20 Feb 2020"
EXEC

GET foo:time
```
Redis
9,917,331
76
When we use a transaction in Redis, it basically pipelines all the commands within the transaction. And when EXEC is fired, all the commands are executed together, thus always maintaining the atomicity of multiple commands. Isn't this the same as pipelining? How are pipelining and transactions different? Also, why does the single-threaded nature of Redis not suffice? Why do we explicitly need pipelining/transactions?
Pipelining is primarily a network optimization. It essentially means the client buffers up a bunch of commands and ships them to the server in one go. The commands are not guaranteed to be executed in a transaction. The benefit here is saving network round trip time for every command. Redis is single threaded so an individual command is always atomic, but two given commands from different clients can execute in sequence, alternating between them for example. Multi/exec, however, ensures no other clients are executing commands in between the commands in the multi/exec sequence.
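To make the distinction concrete, here is a sketch with the redis-py client, whose pipeline object covers both cases via its transaction flag:

```python
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

# Plain pipelining: one network round trip, no MULTI/EXEC wrapper,
# so another client's commands may interleave between these two.
pipe = r.pipeline(transaction=False)
pipe.incr('counter')
pipe.get('counter')
print(pipe.execute())

# Transaction: the same commands wrapped in MULTI/EXEC, so they
# execute as one uninterrupted block on the server.
tx = r.pipeline(transaction=True)
tx.incr('counter')
tx.get('counter')
print(tx.execute())
```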
Redis
29,327,544
74
For lists I can do the operation: LLEN KeyName and it will return the size of a list in Redis. What is the equivalent command for sets? I can't seem to find this in any documentation.
You are looking for the SCARD command:

```
SCARD key
```

Returns the set cardinality (number of elements) of the set stored at key. Return value is an Integer reply: the cardinality (number of elements) of the set, or 0 if the key does not exist. Time complexity: O(1). You can view all of the set commands on the documentation webpage.
Redis
21,792,227
74
I want to use Redis as a database, not a cache. From my (limited) understanding, Redis is an in-memory datastore. What are the risks of using Redis, and how can I mitigate them?
You can use Redis as an authoritative store in a number of different ways:

- Turn on AOF (Append-only File store); see the AOF docs. This will keep a log of all Redis commands made against your dataset in real time.
- Run Redis using Master-Slave replication; see the replication docs. This will allow you to provide high availability if one of your instances fails.
- If you're running on something like EC2, you can EBS-back your Redis partition to provide another layer of protection against instance failure.
- On the horizon is Redis Cluster; this is specifically designed as a way to run Redis in a way that should help with HA and scalability. However, this won't appear for at least another six months or so.
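For the first option, the relevant redis.conf directives look like this; everysec is the commonly recommended fsync policy, trading at most about one second of writes on a crash:

```
appendonly yes
appendfsync everysec
```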
Redis
4,718,832
73
I'm using an ORM called Ohm in Ruby that works on top of Redis, and I'm curious to find out how the data is actually stored. I was wondering if there is a way to list all the keys/values in a Redis db. Update: a note for others trying this out using redis-cli, use this:

```
$ redis-cli keys *
(press * followed by Ctrl-D)
... (prints a list of keys and exits)
$
```

Thanks @antirez and @hellvinz!
You can explore the Redis dataset using the redis-cli tool included in the Redis distribution. Just start the tool without arguments, then type commands to explore the dataset. For instance, KEYS will list all the keys matching a glob-style pattern; with:

```
keys *
```

you'll see all the keys available. Then you can use the TYPE command to check what type a given key is; if it's a list, you can retrieve the elements inside using LRANGE mykey 0 -1. If it is a set, you'll use SMEMBERS mykey instead, and so forth. Check the Redis documentation for a list of all the available commands and how they work.
Redis
3,798,874
73
Tearing my hair out with this one... has anyone managed to scale Socket.IO to multiple "worker" processes spawned by Node.js's cluster module? Let's say I have the following on four worker processes (pseudo):

```javascript
// on the server
var express = require('express');
var server = express();
var socket = require('socket.io');
var io = socket.listen(server);

// socket.io
io.set('store', new socket.RedisStore);

// set-up connections...
io.sockets.on('connection', function(socket) {
    socket.on('join', function(rooms) {
        rooms.forEach(function(room) {
            socket.join(room);
        });
    });

    socket.on('leave', function(rooms) {
        rooms.forEach(function(room) {
            socket.leave(room);
        });
    });
});

// Emit a message every second
function send() {
    io.sockets.in('room').emit('data', 'howdy');
}

setInterval(send, 1000);
```

And on the browser...

```javascript
// on the client
socket = io.connect();
socket.emit('join', ['room']);
socket.on('data', function(data){
    console.log(data);
});
```

The problem: every second, I'm receiving four messages, due to four separate worker processes sending the messages. How do I ensure the message is only sent once?
Edit: In Socket.IO 1.0+, rather than setting a store with multiple Redis clients, a simpler Redis adapter module can now be used:

```javascript
var io = require('socket.io')(3000);
var redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));
```

The example shown below would look more like this:

```javascript
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // we create a HTTP server, but we do not use listen
  // that way, we have a socket.io server that doesn't accept connections
  var server = require('http').createServer();
  var io = require('socket.io').listen(server);
  var redis = require('socket.io-redis');

  io.adapter(redis({ host: 'localhost', port: 6379 }));

  setInterval(function() {
    // all workers will receive this in Redis, and emit
    io.emit('data', 'payload');
  }, 1000);

  for (var i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
}

if (cluster.isWorker) {
  var express = require('express');
  var app = express();

  var http = require('http');
  var server = http.createServer(app);
  var io = require('socket.io').listen(server);
  var redis = require('socket.io-redis');

  io.adapter(redis({ host: 'localhost', port: 6379 }));

  io.on('connection', function(socket) {
    socket.emit('data', 'connected to worker: ' + cluster.worker.id);
  });

  app.listen(80);
}
```

If you have a master node that needs to publish to other Socket.IO processes, but doesn't accept socket connections itself, use socket.io-emitter instead of socket.io-redis.

If you are having trouble scaling, run your Node applications with DEBUG=*. Socket.IO now implements debug, which will also print out Redis adapter debug messages. Example output:

```
socket.io:server initializing namespace / +0ms
socket.io:server creating engine.io instance with opts {"path":"/socket.io"} +2ms
socket.io:server attaching client serving req handler +2ms
socket.io-parser encoding packet {"type":2,"data":["event","payload"],"nsp":"/"} +0ms
socket.io-parser encoded {"type":2,"data":["event","payload"],"nsp":"/"} as 2["event","payload"] +1ms
socket.io-redis ignore same uid +0ms
```

If both your master and child processes display the same parser messages, then your application is properly scaling.

There shouldn't be a problem with your setup if you are emitting from a single worker. What you're doing is emitting from all four workers, and due to Redis publish/subscribe, the messages aren't duplicated, but written four times, as you asked the application to do. Here's a simple diagram of what Redis does:

```
Client  <--  Worker 1 emit -->  Redis
Client  <--  Worker 2  <----------|
Client  <--  Worker 3  <----------|
Client  <--  Worker 4  <----------|
```

As you can see, when you emit from a worker, it will publish the emit to Redis, and it will be mirrored from other workers, which have subscribed to the Redis database. This also means you can use multiple socket servers connected to the same instance, and an emit on one server will be fired on all connected servers.

With cluster, when a client connects, it will connect to one of your four workers, not all four. That also means anything you emit from that worker will only be shown once to the client. So yes, the application is scaling, but the way you're doing it, you're emitting from all four workers, and the Redis database is making it as if you were calling it four times on a single worker. If a client actually connected to all four of your socket instances, they'd be receiving sixteen messages a second, not four.

The type of socket handling depends on the type of application you're going to have. If you're going to handle clients individually, then you should have no problem, because the connection event will only fire for one worker per client. If you need a global "heartbeat", then you could have a socket handler in your master process. Since workers die when the master process dies, you should offset the connection load off of the master process, and let the children handle connections. Here's an example:

```javascript
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // we create a HTTP server, but we do not use listen
  // that way, we have a socket.io server that doesn't accept connections
  var server = require('http').createServer();
  var io = require('socket.io').listen(server);

  var RedisStore = require('socket.io/lib/stores/redis');
  var redis = require('socket.io/node_modules/redis');

  io.set('store', new RedisStore({
    redisPub: redis.createClient(),
    redisSub: redis.createClient(),
    redisClient: redis.createClient()
  }));

  setInterval(function() {
    // all workers will receive this in Redis, and emit
    io.sockets.emit('data', 'payload');
  }, 1000);

  for (var i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
}

if (cluster.isWorker) {
  var express = require('express');
  var app = express();

  var http = require('http');
  var server = http.createServer(app);
  var io = require('socket.io').listen(server);

  var RedisStore = require('socket.io/lib/stores/redis');
  var redis = require('socket.io/node_modules/redis');

  io.set('store', new RedisStore({
    redisPub: redis.createClient(),
    redisSub: redis.createClient(),
    redisClient: redis.createClient()
  }));

  io.sockets.on('connection', function(socket) {
    socket.emit('data', 'connected to worker: ' + cluster.worker.id);
  });

  app.listen(80);
}
```

In the example, there are five Socket.IO instances: one being the master, and four being the children. The master server never calls listen(), so there is no connection overhead on that process. However, if you call an emit on the master process, it will be published to Redis, and the four worker processes will perform the emit on their clients. This offsets connection load to workers, and if a worker were to die, your main application logic would be untouched in the master.

Note that with Redis, all emits, even in a namespace or room, will be processed by other worker processes as if you triggered the emit from that process. In other words, if you have two Socket.IO instances with one Redis instance, calling emit() on a socket in the first worker will send the data to its clients, while worker two will do the same as if you called the emit from that worker.
Redis
18,310,635
72
Here is the thing - I want to store native JS (node.js) objects (flash sockets references) in redis under a certain key. When I do that with a simple client.set() it's stored as a string. When I try to get the value I get [object Object] - just a string. Any chance to get this working? Here's my code: addSocket : function(sid, socket) { client.set(sid, socket); }, getSocket : function(sid) { client.get(sid, function(err, reply) { // can't get an object here. All I get is a useless string }); },
Since the socket is of type Object, you need to convert the object to a string before storing it, and convert it back to an object when retrieving it. You can use JSON.stringify(socket) to convert to a string and JSON.parse(socketstr) to convert back to an object. Edit: Since the release of version 2.0.0, we are able to store objects as hashes in Redis. client.hmset("hosts", "mjr", "1", "another", "23", "home", "1234"); client.hgetall("hosts", function (err, obj) { console.dir(obj); }); https://redis.io/commands/hset https://github.com/NodeRedis/node_redis
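The same serialize/deserialize pattern, sketched in Python with redis-py for comparison (the key names are illustrative, and note that only plain data survives this round trip; a live socket handle itself is not meaningfully serializable):

import json
import redis

r = redis.Redis(decode_responses=True)

data = {'age': 25, 'weight': 75}
r.set('sid:1234', json.dumps(data))       # store the object as a JSON string
restored = json.loads(r.get('sid:1234'))  # convert it back to a dict

# for flat objects you can use a hash instead and skip JSON entirely
# (the mapping argument needs redis-py 3.5+)
r.hset('hosts', mapping={'mjr': 1, 'another': 23, 'home': 1234})
print(r.hgetall('hosts'))  # {'mjr': '1', 'another': '23', 'home': '1234'}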
Redis
8,694,871
72
I'm using the regular redis package in order to connect my Python code to my Redis server. As part of my code I check whether a string object exists among my Redis server's keys. string = 'abcde' if string in redis.keys(): do something.. For some reason, redis.keys() returns a list of bytes objects, such as [b'abcde'], while my string is, of course, a str object. I already tried to set charset, encoding and decode_responses in my redis client constructor, but it did not help. My goal is to store the data as str up front, not to iterate over the keys list and convert each element with str() while checking it. Thanks ahead
You can configure the Redis client to automatically convert responses from bytes to strings using the decode_responses argument to the StrictRedis constructor: r = redis.StrictRedis('localhost', 6379, charset="utf-8", decode_responses=True) Make sure you are consistent with the charset option between clients. Note You would be better off using the EXISTS command and restructuring your code like: string = 'abcde' if redis.exists(string): do something.. The KEYS operation returns every key in your Redis database and will cause serious performance degradation in production. As a side effect you avoid having to deal with the binary to string conversion.
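A minimal sketch of the difference (connection details are illustrative):

import redis

raw = redis.StrictRedis('localhost', 6379)
raw.set('abcde', 1)
print(raw.keys('abcde'))    # [b'abcde'] -> bytes

decoded = redis.StrictRedis('localhost', 6379, charset='utf-8',
                            decode_responses=True)
print(decoded.keys('abcde'))    # ['abcde'] -> str
print(decoded.exists('abcde'))  # 1, without pulling the whole keyspace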
Redis
44,026,515
70
Let's say I have a hash of a hash, e.g. $data = { 'harry' : { 'age' : 25, 'weight' : 75, }, 'sally' : { 'age' : 25, 'weight' : 75, } } What would be the 'usual' way to store such a data structure (or would you not?) Would you be able to directly get a value (e.g. get harry : age)? Once stored, could you directly change the value of a sub-key (e.g. sally : weight = 100)?
What would be the 'usual' way to store such a data structure (or would you not?) For example, harry and sally would each be stored in separate hashes where fields represent their properties like age and weight. Then a set structure would hold all the members (harry, sally, ...) which you have stored in redis. Would you be able to directly get a value (e.g. get harry : age)? Yes, see HGET or HMGET or HGETALL. Once stored, could you directly change the value of a sub-key (e.g. sally : weight = 100)? Yes, see HSET.
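A sketch of that layout in Python with redis-py (key names are illustrative; the mapping argument needs redis-py 3.5+):

import redis

r = redis.Redis(decode_responses=True)

# one hash per member, holding its properties
r.hset('user:harry', mapping={'age': 25, 'weight': 75})
r.hset('user:sally', mapping={'age': 25, 'weight': 75})

# a set holding all the members stored so far
r.sadd('users', 'harry', 'sally')

print(r.hget('user:harry', 'age'))   # '25'  (HGET)
r.hset('user:sally', 'weight', 100)  # change one sub-key in place (HSET)

for name in r.smembers('users'):
    print(name, r.hgetall('user:' + name))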
Redis
8,810,036
70
My Redis container is defined as a standard image in my docker-compose.yml: redis: image: redis ports: - "6379" I guess it's using standard settings like binding Redis to localhost. I need to bind it to 0.0.0.0. Is there any way to add a local redis.conf file to change the binding and let docker-compose use it?
Yes. Mount the config into the image with a volume and modify the command to call it e.g: redis: image: redis command: redis-server /usr/local/etc/redis/redis.conf volumes: - ./redis.conf:/usr/local/etc/redis/redis.conf ports: - "6379" Alternatively, create a new image based on the redis image with your conf file copied in. Full instructions are at: https://registry.hub.docker.com/_/redis/ However, the redis image does bind to 0.0.0.0 by default. To access it from the host, you need to use the port that Docker has mapped to the host for you which you find by using docker ps or the docker port command, you can then access it at localhost:32678 where 32678 is the mapped port. Alternatively, you can specify a specific port to map to in the docker-compose.yml. As you seem to be new to Docker, this might all make a bit more sense if you start by using raw Docker commands rather than starting with Compose.
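For reference, a minimal redis.conf to mount this way might look like the following (all standard directives; the values are only illustrative). Inside a container you normally keep bind 0.0.0.0 so the server stays reachable from other containers and from the mapped host port:

# redis.conf
bind 0.0.0.0
port 6379
maxmemory 64mb
maxmemory-policy allkeys-lru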
Redis
30,547,274
69
I'm writing a django management command to handle some of our redis caching. Basically, I need to choose all keys that conform to a certain pattern (for example: "prefix:*") and delete them. I know I can use the cli to do that: redis-cli KEYS "prefix:*" | xargs redis-cli DEL But I need to do this from within the app. So I need to use the python binding (I'm using py-redis). I have tried feeding a list into delete, but it fails: from common.redis_client import get_redis_client cache = get_redis_client() x = cache.keys('prefix:*') x == ['prefix:key1','prefix:key2'] # True # And now cache.delete(x) # returns 0 . nothing is deleted I know I can iterate over x: for key in x: cache.delete(key) But that would lose redis's awesome speed and misuse its capabilities. Is there a pythonic solution with py-redis, without iteration and/or the cli? Thanks!
Use SCAN iterators: https://pypi.python.org/pypi/redis for key in r.scan_iter("prefix:*"): r.delete(key)
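For large keyspaces you can also cut round trips by batching the deletes through a pipeline; a sketch (the pattern and batch size are illustrative, and on Redis 4.0+ you could swap delete for unlink to reclaim memory asynchronously):

import redis

r = redis.Redis()
pipe = r.pipeline()
for i, key in enumerate(r.scan_iter('prefix:*', count=1000), start=1):
    pipe.delete(key)
    if i % 1000 == 0:
        pipe.execute()  # flush a batch of DELETEs in one round trip
pipe.execute()          # flush whatever remains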
Redis
21,975,228
69
I found the configuration instructions in this; they just give the command to start the server with a specific config file: ./redis-server <path>/redis.conf But I have no idea how to write the config file. So I found the default config file in this, but I still don't understand how to set max memory. Do I just add this line to the config file? maxmemory 2mb By the way, I want to know what the default memory limit is, and I want to set the memory to 2GB; how do I do that? Then, I added this line to the redis config to set maxmemory to 40GB: maxmemory 41943040 And I used this command in redis-cli: config get maxmemory It shows me: 127.0.0.1:6379> config get maxmemory 1) "maxmemory" 2) "41943040" But my Java program throws an exception like this when the number of keys is about 200000: Exception in thread "Thread-228" redis.clients.jedis.exceptions.JedisDataException: OOM command not allowed when used memory > 'maxmemory'. at redis.clients.jedis.Protocol.processError(Protocol.java:117) at redis.clients.jedis.Protocol.process(Protocol.java:151) at redis.clients.jedis.Protocol.read(Protocol.java:205) at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:297) at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:196) at redis.clients.jedis.Jedis.hmset(Jedis.java:644) at cn.ict.dt2redis.analyser.AbstractAnalyser.pushOne(AbstractAnalyser.java:21) at cn.ict.dt2redis.analyser.BatchAbstractAnalyser.run(BatchAbstractAnalyser.java:16) at java.lang.Thread.run(Thread.java:722) I have no idea about it. Did I succeed in setting max memory to 40 GB? How do I do it? Please give me some code in detail.
Yes - to set the memory limit just uncomment the maxmemory line in the .conf file. The default is 0, which means unlimited (until the operating system runs out of RAM and kills the process - I recommend always setting maxmemory to a sane value). Updated: as @Eric Uldall mentioned in the comments, a CONFIG SET maxmemory <sane value>, followed by a CONFIG REWRITE should also do the trick. This will modify your redis.conf to preserve the change in case of restart.
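One detail worth checking in the question above: maxmemory takes plain bytes unless you append a unit suffix, and 41943040 bytes is 40*1024*1024, i.e. 40 MB rather than 40 GB, which would explain the OOM error once the dataset grew. A sketch of setting it at runtime (redis-cli CONFIG SET behaves the same; shown here with redis-py):

import redis

r = redis.Redis()
r.config_set('maxmemory', '40gb')  # unit suffixes like 2mb / 40gb are accepted
r.config_set('maxmemory-policy', 'allkeys-lru')  # optional: evict keys instead of raising OOM
print(r.config_get('maxmemory'))   # the value is reported back in bytes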
Redis
33,115,325
67
I store my data in redis. In one row I store its GUID, creation day, and size. So I define the following: var dbclient1 = db.createClient(); dbclient1.hmset("doc:3743-da23-dcdf-3213", "date", "2015-09-06 00:00:01", "size", "203") dbclient1.zadd("cache", 32131, "37463-da23-dcdf-3213") I wish to view all my files in my db. So I try the following: dbclient1.hgetall("doc:*", function (err, res){ console.log(err) console.log(res) }) but res is undefined. How can I do it?
HGETALL returns all fields and values of the hash stored at key; you can't specify a mask: http://redis.io/commands/hgetall You can call KEYS doc:* to get a list of all keys matching your criteria and then get all values in a loop. However, an important note per the docs: Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.
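A SCAN-based sketch with redis-py, using the key layout from the question (node_redis exposes the same SCAN command if you want to stay in JavaScript):

import redis

r = redis.Redis(decode_responses=True)
for key in r.scan_iter('doc:*'):
    print(key, r.hgetall(key))  # one HGETALL per matching key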
Redis
30,728,973
65
From what I understand a virtual machine falls into two categories either "system virtual machine" or a "process virtual machine". It's kind of fuzzy to me where BEAM lies. Is there another kind of virtual machine I am not aware of?
The Erlang VM runs as one OS process. By default it runs one OS thread per core to achieve maximum utilisation of the machine. The number of threads and on which cores they run can be set when the VM is started. Erlang processes are implemented entirely by the Erlang VM and have no connection to either OS processes or OS threads. So even if you are running an Erlang system of over one million processes it is still only one OS process and one thread per core. So in this sense the Erlang VM is a "process virtual machine" while the Erlang system itself very much behaves like an OS and Erlang processes have very similar properties to OS processes, for example isolation. There is actually an Erlang VM, based on the BEAM, which runs on bare metal and is in fact an OS in its own right, see Erlang on Xen. By the way, it is perfectly possible to have systems running millions of Erlang processes and it is actually done in some products, for example WhatsApp. We were definitely thinking very much about OSes when we designed the basic Erlang environment.
Beam
16,779,162
131
What are some fundamental feature/architectural differences between the BEAM and the JVM? Yes, I know: one was originally built around Java and the other built around Erlang. I understand the JVM (somewhat) and want to compare their structures. For example, I know that the JVM has one global GC and BEAM has one per process.
First of all, Beam is a register machine, not a stack machine. Like the WAM for Prolog, it uses "X-registers" which are normal registers (implemented as an array in C), and "Y-registers" which are names for slots in the local function activation record (the "call frame") on the stack. There are no stack manipulation instructions. Second, there are instructions for quickly allocating a few more words of heap memory, for initializing tuples and other data structures on the heap, for selecting elements of tuples, etc. The JVM is focused on objects, and has a 'new' operation that hides the details of memory allocation and basic initialization. The BEAM has an instruction for decrementing the "reduction counter" for the process and deciding whether it is time to yield to let another process run. The JVM, on the other hand, has synchronization instructions for threads. One important difference is that BEAM has tail call instructions, which the JVM lacks. Finally, for both the BEAM and JVM, the instruction set used in object files is really only a transport format. The BEAM emulator rewrites the instructions from the file into an internal version with many optimized special-case instructions (that can change from one release to another). Alternatively, you can compile to native code. Most JVMs do the same thing.
Beam
2,276,370
22
I’ve begun studying Erlang and find the BEAM runtime environment fascinating. It’s commonly stated that in Erlang, processes belong to the language rather than the OS (meaning the runtime, meaning BEAM in this case). These are the lightweight, “green processes” that Erlang is getting famous for. It’s further stated (on page 5 of this paper) that BEAM uses one (1) OS thread per CPU core for scheduling and another OS thread for i/o. So I wonder: what thread do the CPU cycles needed to actually execute the Erlang code come from? Further, if I’m running on a dual core machine I would expect -- based on what I’ve read so far -- to see three (3) threads running under the BEAM process: two schedulers (one for each core) and one i/o thread. But I see 10. Sometimes 11. Sometimes it starts at 13 and, like high-quality amplifiers, goes to 11. I’m confused. Any insight will be appreciated.
Following @user425720's advice, I asked my question on the erlang-questions LISTSERV. It's also available as a Google Group. Kresten Krab Thorup of Trifork answered me almost at once. My thanks go out to Kresten. Here is his answer. (Parentheticals and emphasis are mine.) Here is, AFAIK, the basic scenario: Erlang code will be run in as many "green threads" as there are processes; the process limit is controlled by the +P (command line) flag. The green threads are mapped onto S threads, where S is the number of cores/CPUs. The fact that these threads are also called schedulers can seem somewhat confusing, but from the VM's point of view they are. From the developer's point of view, they are the threads that run your erlang code. The number S can be controlled with the +S option to the erl command line. In addition hereto, there are a number of so-called "Async Threads". That's a thread pool which is used by I/O processes called linked-in drivers, to react to select / poll etc. The number of async threads is dynamic, but limited by the +A flag. So, the 11 threads you see on a dual-core may be 2 schedulers, and 9 async threads. For instance. Read more about the flags here.
Beam
3,663,823
18
I have a temporary situation where beam files compiled on one node are executed on another node. Are the beam files portable? How close do the versions of the Erlang distributions need to be?
Beam files are portable across nodes, as they are bytecode that is interpreted by the Erlang VM, in the same way that Java works. The exception is if they're compiled for native optimization (+native), in which case they're obviously not very portable, other than possibly between Windows machines. (edit two years later: also machines that have identical hardware and software setups, as you would possibly find in telecom uses of Erlang) Version-wise, it's obvious that you shouldn't use features that the oldest version doesn't support. As long as the features are supported, it should work even if the version gap is big. Note also that some modules may have been experimental in earlier versions, and so their functions may have had slightly different results.
Beam
2,255,658
15
What do the letters B. E. A. and M. stand for? I recall seeing an explanation of the acronym "BEAM", but I have not managed to find it again. It comes up in error codes: ➜ gentoo iex Erlang/OTP 17 [erts-6.4.1] [source] [64-bit] [smp:8:8] [async-threads:10] [kernel-poll:false] Interactive Elixir (1.0.4) - press Ctrl+C to exit (type h() ENTER for help) iex(1)> import Math 08:05:02.839 [error] Loading of /var/opt/proj/elx/ubuntu/Elixir.Math.beam failed: :badfile ** (CompileError) iex:1: module Math is not loaded and could not be found 08:05:02.846 [error] beam/beam_load.c(1104): Error loading module 'Elixir.Math': non-ascii garbage '78705400' instead of chunk type id (elixir) src/elixir_exp.erl:123: :elixir_exp.expand/2 iex(1)> So, it looks like there's some sort of problem with a .beam file, probably due to my use of vi. (Note to novice Elixir programmers: Do not edit .beam files, it is painful.) This question explains what the BEAM virtual machine is, but not what the letters stand for. And it seems difficult to find out much about the etymology quickly or to the point on Erlang Central. Supposedly BEAM is the secret sauce of both Erlang and Elixir.
It stands for "Bogdan/Björn's Erlang Abstract Machine" - it is just the name of the VM, much like JVM (Java Virtual Machine). Almost everyone uses "the new BEAM", where BEAM stands for Bogdan/Björn's Erlang Abstract Machine. This is the virtual machine supported in the commercial release. http://www.erlang.org/faq/implementations.html The name probably finds its roots in the Warren Abstract Machine - an abstract instruction set for Prolog which you can read about at: http://en.wikipedia.org/wiki/Warren_Abstract_Machine The WAM influenced JAM (Joe Abstract Machine - named after Joe Armstrong) which was the precursor to BEAM. You can read more in "The development of Erlang" article on the Erlang website.
Beam
30,670,087
14
I know that Erlang has arbitrary-size integers, but is there a max limit in any of the standard implementations? If so, what?
Erlang uses bignum arithmetic, and integers in Erlang are limited only by the available memory on the machine. Virtually, there is no limit on how large an integer can be in Erlang. Take a look at this document: http://erlang.org/doc/efficiency_guide/advanced.html It has more detailed explanations regarding limits.
Beam
39,268,564
12
I've found that Elixir programs can run C code either via NIFs (native implemented functions) or via OS-level ports. Having read those and similar links, I'm not a hundred percent clear on when to use one or the other method (or something else entirely?), and feel it would be good to have a direct comparison available, for myself and other novices. Can anyone provide one?
What are ports? Ports are basically separate programs which are run separately from the Erlang VM. The Erlang VM communicates with the running port over standard input/output, and the resulting port lives behind an Erlang process that owns it and can facilitate communication between the port and the rest of your Erlang or Elixir application. Ports are "safe" in the sense that if the port crashes, it doesn't bring down the whole Erlang VM. Porcelain might be of interest as a possible improvement and expansion over what's already provided in the Port module. System.cmd/3 also uses ports in its underlying implementation. What are NIFs? Native implemented functions or "NIFs" are functions defined in what are essentially shared libraries / DLLs loaded by the Erlang VM and written using some language which exposes a C-compatible ABI. NIFs are more efficient than ports (since they don't have to communicate over STDIN/STDOUT) and are simpler in many respects (since you don't have to deal with encoding and decoding data between your Elixir and non-Elixir codebases), but they're also much less safe; a NIF can crash the Erlang VM, and a long-running NIF can potentially lock up the Erlang VM (since the scheduler can't reason about native code). What are port drivers? Port drivers are kind of an in-between approach to integrating external code with an Erlang or Elixir codebase. Like NIFs, they're loaded into the Erlang VM, and a port driver can therefore crash or hang the whole VM. Like ports, they behave similarly to Erlang processes. When should I use a port? You want your external code to behave like an ordinary Erlang process (at least enough for such a process to wrap it and send/receive messages on behalf of your external code) You want the Erlang VM to be able to survive your external code crashing You want to implement a long-running task in your external code You want to write your external code in a language that does not support C-compatible FFI (or otherwise don't want to deal with your language's FFI facilities) When should I use a NIF? You want your external code to behave like a collection of ordinary Erlang functions (particularly if you want to define an Erlang/Elixir module that exports functions implemented in native-compiled code) You want to avoid any potential performance hits / overhead from communicating via standard input/output and/or you want to avoid having to translate between Erlang terms and something your external code understands You are reasonably confident that the things your external code is doing are neither long-running nor likely to crash (including, in the latter case, if you're writing your NIFs in something like Rust; see also: Rustler), or... You are reasonably confident that crashing or hanging the Erlang VM is acceptable for your use case (e.g. your code is both distributed and able to survive the sudden loss of an Erlang node, or you're writing a desktop application and an application-wide crash is not a big deal aside from being an inconvenience to users) When should I use port drivers? You want your external code to behave like an Erlang process You want to avoid the overhead and/or complexity of communicating over standard input/output You are reasonably confident that your port driver won't crash or hang the Erlang VM, or... You are reasonably confident that a crash or hang of the Erlang VM is not a critical issue What do you recommend? There are two aspects to weigh here: process-like v. module-like, and safe v. efficient. If you want maximum safety behind a process-like interface, go with a port. If you want maximum safety behind a module-like interface, go with a module with functions that either wrap System.cmd/3 or directly use a port to communicate with your external code. If you want better efficiency behind a process-like interface, go with a port driver. If you want better efficiency behind a module-like interface, go with NIFs.
Beam
42,035,912
11
I have a form on a website which has a lot of different fields. Some of the fields are optional while some are mandatory. In my DB I have a table which holds all these values, is it better practice to insert a NULL value or an empty string into the DB columns where the user didn't put any data?
By using NULL you can distinguish between "put no data" and "put empty data". Some more differences: A LENGTH of NULL is NULL, a LENGTH of an empty string is 0. NULLs are sorted before the empty strings. COUNT(message) will count empty strings but not NULLs You can search for an empty string using a bound variable but not for a NULL. This query: SELECT * FROM mytable WHERE mytext = ? will never match a NULL in mytext, whatever value you pass from the client. To match NULLs, you'll have to use other query: SELECT * FROM mytable WHERE mytext IS NULL
MySQL
1,267,999
244
I currently have just under a million locations in a mysql database all with longitude and latitude information. I am trying to find the distance between one point and many other points via a query. It's not as fast as I want it to be especially with 100+ hits a second. Is there a faster query or possibly a faster system other than mysql for this? I'm using this query: SELECT name, ( 3959 * acos( cos( radians(42.290763) ) * cos( radians( locations.lat ) ) * cos( radians(locations.lng) - radians(-71.35368)) + sin(radians(42.290763)) * sin( radians(locations.lat)))) AS distance FROM locations WHERE active = 1 HAVING distance < 10 ORDER BY distance; Note: The provided distance is in Miles. If you need Kilometers, use 6371 instead of 3959.
Create your points using Point values of Geometry data types in a MyISAM table. As of MySQL 5.7.5, InnoDB tables now also support SPATIAL indices. Create a SPATIAL index on these points Use MBRContains() to find the values: SELECT * FROM table WHERE MBRContains(LineFromText(CONCAT( '(' , @lon + 10 / ( 111.1 / cos(RADIANS(@lat))) , ' ' , @lat + 10 / 111.1 , ',' , @lon - 10 / ( 111.1 / cos(RADIANS(@lat))) , ' ' , @lat - 10 / 111.1 , ')' )), mypoint) or, in MySQL 5.1 and above: SELECT * FROM table WHERE MBRContains ( LineString ( Point ( @lon + 10 / ( 111.1 / COS(RADIANS(@lat))), @lat + 10 / 111.1 ), Point ( @lon - 10 / ( 111.1 / COS(RADIANS(@lat))), @lat - 10 / 111.1 ) ), mypoint ) This will select all points approximately within the box (@lat +/- 10 km, @lon +/- 10 km). This actually is not a box, but a spherical rectangle: a latitude- and longitude-bound segment of the sphere. This may differ from a plain rectangle on Franz Josef Land, but it is quite close to one in most inhabited places. Apply additional filtering to select everything inside the circle (not the square) Possibly apply additional fine filtering to account for the great circle distance (for large distances)
MySQL
1,006,654
243
I’m trying to modify a table to make its primary key column AUTO_INCREMENT after the fact. I have tried the following SQL, but got a syntax error notification. ALTER TABLE document ALTER COLUMN document_id AUTO_INCREMENT Am I doing something wrong or is this not possible? +--------------------+ | VERSION() | +--------------------+ | 5.0.75-0ubuntu10.2 | +--------------------+
ALTER TABLE document MODIFY COLUMN document_id INT auto_increment
MySQL
2,169,080
242
I have seen many queries with something as follows: SELECT 1 FROM table What does this 1 mean, how will it be executed, and what will it return? Also, in what type of scenarios can this be used?
select 1 from table will return the constant 1 for every row of the table. It's useful when you want to cheaply determine if a record matches your where clause and/or join.
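A typical use is inside an EXISTS subquery, where the selected value is discarded anyway; sketched here through a Python DB-API cursor with a hypothetical orders table:

cursor.execute(
    "SELECT EXISTS(SELECT 1 FROM orders WHERE customer_id = %s)",
    (42,),
)
has_orders = bool(cursor.fetchone()[0])  # MySQL returns 1 or 0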
MySQL
7,171,041
241
I execute an INSERT INTO statement cursor.execute("INSERT INTO mytable(height) VALUES(%s)", (height,)) and I want to get the primary key. My table has 2 columns: id (primary, auto increment) height (the other column) How do I get the "id", after I just inserted this?
Use cursor.lastrowid to get the last row ID inserted on the cursor object, or connection.insert_id() to get the ID from the last insert on that connection.
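Putting it together, a sketch (the connection details are hypothetical; the point is that lastrowid is read from the same cursor that ran the INSERT, before any other statement):

import pymysql

conn = pymysql.connect(host='localhost', user='me', password='...', db='mydb')
cursor = conn.cursor()
cursor.execute("INSERT INTO mytable (height) VALUES (%s)", (height,))
new_id = cursor.lastrowid  # the AUTO_INCREMENT id just generated
conn.commit()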
MySQL
2,548,493
240
I installed mySQL on my Mac. Besides starting the SQL server with the mySQL.prefPane tool installed in System Preferences, I want to know how to start it from the command line. I do as follows: after su root I start the mySQL server from the command line, but it produces an error as below: sh-3.2# /usr/local/mysql/bin/mysqld 111028 16:57:43 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/mysql-5.5.17-osx10.6-x86_64/data/ is case insensitive 111028 16:57:43 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root! 111028 16:57:43 [ERROR] Aborting 111028 16:57:43 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete
Simply: mysql.server start mysql.server stop mysql.server restart
MySQL
7,927,854
238
I'm looking to be able to run a single query on a remote server in a scripted task. For example, intuitively, I would imagine it would go something like: mysql -uroot -p -hslavedb.mydomain.com mydb_production "select * from users;"
mysql -u <user> -p -e 'select * from schema.table' (Note the use of single quotes rather than double quotes, to avoid the shell expanding the * into filenames)
MySQL
1,602,904
238
How do I drop all tables in Windows MySQL, using the command prompt? The reason I want to do this is that our user has privileges to drop tables, but no privileges to re-create the database itself; for this reason we must drop the tables manually. Is there a way to drop all the tables at once? Bear in mind that most of the tables are linked with foreign keys so they would have to be dropped in a specific order.
You can generate statement like this: DROP TABLE t1, t2, t3, ... and then use prepared statements to execute it: SET FOREIGN_KEY_CHECKS = 0; SET @tables = NULL; SELECT GROUP_CONCAT('`', table_schema, '`.`', table_name, '`') INTO @tables FROM information_schema.tables WHERE table_schema = 'database_name'; -- specify DB name here. SET @tables = CONCAT('DROP TABLE ', @tables); PREPARE stmt FROM @tables; EXECUTE stmt; DEALLOCATE PREPARE stmt; SET FOREIGN_KEY_CHECKS = 1;
MySQL
12,403,662
237
I've a table like: +-----------+-------+------------+ | client_id | views | percentage | +-----------+-------+------------+ | 1 | 6 | 20 | | 1 | 4 | 55 | | 1 | 9 | 56 | | 1 | 2 | 67 | | 1 | 7 | 80 | | 1 | 5 | 66 | | 1 | 3 | 33 | | 1 | 8 | 34 | | 1 | 1 | 52 | I tried group_concat: SELECT li.client_id, group_concat(li.views) AS views, group_concat(li.percentage) FROM li GROUP BY client_id; +-----------+-------------------+-----------------------------+ | client_id | views | group_concat(li.percentage) | +-----------+-------------------+-----------------------------+ | 1 | 6,4,9,2,7,5,3,8,1 | 20,55,56,67,80,66,33,34,52 | +-----------+-------------------+-----------------------------+ But I want to get the views in order, like: +-----------+-------------------+----------------------------+ | client_id | views | percentage | +-----------+-------------------+----------------------------+ | 1 | 1,2,3,4,5,6,7,8,9 | 52,67,33,55,66,20,80,34,56 | +-----------+-------------------+----------------------------+
You can use ORDER BY inside the GROUP_CONCAT function in this way: SELECT li.client_id, group_concat(li.views ORDER BY li.views ASC) AS views, group_concat(li.percentage ORDER BY li.views ASC) AS percentage FROM li GROUP BY client_id
MySQL
8,631,210
237
How do I change the MySQL root password and username on an Ubuntu server? Do I need to stop the mysql service before making any changes? I have a phpmyadmin setup as well; will phpmyadmin get updated automatically?
Set / change / reset the MySQL root password on Ubuntu Linux. Enter the following lines in your terminal. Stop the MySQL server: sudo /etc/init.d/mysql stop (In some cases, if /var/run/mysqld doesn't exist, you have to create it first: sudo mkdir -v /var/run/mysqld && sudo chown mysql /var/run/mysqld) Start mysqld with grant tables skipped: sudo mysqld --skip-grant-tables & Log in to MySQL as root: mysql -u root mysql Replace YOURNEWPASSWORD with your new password: For MySQL < 8.0 UPDATE mysql.user SET Password = PASSWORD('YOURNEWPASSWORD') WHERE User = 'root'; FLUSH PRIVILEGES; If your MySQL uses the new auth plugin, you will need to use: update user set plugin="mysql_native_password" where User='root'; before flushing privileges. Note: on some versions, if the password column doesn't exist, you may want to try: UPDATE user SET authentication_string=password('YOURNEWPASSWORD') WHERE user='root'; Note: This method is not regarded as the most secure way of resetting the password; however, it works. For MySQL >= 8.0 FLUSH PRIVILEGES; ALTER USER 'root'@'localhost' IDENTIFIED BY 'YOURNEWPASSWORD'; FLUSH PRIVILEGES; Last step: As noted in comments by @lambart, you might need to kill the temporary password-less mysql process that you started, i.e. sudo killall -9 mysqld and then start the normal daemon: sudo service mysql start References: Set / Change / Reset the MySQL root password on Ubuntu Linux How to Reset the Root Password (v5.6) How to Reset the Root Password (v8.0)
MySQL
16,556,497
236
I'm trying to follow along this tutorial to enable remote access to MySQL. The problem is, where should the my.cnf file be located? I'm using Mac OS X Lion.
This thread on the MySQL forum says: By default, the OS X installation does not use a my.cnf, and MySQL just uses the default values. To set up your own my.cnf, you could just create a file straight in /etc. OS X provides example configuration files at /usr/local/mysql/support-files/. And if you can't find them there, MySQLWorkbench can create them for you by: Opening a connection. In the left column select "Administration" tab and then the "Options File" under "INSTANCE" in the menu. MySQL Workbench will search for my.cnf and if it can't find it, it'll create it for you.
MySQL
10,757,169
236
Having a table with a column like: mydate DATETIME ... I have a query such as: SELECT SUM(foo), mydate FROM a_table GROUP BY a_table.mydate; This will group by the full datetime, including hours and minutes. I wish to make the GROUP BY only by the date YYYY/MM/DD, not by YYYY/MM/DD/HH/mm. How to do this?
Cast the datetime to a date, then GROUP BY using this syntax: SELECT SUM(foo), DATE(mydate) FROM a_table GROUP BY DATE(a_table.mydate); Or you can GROUP BY the alias as @orlandu63 suggested: SELECT SUM(foo), DATE(mydate) DateOnly FROM a_table GROUP BY DateOnly; Though I don't think it'll make any difference to performance, it is a little clearer.
MySQL
366,603
235
I have a table called provider. I have three columns called person, place, thing. There can be duplicate persons, duplicate places, and duplicate things, but there can never be a duplicate person-place-thing combination. How would I ALTER TABLE to add a composite primary key for this table in MySQL with these three columns?
ALTER TABLE provider ADD PRIMARY KEY(person,place,thing); If a primary key already exists then you want to do this ALTER TABLE provider DROP PRIMARY KEY, ADD PRIMARY KEY(person, place, thing);
MySQL
8,859,353
234
I need to change my column type from date to datetime for an app I am making. I don't care about the data as it's still being developed. How can I do this?
First in your terminal: rails g migration change_date_format_in_my_table Then in your migration file: For Rails >= 3.2: class ChangeDateFormatInMyTable < ActiveRecord::Migration def up change_column :my_table, :my_column, :datetime end def down change_column :my_table, :my_column, :date end end
MySQL
5,191,405
234
I need to do a mysqldump of a database on a remote server, but the server does not have mysqldump installed. I would like to use the mysqldump on my machine to connect to the remote database and do the dump on my machine. I have tried to create an ssh tunnel and then do the dump, but this does not seem to work. I tried: ssh -f -L3310:remote.server:3306 [email protected] -N The tunnel is created successfully. If I do telnet localhost 3310 I get some blurb which shows the correct server mysql version. However, doing the following seems to try to connect locally mysqldump -P 3310 -h localhost -u mysql_user -p database_name table_name
As I haven't seen it at serverfault yet, and the answer is quite simple: Change: ssh -f -L3310:remote.server:3306 [email protected] -N To: ssh -f -L3310:localhost:3306 [email protected] -N And change: mysqldump -P 3310 -h localhost -u mysql_user -p database_name table_name To: mysqldump -P 3310 -h 127.0.0.1 -u mysql_user -p database_name table_name (do not use localhost; it's one of those 'special meaning' names that probably makes the client connect by socket rather than by port) edit: well, to elaborate: if host is set to localhost, a configured (or default) --socket option is assumed. See the manual for which option files are sought / used. Under Windows, this can be a named pipe.
MySQL
2,989,724
234
The MySQL reference manual does not provide a clearcut example on how to do this. I have an ENUM-type column of country names that I need to add more countries to. What is the correct MySQL syntax to achieve this? Here's my attempt: ALTER TABLE carmake CHANGE country country ENUM('Sweden','Malaysia'); The error I get is: ERROR 1265 (01000): Data truncated for column 'country' at row 1. The country column is the ENUM-type column in the above-statement. SHOW CREATE TABLE OUTPUT: mysql> SHOW CREATE TABLE carmake; +---------+---------------------------------------------------------------------+ | Table | Create Table +---------+---------------------------------------------------------------------+ | carmake | CREATE TABLE `carmake` ( `carmake_id` tinyint(4) NOT NULL AUTO_INCREMENT, `name` tinytext, `country` enum('Japan','USA','England','Australia','Germany','France','Italy','Spain','Czech Republic','China','South Korea','India') DEFAULT NULL, PRIMARY KEY (`carmake_id`), KEY `name` (`name`(3)) ) ENGINE=InnoDB AUTO_INCREMENT=49 DEFAULT CHARSET=latin1 | +---------+---------------------------------------------------------------------+ 1 row in set (0.00 sec) SELECT DISTINCT country FROM carmake OUTPUT: +----------------+ | country | +----------------+ | Italy | | Germany | | England | | USA | | France | | South Korea | | NULL | | Australia | | Spain | | Czech Republic | +----------------+
ALTER TABLE `table_name` MODIFY COLUMN `column_name2` enum( 'existing_value1', 'existing_value2', 'new_value1', 'new_value2' ) NOT NULL AFTER `column_name1`;
MySQL
1,501,958
234
Is there a way in a MySQL statement to order records (through a date stamp) by >= NOW() -1 so all records from the day before today to the future are selected?
Judging by the documentation for date/time functions, you should be able to do something like: SELECT * FROM FOO WHERE MY_DATE_FIELD >= NOW() - INTERVAL 1 DAY
MySQL
8,544,438
233
I've read that the MySQL server creates a log file where it keeps a record of all activities - like when and what queries execute. Can anybody tell me where it exists on my system? How can I read it? Basically, I need to back up the database with different inputs [backup between two dates], so I think I need to use the log file here; that's why I want to do it... I think this log must be secured somehow, because sensitive information such as usernames and passwords may be logged [if any query requires this]; so can it be secured so that it is not easily seen? I have root access to the system; how can I see the log? When I try to open /var/log/mysql.log it is empty. This is my config file: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] log = /var/log/mysql/mysql.log binlog-do-db=zero user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 general_log_file = /var/log/mysql/mysql.log general_log = 1
Here is a simple way to enable them. There are three logs in MySQL that we most often need during project development. The Error Log. It contains information about errors that occur while the server is running (also server start and stop). The General Query Log. This is a general record of what mysqld is doing (connects, disconnects, queries). The Slow Query Log. It consists of "slow" SQL statements (as indicated by its name). By default no log files are enabled in MySQL. All errors will be shown in the syslog (/var/log/syslog). To enable them, just follow the steps below: Step 1: Go to this file (/etc/mysql/conf.d/mysqld_safe_syslog.cnf) and remove or comment out those lines. Step 2: Go to the MySQL conf file (/etc/mysql/my.cnf) and add the following lines. To enable the error log, add the following: [mysqld_safe] log_error=/var/log/mysql/mysql_error.log [mysqld] log_error=/var/log/mysql/mysql_error.log To enable the general query log, add the following: general_log_file = /var/log/mysql/mysql.log general_log = 1 To enable the slow query log, add the following: log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 log-queries-not-using-indexes Step 3: Save the file and restart MySQL using the following command: service mysql restart To enable logs at runtime, log in to the mysql client (mysql -u root -p) and run: SET GLOBAL general_log = 'ON'; SET GLOBAL slow_query_log = 'ON'; Finally, one thing I would like to mention: I read this in a blog, and it works for me.
MySQL
5,441,972
233
I have two tables, one for job deadlines, one to describe a job. Each job can take a status, and some statuses mean the jobs' deadlines must be deleted from the other table. I can easily SELECT the jobs/deadlines that meet my criteria with a LEFT JOIN: SELECT * FROM `deadline` LEFT JOIN `job` ON deadline.job_id = job.job_id WHERE `status` = 'szamlazva' OR `status` = 'szamlazhato' OR `status` = 'fizetve' OR `status` = 'szallitva' OR `status` = 'storno' (status belongs to the job table, not deadline) But when I'd like to delete these rows from deadline, MySQL throws an error. My query is: DELETE FROM `deadline` LEFT JOIN `job` ON deadline.job_id = job.job_id WHERE `status` = 'szamlazva' OR `status` = 'szamlazhato' OR `status` = 'fizetve' OR `status` = 'szallitva' OR `status` = 'storno' The MySQL error says nothing helpful: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'LEFT JOIN job ON deadline.job_id = job.job_id WHERE status = 'szaml' at line 1 How can I turn my SELECT into a working DELETE query?
You simply need to specify on which tables to apply the DELETE. Delete only the deadline rows: DELETE `deadline` FROM `deadline` LEFT JOIN `job` .... Delete the deadline and job rows: DELETE `deadline`, `job` FROM `deadline` LEFT JOIN `job` .... Delete only the job rows: DELETE `job` FROM `deadline` LEFT JOIN `job` ....
MySQL
2,763,206
233
I have a perplexing issue that I can't seem to comprehend... I have two SQL statements: The first enters information from a form into the database. The second takes data from the database entered above, sends an email, and then logs the details of the transaction. The problem is that it appears that a single quote is triggering a MySQL error on the second entry only! The first instance works without issue, but the second instance triggers the mysql_error(). Does data entered from a form get handled differently from data retrieved from the database? Query 1 - This works without issue (and without escaping the single quote) $result = mysql_query("INSERT INTO job_log (order_id, supplier_id, category_id, service_id, qty_ordered, customer_id, user_id, salesperson_ref, booking_ref, booking_name, address, suburb, postcode, state_id, region_id, email, phone, phone2, mobile, delivery_date, stock_taken, special_instructions, cost_price, cost_price_gst, sell_price, sell_price_gst, ext_sell_price, retail_customer, created, modified, log_status_id) VALUES ('$order_id', '$supplier_id', '$category_id', '{$value['id']}', '{$value['qty']}', '$customer_id', '$user_id', '$salesperson_ref', '$booking_ref', '$booking_name', '$address', '$suburb', '$postcode', '$state_id', '$region_id', '$email', '$phone', '$phone2', '$mobile', STR_TO_DATE('$delivery_date', '%d/%m/%Y'), '$stock_taken', '$special_instructions', '$cost_price', '$cost_price_gst', '$sell_price', '$sell_price_gst', '$ext_sell_price', '$retail_customer', '".date('Y-m-d H:i:s', time())."', '".date('Y-m-d H:i:s', time())."', '1')"); Query 2 - This fails when entering a name with a single quote (for example, O'Brien) $query = mysql_query("INSERT INTO message_log (order_id, timestamp, message_type, email_from, supplier_id, primary_contact, secondary_contact, subject, message_content, status) VALUES ('$order_id', '".date('Y-m-d H:i:s', time())."', '$email', '$from', '$row->supplier_id', '$row->primary_email' ,'$row->secondary_email', '$subject', '$message_content', '1')");
You should be escaping each of these strings (in both snippets) with mysql_real_escape_string(). https://www.php.net/mysql-real-escape-string The reason your two queries are behaving differently is likely because you have magic_quotes_gpc turned on (which you should know is a bad idea). This means that strings gathered from $_GET, $_POST and $_COOKIES are escaped for you (i.e., "O'Brien" -> "O\'Brien"). Once you store the data, and subsequently retrieve it again, the string you get back from the database will not be automatically escaped for you. You'll get back "O'Brien". So, you will need to pass it through mysql_real_escape_string().
MySQL
2,687,866
233
There are many conflicting statements around. What is the best way to get the row count using PDO in PHP? Before using PDO, I simply used mysql_num_rows. fetchAll is something I don't want because I may sometimes be dealing with large datasets, so it's not good for my use. Do you have any suggestions?
When you need only the number of rows, but not the data itself, such a function shouldn't be used anyway. Instead, ask the database to do the count, with a code like this: $sql = "SELECT count(*) FROM `table` WHERE foo = ?"; $result = $con->prepare($sql); $result->execute([$bar]); $number_of_rows = $result->fetchColumn(); For getting the number of rows along with the data retrieved, PDO has PDOStatement::rowCount(), which apparently does work in MySql with buffered queries (enabled by default). But it's not guaranteed for other drivers. From the PDO Doc: For most databases, PDOStatement::rowCount() does not return the number of rows affected by a SELECT statement. Instead, use PDO::query() to issue a SELECT COUNT(*) statement with the same predicates as your intended SELECT statement, then use PDOStatement::fetchColumn() to retrieve the number of rows that will be returned. Your application can then perform the correct action. But in this case you can use the data itself. Assuming you are selecting a reasonable amount of data, it can be fetched into array using PDO::fetchAll(), and then count() will give you the number of rows. EDIT: The above code example uses a prepared statement, but if a query doesn't use any variables, one can use query() function instead: $nRows = $pdo->query('select count(*) from blah')->fetchColumn(); echo $nRows;
MySQL
883,365
233
I want to order by Time,but seems no way to do that ? mysql> show processlist; +--------+-------------+--------------------+------+---------+--------+----------------------------------+------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +--------+-------------+--------------------+------+---------+--------+----------------------------------+------------------------------------------------------------------------------------------------------+ | 1 | system user | | NULL | Connect | 226953 | Waiting for master to send event | NULL | | 2 | system user | | v3 | Connect | 35042 | Locked | update postings a left join cities b on b.id=a.job_city_id left join states h on h.id=b.stat | | 313888 | irnadmin | 172.19.0.239:40136 | v3 | Sleep | 0 | | NULL | | 314075 | irnadmin | 172.19.0.239:41113 | v3 | Sleep | 0 | | NULL | | 314118 | irnadmin | 172.19.0.239:41282 | v3 | Query | 34978 | freeing items | SELECT id, screen_name, type, active, bound, LastLogin, robotno, protocol FROM accounts WHERE email_ | | 314686 | irnadmin | 172.19.0.239:43251 | v3 | Sleep | 0 | | NULL | | 314732 | irnadmin | 172.19.0.239:43436 | v3 | Query | 34978 | freeing items | SELECT id, screen_name, type, active, bound, LastLogin, robotno, protocol FROM accounts WHERE email_ | | 314984 | irnadmin | 172.19.0.239:44366 | v3 | Sleep | 2 | | NULL | | 315051 | irnadmin | 172.19.0.239:44713 | v3 | Query | 0 | NULL | NULL | | 315198 | irnadmin | 172.19.0.239:51569 | v3 | Sleep | 2 | | NULL | | 315280 | irnadmin | 172.19.0.239:51849 | v3 | Query | 34978 | freeing items | SELECT id, email_address, type, closed, robotno FROM accounts WHERE screen_name = 'ShantanuS' | | 315320 | irnadmin | 172.19.0.239:52045 | v3 | Query | 34978 | freeing items | SELECT id, screen_name, type, active, bound, LastLogin, robotno, protocol FROM accounts WHERE email_ | | 315384 | irnadmin | 172.19.0.239:52463 | v3 | Sleep | 1 | | NULL | | 452248 | irnadmin | 172.19.0.28:54899 | v3 | Query | 34978 | freeing items | SELECT id, email_address, type, closed, robotno FROM accounts WHERE screen_name = 'LIZW0218' | | 452291 | irnadmin | 172.19.0.28:55045 | v3 | Sleep | 1 | | NULL | | 452316 | irnadmin | 172.19.0.28:55144 | v3 | Sleep | 0 | | NULL | | 452353 | irnadmin | 172.19.0.28:55278 | v3 | Sleep | 0 | | NULL | | 452382 | irnadmin | 172.19.0.28:55371 | v3 | Query | 34978 | freeing items | SELECT o.account_id FROM online o JOIN accounts a ON a.id=o.account_id WHERE o.server_id IS NULL AND | | 452413 | irnadmin | 172.19.0.28:55479 | v3 | Sleep | 1 | | NULL | | 452541 | irnadmin | 172.19.0.28:55946 | v3 | Query | 34978 | freeing items | SELECT o.account_id FROM online o JOIN accounts a ON a.id=o.account_id WHERE o.server_id IS NULL AND | | 452626 | irnadmin | 172.19.0.28:56215 | v3 | Sleep | 2 | | NULL | | 452711 | irnadmin | 172.19.0.28:39916 | v3 | Sleep | 0 | | NULL | | 452781 | irnadmin | 172.19.0.28:40161 | v3 | Sleep | 1 | | NULL | | 452904 | irnadmin | 172.19.0.28:40955 | v3 | Query | 34978 | freeing items | select a.id, aa.screen_name, i.requester from interview_requests i left join accounts aa on aa.id=i. 
| | 453014 | irnadmin | 172.19.0.28:41291 | v3 | Query | 34978 | freeing items | SELECT o.account_id FROM online o JOIN accounts a ON a.id=o.account_id WHERE o.server_id IS NULL AND | | 453057 | irnadmin | 172.19.0.28:41377 | v3 | Query | 34978 | freeing items | select a.id, aa.screen_name, i.requester from interview_requests i left join accounts aa on aa.id=i. | | 453084 | irnadmin | 172.19.0.28:41441 | v3 | Sleep | 0 | | NULL | | 453112 | irnadmin | 172.19.0.28:41536 | v3 | Sleep | 0 | | NULL | | 453156 | irnadmin | 172.19.0.28:41653 | v3 | Query | 34978 | freeing items | SELECT protocol FROM accounts WHERE email_address= '***@gtalk.jabber.jobirn.c | | 453214 | irnadmin | 172.19.0.28:41800 | v3 | Sleep | 5 | | NULL | | 453243 | irnadmin | 172.19.0.28:41991 | v3 | Sleep | 0 | | NULL | | 453313 | irnadmin | 172.19.0.28:42255 | v3 | Query | 34978 | freeing items | SELECT o.account_id FROM online o JOIN accounts a ON a.id=o.account_id WHERE o.server_id IS NULL AND | | 453396 | irnadmin | 172.19.0.28:53718 | v3 | Sleep | 2 | | NULL | | 453476 | irnadmin | 172.19.0.28:54019 | v3 | Sleep | 0 | | NULL | | 453561 | irnadmin | 172.19.0.28:54352 | v3 | Sleep | 3 | | NULL | | 453594 | irnadmin | 172.19.0.28:54456 | v3 | Sleep | 0 | | NULL | | 453727 | irnadmin | 172.19.0.28:55166 | v3 | Query | 34978 | freeing items | SELECT id, screen_name, type, active, bound, LastLogin, robotno, protocol FROM accounts WHERE email_ | | 453786 | irnadmin | 172.19.0.28:55320 | v3 | Sleep | 4 | | NULL | | 610140 | irnadmin | 172.19.0.28:33848 | v3 | Query | 34978 | freeing items | select a.id, aa.screen_name, i.requester from interview_requests i left join accounts aa on aa.id=i. | | 685119 | irnadmin | 172.19.0.27:37251 | v3 | Query | 34980 | Sending data | select postings.id id,category, job_desc_title, IF(c1.name is not null,c1.name,IF(c2.name is not n | | 685226 | irnadmin | 172.19.0.139:57274 | v3 | Query | 34735 | Locked | SELECT job_desc_title,job_desc,job_state_name,job_city_name,company_categories.name,postings.categor | | 685229 | irnadmin | 172.19.0.139:57278 | v3 | Query | 34735 | Locked | SELECT job_desc_title,job_desc,job_state_name,job_city_name,company_categories.name,postings.categor | | 685232 | irnadmin | 172.19.0.139:57283 | v3 | Query | 34734 | Locked | select job_desc_title,job_desc from postings where id=287650 | | 685233 | irnadmin | 172.19.0.139:57286 | v3 | Query | 34734 | Locked | SELECT accounts.screen_name,postings.url url, accounts.type owner_type, postings.id ID, postings.job | | 685235 | irnadmin | 172.19.0.28:37502 | v3 | Query | 34734 | Locked | SELECT accounts.screen_name,postings.url url, accounts.type owner_type, postings.id ID, postings.job | | 686496 | irnadmin | 172.19.0.239:33306 | v3 | Query | 32589 | Locked | SELECT accounts.screen_name,postings.url url, accounts.type owner_type, postings.id ID, postings.job | | 686503 | irnadmin | 172.19.0.28:54051 | v3 | Query | 32588 | Locked | SELECT job_desc_title, job_desc, IF(postings.category IS NOT NULL, postings.category, job_categories | | 709550 | root | localhost | v3 | Query | 0 | NULL | show processlist | | 710084 | irnadmin | 172.19.0.27:53285 | NULL | Query | 0 | removing tmp table | show status where Variable_name='Threads_running' | +--------+-------------+--------------------+------+---------+--------+----------------------------------+------------------------------------------------------------------------------------------------------+ 49 rows in set (0.00 sec)
Newer versions of MySQL support the process list in information_schema: SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST You can ORDER BY in any way you like. The INFORMATION_SCHEMA.PROCESSLIST table was added in MySQL 5.1.7. You can find out which version you're using with: SELECT VERSION()
MySQL
929,612
232
I am trying to understand how to UPDATE multiple rows with different values and I just don't get it. The solution is everywhere, but to me it looks difficult to understand. For instance, three updates in 1 query: UPDATE table_users SET cod_user = '622057' , date = '12082014' WHERE user_rol = 'student' AND cod_office = '17389551'; UPDATE table_users SET cod_user = '2913659' , date = '12082014' WHERE user_rol = 'assistant' AND cod_office = '17389551'; UPDATE table_users SET cod_user = '6160230' , date = '12082014' WHERE user_rol = 'admin' AND cod_office = '17389551'; I read an example, but I really don't understand how to make the query. i.e: UPDATE table_to_update SET cod_user= IF(cod_office = '17389551','622057','2913659','6160230') ,date = IF(cod_office = '17389551','12082014') WHERE ?? IN (??) ; I'm not entirely clear on how to do the query if there are multiple conditions in the WHERE and in the IF condition... any ideas?
You can do it this way: UPDATE table_users SET cod_user = (case when user_role = 'student' then '622057' when user_role = 'assistant' then '2913659' when user_role = 'admin' then '6160230' end), date = '12082014' WHERE user_role in ('student', 'assistant', 'admin') AND cod_office = '17389551'; I don't understand your date format. Dates should be stored in the database using native date and time types.
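If the values come from application code anyway, a parameterized executemany is a simpler alternative to the CASE expression; a sketch with a Python DB-API cursor (note it issues one UPDATE per row, so the single-statement CASE version is still preferable for large batches):

rows = [('622057', 'student'), ('2913659', 'assistant'), ('6160230', 'admin')]
cursor.executemany(
    "UPDATE table_users SET cod_user = %s, date = '12082014' "
    "WHERE user_rol = %s AND cod_office = '17389551'",
    rows,
)
conn.commit()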
MySQL
25,674,737
231
What's the main difference between length() and char_length()? I believe it has something to do with binary and non-binary strings. Is there any practical reason to store strings as binary? mysql> select length('MySQL'), char_length('MySQL'); +-----------------+----------------------+ | length('MySQL') | char_length('MySQL') | +-----------------+----------------------+ | 5 | 5 | +-----------------+----------------------+ 1 row in set (0.01 sec)
LENGTH() returns the length of the string measured in bytes. CHAR_LENGTH() returns the length of the string measured in characters. This is especially relevant for Unicode, in which most characters are encoded in two bytes. Or UTF-8, where the number of bytes varies. For example: select length(_utf8 '€'), char_length(_utf8 '€') --> 3, 1 As you can see the Euro sign occupies 3 bytes (it's encoded as 0xE282AC in UTF-8) even though it's only one character.
MySQL
1,734,334
231
I have a table whose primary key is used in several other tables and has several foreign keys to other tables. CREATE TABLE location ( locationID INT NOT NULL AUTO_INCREMENT PRIMARY KEY ... ) ENGINE = InnoDB; CREATE TABLE assignment ( assignmentID INT NOT NULL AUTO_INCREMENT PRIMARY KEY, locationID INT NOT NULL, FOREIGN KEY locationIDX (locationID) REFERENCES location (locationID) ... ) ENGINE = InnoDB; CREATE TABLE assignmentStuff ( ... assignmentID INT NOT NULL, FOREIGN KEY assignmentIDX (assignmentID) REFERENCES assignment (assignmentID) ) ENGINE = InnoDB; The problem is that when I'm trying to drop one of the foreign key columns (ie locationIDX) it gives me an error. "ERROR 1025 (HY000): Error on rename" How can I drop the column in the assignment table above without getting this error?
As explained here, it seems the foreign key constraint has to be dropped by its constraint name and not the index name. The syntax is: ALTER TABLE footable DROP FOREIGN KEY fooconstraint;
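If you don't know the constraint name, SHOW CREATE TABLE prints it, or you can look it up in information_schema; a sketch from Python using the tables from the question (a DB-API cursor is assumed, and identifiers can't be bound as query parameters, hence the string formatting on the looked-up name):

cursor.execute(
    "SELECT CONSTRAINT_NAME FROM information_schema.KEY_COLUMN_USAGE "
    "WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'assignment' "
    "AND COLUMN_NAME = 'locationID' AND REFERENCED_TABLE_NAME IS NOT NULL"
)
row = cursor.fetchone()
if row:
    cursor.execute("ALTER TABLE assignment DROP FOREIGN KEY `%s`" % row[0])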
MySQL
838,354
231
Scenario in short: A table with more than 16 million records [2GB in size]. The higher the LIMIT offset with SELECT, the slower the query becomes, when using ORDER BY *primary_key* So SELECT * FROM large ORDER BY `id` LIMIT 0, 30 takes far less than SELECT * FROM large ORDER BY `id` LIMIT 10000, 30 That only orders 30 records, and it's the same either way. So it's not the overhead from ORDER BY. Now when fetching the latest 30 rows it takes around 180 seconds. How can I optimize that simple query?
I had the exact same problem myself. Given the fact that you want to collect a large amount of this data and not a specific set of 30, you'll probably be running a loop and incrementing the offset by 30. So what you can do instead is: Hold the last id of a set of data (30) (e.g. lastId = 530) Add the condition WHERE id > lastId LIMIT 0,30 So you can always have a ZERO offset. You will be amazed by the performance improvement.
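A sketch of that "remember the last id" loop through a Python DB-API cursor; process is a hypothetical handler, and the sketch assumes id is the first selected column and strictly increasing:

last_id = 0
while True:
    cursor.execute(
        "SELECT * FROM large WHERE id > %s ORDER BY id LIMIT 30",
        (last_id,),
    )
    rows = cursor.fetchall()
    if not rows:
        break
    process(rows)           # hypothetical per-page handler
    last_id = rows[-1][0]   # remember the last id seen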
MySQL
4,481,388
230
If you try to create a TEXT column on a table, and give it a default value in MySQL, you get an error (on Windows at least). I cannot see any reason why a text column should not have a default value. No explanation is given by the MySQL documentation. It seems illogical to me (and somewhat frustrating, as I want a default value!). Anybody know why this is not allowed?
Windows MySQL v5 throws an error but Linux and other versions only raise a warning. This needs to be fixed. WTF? Also see an attempt to fix this as bug #19498 in the MySQL Bugtracker: Bryce Nesbitt on April 4 2008 4:36pm: On MS Windows the "no DEFAULT" rule is an error, while on other platforms it is often a warning. While not a bug, it's possible to get trapped by this if you write code on a lenient platform, and later run it on a strict platform: Personally, I do view this as a bug. Searching for "BLOB/TEXT column can't have a default value" returns about 2,940 results on Google. Most of them are reports of incompatibilities when trying to install DB scripts that worked on one system but not others. I am running into the same problem now on a webapp I'm modifying for one of my clients, originally deployed on Linux MySQL v5.0.83-log. I'm running Windows MySQL v5.1.41. Even trying to use the latest version of phpMyAdmin to extract the database, it doesn't report a default for the text column in question. Yet, when I try running an insert on Windows (that works fine on the Linux deployment) I receive an error of no default on the ABC column. I try to recreate the table locally with the obvious default (based on a select of unique values for that column) and end up receiving the oh-so-useful BLOB/TEXT column can't have a default value. Again, not maintaining basic compatibility across platforms is unacceptable and is a bug. How to disable strict mode in MySQL 5 (Windows): Edit /my.ini and look for the line sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" Replace it with sql_mode='MYSQL40' Restart the MySQL service (assuming that it is mysql5) net stop mysql5 net start mysql5 If you have root/admin access you might be able to execute mysql_query("SET @@global.sql_mode='MYSQL40'");
MySQL
3,466,872
230
Is there a way to check if a table exists without selecting and checking values from it? That is, I know I can go SELECT testcol FROM testtable and check the count of fields returned, but it seems there must be a more direct / elegant way to do it.
If you want to be correct, use INFORMATION_SCHEMA:

SELECT *
FROM information_schema.tables
WHERE table_schema = 'yourdb'
  AND table_name = 'testtable'
LIMIT 1;

Alternatively, you can use SHOW TABLES:

SHOW TABLES LIKE 'yourtable';

If there is a row in the result set, the table exists.
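If all you need is a boolean, one common variant (a sketch assuming the same schema and table names) wraps the INFORMATION_SCHEMA lookup in EXISTS:

SELECT EXISTS (
    SELECT 1
    FROM information_schema.tables
    WHERE table_schema = 'yourdb'
      AND table_name = 'testtable'
) AS table_exists;   -- returns 1 if the table exists, 0 otherwise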
MySQL
8,829,102
229
Here is a gross oversimplification of an intense setup I am working with. table_1 and table_2 both have auto-increment surrogate primary keys as the ID. info is a table that contains information about both table_1 and table_2.

table_1 (id, field)
table_2 (id, field, field)
info ( ???, field)

I am trying to decide if I should make the primary key of info a composite of the IDs from table_1 and table_2. If I were to do this, which of these makes most sense? (In this example I am combining ID 11209 with ID 437.)

INT(9) 11209437 (I can imagine why this is bad)
VARCHAR (10) 11209-437
DECIMAL (10,4) 11209.437

Or something else? Would it be fine to use this as the primary key on a MySQL MyISAM DB?
I would use a composite (multi-column) key. CREATE TABLE INFO ( t1ID INT, t2ID INT, PRIMARY KEY (t1ID, t2ID) ) This way you can have t1ID and t2ID as foreign keys pointing to their respective tables as well.
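Spelling out those foreign keys, a sketch of the full definition might look like this (assuming table_1 and table_2 use id as their primary key columns, as in the question; note InnoDB rather than the MyISAM engine the question mentions, since MyISAM parses foreign key clauses but does not enforce them):

CREATE TABLE INFO (
    t1ID INT NOT NULL,
    t2ID INT NOT NULL,
    field VARCHAR(255),
    PRIMARY KEY (t1ID, t2ID),
    FOREIGN KEY (t1ID) REFERENCES table_1 (id),
    FOREIGN KEY (t2ID) REFERENCES table_2 (id)
) ENGINE = InnoDB;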
MySQL
5,835,978
229
I accidentally installed the PowerPC version of MySQL on my Intel Mac in Snow Leopard, and it installed without a problem but of course doesn't run properly. I just didn't pay enough attention. Now when I try to install the correct x86 version it says that it can't install because a newer version is already installed. A Google query led me to perform these actions/delete these files to uninstall it:

sudo rm /usr/local/mysql
sudo rm -rf /usr/local/mysql*
sudo rm -rf /Library/StartupItems/MySQLCOM
sudo rm -rf /Library/PreferencePanes/MySQL*
rm -rf ~/Library/PreferencePanes/MySQL*
sudo rm -rf /Library/Receipts/mysql*
sudo rm -rf /Library/Receipts/MySQL*

And finally removed the line MYSQLCOM=-YES- from /etc/hostconfig.

They haven't seemed to help at all. I am still receiving the same message about there being a newer version. I tried installing an even newer version (the current beta) and it also gave me the same message about a newer version already being installed. I can't uninstall it from the Prefs Pane because I never installed the PrefPane also.
Try running also sudo rm -rf /var/db/receipts/com.mysql.*
MySQL
1,436,425
229
I recently installed MySQL and it seems I have to reset the password after install. It won't let me do anything else. Now I already reset the password the usual way:

update user set password = password('XXX') where user = 'root';

(BTW: took me ages to work out that MySQL for some bizarre reason has renamed the field 'password' to 'authentication_string'. I am quite upset about changes like that.)

Unfortunately it seems I need to change the password a different way that is unknown to me. Maybe someone here has already come across that problem?
If this is NOT your first time setting up the password, try this method: mysql> UPDATE mysql.user SET Password=PASSWORD('your_new_password') WHERE User='root'; And if you get the following error, there is a high chance that you have never set your password before: ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement. To set up your password for the first time: mysql> SET PASSWORD = PASSWORD('your_new_password'); Query OK, 0 rows affected, 1 warning (0.01 sec) Reference: https://dev.mysql.com/doc/refman/5.6/en/alter-user.html
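On MySQL 5.7.6 and later, the documented form for the same reset is ALTER USER, so a sketch of the equivalent would be:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'your_new_password';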
MySQL
33,467,337
228
In MySQL, is there a way to set the "total" fields to zero if they are NULL? Here is what I have:

SELECT uo.order_id, uo.order_total, uo.order_status,
    (SELECT SUM(uop.price * uop.qty)
     FROM uc_order_products uop
     WHERE uo.order_id = uop.order_id
    ) AS products_subtotal,
    (SELECT SUM(upr.amount)
     FROM uc_payment_receipts upr
     WHERE uo.order_id = upr.order_id
    ) AS payment_received,
    (SELECT SUM(uoli.amount)
     FROM uc_order_line_items uoli
     WHERE uo.order_id = uoli.order_id
    ) AS line_item_subtotal
FROM uc_orders uo
WHERE uo.order_status NOT IN ("future", "canceled")
  AND uo.uid = 4172;

The data comes out fine, except the NULL fields should be 0. How can I return 0 for NULL in MySQL?
Use IFNULL: IFNULL(expr1, 0) From the documentation: If expr1 is not NULL, IFNULL() returns expr1; otherwise it returns expr2. IFNULL() returns a numeric or string value, depending on the context in which it is used.
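Applied to the poster's query, each scalar subquery just gets wrapped; shown here for one of the three (COALESCE(expr, 0) is the SQL-standard spelling and behaves the same way for this two-argument case):

SELECT uo.order_id,
       IFNULL((SELECT SUM(uop.price * uop.qty)
               FROM uc_order_products uop
               WHERE uo.order_id = uop.order_id), 0) AS products_subtotal
FROM uc_orders uo;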
MySQL
3,997,327
228
I'm using the below code to pull some results from the database with Laravel 5.

BookingDates::where('email', Input::get('email'))->orWhere('name', 'like', Input::get('name'))->get()

However, the orWhere ... 'like' clause doesn't seem to match any results. What does that code produce in terms of MySQL statements? I'm trying to achieve something like the following:

select * from booking_dates where email='[email protected]' or name like '%John%'
If you want to see what was run against the database, use dd(DB::getQueryLog()) to inspect the queries. Try this:

BookingDates::where('email', Input::get('email'))
    ->orWhere('name', 'like', '%' . Input::get('name') . '%')->get();
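(Note that DB::enableQueryLog() usually has to be called before the query runs for getQueryLog() to return anything.) For reference, the corrected builder call should produce SQL along these lines; the exact quoting depends on the driver, since the values are sent as bindings:

select * from booking_dates
where email = ?      -- binding: '[email protected]'
   or name like ?;   -- binding: '%John%'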
MySQL
30,761,950
227
Are table names in MySQL case sensitive? On my Windows development machine the code I have is able to query my tables which appear to be all lowercase. When I deploy to the test server in our datacenter the table names appear to start with an uppercase letter. The servers we use are all on Ubuntu.
In general:

Database and table names are not case sensitive in Windows, and case sensitive in most varieties of Unix.

In MySQL, databases correspond to directories within the data directory. Each table within a database corresponds to at least one file within the database directory. Consequently, the case sensitivity of the underlying operating system plays a part in the case sensitivity of database and table names.

One can configure how table names are stored on the disk using the system variable lower_case_table_names (in the my.cnf configuration file under [mysqld]).

Read the section: 10.2.2 Identifier Case Sensitivity for more information.
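You can check what a given server is doing without touching the filesystem; a quick sketch:

SHOW VARIABLES LIKE 'lower_case_table_names';
-- 0 = stored and compared as given (typical Linux default)
-- 1 = stored lowercase, compared case-insensitively (Windows default)
-- 2 = stored as given, compared lowercase (macOS default)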
MySQL
6,134,006
227
What's the best way to do following: SELECT * FROM users WHERE created >= today; Note: created is a datetime field.
SELECT * FROM users WHERE created >= CURDATE(); But I think you mean created < today You can compare datetime with date, for example: SELECT NOW() < CURDATE() gives 0, SELECT NOW() = CURDATE() gives 1.
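If the intent is "created today only", a sketch that bounds the datetime on both sides (and stays index-friendly, since no function is applied to the column):

SELECT *
FROM users
WHERE created >= CURDATE()
  AND created <  CURDATE() + INTERVAL 1 DAY;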
MySQL
5,182,275
226
What is the best SQL data type for currency values? I'm using MySQL but would prefer a database independent type.
Something like Decimal(19,4) usually works pretty well in most cases. You can adjust the scale and precision to fit the needs of the numbers you need to store. Even in SQL Server, I tend not to use "money" as it's non-standard.
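A minimal sketch of that in a table definition (table and column names are illustrative):

CREATE TABLE prices (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    amount DECIMAL(19,4) NOT NULL   -- up to 15 digits before the point, 4 after
);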
MySQL
628,637
226
In MySQL, can I select columns only where something exists? For example, I have the following query: select phone, phone2 from jewishyellow.users where phone like '813%' and phone2 I'm trying to select only the rows where phone starts with 813 and phone2 has something in it.
Compare the value of phone2 with the empty string:

select phone, phone2
from jewishyellow.users
where phone like '813%'
  and phone2 <> ''

Note that a NULL phone2 is excluded as well: NULL <> '' evaluates to NULL, which is treated as false in a WHERE clause.
MySQL
1,869,264
224
I had this previously in my normal mysql_* connection: mysql_set_charset("utf8",$link); mysql_query("SET NAMES 'UTF8'"); Do I need it for the PDO? And where should I have it? $connect = new PDO("mysql:host=$host;dbname=$db", $user, $pass, array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
You'll have it in your connection string like: "mysql:host=$host;dbname=$db;charset=utf8mb4" HOWEVER, prior to PHP 5.3.6, the charset option was ignored. If you're running an older version of PHP, you must do it like this: $dbh = new PDO("mysql:host=$host;dbname=$db", $user, $password); $dbh->exec("set names utf8mb4");
MySQL
4,361,459
223
Is there a way to detect if a value is a number in a MySQL query? Such as SELECT * FROM myTable WHERE isANumber(col1) = true
You can use Regular Expression too... it would be like: SELECT * FROM myTable WHERE col1 REGEXP '^[0-9]+$'; Reference: http://dev.mysql.com/doc/refman/5.1/en/regexp.html
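That pattern only matches non-negative integers. A hedged variant that also accepts an optional sign and a decimal part (using a bracket class for the literal dot to sidestep backslash escaping):

SELECT * FROM myTable
WHERE col1 REGEXP '^-?[0-9]+([.][0-9]+)?$';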
MySQL
5,064,977
222
I'm trying to figure out how to locate all occurrences of a url in a database. I want to search all tables and all fields. But I have no idea where to start or if it's even possible.
A simple solution would be doing something like this: mysqldump -u myuser --no-create-info --extended-insert=FALSE databasename | grep -i "<search string>"
MySQL
562,457
222
I am not very familiar with databases and the theories behind how they work. Is it any slower from a performance standpoint (inserting/updating/querying) to use strings for primary keys than integers?

For example, I have a database with about 100 million rows containing mobile number, name and email. Mobile number and email would be unique. So can I have the mobile number or email as a primary key, and will it affect my query performance when I search based on email or mobile number? Similarly, the primary key will be used as a foreign key in 5 to 6 tables or even more. I am using a MySQL database.
Technically yes, but if a string makes sense to be the primary key then you should probably use it. This all depends on the size of the table you're making it for and the length of the string that is going to be the primary key (longer strings == harder to compare). I wouldn't necessarily use a string for a table that has millions of rows, but the amount of performance slowdown you'll get by using a string on smaller tables will be minuscule compared to the headaches that you can have by having an integer that doesn't mean anything in relation to the data.
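For the poster's 100-million-row case, a common middle ground is a compact surrogate integer primary key (it is copied into every secondary index and every referencing table) with the long strings enforced as unique secondary indexes. A sketch with illustrative column sizes:

CREATE TABLE users (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    mobile VARCHAR(20) NOT NULL,
    email VARCHAR(255) NOT NULL,
    name VARCHAR(100) NOT NULL,
    UNIQUE KEY uk_mobile (mobile),   -- lookups by mobile still hit a unique index
    UNIQUE KEY uk_email (email)      -- same for email
) ENGINE = InnoDB;

Searches by email or mobile remain index lookups, while the 5 or 6 referencing tables carry only a 4-byte key.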
MySQL
517,579
222
To find out the start command for mysqld (using a mac) I can do: ps aux|grep mysql I get the following output, which allows me to start mysql server. /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=... How would I find the necessary command to stop mysql from the command line?
Try:

/usr/local/mysql/bin/mysqladmin -u root -p shutdown

Or:

sudo mysqld stop

Or:

sudo /usr/local/mysql/bin/mysqld stop

Or:

sudo mysql.server stop

If you install the Launchctl in OSX you can try:

MacPorts

sudo launchctl unload -w /Library/LaunchDaemons/org.macports.mysql.plist
sudo launchctl load -w /Library/LaunchDaemons/org.macports.mysql.plist

Note: this is persistent after reboot.

Homebrew

launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist

Binary installer

sudo /Library/StartupItems/MySQLCOM/MySQLCOM stop
sudo /Library/StartupItems/MySQLCOM/MySQLCOM start
sudo /Library/StartupItems/MySQLCOM/MySQLCOM restart

I found that in: https://stackoverflow.com/a/102094/58768
MySQL
11,091,414
221
I have the following table schema which maps user_customers to permissions on a live MySQL database:

mysql> describe user_customer_permission;
+------------------+---------+------+-----+---------+----------------+
| Field            | Type    | Null | Key | Default | Extra          |
+------------------+---------+------+-----+---------+----------------+
| id               | int(11) | NO   | PRI | NULL    | auto_increment |
| user_customer_id | int(11) | NO   | PRI | NULL    |                |
| permission_id    | int(11) | NO   | PRI | NULL    |                |
+------------------+---------+------+-----+---------+----------------+
3 rows in set (0.00 sec)

I would like to remove the primary keys for user_customer_id and permission_id and retain the primary key for id. When I run the command:

alter table user_customer_permission drop primary key;

I get the following error:

ERROR 1075 (42000): Incorrect table definition; there can be only one auto column and it must be defined as a key

How can I drop a column's primary key?
Without an index, maintaining an autoincrement column becomes too expensive, that's why MySQL requires an autoincrement column to be a leftmost part of an index. You should remove the autoincrement property before dropping the key: ALTER TABLE user_customer_permission MODIFY id INT NOT NULL; ALTER TABLE user_customer_permission DROP PRIMARY KEY; Note that you have a composite PRIMARY KEY which covers all three columns and id is not guaranteed to be unique. If it happens to be unique, you can make it to be a PRIMARY KEY and AUTO_INCREMENT again: ALTER TABLE user_customer_permission MODIFY id INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
MySQL
2,111,291
221
create table check2(f1 varchar(20), f2 varchar(20));

creates a table with the default collation latin1_general_ci. Then:

alter table check2 collate latin1_general_cs;
show full columns from check2;

shows the individual collation of the columns as 'latin1_general_ci'. So what is the effect of the alter table command?
To change the default character set and collation of a table, including those of existing columns (note the CONVERT TO clause):

alter table <some_table> convert to character set utf8mb4 collate utf8mb4_unicode_ci;

Edited the answer, thanks to the prompting of some comments:

"Should avoid recommending utf8. It's almost never what you want, and often leads to unexpected messes. The utf8 character set is not fully compatible with UTF-8. The utf8mb4 character set is what you want if you want UTF-8." – Rich Remer Mar 28 '18 at 23:41

"That seems quite important, glad I read the comments and thanks @RichRemer. Nikki, I think you should edit that in your answer considering how many views this gets. See here https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8.html and here What is the difference between utf8mb4 and utf8 charsets in MySQL?" – Paulpro Mar 12 at 17:46
MySQL
742,205
221
I'm stumped, I don't know how to go about doing this. Basically I just want to create a table, but if it exists it needs to be dropped and re-created, not truncated, but if it doesn't exist just create it. Would anyone be able to help?
Just put DROP TABLE IF EXISTS `tablename`; before your CREATE TABLE statement. That statement drops the table if it exists but will not throw an error if it does not.
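So the whole recreate step is just the pair, e.g. (illustrative table definition):

DROP TABLE IF EXISTS tablename;   -- note: fails if another table's foreign key references it
CREATE TABLE tablename (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
);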
MySQL
20,155,989
220
I am continuously receiving this error. I am using MySQL Workbench, and from what I can find, root's schema privileges are null. There are no privileges at all. I am having trouble across the platforms my server is used for, and this has been a sudden issue.

[email protected] apparently has a lot of access, but I am logged in as that, and it just assigns to localhost anyway; localhost has no privileges. I have done a few things like FLUSH HOSTS, FLUSH PRIVILEGES, etc., but have found no success from that or the internet.

How can I get root its access back? I find this frustrating because when I look around, people expect you to "have access", but I don't have access, so I can't go into the command line or anything and GRANT myself anything.

When running SHOW GRANTS FOR root, this is what I get in return:

Error Code: 1141. There is no such grant defined for user 'root' on host '%'
If you have that same problem in MySQL 5.7.+:

Access denied for user 'root'@'localhost'

it's because MySQL 5.7 by default allows connecting with a socket, which means you just connect with sudo mysql. If you run the SQL:

SELECT user, authentication_string, plugin, host FROM mysql.user;

then you will see it:

+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *497C3D7B50479A812B89CD12EC3EDA6C0CB686F0 | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

To allow connection with root and a password, update the values in the table with the command:

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'Current-Root-Password';
FLUSH PRIVILEGES;

Then run the SELECT command again and you'll see it has changed:

+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             | *2F2377C1BC54BE827DC8A4EE051CBD57490FB8C6 | mysql_native_password | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *497C3D7B50479A812B89CD12EC3EDA6C0CB686F0 | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
4 rows in set (0.00 sec)

And that's it. You can run this process after running and completing the sudo mysql_secure_installation command.

For MariaDB, use

SET PASSWORD FOR 'root'@'localhost' = PASSWORD('manager');

to set the password. More at https://mariadb.com/kb/en/set-password/
MySQL
17,975,120
220
So here's what I want to do on my MySQL database. I would like to do:

SELECT * FROM itemsOrdered WHERE purchaseOrder_ID = '@purchaseOrdered_ID' AND status = 'PENDING'

If that would not return any rows, which is possible through if(dr.HasRows == false), I would now create an UPDATE in the purchaseOrder database:

UPDATE purchaseOrder SET purchaseOrder_status = 'COMPLETED' WHERE purchaseOrder_ID = '@purchaseOrder_ID'

How would I be able to make this process a little shorter?
For your specific query, you can do:

UPDATE purchaseOrder
SET purchaseOrder_status = 'COMPLETED'
WHERE purchaseOrder_ID = '@purchaseOrder_ID'
  AND NOT EXISTS (SELECT *
                  FROM itemsOrdered
                  WHERE purchaseOrder_ID = '@purchaseOrdered_ID'
                    AND status = 'PENDING'
                 )

However, I might guess that you are looping at a higher level. To set all such values, try this:

UPDATE purchaseOrder
SET purchaseOrder_status = 'COMPLETED'
WHERE NOT EXISTS (SELECT 1
                  FROM itemsOrdered
                  WHERE itemsOrdered.purchaseOrder_ID = purchaseOrder.purchaseOrder_ID
                    AND status = 'PENDING'
                  LIMIT 1
                 )
MySQL
13,991,817
220
Our previous programmer set the wrong collation in a table (MySQL). He set it up with a Latin collation when it should be UTF-8, and now I have issues. Every record with Chinese and Japanese characters turns into ??? characters. Is it possible to change the collation and get the original characters back?
change database collation:

ALTER DATABASE <database_name> CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

change table collation:

ALTER TABLE <table_name> CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

change column collation:

ALTER TABLE <table_name> MODIFY <column_name> VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;

What do the parts of utf8mb4_0900_ai_ci mean?

3 bytes -- utf8
4 bytes -- utf8mb4 (new)
v4.0 -- _unicode_
v5.20 -- _unicode_520_
v9.0 -- _0900_ (new)
_bin -- just compare the bits; don't consider case folding, accents, etc
_ci -- explicitly case insensitive (A=a) and implicitly accent insensitive (a=á)
_ai_ci -- explicitly case insensitive and accent insensitive
_as (etc) -- accent-sensitive (etc)
_bin -- simple, fast
_general_ci -- fails to compare multiletters; eg ss=ß; somewhat fast
... -- slower
_0900_ -- (8.0) much faster because of a rewrite

More info:

What are the differences between utf8_general_ci and utf8_unicode_ci?
What's the difference between utf8_general_ci and utf8_unicode_ci?
How to change collation of database, table, column?
MySQL
5,906,585
220
Well, here's my problem. I have three tables: regions, countries, states. Countries can be inside of regions, states can be inside of regions. Regions are the top of the food chain.

Now I'm adding a popular_areas table with two columns: region_id and popular_place_id. Is it possible to make popular_place_id be a foreign key to either countries OR states? I'm probably going to have to add a popular_place_type column to determine whether the id is describing a country or a state either way.
What you're describing is called Polymorphic Associations. That is, the "foreign key" column contains an id value that must exist in one of a set of target tables. Typically the target tables are related in some way, such as being instances of some common superclass of data. You'd also need another column alongside the foreign key column, so that on each row, you can designate which target table is referenced.

CREATE TABLE popular_places (
  user_id INT NOT NULL,
  place_id INT NOT NULL,
  place_type VARCHAR(10)  -- either 'states' or 'countries'
  -- foreign key is not possible
);

There's no way to model Polymorphic Associations using SQL constraints. A foreign key constraint always references one target table.

Polymorphic Associations are supported by frameworks such as Rails and Hibernate. But they explicitly say that you must disable SQL constraints to use this feature. Instead, the application or framework must do equivalent work to ensure that the reference is satisfied. That is, the value in the foreign key is present in one of the possible target tables.

Polymorphic Associations are weak with respect to enforcing database consistency. The data integrity depends on all clients accessing the database with the same referential integrity logic enforced, and also the enforcement must be bug-free.

Here are some alternative solutions that do take advantage of database-enforced referential integrity:

Create one extra table per target. For example popular_states and popular_countries, which reference states and countries respectively. Each of these "popular" tables also references the user's profile.

CREATE TABLE popular_states (
  state_id INT NOT NULL,
  user_id INT NOT NULL,
  PRIMARY KEY (state_id, user_id),
  FOREIGN KEY (state_id) REFERENCES states(state_id),
  FOREIGN KEY (user_id) REFERENCES users(user_id)
);

CREATE TABLE popular_countries (
  country_id INT NOT NULL,
  user_id INT NOT NULL,
  PRIMARY KEY (country_id, user_id),
  FOREIGN KEY (country_id) REFERENCES countries(country_id),
  FOREIGN KEY (user_id) REFERENCES users(user_id)
);

This does mean that to get all of a user's popular favorite places you need to query both of these tables. But it means you can rely on the database to enforce consistency.

Create a places table as a supertable. As Abie mentions, a second alternative is that your popular places reference a table like places, which is a parent to both states and countries. That is, both states and countries also have a foreign key to places (you can even make this foreign key also be the primary key of states and countries).

CREATE TABLE popular_areas (
  user_id INT NOT NULL,
  place_id INT NOT NULL,
  PRIMARY KEY (user_id, place_id),
  FOREIGN KEY (place_id) REFERENCES places(place_id)
);

CREATE TABLE states (
  state_id INT NOT NULL PRIMARY KEY,
  FOREIGN KEY (state_id) REFERENCES places(place_id)
);

CREATE TABLE countries (
  country_id INT NOT NULL PRIMARY KEY,
  FOREIGN KEY (country_id) REFERENCES places(place_id)
);

Use two columns. Instead of one column that may reference either of two target tables, use two columns. These two columns may be NULL; in fact only one of them should be non-NULL.
CREATE TABLE popular_areas (
  place_id SERIAL PRIMARY KEY,
  user_id INT NOT NULL,
  state_id INT,
  country_id INT,
  CONSTRAINT UNIQUE (user_id, state_id, country_id),  -- UNIQUE permits NULLs
  CONSTRAINT CHECK (state_id IS NOT NULL OR country_id IS NOT NULL),
  FOREIGN KEY (state_id) REFERENCES states(state_id),
  FOREIGN KEY (country_id) REFERENCES countries(country_id)
);

In terms of relational theory, Polymorphic Associations violates First Normal Form, because the popular_place_id is in effect a column with two meanings: it's either a state or a country. You wouldn't store a person's age and their phone_number in a single column, and for the same reason you shouldn't store both state_id and country_id in a single column. The fact that these two attributes have compatible data types is coincidental; they still signify different logical entities.

Polymorphic Associations also violates Third Normal Form, because the meaning of the column depends on the extra column which names the table to which the foreign key refers. In Third Normal Form, an attribute in a table must depend only on the primary key of that table.

Re comment from @SavasVedova:

I'm not sure I follow your description without seeing the table definitions or an example query, but it sounds like you simply have multiple Filters tables, each containing a foreign key that references a central Products table.

CREATE TABLE Products (
  product_id INT PRIMARY KEY
);

CREATE TABLE FiltersType1 (
  filter_id INT PRIMARY KEY,
  product_id INT NOT NULL,
  FOREIGN KEY (product_id) REFERENCES Products(product_id)
);

CREATE TABLE FiltersType2 (
  filter_id INT PRIMARY KEY,
  product_id INT NOT NULL,
  FOREIGN KEY (product_id) REFERENCES Products(product_id)
);

...and other filter tables...

Joining the products to a specific type of filter is easy if you know which type you want to join to:

SELECT * FROM Products
INNER JOIN FiltersType2 USING (product_id)

If you want the filter type to be dynamic, you must write application code to construct the SQL query. SQL requires that the table be specified and fixed at the time you write the query. You can't make the joined table be chosen dynamically based on the values found in individual rows of Products.

The only other option is to join to all filter tables using outer joins. Those that have no matching product_id will just be returned as a single row of nulls. But you still have to hardcode all the joined tables, and if you add new filter tables, you have to update your code.

SELECT * FROM Products
LEFT OUTER JOIN FiltersType1 USING (product_id)
LEFT OUTER JOIN FiltersType2 USING (product_id)
LEFT OUTER JOIN FiltersType3 USING (product_id)
...

Another way to join to all filter tables is to do it serially:

SELECT * FROM Products
INNER JOIN FiltersType1 USING (product_id)
UNION ALL
SELECT * FROM Products
INNER JOIN FiltersType2 USING (product_id)
UNION ALL
SELECT * FROM Products
INNER JOIN FiltersType3 USING (product_id)
...

But this format still requires you to write references to all tables. There's no getting around that.
MySQL
441,001
220