Dataset columns: question (string, 11 to 28.2k chars), answer (string, 26 to 27.7k chars), tag (130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k).
I am evaluating what might be the best migration option. Currently, I am on a sharded MySQL (horizontal partition), with most of my data stored in JSON blobs. I do not have any complex SQL queries (I already migrated away from them after I partitioned my db). Right now, it seems like both MongoDB and Cassandra would be likely options. My situation:
Lots of reads in every query, fewer regular writes
Not worried about "massive" scalability
More concerned about simple setup, maintenance and code
Minimize hardware/server cost
Lots of reads in every query, fewer regular writes
Both databases perform well on reads where the hot data set fits in memory. Both also emphasize join-less data models (and encourage denormalization instead), and both provide indexes on documents or rows, although MongoDB's indexes are currently more flexible. Cassandra's storage engine provides constant-time writes no matter how big your data set grows. Writes are more problematic in MongoDB, partly because of the b-tree based storage engine, but more because of the multi-granularity locking it does.
For analytics, MongoDB provides a custom map/reduce implementation; Cassandra provides native Hadoop support, including for Hive (a SQL data warehouse built on Hadoop map/reduce) and Pig (a Hadoop-specific analysis language that many think is a better fit for map/reduce workloads than SQL). Cassandra also supports use of Spark.
Not worried about "massive" scalability
If you're looking at a single server, MongoDB is probably a better fit. For those more concerned about scaling, Cassandra's no-single-point-of-failure architecture will be easier to set up and more reliable. (MongoDB's global write lock tends to become more painful, too.) Cassandra also gives a lot more control over how your replication works, including support for multiple data centers.
More concerned about simple setup, maintenance and code
Both are trivial to set up, with reasonable out-of-the-box defaults for a single server. Cassandra is simpler to set up in a multi-server configuration since there are no special-role nodes to worry about.
If you're presently using JSON blobs, MongoDB is an insanely good match for your use case, given that it uses BSON to store the data. You'll be able to have richer and more queryable data than you would in your present database. This would be the most significant win for Mongo.
MongoDB
2,892,729
764
I'm looking for an operator that lets me check whether the value of a field contains a certain string. Something like:
db.users.findOne({$contains:{"username":"son"}})
Is that possible?
You can do it with the following code.
db.users.findOne({"username" : {$regex : "son"}});
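If you need the match to be case-insensitive as well, a minimal variant (a sketch using the standard $options regex flags) is:
db.users.findOne({ "username": { $regex: "son", $options: "i" } });
Keep in mind that only regexes anchored at the start of the string (e.g. /^son/) can use an index efficiently; a substring match like this scans the candidate values.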
MongoDB
10,610,131
749
If I have this schema...
person = {
  name : String,
  favoriteFoods : Array
}
... where the favoriteFoods array is populated with strings. How can I find all persons that have "sushi" as their favorite food using mongoose? I was hoping for something along the lines of:
PersonModel.find({ favoriteFoods: { $contains: "sushi" } }, function(...) {...});
(I know that there is no $contains in mongodb, just explaining what I was expecting to find before knowing the solution)
As favoriteFoods is a simple array of strings, you can just query that field directly:
PersonModel.find({ favoriteFoods: "sushi" }, ...); // favoriteFoods contains "sushi"
But I'd also recommend making the string array explicit in your schema:
person = {
  name : String,
  favoriteFoods : [String]
}
The relevant documentation can be found here: https://docs.mongodb.com/manual/tutorial/query-arrays/
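If you later need to match on more than one food at once, a minimal sketch using the $all operator (the second food here is just a hypothetical value) would be:
PersonModel.find({ favoriteFoods: { $all: ["sushi", "ramen"] } }, function(err, people) {
  // people whose favoriteFoods contains BOTH "sushi" and "ramen"
});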
MongoDB
18,148,166
722
I am stuck between these two NoSQL databases. In my project, I will be creating a database within a database. For example, I need a solution to create dynamic tables. So users can create tables with columns and rows. I think either MongoDB or CouchDB will be good for this, but I am not sure which one. I will also need efficient paging as well.
Of C, A & P (Consistency, Availability & Partition tolerance), which two are more important to you? Quick reference: the Visual Guide To NoSQL Systems.
MongoDB: Consistency and Partition Tolerance
CouchDB: Availability and Partition Tolerance
A blog post, Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase vs Membase vs Neo4j comparison, has 'Best used' scenarios for each NoSQL database compared. Quoting the link:
MongoDB: If you need dynamic queries. If you prefer to define indexes, not map/reduce functions. If you need good performance on a big DB. If you wanted CouchDB, but your data changes too much, filling up disks.
CouchDB: For accumulating, occasionally changing data, on which pre-defined queries are to be run. Places where versioning is important.
A recent (Feb 2012) and more comprehensive comparison by Riyad Kalla:
MongoDB: Master-Slave Replication ONLY
CouchDB: Master-Master Replication
A blog post (Oct 2011) by someone who tried both, A MongoDB Guy Learns CouchDB, commented on CouchDB's paging being not as useful. There is also a dated (Jun 2009) benchmark by Kristina Chodorow (part of the team behind MongoDB).
I'd go for MongoDB.
MongoDB
12,437,790
692
I can't find this documented anywhere. By default, the find() operation will get the records from the beginning. How can I get the last N records in mongodb?
Edit: I also want the returned result ordered from less recent to most recent, not the reverse.
If I understand your question, you need to sort in ascending order.
Assuming you have some id or date field called "x", you would do:
.sort()
db.foo.find().sort({x:1});
The 1 will sort ascending (oldest to newest) and -1 will sort descending (newest to oldest).
If you use the auto-created _id field, it has a date embedded in it... so you can use that to order by:
db.foo.find().sort({_id:1});
That will return back all your documents sorted from oldest to newest.
Natural Order
You can also use a Natural Order mentioned above:
db.foo.find().sort({$natural:1});
Again, using 1 or -1 depending on the order you want.
Use .limit()
Lastly, it's good practice to add a limit when doing this sort of wide-open query, so you could do either:
db.foo.find().sort({_id:1}).limit(50);
or
db.foo.find().sort({$natural:1}).limit(50);
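To address the question's edit (the last N records, but returned oldest-to-newest), one approach is to sort descending, limit, then re-sort ascending in an aggregation pipeline; a sketch, assuming _id order reflects insertion order:
db.foo.aggregate([
  { $sort: { _id: -1 } },  // newest first
  { $limit: 50 },          // keep only the last 50
  { $sort: { _id: 1 } }    // present them oldest-to-newest
]);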
MongoDB
4,421,207
692
I want to design a question structure with some comments. Which relationship should I use for comments: embed or reference?
A question with some comments, like stackoverflow, would have a structure like this:
Question
  title = 'aaa'
  content = 'bbb'
  comments = ???
At first, I thought of using embedded comments (I think embed is recommended in MongoDB), like this:
Question
  title = 'aaa'
  content = 'bbb'
  comments = [
    { content = 'xxx', createdAt = 'yyy' },
    { content = 'xxx', createdAt = 'yyy' },
    { content = 'xxx', createdAt = 'yyy' }
  ]
It is clear, but I'm worried about this case: if I want to edit a specified comment, how do I get its content and its question? There is no _id to let me find one, nor question_ref to let me find its question. (Is there perhaps a way to do this without _id and question_ref?)
Do I have to use ref rather than embed? Do I then have to create a new collection for comments?
This is more an art than a science. The Mongo Documentation on Schemas is a good reference, but here are some things to consider:
Put as much in as possible
The joy of a document database is that it eliminates lots of joins. Your first instinct should be to place as much in a single document as you can. Because MongoDB documents have structure, and because you can efficiently query within that structure (this means that you can take the part of the document that you need, so document size shouldn't worry you much), there is no immediate need to normalize data like you would in SQL. In particular, any data that is not useful apart from its parent document should be part of the same document.
Separate data that can be referred to from multiple places into its own collection.
This is not so much a "storage space" issue as it is a "data consistency" issue. If many records will refer to the same data, it is more efficient and less error-prone to update a single record and keep references to it in other places.
Document size considerations
MongoDB imposes a 4MB (16MB with 1.8) size limit on a single document. In a world of GBs of data this sounds small, but it is also 30 thousand tweets, or 250 typical Stack Overflow answers, or 20 Flickr photos. On the other hand, this is far more information than one might want to present at one time on a typical web page. First consider what will make your queries easier. In many cases, concern about document sizes will be premature optimization.
Complex data structures
MongoDB can store arbitrarily deep nested data structures, but cannot search them efficiently. If your data forms a tree, forest or graph, you effectively need to store each node and its edges in a separate document. (Note that there are data stores specifically designed for this type of data that one should consider as well.)
It has also been pointed out that it is impossible to return a subset of elements in a document. If you need to pick and choose a few bits of each document, it will be easier to separate them out.
Data consistency
MongoDB makes a trade-off between efficiency and consistency. The rule is: changes to a single document are always atomic, while updates to multiple documents should never be assumed to be atomic. There is also no way to "lock" a record on the server (you can build this into the client's logic, using for example a "lock" field). When you design your schema, consider how you will keep your data consistent. Generally, the more that you keep in a document, the better.
For what you are describing, I would embed the comments and give each comment an id field with an ObjectID. The ObjectID has a timestamp embedded in it, so you can use that instead of createdAt if you like.
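To make the embedded-comments suggestion concrete, here is a minimal sketch (collection and variable names are illustrative, not from the question): each comment carries its own ObjectId, and the positional $ operator then updates one comment in place.
db.questions.insert({
  title: "aaa",
  content: "bbb",
  comments: [ { _id: ObjectId(), content: "xxx", createdAt: new Date() } ]
});
// edit a single comment later, finding it by its own _id:
db.questions.update(
  { "comments._id": someCommentId },   // someCommentId is a placeholder ObjectId
  { $set: { "comments.$.content": "edited content" } }
);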
MongoDB
5,373,198
638
There's a typo in my MongoDB database name and I'm looking to rename the database. I can copy and delete like so... db.copyDatabase('old_name', 'new_name'); use old_name db.dropDatabase(); Is there a command to rename a database?
You could do this, if you're using MongoDB < 4.2 (ref):
db.copyDatabase("db_to_rename", "db_renamed", "localhost")
use db_to_rename
db.dropDatabase();
Editorial note: this is the same approach used in the question itself but has proven useful to others regardless.
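For MongoDB 4.2+, where copyDatabase has been removed, one commonly used alternative (a sketch; verify the flags against your tool versions) is a dump-and-restore with namespace remapping:
mongodump --db db_to_rename
mongorestore --nsFrom "db_to_rename.*" --nsTo "db_renamed.*" dump/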
MongoDB
9,201,832
601
Is there a way to tell Mongo to pretty print output? Currently, everything is output to a single line and it's difficult to read, especially with nested arrays and documents.
(Note: this is an answer to the original version of the question, which did not ask for a "default" setting.)
You can ask it to be pretty:
db.collection.find().pretty()
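If you do want pretty output by default in the legacy mongo shell, a commonly cited trick (an assumption about your shell version; mongosh pretty-prints by default) is to add this line to ~/.mongorc.js:
DBQuery.prototype._prettyShell = true;  // make .find() output pretty by default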
MongoDB
9,146,123
575
I've been playing around storing tweets inside mongodb; each object looks like this:
{
   "_id" : ObjectId("4c02c58de500fe1be1000005"),
   "contributors" : null,
   "text" : "Hello world",
   "user" : {
        "following" : null,
        "followers_count" : 5,
        "utc_offset" : null,
        "location" : "",
        "profile_text_color" : "000000",
        "friends_count" : 11,
        "profile_link_color" : "0000ff",
        "verified" : false,
        "protected" : false,
        "url" : null,
        "contributors_enabled" : false,
        "created_at" : "Sun May 30 18:47:06 +0000 2010",
        "geo_enabled" : false,
        "profile_sidebar_border_color" : "87bc44",
        "statuses_count" : 13,
        "favourites_count" : 0,
        "description" : "",
        "notifications" : null,
        "profile_background_tile" : false,
        "lang" : "en",
        "id" : 149978111,
        "time_zone" : null,
        "profile_sidebar_fill_color" : "e0ff92"
   },
   "geo" : null,
   "coordinates" : null,
   "in_reply_to_user_id" : 149183152,
   "place" : null,
   "created_at" : "Sun May 30 20:07:35 +0000 2010",
   "source" : "web",
   "in_reply_to_status_id" : { "floatApprox" : 15061797850 },
   "truncated" : false,
   "favorited" : false,
   "id" : { "floatApprox" : 15061838001 }
}
How would I write a query which checks the created_at and finds all objects between 18:47 and 19:00? Do I need to update my documents so the dates are stored in a specific format?
Querying for a Date Range (Specific Month or Day) in the MongoDB Cookbook has a very good explanation on the matter, but below is something I tried out myself, and it seems to work.
items.save({ name: "example", created_at: ISODate("2010-04-30T00:00:00.000Z") })
items.find({ created_at: { $gte: ISODate("2010-04-29T00:00:00.000Z"), $lt: ISODate("2010-05-01T00:00:00.000Z") } })
=> { "_id" : ObjectId("4c0791e2b9ec877893f3363b"), "name" : "example", "created_at" : "Sun May 30 2010 00:00:00 GMT+0300 (EEST)" }
Based on my experiments, you will need to serialize your dates into a format that MongoDB supports, because the following gave undesired search results:
items.save({ name: "example", created_at: "Sun May 30 18.49:00 +0000 2010" })
items.find({ created_at: { $gte: "Mon May 30 18:47:00 +0000 2015", $lt: "Sun May 30 20:40:36 +0000 2010" } })
=> { "_id" : ObjectId("4c079123b9ec877893f33638"), "name" : "example", "created_at" : "Sun May 30 18.49:00 +0000 2010" }
In the second example, no results were expected, yet one was still returned. This is because a basic string comparison is done.
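Applied to the tweet question above, a sketch of the 18:47 to 19:00 window query (assuming created_at has been converted to real ISODate values and the collection is named tweets) would be:
db.tweets.find({
  created_at: {
    $gte: ISODate("2010-05-30T18:47:00Z"),
    $lt:  ISODate("2010-05-30T19:00:00Z")
  }
});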
MongoDB
2,943,222
560
I am using my new mac for the first time today. I am following the get started guide on mongodb.org up until the step where one creates the /data/db directory. (btw, I used the homebrew route.) So I open a terminal, and I think I am at what you call the home directory, for when I do "ls", I see folders of Desktop, Applications, Movies, Music, Pictures, Documents and Library. So I did a mkdir -p /data/db first; it says permission denied. I kept trying different things for half an hour and finally mkdir -p data/db worked, and when I "ls", a directory of data and nested in it a db folder do exist. Then I fire up mongod and it complains about not finding data/db.
Have I done something wrong? Now I have done sudo mkdir -p /data/db, and when I do "ls" I do see the data dir and the db dir. Inside the db dir, though, there is absolutely nothing, and when I now run mongod:
Sun Oct 30 19:35:19 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied
Is a mongod instance already running?, terminating
Sun Oct 30 19:35:19 dbexit:
Sun Oct 30 19:35:19 [initandlisten] shutdown: going to close listening sockets...
Sun Oct 30 19:35:19 [initandlisten] shutdown: going to flush diaglog...
Sun Oct 30 19:35:19 [initandlisten] shutdown: going to close sockets...
Sun Oct 30 19:35:19 [initandlisten] shutdown: waiting for fs preallocator...
Sun Oct 30 19:35:19 [initandlisten] shutdown: lock for final commit...
Sun Oct 30 19:35:19 [initandlisten] shutdown: final commit...
Sun Oct 30 19:35:19 [initandlisten] shutdown: closing all files...
Sun Oct 30 19:35:19 [initandlisten] closeAllFiles() finished
Sun Oct 30 19:35:19 [initandlisten] shutdown: removing fs lock...
Sun Oct 30 19:35:19 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor
Sun Oct 30 19:35:19 dbexit: really exiting now
EDIT: I'm getting an error message for sudo chown mongod:mongod /data/db:
chown: mongod: Invalid argument
Thanks, everyone!
You created the directory in the wrong place
/data/db means that it's directly under the '/' root directory, whereas you created 'data/db' (without the leading /) probably just inside another directory, such as the '/root' home directory.
You need to create this directory as root
Either you need to use sudo, e.g. sudo mkdir -p /data/db
Or you need to do su - to become superuser, and then create the directory with mkdir -p /data/db
Note: MongoDB also has an option where you can create the data directory in another location, but that's generally not a good idea, because it slightly complicates things such as DB recovery, since you always have to specify the db-path manually. I wouldn't recommend doing that.
Edit: the error message you're getting is "Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied". The directory you created doesn't seem to have the correct permissions and ownership -- it needs to be writable by the user who runs the MongoDB process.
To see the permissions and ownership of the '/data/db/' directory, do this (this is what the permissions and ownership should look like):
$ ls -ld /data/db/
drwxr-xr-x 4 mongod mongod 4096 Oct 26 10:31 /data/db/
The left side 'drwxr-xr-x' shows the permissions for the User, Group, and Others. 'mongod mongod' shows who owns the directory, and which group that directory belongs to. Both are called 'mongod' in this case.
If your '/data/db' directory doesn't have the permissions and ownership above, do this:
First check what user and group your mongo user has:
# grep mongo /etc/passwd
mongod:x:498:496:mongod:/var/lib/mongo:/bin/false
You should have an entry for mongod in /etc/passwd, as it's a daemon.
sudo chmod 0755 /data/db
sudo chown -R 498:496 /data/db    # using the user-id, group-id
You can also use the user-name and group-name, as follows (they can be found in /etc/passwd and /etc/group):
sudo chown -R mongod:mongod /data/db
That should make it work.
In the comments below, some people used this:
sudo chown -R `id -u` /data/db
sudo chmod -R go+w /data/db
or
sudo chown -R $USER /data/db
sudo chmod -R go+w /data/db
The disadvantage is that $USER is an account which has a login shell. Daemons should ideally not have a shell, for security reasons; that's why you see /bin/false in the grep of the passwd file above.
Check here to better understand the meaning of the directory permissions: http://www.perlfect.com/articles/chmod.shtml
Maybe also check out one of the tutorials you can find via Google: "UNIX for beginners"
MongoDB
7,948,789
550
In MongoDB, is it possible to update the value of a field using the value from another field? The equivalent SQL would be something like:
UPDATE Person SET Name = FirstName + ' ' + LastName
And the MongoDB pseudo-code would be:
db.person.update( {}, { $set : { name : firstName + ' ' + lastName } } );
The best way to do this is in version 4.2+, which allows using the aggregation pipeline in the update document and the updateOne, updateMany, or update (deprecated in most if not all language drivers) collection methods.
MongoDB 4.2+
Version 4.2 also introduced the $set pipeline stage operator, which is an alias for $addFields. I will use $set here as it maps to what we are trying to achieve.
db.collection.<update method>(
  {},
  [ { "$set": { "name": { "$concat": [ "$firstName", " ", "$lastName" ] } } } ]
)
Note that the square brackets in the second argument to the method specify an aggregation pipeline instead of a plain update document, because using a simple document will not work correctly.
MongoDB 3.4+
In 3.4+, you can use the $addFields and $out aggregation pipeline operators.
db.collection.aggregate(
  [
    { "$addFields": { "name": { "$concat": [ "$firstName", " ", "$lastName" ] } } },
    { "$out": <output collection name> }
  ]
)
Note that this does not update your collection but instead replaces the existing collection or creates a new one. Also, for update operations that require "typecasting", you will need client-side processing, and depending on the operation, you may need to use the find() method instead of the .aggregate() method.
MongoDB 3.2 and 3.0
The way we do this is by $projecting our documents and using the $concat string aggregation operator to return the concatenated string. We then iterate the cursor and use the $set update operator to add the new field to the documents, using bulk operations for maximum efficiency.
Aggregation query:
var cursor = db.collection.aggregate([
  { "$project": { "name": { "$concat": [ "$firstName", " ", "$lastName" ] } } }
])
MongoDB 3.2 or newer
You need to use the bulkWrite method.
var requests = [];
cursor.forEach(document => {
  requests.push({
    'updateOne': {
      'filter': { '_id': document._id },
      'update': { '$set': { 'name': document.name } }
    }
  });
  if (requests.length === 500) {
    // Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = [];
  }
});
if (requests.length > 0) {
  db.collection.bulkWrite(requests);
}
MongoDB 2.6 and 3.0
From this version, you need to use the now-deprecated Bulk API and its associated methods.
var bulk = db.collection.initializeUnorderedBulkOp();
var count = 0;
cursor.snapshot().forEach(function(document) {
  bulk.find({ '_id': document._id }).updateOne({
    '$set': { 'name': document.name }
  });
  count++;
  if (count % 500 === 0) {
    // Execute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
  }
})
// clean up queues
if (count > 0) {
  bulk.execute();
}
MongoDB 2.4
cursor["result"].forEach(function(document) {
  db.collection.update(
    { "_id": document._id },
    { "$set": { "name": document.name } }
  );
})
MongoDB
3,974,985
545
I'm doing development on MongoDB. For totally non-evil purposes, I sometimes want to blow away everything in a database—that is, to delete every single collection, and whatever else might be lying around, and start from scratch. Is there a single line of code that will let me do this? Bonus points for giving both a MongoDB console method and a MongoDB Ruby driver method.
In the mongo shell:
use [database];
db.dropDatabase();
And to remove the users:
db.dropAllUsers();
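For the bonus points, a sketch with the modern (2.x) Ruby driver (these names assume that driver's API rather than the older Connection class):
require 'mongo'
client = Mongo::Client.new(['127.0.0.1:27017'], database: 'test')
client.database.drop  # drops the whole 'test' database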
MongoDB
3,366,397
539
We offer a platform for video and audio clips, photos and vector graphics. We started with MySQL as the database backend and recently included MongoDB for storing all meta-information of the files, because MongoDB better fits the requirements. For example: photos may have Exif information, videos may have audio tracks whose meta-information we also want to store. Videos and vector graphics don't share any common meta-information, etc., so I know that MongoDB is perfect to store this unstructured data and keep it searchable. However, we continue developing our platform and adding features. Now one of the next steps will be providing a forum for our users. The question that now arises is: use the MySQL database, which would be a good choice for storing forums and forum posts, etc., or use MongoDB for this, too? So the question is: when to use MongoDB and when to use an RDBMS. What would you take, MongoDB or MySQL, if you had the choice, and why would you take it?
In NoSQL: If Only It Was That Easy, the author writes about MongoDB: MongoDB is not a key/value store, it’s quite a bit more. It’s definitely not a RDBMS either. I haven’t used MongoDB in production, but I have used it a little building a test app and it is a very cool piece of kit. It seems to be very performant and either has, or will have soon, fault tolerance and auto-sharding (aka it will scale). I think Mongo might be the closest thing to a RDBMS replacement that I’ve seen so far. It won’t work for all data sets and access patterns, but it’s built for your typical CRUD stuff. Storing what is essentially a huge hash, and being able to select on any of those keys, is what most people use a relational database for. If your DB is 3NF and you don’t do any joins (you’re just selecting a bunch of tables and putting all the objects together, AKA what most people do in a web app), MongoDB would probably kick ass for you. Then, in the conclusion: The real thing to point out is that if you are being held back from making something super awesome because you can’t choose a database, you are doing it wrong. If you know mysql, just use it. Optimize when you actually need to. Use it like a k/v store, use it like a rdbms, but for god sake, build your killer app! None of this will matter to most apps. Facebook still uses MySQL, a lot. Wikipedia uses MySQL, a lot. FriendFeed uses MySQL, a lot. NoSQL is a great tool, but it’s certainly not going to be your competitive edge, it’s not going to make your app hot, and most of all, your users won’t care about any of this. What am I going to build my next app on? Probably Postgres. Will I use NoSQL? Maybe. I might also use Hadoop and Hive. I might keep everything in flat files. Maybe I’ll start hacking on Maglev. I’ll use whatever is best for the job. If I need reporting, I won’t be using any NoSQL. If I need caching, I’ll probably use Tokyo Tyrant. If I need ACIDity, I won’t use NoSQL. If I need a ton of counters, I’ll use Redis. If I need transactions, I’ll use Postgres. If I have a ton of a single type of documents, I’ll probably use Mongo. If I need to write 1 billion objects a day, I’d probably use Voldemort. If I need full text search, I’d probably use Solr. If I need full text search of volatile data, I’d probably use Sphinx. I like this article, I find it very informative, it gives a good overview of the NoSQL landscape and hype. But, and that's the most important part, it really helps to ask yourself the right questions when it comes to choose between RDBMS and NoSQL. Worth the read IMHO. Alternate link to article
MongoDB
1,476,295
531
Perhaps it's the time, perhaps it's me drowning in sparse documentation and not being able to wrap my head around the concept of updating in Mongoose :)
Here's the deal: I have a contact schema and model (shortened properties):
var mongoose = require('mongoose'),
    Schema = mongoose.Schema;
var mongooseTypes = require("mongoose-types"),
    useTimestamps = mongooseTypes.useTimestamps;
var ContactSchema = new Schema({
  phone: { type: String, index: { unique: true, dropDups: true } },
  status: { type: String, lowercase: true, trim: true, default: 'on' }
});
ContactSchema.plugin(useTimestamps);
var Contact = mongoose.model('Contact', ContactSchema);
I receive a request from the client, containing the fields I need, and use my model thusly:
mongoose.connect(connectionString);
var contact = new Contact({
  phone: request.phone,
  status: request.status
});
And now we reach the problem:
If I call contact.save(function(err){...}), I'll receive an error if a contact with the same phone number already exists (as expected - unique).
I can't call update() on contact, since that method does not exist on a document.
If I call update on the model: Contact.update({phone:request.phone}, contact, {upsert: true}, function(err){...}), I get into an infinite loop of some sort, since the Mongoose update implementation clearly doesn't want an object as the second parameter.
If I do the same, but in the second parameter I pass an associative array of the request properties {status: request.status, phone: request.phone ...}, it works - but then I have no reference to the specific contact and cannot find out its createdAt and updatedAt properties.
So the bottom line, after all I tried: given a document contact, how do I update it if it exists, or add it if it doesn't?
Thanks for your time.
Mongoose now supports this natively with findOneAndUpdate (which calls MongoDB's findAndModify). Setting the upsert option to true creates the object if it doesn't exist; it defaults to false.
var query = { 'username': req.user.username };
req.newData.username = req.user.username;
MyModel.findOneAndUpdate(query, req.newData, { upsert: true }, function(err, doc) {
  if (err) return res.send(500, { error: err });
  return res.send('Successfully saved.');
});
In older versions, Mongoose does not support these hooks with this method:
defaults
setters
validators
middleware
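If you rely on schema defaults during upserts, later Mongoose versions accept extra options; a sketch (option names assume Mongoose 4.x semantics):
MyModel.findOneAndUpdate(
  query,
  req.newData,
  { upsert: true, new: true, setDefaultsOnInsert: true },  // new:true returns the updated doc
  function(err, doc) {
    if (err) return res.send(500, { error: err });
    return res.send('Successfully saved.');
  }
);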
MongoDB
7,267,102
510
I want to execute mongo commands in a shell script, e.g. in a script test.sh:
#!/bin/sh
mongo myDbName
db.mycollection.findOne()
show collections
When I execute this script via ./test.sh, the connection to MongoDB is established, but the following commands are not executed. How can I execute other commands through the shell script test.sh?
You can also evaluate a command using the --eval flag, if it is just a single command.
mongo --eval "printjson(db.serverStatus())"
Please note: if you are using Mongo operators, starting with a $ sign, you'll want to surround the eval argument in single quotes to keep the shell from evaluating the operator as an environment variable:
mongo --eval 'db.mycollection.update({"name":"foo"},{$set:{"this":"that"}});' myDbName
Otherwise you may see something like this:
mongo --eval "db.test.update({\"name\":\"foo\"},{$set:{\"this\":\"that\"}});"
> E QUERY SyntaxError: Unexpected token :
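For more than one command, you can also pass a script file or a heredoc to the shell; a sketch (note that shell helpers like show collections don't work inside scripts, so use db.getCollectionNames() instead):
mongo myDbName /path/to/script.js
or
mongo myDbName <<EOF
printjson(db.mycollection.findOne())
printjson(db.getCollectionNames())
EOF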
MongoDB
4,837,673
496
{
  name: 'book',
  tags: {
    words: ['abc','123'],
    lat: 33,
    long: 22
  }
}
Suppose this is a document. How do I remove "words" completely from all the documents in this collection? I want all documents to be without "words":
{
  name: 'book',
  tags: {
    lat: 33,
    long: 22
  }
}
Try this (if your collection is named 'example'):
db.example.update({}, {$unset: {words: 1}}, false, true);
Refer to: http://www.mongodb.org/display/DOCS/Updating#Updating-%24unset
UPDATE: The above link no longer covers '$unset'ing. Be sure to add {multi: true} if you want to remove this field from all of the documents in the collection; otherwise, it will only remove it from the first document it finds that matches. See the updated documentation: https://docs.mongodb.com/manual/reference/operator/update/unset/
Example:
db.example.update({}, {$unset: {words: 1}}, {multi: true});
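On modern servers (3.2+), the same operation reads more clearly with updateMany, which is multi-document by definition; a sketch:
db.example.updateMany({}, { $unset: { words: "" } });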
MongoDB
6,851,933
487
Suppose you have the following documents in a collection:
{
  "_id": ObjectId("562e7c594c12942f08fe4192"),
  "shapes": [
    { "shape": "square", "color": "blue" },
    { "shape": "circle", "color": "red" }
  ]
},
{
  "_id": ObjectId("562e7c594c12942f08fe4193"),
  "shapes": [
    { "shape": "square", "color": "black" },
    { "shape": "circle", "color": "green" }
  ]
}
Doing the query:
db.test.find({"shapes.color": "red"}, {"shapes.color": 1})
Or
db.test.find({shapes: {"$elemMatch": {color: "red"}}}, {"shapes.color": 1})
returns the matched document (Document 1), but always with ALL array items in shapes:
{
  "shapes": [
    {"shape": "square", "color": "blue"},
    {"shape": "circle", "color": "red"}
  ]
}
However, I'd like to get the document (Document 1) with only the array element that contains color=red:
{
  "shapes": [
    {"shape": "circle", "color": "red"}
  ]
}
How can I do this?
MongoDB 2.2's new $elemMatch projection operator provides another way to alter the returned document to contain only the first matched shapes element:
db.test.find(
  {"shapes.color": "red"},
  {_id: 0, shapes: {$elemMatch: {color: "red"}}});
Returns:
{"shapes" : [{"shape": "circle", "color": "red"}]}
In 2.2 you can also do this using the $ projection operator, where the $ in a projection object field name represents the index of the field's first matching array element from the query. The following returns the same results as above:
db.test.find({"shapes.color": "red"}, {_id: 0, 'shapes.$': 1});
MongoDB 3.2 Update
Starting with the 3.2 release, you can use the new $filter aggregation operator to filter an array during projection, which has the benefit of including all matches, instead of just the first one.
db.test.aggregate([
  // Get just the docs that contain a shapes element where color is 'red'
  {$match: {'shapes.color': 'red'}},
  {$project: {
    shapes: {$filter: {
      input: '$shapes',
      as: 'shape',
      cond: {$eq: ['$$shape.color', 'red']}
    }},
    _id: 0
  }}
])
Results:
[ { "shapes" : [ { "shape" : "circle", "color" : "red" } ] } ]
MongoDB
3,985,214
485
I was wondering if anyone can tell me if MongoDB or CouchDB are ready for a production environment. I'm now looking at these storage solutions (I'm favouring MongoDB at the moment), however these projects are quite young and so I foresee that I'm going to have to work quite hard to convince my manager that we should adopt this new technology. What I'd like to know is: Who is using MongoDB or CouchDB today in a production environment? How are you using MongoDB/CouchDB? What problems (if any) did you come across when you adopted this new storage mechanism (and how did you overcome them)? How did you deal with any migration issues that you had to deal with? Do you have any good/bad experiences with either of these solutions that you'd like to share?
I'm the CTO of 10gen (developers of MongoDB), so I'm a bit biased, but I also manage a few sites that are using MongoDB in production.
Business Insider has been using mongo in production for over a year now. They are using it for everything from users and blog posts to every image on the site.
ShopWiki is using it for a few things, including real-time analytics and a caching layer. They are doing over 1,000 writes per second to a fairly large database.
If you go to the mongodb Production Deployments page, you'll see some people who are using mongo in production. If you have any questions about the scale or scope of production deployments, post on our user list and we'll be more than happy to help.
MongoDB
895,762
485
How can I add a new field to every document in an existent collection? I know how to update an existing document's field but not how to add a new field to every document in a collection. How can I do this in the mongo shell?
As with updating an existing field, $set will add a new field if the specified field does not exist. Check out this example:
> db.foo.find()
> db.foo.insert({"test":"a"})
> db.foo.find()
{ "_id" : ObjectId("4e93037bbf6f1dd3a0a9541a"), "test" : "a" }
> item = db.foo.findOne()
{ "_id" : ObjectId("4e93037bbf6f1dd3a0a9541a"), "test" : "a" }
> db.foo.update({"_id" : ObjectId("4e93037bbf6f1dd3a0a9541a")}, {$set : {"new_field":1}})
> db.foo.find()
{ "_id" : ObjectId("4e93037bbf6f1dd3a0a9541a"), "new_field" : 1, "test" : "a" }
EDIT: In case you want to add a new_field to every document in your collection, you have to use an empty selector and set the multi flag to true (last param) to update all the documents:
db.your_collection.update(
  {},
  { $set: {"new_field": 1} },
  false,
  true
)
EDIT: In the above example, the last two parameters false, true specify the upsert and multi flags.
Upsert: If set to true, creates a new document when no document matches the query criteria.
Multi: If set to true, updates multiple documents that meet the query criteria. If set to false, updates one document.
This is for Mongo versions prior to 2.2. For later versions the query is changed a bit:
db.your_collection.update({}, {$set : {"new_field":1}}, {upsert:false, multi:true})
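On 3.2+ servers the same thing can be written with updateMany, which updates every matching document without a multi flag; a sketch:
db.your_collection.updateMany({}, { $set: { "new_field": 1 } });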
MongoDB
7,714,216
475
What I want is not a comparison between Redis and MongoDB. I know they are different; the performance and the API are totally different. Redis is very fast, but the API is very 'atomic'. MongoDB will eat more resources, but the API is very, very easy to use, and I am very happy with it. They're both awesome, and I want to use Redis in deployment as much as I can, but it is hard to code. I want to use MongoDB in development as much as I can, but it needs an expensive machine. So what do you think about using both of them? When should you pick Redis? When should you pick MongoDB?
I would say it depends on what kind of dev team you are and your application's needs.
For example, if you require a lot of querying, that mostly means it would be more work for your developers to use Redis, where your data might be stored in a variety of specialized data structures, customized for each type of object for efficiency. In MongoDB the same queries might be easier, because the structure is more consistent across your data. On the other hand, in Redis, the sheer speed of the response to those queries is the payoff for the extra work of dealing with the variety of structures your data might be stored with.
MongoDB offers simplicity and a much shorter learning curve for developers with traditional DB and SQL experience. Redis's non-traditional approach requires more effort to learn, but brings greater flexibility. E.g., a cache layer can probably be better implemented in Redis. For more schema-able data, MongoDB is better. [Note: both MongoDB and Redis are technically schemaless.]
If you ask me, my personal choice is Redis for most requirements.
Lastly, I hope by now you have seen http://antirez.com/post/MongoDB-and-Redis.html
MongoDB
5,400,163
466
I want to set up user name & password authentication for my MongoDB instance, so that any remote access will ask for the user name & password. I tried the tutorial from the MongoDB site and did the following:
use admin
db.addUser('theadmin', '12345');
db.auth('theadmin','12345');
After that, I exited and ran mongo again, and I don't need a password to access it. Even if I connect to the database remotely, I am not prompted for a user name & password.
UPDATE: Here is the solution I ended up using.
1) At the mongo command line, set the administrator:
use admin;
db.addUser('admin','123456');
2) Shut down the server and exit:
db.shutdownServer();
exit
3) Restart mongod with --auth:
$ sudo ./mongodb/bin/mongod --auth --dbpath /mnt/db/
4) Run mongo again, in one of two ways:
i) run mongo first, then log in:
$ ./mongodb/bin/mongo localhost:27017
use admin
db.auth('admin','123456');
ii) run & log in to mongo in the command line:
$ ./mongodb/bin/mongo localhost:27017/admin -u admin -p 123456
The username & password will work the same way for mongodump and mongoexport.
Wow, so many complicated/confusing answers here. This is as of v3.4.
Short answer
1. Start MongoDB without access control (/data/db or wherever your db is):
mongod --dbpath /data/db
2. Connect to the instance:
mongo
3. Create the user:
use some_db
db.createUser(
  {
    user: "myNormalUser",
    pwd: "xyz123",
    roles: [ { role: "readWrite", db: "some_db" },
             { role: "read", db: "some_other_db" } ]
  }
)
4. Stop the MongoDB instance and start it again with access control:
mongod --auth --dbpath /data/db
5. Connect and authenticate as the user:
use some_db
db.auth("myNormalUser", "xyz123")
db.foo.insert({x:1})
use some_other_db
db.foo.find({})
Long answer: read this if you want to properly understand.
It's really simple. I'll dumb the following down: https://docs.mongodb.com/manual/tutorial/enable-authentication/
If you want to learn more about what the roles actually do, read more here: https://docs.mongodb.com/manual/reference/built-in-roles/
1. Start MongoDB without access control:
mongod --dbpath /data/db
2. Connect to the instance:
mongo
3. Create the user administrator. The following creates a user administrator in the admin authentication database. The user is a dbOwner over the some_db database and NOT over the admin database; this is important to remember.
use admin
db.createUser(
  {
    user: "myDbOwner",
    pwd: "abc123",
    roles: [ { role: "dbOwner", db: "some_db" } ]
  }
)
Or, if you want to create an admin which is admin over any database:
use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: "abc123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
4. Stop the MongoDB instance and start it again with access control:
mongod --auth --dbpath /data/db
5. Connect and authenticate as the user administrator towards the admin authentication database, NOT towards the some_db authentication database. The user administrator was created in the admin authentication database; the user does not exist in the some_db authentication database.
use admin
db.auth("myDbOwner", "abc123")
You are now authenticated as a dbOwner over the some_db database. So now, if you wish to read/write/do stuff directly towards the some_db database, you can change to it:
use some_db
// ...do stuff like db.foo.insert({x:1})
// remember that the user administrator had dbOwner rights, so the user may write/read;
// if you create a user with userAdmin, they will not be able to read/write, for example.
More on roles: https://docs.mongodb.com/manual/reference/built-in-roles/
If you wish to make additional users which aren't user administrators and which are just normal users, continue reading below.
6. Create a normal user. This user will be created in the some_db authentication database below.
use some_db
db.createUser(
  {
    user: "myNormalUser",
    pwd: "xyz123",
    roles: [ { role: "readWrite", db: "some_db" },
             { role: "read", db: "some_other_db" } ]
  }
)
7. Exit the mongo shell, re-connect, and authenticate as the user:
use some_db
db.auth("myNormalUser", "xyz123")
db.foo.insert({x:1})
use some_other_db
db.foo.find({})
Last but not least: if you do not wish to pass --auth as a command-line flag, you can set this value in the MongoDB configuration file instead.
MongoDB
4,881,208
446
I'm using Mongoose version 3 with MongoDB version 2.2. I've noticed a __v field has started appearing in my MongoDB documents. Is it something to do with versioning? How is it used?
From here:
The versionKey is a property set on each document when first created by Mongoose. This key's value contains the internal revision of the document. The name of this document property is configurable. The default is __v. If this conflicts with your application, you can configure it as such:
new Schema({..}, { versionKey: '_somethingElse' })
MongoDB
12,495,891
438
Below is my code:
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test');
var Cat = mongoose.model('Cat', {
  name: String,
  age: {type: Number, default: 20},
  create: {type: Date, default: Date.now}
});
Cat.findOneAndUpdate({age: 17}, {$set:{name:"Naomi"}}, function(err, doc){
  if(err){
    console.log("Something wrong when updating data!");
  }
  console.log(doc);
});
I already have some records in my mongo database, and I would like to run this code to update the name where age is 17, and then print the result at the end. However, I still get the same (unmodified) name in the console, yet when I go to the mongo command line and type db.cats.find(), the result shows the modified name. When I run this code again, the output is then the modified name. My question is: if the data was modified, why did I still get the original data the first time I console.log it?
Why does this happen? The default is to return the original, unaltered document. If you want the new, updated document to be returned, you have to pass an additional argument: an object with the new property set to true.
From the mongoose docs: Query#findOneAndUpdate
Model.findOneAndUpdate(conditions, update, options, (error, doc) => {
  // error: any errors that occurred
  // doc: the document before updates are applied if `new: false`, or after updates if `new: true`
});
Available options:
new: bool - if true, return the modified document rather than the original. defaults to false (changed in 4.0)
Solution: pass {new: true} if you want the updated result in the doc variable:
//                                                     v--- THIS WAS ADDED
Cat.findOneAndUpdate({age: 17}, {$set:{name:"Naomi"}}, {new: true}, (err, doc) => {
  if (err) {
    console.log("Something wrong when updating data!");
  }
  console.log(doc);
});
MongoDB
32,811,510
432
I've found this question answered for C# and Perl, but not in the native interface. I thought this would work: db.theColl.find( { _id: ObjectId("4ecbe7f9e8c1c9092c000027") } ) The query returned no results. I found the 4ecbe7f9e8c1c9092c000027 by doing db.theColl.find() and grabbing an ObjectId. There are several thousand objects in that collection. I've read all the pages that I could find on the mongodb.org website and didn't find it. Is this just a strange thing to do? It seems pretty normal to me.
Not strange at all; people do this all the time. Make sure the collection name is correct (case matters) and that the ObjectId is exact. Documentation is here.
> db.test.insertOne({x: 1})
> db.test.find()                                                 // no criteria
{ "_id" : ObjectId("4ecc05e55dd98a436ddcc47c"), "x" : 1 }
> db.test.find({"_id" : ObjectId("4ecc05e55dd98a436ddcc47c")})   // explicit
{ "_id" : ObjectId("4ecc05e55dd98a436ddcc47c"), "x" : 1 }
> db.test.find(ObjectId("4ecc05e55dd98a436ddcc47c"))             // shortcut
{ "_id" : ObjectId("4ecc05e55dd98a436ddcc47c"), "x" : 1 }
MongoDB
8,233,014
409
Example:
> db.stuff.save({"foo":"bar"});
> db.stuff.find({"foo":"bar"}).count();
1
> db.stuff.find({"foo":"BAR"}).count();
0
You could use a regex. In your example, that would be:
db.stuff.find({ foo: /^bar$/i });
I must say, though, maybe you could just downcase (or upcase) the value on the way in rather than incurring the extra cost every time you find it. Obviously this won't work for people's names and such, but it fits use-cases like tags.
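On MongoDB 3.4+, another option that avoids both regex scans and storing a duplicate lowercased field is a case-insensitive collation (strength: 2); a sketch, assuming you can add an index:
db.stuff.createIndex({ foo: 1 }, { collation: { locale: "en", strength: 2 } });
db.stuff.find({ foo: "bar" }).collation({ locale: "en", strength: 2 });  // matches both "bar" and "BAR"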
MongoDB
1,863,399
405
I have been very excited about MongoDB and have been testing it lately. I had a table called posts in MySQL with about 20 million records indexed only on a field called 'id'. I wanted to compare speed with MongoDB, and I ran a test which would get and print 15 records randomly from our huge databases. I ran the query about 1,000 times each for MySQL and MongoDB, and I am surprised that I do not notice a lot of difference in speed. Maybe MongoDB is 1.1 times faster. That's very disappointing. Is there something I am doing wrong? I know that my tests are not perfect, but is MySQL on par with MongoDB when it comes to read-intensive chores?
Note: I have a dual core + (2 threads) i7 cpu and 4GB ram. I have 20 partitions on MySQL, each of 1 million records.
Sample code used for testing MongoDB:
<?php
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
$time_taken = 0;
$tries = 100;
// connect
$time_start = microtime_float();
for($i=1;$i<=$tries;$i++)
{
    $m = new Mongo();
    $db = $m->swalif;
    $cursor = $db->posts->find(array('id' => array('$in' => get_15_random_numbers())));
    foreach ($cursor as $obj)
    {
        //echo $obj["thread_title"] . "<br><Br>";
    }
}
$time_end = microtime_float();
$time_taken = $time_taken + ($time_end - $time_start);
echo $time_taken;
function get_15_random_numbers()
{
    $numbers = array();
    for($i=1;$i<=15;$i++)
    {
        $numbers[] = mt_rand(1, 20000000);
    }
    return $numbers;
}
?>
Sample code for testing MySQL:
<?php
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
$BASE_PATH = "../src/";
include_once($BASE_PATH . "classes/forumdb.php");
$time_taken = 0;
$tries = 100;
$time_start = microtime_float();
for($i=1;$i<=$tries;$i++)
{
    $db = new AQLDatabase();
    $sql = "select * from posts_really_big where id in (".implode(',',get_15_random_numbers()).")";
    $result = $db->executeSQL($sql);
    while ($row = mysql_fetch_array($result))
    {
        //echo $row["thread_title"] . "<br><Br>";
    }
}
$time_end = microtime_float();
$time_taken = $time_taken + ($time_end - $time_start);
echo $time_taken;
function get_15_random_numbers()
{
    $numbers = array();
    for($i=1;$i<=15;$i++)
    {
        $numbers[] = mt_rand(1, 20000000);
    }
    return $numbers;
}
?>
MongoDB is not magically faster. If you store the same data, organised in basically the same fashion, and access it exactly the same way, then you really shouldn't expect your results to be wildly different. After all, MySQL and MongoDB are both GPL, so if Mongo had some magically better IO code in it, then the MySQL team could just incorporate it into their codebase.
People are seeing real-world MongoDB performance largely because MongoDB allows you to query in a different manner that is more sensible to your workload.
For example, consider a design that persisted a lot of information about a complicated entity in a normalised fashion. This could easily use dozens of tables in MySQL (or any relational db) to store the data in normal form, with many indexes needed to ensure relational integrity between tables.
Now consider the same design with a document store. If all of those related tables are subordinate to the main table (and they often are), then you might be able to model the data such that the entire entity is stored in a single document. In MongoDB you can store this as a single document, in a single collection. This is where MongoDB starts enabling superior performance.
In MongoDB, to retrieve the whole entity, you have to perform:
One index lookup on the collection (assuming the entity is fetched by id)
Retrieve the contents of one database page (the actual binary json document)
So a b-tree lookup, and a binary page read. Log(n) + 1 IOs. If the indexes can reside entirely in memory, then 1 IO.
In MySQL with 20 tables, you have to perform:
One index lookup on the root table (again, assuming the entity is fetched by id)
With a clustered index, we can assume that the values for the root row are in the index
20+ range lookups (hopefully on an index) for the entity's pk value
These probably aren't clustered indexes, so the same 20+ data lookups once we figure out what the appropriate child rows are.
So the total for MySQL, even assuming that all indexes are in memory (which is harder, since there are 20 times more of them), is about 20 range lookups. These range lookups are likely comprised of random IO — different tables will definitely reside in different spots on disk, and it's possible that different rows in the same range in the same table for an entity might not be contiguous (depending on how the entity has been updated, etc).
So for this example, the final tally is about 20 times more IO with MySQL per logical access, compared to MongoDB. This is how MongoDB can boost performance in some use cases.
MongoDB
9,702,643
382
I'd like to get the names of all the keys in a MongoDB collection. For example, from this:
db.things.insert( { type : ['dog', 'cat'] } );
db.things.insert( { egg : ['cat'] } );
db.things.insert( { type : [] } );
db.things.insert( { hello : [] } );
I'd like to get the unique keys: type, egg, hello
You could do this with MapReduce:
mr = db.runCommand({
  "mapreduce" : "my_collection",
  "map" : function() {
    for (var key in this) { emit(key, null); }
  },
  "reduce" : function(key, stuff) { return null; },
  "out": "my_collection" + "_keys"
})
Then run distinct on the resulting collection so as to find all the keys:
db[mr.result].distinct("_id")
["foo", "bar", "baz", "_id", ...]
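On MongoDB 3.4.4+, the same result is available without MapReduce via the $objectToArray aggregation operator; a sketch:
db.things.aggregate([
  { $project: { kv: { $objectToArray: "$$ROOT" } } },       // turn each doc into [{k, v}, ...]
  { $unwind: "$kv" },
  { $group: { _id: null, keys: { $addToSet: "$kv.k" } } }   // collect the distinct key names
]);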
MongoDB
2,298,870
378
Following is my user schema in the user.js model:
var userSchema = new mongoose.Schema({
  local: {
    name: { type: String },
    email : { type: String, require: true, unique: true },
    password: { type: String, require: true },
  },
  facebook: {
    id : { type: String },
    token : { type: String },
    email : { type: String },
    name : { type: String }
  }
});
var User = mongoose.model('User', userSchema);
module.exports = User;
This is how I am using it in my controller:
var user = require('./../models/user.js');
This is how I am saving it in the db:
user({'local.email' : req.body.email, 'local.password' : req.body.password}).save(function(err, result){
  if(err)
    res.send(err);
  else {
    console.log(result);
    req.session.user = result;
    res.send({"code":200,"message":"Record inserted successfully"});
  }
});
Error:
{"name":"MongoError","code":11000,"err":"insertDocument :: caused by :: 11000 E11000 duplicate key error index: mydb.users.$email_1 dup key: { : null }"}
I checked the db collection and no such duplicate entry exists; let me know what I am doing wrong.
FYI - req.body.email and req.body.password are fetching values.
I also checked this post but it didn't help: STACK LINK
If I remove it completely, then it inserts the document; otherwise it throws the "duplicate" error even though I have an entry in local.email.
The error message is saying that there's already a record with null as the email. In other words, you already have a user without an email address.
The relevant documentation for this:
If a document does not have a value for the indexed field in a unique index, the index will store a null value for this document. Because of the unique constraint, MongoDB will only permit one document that lacks the indexed field. If there is more than one document without a value for the indexed field or is missing the indexed field, the index build will fail with a duplicate key error. You can combine the unique constraint with the sparse index to filter these null values from the unique index and avoid the error. (unique indexes)
Sparse indexes only contain entries for documents that have the indexed field, even if the index field contains a null value. In other words, a sparse index is ok with multiple documents all having null values. (sparse indexes)
From the comments: your error says that the key is named mydb.users.$email_1, which makes me suspect that you have an index on both users.email and users.local.email (the former being old and unused at the moment). Removing a field from a Mongoose model doesn't affect the database. Check with mydb.users.getIndexes() if this is the case, and manually remove the unwanted index with mydb.users.dropIndex(<name>).
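A sketch of the fix (the index name email_1 is taken from the error message; confirm it with getIndexes() first):
db.users.dropIndex("email_1")  // remove the stale index on the old top-level field
db.users.createIndex({ "local.email": 1 }, { unique: true, sparse: true })  // unique, but tolerant of missing emails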
MongoDB
24,430,220
371
I have an array of _ids and I want to get all docs accordingly. What's the best way to do it? Something like...
// doesn't work ... of course ...
model.find({
  '_id' : [
    '4ed3ede8844f0f351100000c',
    '4ed3f117a844e0471100000d',
    '4ed3f18132f50c491100000e'
  ]
}, function(err, docs){
  console.log(docs);
});
The array might contain hundreds of _ids.
The find function in mongoose is a full query to mongoDB. This means you can use the handy mongoDB $in clause, which works just like the SQL version of the same.
model.find({
  '_id': { $in: [
    mongoose.Types.ObjectId('4ed3ede8844f0f351100000c'),
    mongoose.Types.ObjectId('4ed3f117a844e0471100000d'),
    mongoose.Types.ObjectId('4ed3f18132f50c491100000e')
  ]}
}, function(err, docs){
  console.log(docs);
});
This method will work well even for arrays containing tens of thousands of ids. (See Efficiently determine the owner of a record.)
I would recommend that anybody working with mongoDB read through the Advanced Queries section of the excellent official mongoDB docs.
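As a usage note: Mongoose casts values according to the schema, so passing the plain id strings usually works too; a sketch:
model.find({
  '_id': { $in: [
    '4ed3ede8844f0f351100000c',
    '4ed3f117a844e0471100000d',
    '4ed3f18132f50c491100000e'
  ]}
}, function(err, docs){
  console.log(docs);
});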
MongoDB
8,303,900
362
I'm using the node-mongodb-native driver with MongoDB to write a website. I have some questions about how to manage connections:
Is it enough to use only one MongoDB connection for all requests? Are there any performance issues? If not, can I set up a global connection to use in the whole application?
If not, is it good to open a new connection when a request arrives and close it when the request has been handled? Is it expensive to open and close a connection?
Should I use a global connection pool? I hear the driver has a native connection pool. Is it a good choice?
If I use a connection pool, how many connections should be used?
Are there other things I should be aware of?
The primary committer to node-mongodb-native says:
You open MongoClient.connect once when your app boots up and reuse the db object. It's not a singleton connection pool; each .connect creates a new connection pool.
So, to answer your question directly: reuse the db object that results from MongoClient.connect(). This gives you pooling, and will provide a noticeable speed increase as compared with opening/closing connections on each db action.
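A minimal sketch of that reuse pattern with the 3.x driver API (file and database names here are illustrative):
// db.js
const { MongoClient } = require('mongodb');
let client = null;
async function getDb() {
  if (!client) {
    // connect once; the client manages its own internal pool
    client = await MongoClient.connect('mongodb://localhost:27017');
  }
  return client.db('mydb');
}
module.exports = { getDb };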
MongoDB
10,656,574
361
I am playing around with MongoDB trying to figure out how to do a simple
SELECT province, COUNT(*) FROM contest GROUP BY province
But I can't seem to figure it out using the aggregate function. I can do it using some really weird group syntax:
db.user.group({
  "key": { "province": true },
  "initial": { "count": 0 },
  "reduce": function(obj, prev) {
    if (true != null)
      if (true instanceof Array)
        prev.count += true.length;
      else
        prev.count++;
  }
});
But is there an easier/faster way using the aggregate function?
This would be the easier way to do it using aggregate:
db.contest.aggregate([
  { "$group": { _id: "$province", count: { $sum: 1 } } }
])
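On MongoDB 3.4+ there is an even shorter spelling, $sortByCount, which groups and sorts by count in one stage; a sketch:
db.contest.aggregate([ { $sortByCount: "$province" } ]);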
MongoDB
23,116,330
352
I know how to list all collections in a particular database, but how do I list all available databases in MongoDB shell?
To list all databases in the MongoDB console, use the command show dbs. For more information on shell commands, refer to the MongoDB Shell (mongosh) documentation.
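The same information is available programmatically (useful in scripts, where shell helpers don't work); a sketch:
db.adminCommand({ listDatabases: 1 });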
MongoDB
25,947,929
349
I have data like this in mongodb:
{
  "latitude" : "",
  "longitude" : "",
  "course" : "",
  "battery" : "0",
  "imei" : "0",
  "altitude" : "F:3.82V",
  "mcc" : "07",
  "mnc" : "007B",
  "lac" : "2A83",
  "_id" : ObjectId("4f0eb2c406ab6a9d4d000003"),
  "createdAt" : ISODate("2012-01-12T20:15:31Z")
}
How do I query db.gpsdatas.find({'createdAt': ??what here??}) so that it returns the above data result to me from the db?
You probably want to make a range query, for example, all items created after a given date:
db.gpsdatas.find({ "createdAt": { $gte: new ISODate("2012-01-12T20:15:31Z") } });
I'm using $gte (greater than or equals) because this is often used for date-only queries, where the time component is 00:00:00.
If you really want to find a date that equals another date, the syntax would be:
db.gpsdatas.find({ "createdAt": new ISODate("2012-01-12T20:15:31Z") });
MongoDB
8,835,757
344
Suppose the mongodb document (table) 'users' is:
{
  _id: 1,
  name: { first: 'John', last: 'Backus' },
  birth: new Date('Dec 03, 1924'),
  death: new Date('Mar 17, 2007'),
  contribs: ['Fortran', 'ALGOL', 'Backus-Naur Form', 'FP'],
  awards: [
    { award: 'National Medal', year: 1975, by: 'NSF' },
    { award: 'Turing Award', year: 1977, by: 'ACM' }
  ]
}
// ...and other object(person)s
I want to find the person who has the award 'National Medal' awarded in the year 1975. There could be other persons who have this award in different years. How can I find this person using both award type and year, so that I get exactly the right person?
The right way is:
db.users.find({ awards: { $elemMatch: { award: 'National Medal', year: 1975 } } })
$elemMatch allows you to match more than one component within the same array element. Without $elemMatch, mongo will look for users with 'National Medal' in some year and some award in the year 1975, but not for users with a 'National Medal' in 1975.
See the MongoDB $elemMatch documentation for more info. See the Read Operations documentation for more information about querying documents with arrays.
MongoDB
14,040,562
339
Can you share your thoughts on how you would implement data versioning in MongoDB. (I've asked a similar question regarding Cassandra. If you have any thoughts on which db is better for that, please share.) Suppose that I need to version records in a simple address book. (Address book records are stored as flat json objects.) I expect that the history: will be used infrequently will be used all at once to present it in a "time machine" fashion there won't be more versions than a few hundred for a single record. history won't expire. I'm considering the following approaches: Create a new object collection to store the history of records or changes to the records. It would store one object per version with a reference to the address book entry. Such records would look as follows: { '_id': 'new id', 'user': user_id, 'timestamp': timestamp, 'address_book_id': 'id of the address book record' 'old_record': {'first_name': 'Jon', 'last_name':'Doe' ...} } This approach can be modified to store an array of versions per document. But this seems to be a slower approach without any advantages. Store versions as serialized (JSON) objects attached to address book entries. I'm not sure how to attach such objects to MongoDB documents. Perhaps as an array of strings. (Modelled after Simple Document Versioning with CouchDB)
The first big question when diving in to this is "how do you want to store changesets"? Diffs? Whole record copies? My personal approach would be to store diffs. Because the display of these diffs is really a special action, I would put the diffs in a different "history" collection. I would use the different collection to save memory space. You generally don't want a full history for a simple query. So by keeping the history out of the object you can also keep it out of the commonly accessed memory when that data is queried. To make my life easy, I would make a history document contain a dictionary of time-stamped diffs. Something like this: { _id : "id of address book record", changes : { 1234567 : { "city" : "Omaha", "state" : "Nebraska" }, 1234568 : { "city" : "Kansas City", "state" : "Missouri" } } } To make my life really easy, I would make this part of my DataObjects (EntityWrapper, whatever) that I use to access my data. Generally these objects have some form of history, so that you can easily override the save() method to make this change at the same time. UPDATE: 2015-10 It looks like there is now a spec for handling JSON diffs. This seems like a more robust way to store the diffs / changes.
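As a rough illustration of the save() override described above, here is a hedged Node.js sketch; computeDiff is a hypothetical helper (any object-diff routine would do), and the collection names are assumptions:
function saveWithHistory(db, newDoc, oldDoc, callback) {
  var diff = computeDiff(oldDoc, newDoc); // hypothetical: returns only the changed fields
  db.collection('addressbook').update({ _id: newDoc._id }, newDoc, function (err) {
    if (err) return callback(err);
    // file the timestamped diff under the matching history document
    var stamp = {};
    stamp['changes.' + Date.now()] = diff;
    db.collection('history').update(
      { _id: newDoc._id },
      { $set: stamp },
      { upsert: true },
      callback
    );
  });
}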
MongoDB
4,185,105
337
If you have subdocument arrays, Mongoose automatically creates ids for each one. Example: { _id: "mainId" subDocArray: [ { _id: "unwantedId", field: "value" }, { _id: "unwantedId", field: "value" } ] } Is there a way to tell Mongoose to not create ids for objects within an array?
It's simple; you can define this in the subschema: var mongoose = require("mongoose"); var subSchema = mongoose.Schema({ // your subschema content }, { _id : false }); var schema = mongoose.Schema({ // schema content subSchemaCollection : [subSchema] }); var model = mongoose.model('tablename', schema);
MongoDB
17,254,008
336
Basically I have a mongodb collection called 'people' whose schema is as follows: people: { name: String, friends: [{firstName: String, lastName: String}] } Now, I have a very basic express application that connects to the database and successfully creates 'people' with an empty friends array. In a secondary place in the application, a form is in place to add friends. The form takes in firstName and lastName and then POSTs with the name field also for reference to the proper people object. What I'm having a hard time doing is creating a new friend object and then "pushing" it into the friends array. I know that when I do this via the mongo console I use the update function with $push as my second argument after the lookup criteria, but I can't seem to find the appropriate way to get mongoose to do this. db.people.update({name: "John"}, {$push: {friends: {firstName: "Harry", lastName: "Potter"}}});
Assuming, var friend = { firstName: 'Harry', lastName: 'Potter' }; There are two options you have: Update the model in-memory, and save (plain javascript array.push): person.friends.push(friend); person.save(done); or PersonModel.update( { _id: person._id }, { $push: { friends: friend } }, done ); I always try and go for the first option when possible, because it'll respect more of the benefits that mongoose gives you (hooks, validation, etc.). However, if you are doing lots of concurrent writes, you will hit race conditions where you'll end up with nasty version errors to stop you from replacing the entire model each time and losing the previous friend you added. So only go to the latter when it's absolutely necessary.
MongoDB
33,049,707
331
I am trying to add authorization to my MongoDB. I am doing all this on Linux with MongoDB 2.6.1. My mongod.conf file is in the old compatibility format (this is how it came with the installation). 1) I created admin user as described here in (3) http://docs.mongodb.org/manual/tutorial/add-user-administrator/ 2) I then edited mongod.conf by uncommenting this line auth = true 3) Finally I rebooted the mongod service and I tried to login with: /usr/bin/mongo localhost:27017/admin -u sa -p pwd 4) I can connect but it says this upon connect. MongoDB shell version: 2.6.1 connecting to: localhost:27017/admin Welcome to the MongoDB shell! The current date/time is: Thu May 29 2014 17:47:16 GMT-0400 (EDT) Error while trying to show server startup warnings: not authorized on admin to execute command { getLog: "startupWarnings" } 5) Now it seems this sa user I created has no permissions at all. root@test02:~# mc MongoDB shell version: 2.6.1 connecting to: localhost:27017/admin Welcome to the MongoDB shell! The current date/time is: Thu May 29 2014 17:57:03 GMT-0400 (EDT) Error while trying to show server startup warnings: not authorized on admin to execute command { getLog: "startupWarnings" } [admin] 2014-05-29 17:57:03.011 >>> use admin switched to db admin [admin] 2014-05-29 17:57:07.889 >>> show collections 2014-05-29T17:57:10.377-0400 error: { "$err" : "not authorized for query on admin.system.namespaces", "code" : 13 } at src/mongo/shell/query.js:131 [admin] 2014-05-29 17:57:10.378 >>> use test switched to db test [test] 2014-05-29 17:57:13.466 >>> show collections 2014-05-29T17:57:15.930-0400 error: { "$err" : "not authorized for query on test.system.namespaces", "code" : 13 } at src/mongo/shell/query.js:131 [test] 2014-05-29 17:57:15.931 >>> What is the problem? I repeated this whole procedure 3 times and I think I did it all as specified in the MongoDB docs. But it doesn't work. I was expecting this sa user to be authorized to do anything so that he can then create other users and give them more specific permissions.
I was also scratching my head around the same issue, and everything worked after I set the role to be root when adding the first admin user. use admin db.createUser( { user: 'admin', pwd: 'password', roles: [ { role: 'root', db: 'admin' } ] } ); exit; If you have already created the admin user, you can change the role like this: use admin; db.grantRolesToUser('admin', [{ role: 'root', db: 'admin' }]) For a complete authentication setting reference, see the steps I've compiled after hours of research over the internet.
MongoDB
23,943,651
330
I am writing a webapp with Node.js and mongoose. How can I paginate the results I get from a .find() call? I would like a functionality comparable to "LIMIT 50,100" in SQL.
I am very disappointed by the accepted answers in this question. This will not scale. If you read the fine print on cursor.skip( ): The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound. To achieve pagination in a scalable way, combine a limit( ) along with at least one filter criterion; a createdOn date suits many purposes. MyModel.find( { createdOn: { $lte: request.createdOnBefore } } ) .limit( 10 ) .sort( '-createdOn' )
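To fetch the following page, you would pass the createdOn value of the last document from the previous page as the new boundary; a sketch of that continuation (lastDoc is assumed to be the final result of the prior query, and exact-timestamp ties would need a secondary tiebreaker such as _id):
MyModel.find( { createdOn: { $lt: lastDoc.createdOn } } )
    .limit( 10 )
    .sort( '-createdOn' )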
MongoDB
5,539,955
328
db.foo.find().limit(300) won't do it. It still prints out only 20 documents. db.foo.find().toArray() db.foo.find().forEach(printjson) will both print out very expanded view of each document instead of the 1-line version for find():
DBQuery.shellBatchSize = 300 MongoDB Docs - Configure the mongo Shell - Change the mongo Shell Batch Size
MongoDB
3,705,517
322
I have a basic Node.js app that I am trying to get off the ground using the Express framework. I have a views folder where I have an index.html file. But I receive the following error when loading the web page: Error: Cannot find module 'html' Below is my code. var express = require('express'); var app = express.createServer(); app.use(express.staticProvider(__dirname + '/public')); app.get('/', function(req, res) { res.render('index.html'); }); app.listen(8080, '127.0.0.1') What am I missing here?
You can have jade include a plain HTML page: in views/index.jade include plain.html in views/plain.html <!DOCTYPE html> ... and app.js can still just render jade: res.render('index')
MongoDB
4,529,586
321
I have a node.js application that pulls some data and sticks it into an object, like this: var results = new Object(); User.findOne(query, function(err, u) { results.userId = u._id; } When I do an if/then based on that stored ID, the comparison is never true: if (results.userId == AnotherMongoDocument._id) { console.log('This is never true'); } When I do a console.log of the two id's, they match exactly: User id: 4fc67871349bb7bf6a000002 AnotherMongoDocument id: 4fc67871349bb7bf6a000002 I am assuming this is some kind of datatype problem, but I'm not sure how to convert results.userId to a datatype that will result in the above comparison being true and my outsourced brain (aka Google) has been unable to help.
Mongoose uses the mongodb-native driver, which uses the custom ObjectID type. You can compare ObjectIDs with the .equals() method. With your example, results.userId.equals(AnotherMongoDocument._id). The ObjectID type also has a toString() method, if you wish to store a stringified version of the ObjectID in JSON format, or a cookie. If you use ObjectID = require("mongodb").ObjectID (requires the mongodb-native library) you can check if results.userId is a valid identifier with results.userId instanceof ObjectID. Etc.
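Applied to the example above, the comparison becomes (a minimal sketch):
if (results.userId.equals(AnotherMongoDocument._id)) {
    console.log('This is now true when the ids match');
}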
MongoDB
11,637,353
312
Is there a simple way to do this?
The best way is to do a mongodump then mongorestore. You can select the collection via: mongodump -d some_database -c some_collection [Optionally, zip the dump (zip some_database.zip some_database/* -r) and scp it elsewhere] Then restore it: mongorestore -d some_other_db -c some_or_other_collection dump/some_collection.bson Existing data in some_or_other_collection will be preserved. That way you can "append" a collection from one database to another. Prior to version 2.4.3, you will also need to add back your indexes after you copy over your data. Starting with 2.4.3, this process is automatic, and you can disable it with --noIndexRestore.
MongoDB
11,554,762
309
I have a problem when querying mongoDB with nested objects notation: db.messages.find( { headers : { From: "[email protected]" } } ).count() 0 db.messages.find( { 'headers.From': "[email protected]" } ).count() 5 I can't see what I am doing wrong. I am expecting nested object notation to return the same result as the dot notation query. Where am I wrong?
db.messages.find( { headers : { From: "[email protected]" } } ) This queries for documents where headers equals { From: ... }, i.e. contains no other fields. db.messages.find( { 'headers.From': "[email protected]" } ) This only looks at the headers.From field, not affected by other fields contained in, or missing from, headers. Dot-notation docs
MongoDB
16,002,659
303
Assuming I have a collection in MongoDB with 5000 records, each containing something similar to: { "occupation":"Doctor", "name": { "first":"Jimmy", "additional":"Smith" } } Is there an easy way to rename the field "additional" to "last" in all documents? I saw the $rename operator in the documentation but I'm not really clear on how to specify a subfield.
You can use: db.foo.update({}, { $rename: { "name.additional": "name.last" } }, false, true); Or to just update the docs which contain the property: db.foo.update({ "name.additional": { $exists: true } }, { $rename: { "name.additional": "name.last" } }, false, true); The false, true in the method above are: { upsert:false, multi:true }. You need the multi:true to update all your records. Or you can do it the manual way: remap = function (x) { if (x.name && x.name.additional) { db.foo.update({ _id: x._id }, { $set: { "name.last": x.name.additional }, $unset: { "name.additional": 1 } }); } } db.foo.find().forEach(remap); In MongoDB 3.2 you can also use db.students.updateMany({}, { $rename: { "oldname": "newname" } }) The general syntax of this is db.collection.updateMany(filter, update, options) https://docs.mongodb.com/manual/reference/method/db.collection.updateMany/
MongoDB
9,254,351
299
Is there a set of preferred naming conventions for MongoDB entitites such as databases, collections, field names? I was thinking along these lines: Databases: consist of the purpose (word in singular) and end with “db” – all lower case: imagedb, resumedb, memberdb, etc. Collections: plural in lower case: images, resumes, Document fields: lowerCamelCase, e.g. memberFirstName, fileName, etc
Keep'em short: Optimizing Storage of Small Objects, SERVER-863. Silly but true. I guess pretty much the same rules that apply to relational databases should apply here. And after so many decades there is still no agreement whether RDBMS tables should be named singular or plural... MongoDB speaks JavaScript, so utilize JS naming conventions of camelCase. MongoDB official documentation mentions you may use underscores; also, the built-in identifier is named _id (but this may be to indicate that _id is intended to be private, internal, and never displayed or edited).
MongoDB
5,916,080
293
With the NoSQL movement growing based on document-based databases, I've looked at MongoDB lately. I have noticed a striking similarity with how to treat items as "Documents", just like Lucene does (and users of Solr). So, the question: Why would you want to use NoSQL (MongoDB, Cassandra, CouchDB, etc) over Lucene (or Solr) as your "database"? What I am (and I am sure others are) looking for in an answer is some deep-dive comparisons of them. Let's skip over relational database discussions all together, as they serve a different purpose. Lucene gives some serious advantages, such as powerful searching and weight systems. Not to mention facets in Solr (which Solr is being integrated into Lucene soon, yay!). You can use Lucene documents to store IDs, and access the documents as such just like MongoDB. Mix it with Solr, and you now get a WebService-based, load balanced solution. You can even throw in a comparison of out-of-proc cache providers such as Velocity or MemCached when talking about similar data storing and scalability of MongoDB. The restrictions around MongoDB reminds me of using MemCached, but I can use Microsoft's Velocity and have more grouping and list collection power over MongoDB (I think). Can't get any faster or scalable than caching data in memory. Even Lucene has a memory provider. MongoDB (and others) do have some advantages, such as the ease of use of their API. New up a document, create an id, and store it. Done. Nice and easy.
This is a great question, something I have pondered over quite a bit. I will summarize my lessons learned: You can easily use Lucene/Solr in lieu of MongoDB for pretty much all situations, but not vice versa. Grant Ingersoll's post sums it up here. MongoDB etc. seem to serve a purpose where there is no requirement of searching and/or faceting. It appears to be a simpler and arguably easier transition for programmers detoxing from the RDBMS world. Unless one is used to them, Lucene & Solr have a steeper learning curve. There aren't many examples of using Lucene/Solr as a datastore, but Guardian has made some headway and summarizes this in an excellent slide-deck, but they too are non-committal on totally jumping on the Solr bandwagon and are "investigating" combining Solr with CouchDB. Finally, I will offer our experience; unfortunately I cannot reveal much about the business-case. We work on the scale of several TB of data, a near real-time application. After investigating various combinations, we decided to stick with Solr. No regrets thus far (6 months & counting) and we see no reason to switch to something else. Summary: if you do not have a search requirement, Mongo offers a simple & powerful approach. However if search is key to your offering, you are likely better off sticking to one tech (Solr/Lucene) and optimizing the heck out of it - fewer moving parts. My 2 cents, hope that helped.
MongoDB
3,215,029
284
Given this document saved in MongoDB { _id : ..., some_key: { param1 : "val1", param2 : "val2", param3 : "val3" } } An object with new information on param2 and param3 from the outside world needs to be saved var new_info = { param2 : "val2_new", param3 : "val3_new" }; I want to merge / overlay the new fields over the existing state of the object so that param1 doesn't get removed. Doing this db.collection.update( { _id:...} , { $set: { some_key : new_info } } ) will lead to MongoDB doing exactly as it was asked, setting some_key to that value and replacing the old one: { _id : ..., some_key: { param2 : "val2_new", param3 : "val3_new" } } What is the way to have MongoDB update only the new fields (without stating them one by one explicitly)? To get this: { _id : ..., some_key: { param1 : "val1", param2 : "val2_new", param3 : "val3_new" } } I'm using the Java client, but any example will be appreciated
I solved it with my own function. If you want to update a specified field in a document you need to address it clearly. Example: { _id : ..., some_key: { param1 : "val1", param2 : "val2", param3 : "val3" } } If you want to update param2 only, it's wrong to do: db.collection.update( { _id:...} , { $set: { some_key : new_info } } ) //WRONG You must use: db.collection.update( { _id:...} , { $set: { "some_key.param2" : new_info } } ) So I wrote a function something like this: function _update($id, $data, $options=array()){ global $collection; /* the collection handle must be in scope */ $temp = array(); foreach($data as $key => $value) { $temp["some_key.".$key] = $value; } $collection->update( array('_id' => $id), array('$set' => $temp) ); } _update('1', array('param2' => 'some data'));
MongoDB
10,290,621
283
> db.data.update({'name': 'zero'}, {'$set': {'value': 0}}) > db.data.findOne({'name': 'zero}) {'name': 'zero', 'value': 0.0} How do I get Mongo to insert an integer? Thank you
db.data.update({'name': 'zero'}, {'$set': {'value': NumberInt(0)}}) You can also use NumberLong.
MongoDB
8,218,484
280
I wanted to use the mongodb database, but I noticed that there are two different databases, each with its own website and installation method: mongodb and mongoose. So I came to ask myself this question: "Which one do I use?". So in order to answer this question I ask the community if you could explain what the differences between these two are? And if possible pros and cons? Because they really look very similar to me.
I assume you already know that MongoDB is a NoSQL database system which stores data in the form of BSON documents. Your question, however, is about the packages for Node.js. In terms of Node.js, mongodb is the native driver for interacting with a mongodb instance and mongoose is an Object modeling tool for MongoDB. mongoose is built on top of the mongodb driver to provide programmers with a way to model their data. EDIT: I do not want to comment on which is better, as this would make this answer opinionated. However I will list some advantages and disadvantages of using both approaches. Using mongoose, a user can define the schema for the documents in a particular collection. It provides a lot of convenience in the creation and management of data in MongoDB. On the downside, learning mongoose can take some time, and it has some limitations in handling quite complex schemas. However, if your collection schema is unpredictable, or you want a Mongo-shell like experience inside Node.js, then go ahead and use the mongodb driver. It is the simplest to pick up. The downside here is that you will have to write larger amounts of code for validating the data, and the risk of errors is higher.
MongoDB
28,712,248
263
I have a collection named foo, hypothetically. Each instance of foo has a field called lastLookedAt which is a UNIX timestamp since epoch. I'd like to be able to go through the MongoDB client and set that timestamp for all existing documents (about 20,000 of them) to the current timestamp. What's the best way of handling this?
Regardless of the version, for your example, the <update> is: { $set: { lastLookedAt: Date.now() / 1000 } } However, depending on your version of MongoDB, the query will look different. Regardless of version, the key is that the empty condition {} will match any document. In the Mongo shell, or with any MongoDB client: $version >= 3.2: db.foo.updateMany( {}, <update> ) {} is the condition (the empty condition matches any document) 3.2 > $version >= 2.2: db.foo.update( {}, <update>, { multi: true } ) {} is the condition (the empty condition matches any document) {multi: true} is the "update multiple documents" option $version < 2.2: db.foo.update( {}, <update>, false, true ) {} is the condition (the empty condition matches any document) false is for the "upsert" parameter true is for the "multi" parameter (update multiple records)
MongoDB
9,038,547
262
I am not a database expert and have no formal computer science background, so bear with me. I want to know the kinds of real world negative things that can happen if you use an old MongoDB version prior to v4, which were not ACID compliant. This applies to any ACID noncompliant database. I understand that MongoDB can perform Atomic Operations, but that they don't "support traditional locking and complex transactions", mostly for performance reasons. I also understand the importance of database transactions, and the example of when your database is for a bank, and you're updating several records that all need to be in sync, you want the transaction to revert back to the initial state if there's a power outage so credit equals purchase, etc. But when I get into conversations about MongoDB, those of us that don't know the technical details of how databases are actually implemented start throwing around statements like: MongoDB is way faster than MySQL and Postgres, but there's a tiny chance, like 1 in a million, that it "won't save correctly". That "won't save correctly" part is referring to this understanding: If there's a power outage right at the instant you're writing to MongoDB, there's a chance for a particular record (say you're tracking pageviews in documents with 10 attributes each), that one of the documents only saved 5 of the attributes… which means over time your pageview counters are going to be "slightly" off. You'll never know by how much, you know they'll be 99.999% correct, but not 100%. This is because, unless you specifically made this a mongodb atomic operation, the operation is not guaranteed to have been atomic. So my question is, what is the correct interpretation of when and why MongoDB may not "save correctly"? What parts of ACID does it not satisfy, and under what circumstances, and how do you know when that 0.001% of your data is off? Can't this be fixed somehow? If not, this seems to mean that you shouldn't store things like your users table in MongoDB, because a record might not save. But then again, that 1/1,000,000 user might just need to "try signing up again", no? I am just looking for maybe a list of when/why negative things happen with an ACID noncompliant database like MongoDB, and ideally if there's a standard workaround (like run a background job to cleanup data, or only use SQL for this, etc.).
It's actually not correct that MongoDB is not ACID-compliant. On the contrary, MongoDB is ACID-compliant at the document level. Any update to a single document is Atomic: it either fully completes or it does not Consistent: no reader will see a "partially applied" update Isolated: again, no reader will see a "dirty" read Durable: (with the appropriate write concern) What MongoDB doesn't have is transactions -- that is, multiple-document updates that can be rolled back and are ACID-compliant. Note that you can build transactions on top of the ACID-compliant updates to a single document, by using two-phase commit.
MongoDB
7,149,890
257
Is there a function to turn a string into an objectId in node using mongoose? The schema specifies that something is an ObjectId, but when it is saved from a string, mongo tells me it is still just a string. The _id of the object, for instance, is displayed as objectId("blah").
You can do it like so: var mongoose = require('mongoose'); var id = mongoose.Types.ObjectId('4edd40c86762e0fb12000003');
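Note: on Mongoose 6 and later the constructor must be invoked with new (same call, just prefixed; the hex string is an example value):
var id = new mongoose.Types.ObjectId('4edd40c86762e0fb12000003');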
MongoDB
6,578,178
257
Is there a way to add created_at and updated_at fields to a mongoose schema, without having to pass them in everytime new MyModel() is called? The created_at field would be a date and only added when a document is created. The updated_at field would be updated with new date whenever save() is called on a document. I have tried this in my schema, but the field does not show up unless I explicitly add it: var ItemSchema = new Schema({ name : { type: String, required: true, trim: true }, created_at : { type: Date, required: true, default: Date.now } });
UPDATE: (5 years later) Note: If you decide to use Kappa Architecture (Event Sourcing + CQRS), then you do not need an updated date at all. Since your data is an immutable, append-only event log, you only ever need the event created date. Similar to the Lambda Architecture, described below. Then your application state is a projection of the event log (derived data). If you receive a subsequent event about an existing entity, then you'll use that event's created date as the updated date for your entity. This is a commonly used (and commonly misunderstood) practice in microservice systems. UPDATE: (4 years later) If you use ObjectId as your _id field (which is usually the case), then all you need to do is: let document = { updatedAt: new Date(), } Check my original answer below on how to get the created timestamp from the _id field. If you need to use IDs from an external system, then check Roman Rhrn Nesterov's answer. UPDATE: (2.5 years later) You can now use the #timestamps option with mongoose version >= 4.0. let ItemSchema = new Schema({ name: { type: String, required: true, trim: true } }, { timestamps: true }); If timestamps is set, mongoose assigns createdAt and updatedAt fields to your schema; the type assigned is Date. You can also specify the timestamp fields' names: timestamps: { createdAt: 'created_at', updatedAt: 'updated_at' } Note: If you are working on a big application with critical data you should reconsider updating your documents. I would advise you to work with immutable, append-only data (lambda architecture). What this means is that you only ever allow inserts. Updates and deletes should not be allowed! If you would like to "delete" a record, you could easily insert a new version of the document with some timestamp/version field and then set a deleted field to true. Similarly if you want to update a document – you create a new one with the appropriate fields updated and the rest of the fields copied over. Then in order to query this document you would get the one with the newest timestamp or the highest version which is not "deleted" (the deleted field is undefined or false). Data immutability ensures that your data is debuggable – you can trace the history of every document. You can also roll back to a previous version of a document if something goes wrong. If you go with such an architecture, ObjectId.getTimestamp() is all you need, and it is not Mongoose dependent. ORIGINAL ANSWER: If you are using ObjectId as your identity field you don't need a created_at field. ObjectIds have a method called getTimestamp(). ObjectId("507c7f79bcf86cd7994f6c0e").getTimestamp() This will return the following output: ISODate("2012-10-15T21:26:17Z") More info here How do I extract the created date out of a Mongo ObjectID In order to add the updated_at field you need to use this: var ArticleSchema = new Schema({ updated_at: { type: Date } // rest of the fields go here }); ArticleSchema.pre('save', function(next) { this.updated_at = Date.now(); next(); });
MongoDB
12,669,615
255
I have a Mongo document which holds an array of elements. I'd like to reset the .handled attribute of all objects in the array where .profile = XX. The document is in the following form: { "_id": ObjectId("4d2d8deff4e6c1d71fc29a07"), "user_id": "714638ba-2e08-2168-2b99-00002f3d43c0", "events": [{ "handled": 1, "profile": 10, "data": "....." } { "handled": 1, "profile": 10, "data": "....." } { "handled": 1, "profile": 20, "data": "....." } ... ] } so, I tried the following: .update({"events.profile":10},{$set:{"events.$.handled":0}},false,true) However it updates only the first matched array element in each document. (That's the defined behaviour for $ - the positional operator.) How can I update all matched array elements?
With the release of MongoDB 3.6 ( and available in the development branch from MongoDB 3.5.12 ) you can now update multiple array elements in a single request. This uses the filtered positional $[<identifier>] update operator syntax introduced in this version: db.collection.update( { "events.profile":10 }, { "$set": { "events.$[elem].handled": 0 } }, { "arrayFilters": [{ "elem.profile": 10 }], "multi": true } ) The "arrayFilters" as passed to the options for .update() or even .updateOne(), .updateMany(), .findOneAndUpdate() or .bulkWrite() method specifies the conditions to match on the identifier given in the update statement. Any elements that match the condition given will be updated. Noting that the "multi" as given in the context of the question was used in the expectation that this would "update multiple elements" but this was not and still is not the case. Its usage here applies to "multiple documents" as has always been the case or now otherwise specified as the mandatory setting of .updateMany() in modern API versions. NOTE Somewhat ironically, since this is specified in the "options" argument for .update() and like methods, the syntax is generally compatible with all recent release driver versions. However this is not true of the mongo shell, since the way the method is implemented there ( "ironically for backward compatibility" ) the arrayFilters argument is not recognized, and is removed by an internal method that parses the options in order to deliver "backward compatibility" with prior MongoDB server versions and a "legacy" .update() API call syntax. So if you want to use the command in the mongo shell or other "shell based" products ( notably Robo 3T ) you need the latest version from either the development branch or production release as of 3.6 or greater. See also positional all $[] which also updates "multiple array elements" but without applying to specified conditions and applies to all elements in the array where that is the desired action. Also see Updating a Nested Array with MongoDB for how these new positional operators apply to "nested" array structures, where "arrays are within other arrays". IMPORTANT - Upgraded installations from previous versions "may" have not enabled MongoDB features, which can also cause statements to fail. You should ensure your upgrade procedure is complete with details such as index upgrades and then run db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } ) Or a higher version as is applicable to your installed version, i.e. "4.0" for version 4 and onwards at present. This enables such features as the new positional update operators and others. You can also check with: db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ) To return the current setting
MongoDB
4,669,178
247
Assume we have the following collection, which I have few questions about: { "_id" : ObjectId("4faaba123412d654fe83hg876"), "user_id" : 123456, "total" : 100, "items" : [ { "item_name" : "my_item_one", "price" : 20 }, { "item_name" : "my_item_two", "price" : 50 }, { "item_name" : "my_item_three", "price" : 30 } ] } I want to increase the price for "item_name":"my_item_two" and if it doesn't exists, it should be appended to the "items" array. How can I update two fields at the same time? For example, increase the price for "my_item_three" and at the same time increase the "total" (with the same value). I prefer to do this on the MongoDB side, otherwise I have to load the document in client-side (Python) and construct the updated document and replace it with the existing one in MongoDB. This is what I have tried and works fine if the object exists: db.test_invoice.update({user_id : 123456 , "items.item_name":"my_item_one"} , {$inc: {"items.$.price": 10}}) However, if the key doesn't exist, it does nothing. Also, it only updates the nested object. There is no way with this command to update the "total" field as well.
For question #1, let's break it into two parts. First, increment any document that has "items.item_name" equal to "my_item_two". For this you'll have to use the positional "$" operator. Something like: db.bar.update( {user_id : 123456 , "items.item_name" : "my_item_two" } , {$inc : {"items.$.price" : 1} } , false , true); Note that this will only increment the first matched subdocument in any array (so if you have another document in the array with "item_name" equal to "my_item_two", it won't get incremented). But this might be what you want. The second part is trickier. We can push a new item to an array without a "my_item_two" as follows: db.bar.update( {user_id : 123456, "items.item_name" : {$ne : "my_item_two" }} , {$addToSet : {"items" : {'item_name' : "my_item_two" , 'price' : 1 }} } , false , true); For your question #2, the answer is easier. To increment the total and the price of item_three in any document that contains "my_item_three," you can use the $inc operator on multiple fields at the same time. Something like: db.bar.update( {"items.item_name" : "my_item_three"} , {$inc : {total : 1 , "items.$.price" : 1}} , false , true);
MongoDB
10,522,347
246
I don't seem to be able to get even the most basic date query to work in MongoDB. With a document that looks something like this: { "_id" : "foobar/201310", "ap" : "foobar", "dt" : ISODate("2013-10-01T00:00:00.000Z"), "tl" : 375439 } And a query that looks like this: { "dt" : { "$gte" : { "$date" : "2013-10-01T00:00:00.000Z" } } } I get 0 results from executing: db.mycollection.find({ "dt" : { "$gte" : { "$date" : "2013-10-01T00:00:00.000Z"}} }) Any idea why this doesn't work? For reference, this query is being produced by Spring's MongoTemplate so I don't have direct control over the query that is ultimately sent to MongoDB. (P.S.) > db.version() 2.4.7 Thanks!
Although $date is a part of MongoDB Extended JSON and that's what you get as default with mongoexport, I don't think you can really use it as a part of the query. If try exact search with $date like below: db.foo.find({dt: {"$date": "2012-01-01T15:00:00.000Z"}}) you'll get the error: error: { "$err" : "invalid operator: $date", "code" : 10068 } Try this: db.mycollection.find({ "dt" : {"$gte": new Date("2013-10-01T00:00:00.000Z")} }) or (following comments by @user3805045): db.mycollection.find({ "dt" : {"$gte": ISODate("2013-10-01T00:00:00.000Z")} }) ISODate may be also required to compare dates without time (noted by @MattMolnar). According to Data Types in the mongo Shell both should be equivalent: The mongo shell provides various methods to return the date, either as a string or as a Date object: Date() method which returns the current date as a string. new Date() constructor which returns a Date object using the ISODate() wrapper. ISODate() constructor which returns a Date object using the ISODate() wrapper. and using ISODate should still return a Date object. {"$date": "ISO-8601 string"} can be used when strict JSON representation is required. One possible example is Hadoop connector.
MongoDB
19,819,870
242
What command should I use to create a MongoDB dump of my database?
To dump your database for backup you call this command on your terminal mongodump --db database_name --collection collection_name To import your backup file to mongodb you can use the following command on your terminal mongorestore --db database_name path_to_bson_file
MongoDB
4,880,874
242
I have mongoDB 3.2 installed locally for Windows 7. I would like to find out its specific version (like is it 3.2.1, or 3.2.3 or...). How could I find it? If I open the database shell (mongo.exe), I can see it outputs: MongoDB shell version: 3.2.0 But that's just the shell version, and I'm not sure whether it's the same as my real database version.
Just run your console and type: db.version() https://docs.mongodb.com/manual/reference/method/db.version/
MongoDB
38,160,412
241
How would I find duplicate fields in a mongo collection. I'd like to check if any of the "name" fields are duplicates. { "name" : "ksqn291", "__v" : 0, "_id" : ObjectId("540f346c3e7fc1054ffa7086"), "channel" : "Sales" } Many thanks!
Use aggregation on name and get name with count > 1: db.collection.aggregate([ {"$group" : { "_id": "$name", "count": { "$sum": 1 } } }, {"$match": {"_id" :{ "$ne" : null } , "count" : {"$gt": 1} } }, {"$project": {"name" : "$_id", "_id" : 0} } ]); To sort the results by most to least duplicates: db.collection.aggregate([ {"$group" : { "_id": "$name", "count": { "$sum": 1 } } }, {"$match": {"_id" :{ "$ne" : null } , "count" : {"$gt": 1} } }, {"$sort": {"count" : -1} }, {"$project": {"name" : "$_id", "_id" : 0} } ]); To use with another column name than "name", change "$name" to "$column_name"
MongoDB
26,984,799
239
How can I set up MongoDB so it can run as a Windows service?
After trying for several hours, I finally did it. Make sure: you added the <MONGODB_PATH>\bin directory to the system variable PATH run command prompt as administrator Steps: step 1: execute this command: D:\mongodb\bin>mongod --remove Step 2: execute this command after opening command prompt as administrator: D:\mongodb\bin>mongod --dbpath=D:\mongodb --logpath=D:\mongodb\log.txt --install NOTE: you can also append --serviceName MongoDB after the command above. That's All! After that right there in the command prompt execute: services.msc // OR net start MongoDB And look for MongoDB service and click start. NOTE: Make sure to run command prompt as administrator. If you don't do this, your log file (D:\mongodb\log.txt in the above example) will contain lines like these: 2016-11-11T15:24:54.618-0800 I CONTROL [main] Trying to install Windows service 'MongoDB' 2016-11-11T15:24:54.618-0800 I CONTROL [main] Error connecting to the Service Control Manager: Access is denied. (5) and if you try to start the service from a non-admin console, (i.e. net start MongoDB or Start-Service MongoDB in PowerShell), you'll get a response like this: System error 5 has occurred. Access is denied. or this: Start-Service : Service 'MongoDB (MongoDB)' cannot be started due to the following error: Cannot open MongoDB service on computer '.'. At line:1 char:1 + Start-Service MongoDB + ~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : OpenError: (System.ServiceProcess.ServiceController:ServiceController) [Start-Service], ServiceCommandException + FullyQualifiedErrorId : CouldNotStartService,Microsoft.PowerShell.Commands.StartServiceComman
MongoDB
2,438,055
230
I am coming from riak and redis where I never had an issue with these services starting, or with interacting with them. This is a pervasive problem with mongo and I am rather clueless. Restarting does not help. I am new to mongo. mongo MongoDB shell version: 2.2.1 connecting to: test Fri Nov 9 16:44:06 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91 exception: connect failed This is what I see in the logs. now open) Fri Nov 9 16:44:34 [conn47] end connection 10.29.16.208:5306 (1 connection now open) Fri Nov 9 16:45:04 [initandlisten] connection accepted from 10.29.16.208:5307 #48 (2 connections now open) Fri Nov 9 16:45:04 [conn48] end connection 10.29.16.208:5307 (1 connection now open) Fri Nov 9 16:45:04 [initandlisten] connection accepted from 10.29.16.208:5308 #49 (2 connections now open) Fri Nov 9 16:45:04 [conn49] end connection 10.29.16.208:5308 (1 connection now open) Fri Nov 9 16:45:34 [initandlisten] connection accepted from 10.29.16.208:5316 #50 (2 connections now open) Fri Nov 9 16:45:34 [conn50] end connection 10.29.16.208:5316 (1 connection now open) Fri Nov 9 16:45:34 [initandlisten] connection accepted from 10.29.16.208:5317 #51 (2 connections now open) Fri Nov 9 16:45:34 [conn51] end connection 10.29.16.208:5317 (1 connection now open) Fri Nov 9 16:46:04 [initandlisten] connection accepted from 10.29.16.208:5320 #52 (2 connections now open) Fri Nov 9 16:46:04 [conn52] end connection 10.29.16.208:5320 (1 connection now open) Fri Nov 9 16:46:04 [initandlisten] connection accepted from 10.29.16.208:5321 #53 (2 connections now open) Fri Nov 9 16:46:04 [conn53] end connection 10.29.16.208:5321 (1 conn
Normally this is caused by not starting the mongod process before you try starting the mongo shell. Start the mongod server mongod Open another terminal window Start the mongo shell mongo
MongoDB
13,312,358
227
I'm creating a sort of background job queue system with MongoDB as the data store. How can I "listen" for inserts to a MongoDB collection before spawning workers to process the job? Do I need to poll every few seconds to see if there are any changes from last time, or is there a way my script can wait for inserts to occur? This is a PHP project that I am working on, but feel free to answer in Ruby or language agnostic.
What you are thinking of sounds a lot like triggers. MongoDB does not have any support for triggers, however some people have "rolled their own" using some tricks. The key here is the oplog. When you run MongoDB in a Replica Set, all of the MongoDB actions are logged to an operations log (known as the oplog). The oplog is basically just a running list of the modifications made to the data. Replica Sets function by listening to changes on this oplog and then applying the changes locally. Does this sound familiar? I cannot detail the whole process here; it is several pages of documentation, but the tools you need are available. First some write-ups on the oplog - Brief description - Layout of the local collection (which contains the oplog) You will also want to leverage tailable cursors. These will provide you with a way to listen for changes instead of polling for them. Note that replication uses tailable cursors, so this is a supported feature.
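A hedged shell sketch of tailing the oplog for inserts into one collection; the namespace mydb.jobs is a placeholder, and this requires running as a replica set so that local.oplog.rs exists:
var cursor = db.getSiblingDB('local').oplog.rs
    .find({ ns: 'mydb.jobs', op: 'i' }) // 'i' entries are inserts
    .addOption(DBQuery.Option.tailable)
    .addOption(DBQuery.Option.awaitData);
while (cursor.hasNext()) {
    var entry = cursor.next();
    printjson(entry.o); // entry.o is the inserted document: hand it to a worker here
}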
MongoDB
9,691,316
227
Here is the array structure contact: { phone: [ { number: "+1786543589455", place: "New Jersey", createdAt: "" }, { number: "+1986543589455", place: "Houston", createdAt: "" } ] } Here I only know the mongo id (_id) and phone number (+1786543589455) and I need to remove the whole corresponding array element from the document, i.e. the zero-indexed element in the phone array is matched by the phone number and the corresponding array element needs to be removed. contact: { phone: [ { number: "+1986543589455", place: "Houston", createdAt: "" } ] } I tried with the following update method collection.update( { _id: id, 'contact.phone': '+1786543589455' }, { $unset: { 'contact.phone.$.number': '+1786543589455'} } ); But it removes number: +1786543589455 from the inner array object, not the zero-indexed element in the phone array. I tried with pull as well, without success. How do I remove the array element in mongodb?
Try the following query: collection.update( { _id: id }, { $pull: { 'contact.phone': { number: '+1786543589455' } } } ); It will find document with the given _id and remove the phone +1786543589455 from its contact.phone array. You can use $unset to unset the value in the array (set it to null), but not to remove it completely.
MongoDB
16,959,099
223
Is it possible for the same exact Mongo ObjectId to be generated for a document in two different collections? I realize that it's definitely very unlikely, but is it possible? Without getting too specific, the reason I ask is that with an application that I'm working on we show public profiles of elected officials who we hope to convert into full fledged users of our site. We have separate collections for users and the elected officials who aren't currently members of our site. There are various other documents containing various pieces of data about the elected officials that all map back to the person using their elected official ObjectId. After creating the account we still highlight the data that's associated to the elected official but they now also are a part of the users collection with a corresponding users ObjectId to map their profile to interactions with our application. We had begun converting our application from MySql to Mongo a few months ago and while we're in transition we store the legacy MySql id for both of these data types and we're also starting to now store the elected official Mongo ObjectId in the users document to map back to the elected official data. I was pondering just specifying the new user ObjectId as the previous elected official ObjectId to make things simpler but wanted to make sure that it wasn't possible to have a collision with any existing user ObjectId. Thanks for your insight. Edit: Shortly after posting this question, I realized that my proposed solution wasn't a very good idea. It would be better to just keep the current schema that we have in place and just link to the elected official '_id' in the users document.
Short Answer Just to add a direct response to your initial question: YES, if you use BSON Object ID generation, then for most drivers the IDs are almost certainly going to be unique across collections. See below for what "almost certainly" means. Long Answer The BSON Object ID's generated by Mongo DB drivers are highly likely to be unique across collections. This is mainly because of the last 3 bytes of the ID, which for most drivers is generated via a static incrementing counter. That counter is collection-independent; it's global. The Java driver, for example, uses a randomly initialized, static AtomicInteger. So why, in the Mongo docs, do they say that the IDs are "highly likely" to be unique, instead of outright saying that they WILL be unique? Three possibilities can occur where you won't get a unique ID (please let me know if there are more): Before this discussion, recall that the BSON Object ID consists of: [4 bytes seconds since epoch, 3 bytes machine hash, 2 bytes process ID, 3 bytes counter] Here are the three possibilities, so you judge for yourself how likely it is to get a dupe: 1) Counter overflow: there are 3 bytes in the counter. If you happen to insert over 16,777,216 (2^24) documents in a single second, on the same machine, in the same process, then you may overflow the incrementing counter bytes and end up with two Object IDs that share the same time, machine, process, and counter values. 2) Counter non-incrementing: some Mongo drivers use random numbers instead of incrementing numbers for the counter bytes. In these cases, there is a 1/16,777,216 chance of generating a non-unique ID, but only if those two IDs are generated in the same second (i.e. before the time section of the ID updates to the next second), on the same machine, in the same process. 3) Machine and process hash to the same values. The machine ID and process ID values may, in some highly unlikely scenario, map to the same values for two different machines. If this occurs, and at the same time the two counters on the two different machines, during the same second, generate the same value, then you'll end up with a duplicate ID. These are the three scenarios to watch out for. Scenario 1 and 3 seem highly unlikely, and scenario 2 is totally avoidable if you're using the right driver. You'll have to check the source of the driver to know for sure.
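The 4-byte time prefix described above is directly recoverable in the shell, which is a quick way to convince yourself of the layout (any ObjectId value works here):
ObjectId("4fc67871349bb7bf6a000002").getTimestamp()
// returns an ISODate built from the leading 4 bytes (seconds since epoch)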
MongoDB
4,677,237
223
I have a JSON file consisting of about 2000 records. Each record which will correspond to a document in the mongo database is formatted as follows: {jobID:"2597401", account:"XXXXX", user:"YYYYY", pkgT:{"pgi/7.2-5":{libA:["libpgc.so"],flavor:["default"]}}, startEpoch:"1338497979", runTime:"1022", execType:"user:binary", exec:"/share/home/01482/XXXXX/appker/ranger/NPB3.3.1/NPB3.3-MPI/bin/ft.D.64", numNodes:"4", sha1:"5a79879235aa31b6a46e73b43879428e2a175db5", execEpoch:1336766742, execModify: new Date("Fri May 11 15:05:42 2012"), startTime: new Date("Thu May 31 15:59:39 2012"), numCores:"64", sizeT:{bss:"1881400168",text:"239574",data:"22504"}}, Each record is on a single line in the JSON file, and the only line breaks are at the end of every record. Therefore, each line in the document starts with "{jobID:"... I am trying to import these into a mongo database using the following command: mongoimport --db dbName --collection collectionName --file fileName.json However, I get the following error: Sat Mar 2 01:26:12 Assertion: 10340:Failure parsing JSON string near: ,execModif 0x10059f12b 0x100562d5c 0x100562e9c 0x10025eb98 0x10000e643 0x100010b60 0x10055c4cc 0x1000014b7 0x100001454 0 mongoimport 0x000000010059f12b _ZN5mongo15printStackTraceERSo + 43 1 mongoimport 0x0000000100562d5c _ZN5mongo11msgassertedEiPKc + 204 2 mongoimport 0x0000000100562e9c _ZN5mongo11msgassertedEiRKSs + 12 3 mongoimport 0x000000010025eb98 _ZN5mongo8fromjsonEPKcPi + 1576 4 mongoimport 0x000000010000e643 _ZN6Import8parseRowEPSiRN5mongo7BSONObjERi + 2739 5 mongoimport 0x0000000100010b60 _ZN6Import3runEv + 7376 6 mongoimport 0x000000010055c4cc _ZN5mongo4Tool4mainEiPPc + 5436 7 mongoimport 0x00000001000014b7 main + 55 8 mongoimport 0x0000000100001454 start + 52 Sat Mar 2 01:26:12 exception:BSON representation of supplied JSON is too large: Failure parsing JSON string near: ,execModif Sat Mar 2 01:26:12 Sat Mar 2 01:26:12 imported 0 objects Sat Mar 2 01:26:12 ERROR: encountered 1941 errors I do not know what the problem is. Can someone recommend a solution?
I was able to fix the error using the following command: mongoimport --db dbName --collection collectionName --file fileName.json --jsonArray Hopefully this is helpful to someone.
MongoDB
15,171,622
222
How can I store images in a MongoDB database rather than just text? Can I create an array of images in a MongoDB database? Will it be possible to do the same for videos?
Please see the GridFS docs for details on storing such binary data. Support for your specific language should be linked to at the bottom of the screen.
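A hedged sketch with the Node.js driver's GridFSBucket API (the file name and path are placeholders, and db is assumed to be an already-connected Db instance); videos work the same way, since GridFS chunks arbitrary binary data:
var mongodb = require('mongodb');
var fs = require('fs');
var bucket = new mongodb.GridFSBucket(db);
fs.createReadStream('./photo.png')
  .pipe(bucket.openUploadStream('photo.png'))
  .on('finish', function () {
    console.log('image stored in GridFS');
  });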
MongoDB
4,796,914
222
It's widely mentioned that Redis is "Blazing Fast" and mongoDB is fast too. But, I'm having trouble finding actual numbers comparing the results of the two. Given similar configurations, features and operations (and maybe showing how the factor changes with different configurations and operations), etc, is Redis 10x faster?, 2x faster?, 5x faster? I'm ONLY speaking of performance. I understand that mongoDB is a different tool and has a richer feature set. This is not the "Is mongoDB better than Redis" debate. I'm asking, by what margin does Redis outperform mongoDB? At this point, even cheap benchmarks are better than no benchmarks.
Rough results from the following benchmark: 2x write, 3x read. Here's a simple benchmark in python you can adapt to your purposes; I was looking at how well each would perform simply setting/retrieving values: #!/usr/bin/env python2.7 import sys, time from pymongo import Connection import redis # connect to redis & mongodb redis = redis.Redis() mongo = Connection().test collection = mongo['test'] collection.ensure_index('key', unique=True) def mongo_set(data): for k, v in data.iteritems(): collection.insert({'key': k, 'value': v}) def mongo_get(data): for k in data.iterkeys(): val = collection.find_one({'key': k}, fields=('value',)).get('value') def redis_set(data): for k, v in data.iteritems(): redis.set(k, v) def redis_get(data): for k in data.iterkeys(): val = redis.get(k) def do_tests(num, tests): # setup dict with key/values to retrieve data = {'key' + str(i): 'val' + str(i)*100 for i in range(num)} # run tests for test in tests: start = time.time() test(data) elapsed = time.time() - start print "Completed %s: %d ops in %.2f seconds : %.1f ops/sec" % (test.__name__, num, elapsed, num / elapsed) if __name__ == '__main__': num = 1000 if len(sys.argv) == 1 else int(sys.argv[1]) tests = [mongo_set, mongo_get, redis_set, redis_get] # order of tests is significant here! do_tests(num, tests) Results for with mongodb 1.8.1 and redis 2.2.5 and latest pymongo/redis-py: $ ./cache_benchmark.py 10000 Completed mongo_set: 10000 ops in 1.40 seconds : 7167.6 ops/sec Completed mongo_get: 10000 ops in 2.38 seconds : 4206.2 ops/sec Completed redis_set: 10000 ops in 0.78 seconds : 12752.6 ops/sec Completed redis_get: 10000 ops in 0.89 seconds : 11277.0 ops/sec Take the results with a grain of salt of course! If you are programming in another language, using other clients/different implementations, etc., your results will vary wildly. Not to mention your usage will be completely different! Your best bet is to benchmark them yourself, in precisely the manner you are intending to use them. As a corollary you'll probably figure out the best way to make use of each. Always benchmark for yourself!
MongoDB
5,252,577
220
let's say I run this query in Mongoose: Room.find({}, (err,docs) => { }).sort({date:-1}); This doesn't work!
Sorting in Mongoose has evolved over the releases such that some of these answers are no longer valid. As of the 4.1.x release of Mongoose, a descending sort on the date field can be done in any of the following ways: Room.find({}).sort('-date').exec((err, docs) => { ... }); Room.find({}).sort({date: -1}).exec((err, docs) => { ... }); Room.find({}).sort({date: 'desc'}).exec((err, docs) => { ... }); Room.find({}).sort({date: 'descending'}).exec((err, docs) => { ... }); Room.find({}).sort([['date', -1]]).exec((err, docs) => { ... }); Room.find({}, null, {sort: '-date'}, (err, docs) => { ... }); Room.find({}, null, {sort: {date: -1}}, (err, docs) => { ... }); For an ascending sort, omit the - prefix on the string version or use values of 1, asc, or ascending.
MongoDB
5,825,520
219
I'd like to generate a MongoDB ObjectId with Mongoose. Is there a way to access the ObjectId constructor from Mongoose? This question is about generating a new ObjectId from scratch. The generated ID is a brand new universally unique ID. Another question asks about creating an ObjectId from an existing string representation. In this case, you already have a string representation of an ID—it may or may not be universally unique—and you are parsing it into an ObjectId.
You can find the ObjectId constructor on require('mongoose').Types. Here is an example: var mongoose = require('mongoose'); var id = mongoose.Types.ObjectId(); id is a newly generated ObjectId. Note: As Joshua Sherman points out, with Mongoose 6 you must prefix the call with new: var id = new mongoose.Types.ObjectId(); You can read more about the Types object at Mongoose#Types documentation.
MongoDB
17,899,750
218
Not sure what I'm doing wrong; here is my check.js var db = mongoose.createConnection('localhost', 'event-db'); db.on('error', console.error.bind(console, 'connection error:')); var a1= db.once('open',function(){ var user = mongoose.model('users',{ name:String, email:String, password:String, phone:Number, _enabled:Boolean }); user.find({},{},function (err, users) { mongoose.connection.close(); console.log("Username supplied"+username); //doSomethingHere }) }); and here is my insert.js var mongoose = require('mongoose'); mongoose.connect('mongodb://localhost/event-db') var user = mongoose.model('users',{ name:String, email:String, password: String, phone:Number, _enabled:Boolean }); var new_user = new user({ name:req.body.name, email: req.body.email, password: req.body.password, phone: req.body.phone, _enabled:false }); new_user.save(function(err){ if(err) console.log(err); }); Whenever I'm trying to run check.js, I'm getting this error Cannot overwrite 'users' model once compiled. I understand that this error comes from a mismatch of the Schema, but I cannot see where this is happening? I'm pretty new to mongoose and nodeJS. Here is what I'm getting from the client interface of my MongoDB: MongoDB shell version: 2.4.6 connecting to: test > use event-db switched to db event-db > db.users.find() { "_id" : ObjectId("52457d8718f83293205aaa95"), "name" : "MyName", "email" : "[email protected]", "password" : "myPassword", "phone" : 900001123, "_enable" : true } >
Another reason you might get this error is if you use the same model in different files but your require path has a different case. For example, in my situation I had require('./models/User') in one file, and then in another file where I needed access to the User model, I had require('./models/user'). I guess the lookup for modules & mongoose is treating it as a different file. Once I made sure the case matched in both it was no longer an issue.
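A defensive pattern that sidesteps this class of error is to reuse the compiled model from Mongoose's registry if it already exists. This is a sketch, assuming a userSchema is defined elsewhere in the file:

var mongoose = require('mongoose');

// Reuse the already-compiled model when the file is required twice
// (e.g. under different path casings); compile it only once.
module.exports = mongoose.models.User || mongoose.model('User', userSchema);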
MongoDB
19,051,041
214
I'm trying to use the sort feature when querying my mongoDB, but it is failing. The same query works in the MongoDB console but not here. Code is as follows: import pymongo from pymongo import Connection connection = Connection() db = connection.myDB print db.posts.count() for post in db.posts.find({}, {'entities.user_mentions.screen_name':1}).sort({u'entities.user_mentions.screen_name':1}): print post The error I get is as follows: Traceback (most recent call last): File "find_ow.py", line 7, in <module> for post in db.posts.find({}, {'entities.user_mentions.screen_name':1}).sort({'entities.user_mentions.screen_name':1},1): File "/Library/Python/2.6/site-packages/pymongo-2.0.1-py2.6-macosx-10.6-universal.egg/pymongo/cursor.py", line 430, in sort File "/Library/Python/2.6/site-packages/pymongo-2.0.1-py2.6-macosx-10.6-universal.egg/pymongo/helpers.py", line 67, in _index_document TypeError: first item in each key pair must be a string I found a link elsewhere that says I need to place a 'u' infront of the key if using pymongo, but that didn't work either. Anyone else get this to work or is this a bug.
.sort(), in pymongo, takes key and direction as parameters. So if you want to sort by, let's say, id then you should:

.sort("_id", 1)

For multiple fields:

.sort([("field1", pymongo.ASCENDING), ("field2", pymongo.DESCENDING)])
MongoDB
8,109,122
214
I've seen strings, integer timestamps and mongo datetime objects used.
The best way is to store native JavaScript Date objects, which map onto BSON native Date objects.

> db.test.insert({date: ISODate()})
> db.test.insert({date: new Date()})
> db.test.find()
{ "_id" : ObjectId("..."), "date" : ISODate("2014-02-10T10:50:42.389Z") }
{ "_id" : ObjectId("..."), "date" : ISODate("2014-02-10T10:50:57.240Z") }

The native type supports a whole range of useful methods out of the box, which you can use in your map-reduce jobs, for example. If you need to, you can easily convert Date objects to and from Unix timestamps1), using the getTime() method and Date(milliseconds) constructor, respectively.

1) Strictly speaking, the Unix timestamp is measured in seconds. The JavaScript Date object measures in milliseconds since the Unix epoch.
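For illustration, the round trip between a Date and a Unix timestamp mentioned above looks like this in the shell (the numeric value is just an example):

> var d = new Date()
> d.getTime()               // Date -> milliseconds since the Unix epoch
1392029457240
> new Date(1392029457240)   // milliseconds -> Date
ISODate("2014-02-10T10:50:57.240Z")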
MongoDB
3,778,428
213
I saw the following code in this commit for MongoDB's Java Connection driver, and it appears at first to be a joke of some sort. What does the following code do? if (!((_ok) ? true : (Math.random() > 0.1))) { return res; } (EDIT: the code has been updated since posting this question)
After inspecting the history of that line, my main conclusion is that there has been some incompetent programming at work:

1. That line is gratuitously convoluted. The general form

   a ? true : b

   for boolean a, b is equivalent to the simple

   a || b

2. The surrounding negation and excessive parentheses convolute things further. Keeping in mind De Morgan's laws it is a trivial observation that this piece of code amounts to

   if (!_ok && Math.random() <= 0.1) return res;

3. The commit that originally introduced this logic had

   if (_ok == true) {
       _logger.log( Level.WARNING , "Server seen down: " + _addr, e );
   } else if (Math.random() < 0.1) {
       _logger.log( Level.WARNING , "Server seen down: " + _addr );
   }

   —another example of incompetent coding, but notice the reversed logic: here the event is logged if either _ok or in 10% of other cases, whereas the code in 2. returns 10% of the time and logs 90% of the time. So the later commit ruined not only clarity, but correctness itself.

I think in the code you have posted we can actually see how the author intended to transform the original if-then somehow literally into its negation required for the early return condition. But then he messed up and inserted an effective "double negative" by reversing the inequality sign.

Coding style issues aside, stochastic logging is quite a dubious practice all by itself, especially since the log entry does not document its own peculiar behavior. The intention is, obviously, reducing restatements of the same fact: that the server is currently down. The appropriate solution is to log only changes of the server state, and not each individual observation, let alone a random selection of 10% of such observations. Yes, that takes just a little bit more effort, so let's see some.

I can only hope that all this evidence of incompetence, accumulated from inspecting just three lines of code, does not speak fairly of the project as a whole, and that this piece of work will be cleaned up ASAP.
MongoDB
16,833,100
212
So I'm attempting to find all records who have a field set and isn't null. I try using $exists, however according to the MongoDB documentation, this query will return fields who equal null. $exists does match documents that contain the field that stores the null value. So I'm now assuming I'll have to do something like this: db.collection.find({ "fieldToCheck" : { $exists : true, $not : null } }) Whenever I try this however, I get the error [invalid use of $not] Anyone have an idea of how to query for this?
Use $ne (for "not equal") db.collection.find({ "fieldToCheck": { $ne: null } })
MongoDB
19,868,016
210
The question is as basic as it is simple... How do you log all queries in a "tail"able log file in mongodb? I have tried: setting the profiling level setting the slow ms parameter starting mongod with the -vv option The /var/log/mongodb/mongodb.log keeps showing just the current number of active connections...
You can log all queries:

$ mongo
MongoDB shell version: 2.4.9
connecting to: test
> use myDb
switched to db myDb
> db.getProfilingLevel()
0
> db.setProfilingLevel(2)
{ "was" : 0, "slowms" : 1, "ok" : 1 }
> db.getProfilingLevel()
2
> db.system.profile.find().pretty()

Source: http://docs.mongodb.org/manual/reference/method/db.setProfilingLevel/

db.setProfilingLevel(2) means "log all operations".
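Once profiling is on, you can inspect the captured operations and switch it back off; a quick sketch:

> db.system.profile.find({ op: "query" }).sort({ ts: -1 }).limit(5).pretty()  // latest 5 queries
> db.setProfilingLevel(0)  // turn profiling back off when done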
MongoDB
15,204,341
210
I am just starting out with MongoDB and one of the things that I have noticed is that it uses BSON to store data internally. However the documentation is not exactly clear on what BSON is and how it is used in MongoDB. Can someone explain it to me, please?
BSON is the binary encoding of JSON-like documents that MongoDB uses when storing documents in collections. It adds support for data types like Date and binary that aren't supported in JSON. In practice, you don't have to know much about BSON when working with MongoDB; you just need to use the native types of your language and the supplied types (e.g. ObjectId) of its driver when constructing documents, and they will be mapped into the appropriate BSON type by the driver.
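As a small illustration, the shell maps JavaScript values straight onto BSON types, including ones that plain JSON cannot express:

> db.test.insert({ when: new Date(), ref: new ObjectId(), raw: BinData(0, "SGVsbG8=") })
> db.test.findOne()   // "when" comes back as a BSON Date, "raw" as BSON binary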
MongoDB
12,438,280
210
CSV file with contact information: Name,Address,City,State,ZIP Jane Doe,123 Main St,Whereverville,CA,90210 John Doe,555 Broadway Ave,New York,NY,10010 Running this doesn't add documents to the database: $ mongoimport -d mydb -c things --type csv --file locations.csv --headerline Trace says imported 1 objects, but in the MongoDB shell running db.things.find() doesn't show any new documents. What am I missing?
Your example worked for me with MongoDB 1.6.3 and 1.7.3. Example below was for 1.7.3. Are you using an older version of MongoDB?

$ cat > locations.csv
Name,Address,City,State,ZIP
Jane Doe,123 Main St,Whereverville,CA,90210
John Doe,555 Broadway Ave,New York,NY,10010
ctrl-d

$ mongoimport -d mydb -c things --type csv --file locations.csv --headerline
connected to: 127.0.0.1
imported 3 objects

$ mongo
MongoDB shell version: 1.7.3
connecting to: test
> use mydb
switched to db mydb
> db.things.find()
{ "_id" : ObjectId("4d32a36ed63d057130c08fca"), "Name" : "Jane Doe", "Address" : "123 Main St", "City" : "Whereverville", "State" : "CA", "ZIP" : 90210 }
{ "_id" : ObjectId("4d32a36ed63d057130c08fcb"), "Name" : "John Doe", "Address" : "555 Broadway Ave", "City" : "New York", "State" : "NY", "ZIP" : 10010 }
MongoDB
4,686,500
210
When sending a request to /customers/41224d776a326fb40f000001 and a document with _id 41224d776a326fb40f000001 does not exist, doc is null and I'm returning a 404:

Controller.prototype.show = function(id, res) {
  this.model.findById(id, function(err, doc) {
    if (err) { throw err; }
    if (!doc) { res.send(404); }
    return res.send(doc);
  });
};

However, when _id does not match what Mongoose expects as "format" (I suppose), for example with GET /customers/foo, a strange error is returned:

CastError: Cast to ObjectId failed for value "foo" at path "_id".

So what's this error?
Mongoose's findById method casts the id parameter to the type of the model's _id field so that it can properly query for the matching doc. This is an ObjectId but "foo" is not a valid ObjectId so the cast fails. This doesn't happen with 41224d776a326fb40f000001 because that string is a valid ObjectId. One way to resolve this is to add a check prior to your findById call to see if id is a valid ObjectId or not, like so:

if (id.match(/^[0-9a-fA-F]{24}$/)) {
  // Yes, it's a valid ObjectId, proceed with `findById` call.
}
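Newer Mongoose versions also ship a helper for this check, which avoids hand-rolling the regex; a sketch (note that isValid also accepts any 12-byte string, so the regex above is actually stricter):

var mongoose = require('mongoose');

if (mongoose.Types.ObjectId.isValid(id)) {
  // safe to pass to findById
} else {
  res.send(404); // can't possibly match a document
}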
MongoDB
14,940,660
208
My host came with a mongodb instance and there is no /db directory so now I am wondering what I can do to find out where the data is actually being stored.
mongod defaults the database location to /data/db/. If you run ps -xa | grep mongod and you don't see a --dbpath (which explicitly tells mongod to look at that parameter for the db location), and you don't have a dbpath in your mongodb.conf, then the default location will be /data/db/ and you should look there.
MongoDB
7,247,474
206
I'm trying to select only a specific field with

exports.someValue = function(req, res, next) {
  // query with mongoose
  var query = dbSchemas.SomeValue.find({}).select('name');
  query.exec(function (err, someValue) {
    if (err) return next(err);
    res.send(someValue);
  });
};

But in my json response I'm also receiving the _id; my document schema only has two fields, _id and name:

[{"_id":70672,"name":"SOME VALUE 1"},{"_id":71327,"name":"SOME VALUE 2"}]

Why???
The _id field is always present unless you explicitly exclude it. Do so using the - syntax:

exports.someValue = function(req, res, next) {
  // query with mongoose
  var query = dbSchemas.SomeValue.find({}).select('name -_id');
  query.exec(function (err, someValue) {
    if (err) return next(err);
    res.send(someValue);
  });
};

Or explicitly via an object:

exports.someValue = function(req, res, next) {
  // query with mongoose
  var query = dbSchemas.SomeValue.find({}).select({ "name": 1, "_id": 0 });
  query.exec(function (err, someValue) {
    if (err) return next(err);
    res.send(someValue);
  });
};
MongoDB
24,348,437
205
This question is about making an architectural choice prior to delving into the details of experimentation and implementation. It's about the suitability, in scalability and performance terms, of elasticsearch vs. MongoDB, for a somewhat specific purpose.

Hypothetically both store data objects that have fields and values, and allow querying that body of objects. So presumably filtering out subsets of the objects according to fields selected ad-hoc is something fit for both.

My application will revolve around selecting objects according to criteria. It would select objects by filtering simultaneously by more than a single field; put differently, its query filtering criteria would typically comprise anywhere between 1 and 5 fields, maybe more in some cases. Whereas the fields chosen as filters would be a subset of a much larger amount of fields. Picture some 20 field names existing, and each query is an attempt to filter the objects by a few fields out of those overall 20 fields (it can be less or more than 20 overall field names existing, I just used this number to demonstrate the ratio of fields to fields used as filters in every discrete query). The filtering can be by the existence of the chosen fields, as well as by the field values, e.g. filtering out objects that have field A, and their field B is between x and y, and their field C is equal to w.

My application will be continuously doing this sort of filtering, whereas there would be nothing or very little constant in terms of which fields are used for the filtering at any moment. Perhaps in elasticsearch indexes need to be defined, but maybe even without indexes speed is at par with that of MongoDB.

As per the data getting into the store, there are no special details about that... the objects would almost never be changed after having been inserted. Perhaps old objects would need to be dropped; I'd like to assume both data stores support expiry-based deletion internally or via an application-made query. (Less frequently, objects that fit a certain query would need to be dropped as well.)

What do you think? And, have you experimented with this aspect? I am interested in the performance and the scalability of each of the two data stores for this kind of task. This is the sort of architectural design question where details of store-specific options or query cornerstones that should make it well architected are welcome as a demonstration of a fully thought-out suggestion. Thanks!
First off, there is an important distinction to make here: MongoDB is a general purpose database, Elasticsearch is a distributed text search engine backed by Lucene. People have been talking about using Elasticsearch as a general purpose database but know that it was not its original design. I think that general purpose NoSQL databases and search engines are headed for consolidation but as it stands, the two come from two very different camps.

We are using both MongoDB and Elasticsearch in my company. We store our data in MongoDB and use Elasticsearch exclusively for its full-text search capabilities. We only send a subset of the mongo data fields that we need to query to elastic. Our use case differs from yours in that our Mongo data changes all the time: a record, or a subset of the fields of a record, can be updated several times a day and this can call for re-indexing of that record to elastic. For that reason alone, using elastic as the sole data store is not a good option for us, as we can't update select fields; we would need to re-index a document in its entirety. This is not an elastic limitation, this is how Lucene works, the underlying search engine behind elastic. In your case, the fact that records won't be changed once stored saves you from having to make that choice. Having said that, if data safety is a concern, I would think twice about using Elasticsearch as the only storage mechanism for your data. It may get there at some point but I'm not sure it's there yet.

In terms of speed, not only is Elastic/Lucene on par with the querying speed of Mongo, in your case where there is "very little constant in terms of which fields are used for the filtering at any moment", it could be orders of magnitude faster, especially as the datasets become larger. The difference lies in the underlying query implementations:

Elastic/Lucene use the Vector Space Model and inverted indexes for Information Retrieval, which are highly efficient ways of comparing record similarity against a query. When you query Elastic/Lucene, it already knows the answer; most of its work lies in ranking the results for you by the most likely ones to match your query terms. This is an important point: search engines, as opposed to databases, can't guarantee you exact results; they rank results by how close they get to your query. It just so happens that most of the time, the results are close to exact.

Mongo's approach is that of a more general purpose data store; it compares JSON documents against one another. You can get great performance out of it by all means, but you need to carefully craft your indexes to match the queries you will be running. Specifically, if you have multiple fields by which you will query, you need to carefully craft your compound keys so that they reduce the dataset that will be queried as fast as possible. E.g. your first key should filter down the majority of your dataset, your second should further filter down what is left, and so on and so forth. If your queries don't match the keys and the order of those keys in the defined indexes, your performance will drop quite a bit. On the other hand, Mongo is a true database, so if accuracy is what you need, the answers it will give will be spot on.

For expiring old records, Elastic has a built in TTL feature. Mongo just introduced it as of version 2.2 I think.

Since I don't know your other requirements such as expected data size, transactions, accuracy or what your filters will look like, it's hard to make any specific recommendations.
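To make the compound-key advice concrete, here is a minimal shell sketch; the collection and field names are hypothetical, purely for illustration:

// First key filters the most, then progressively narrower keys
db.items.createIndex({ type: 1, status: 1, created: 1 })

// A query that matches the key order can use the index efficiently:
db.items.find({ type: "report", status: "open", created: { $gte: ISODate("2012-01-01") } })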
Hopefully, there is enough here to get you started.
MongoDB
12,723,239
205
Is it possible to query for a specific date? I found in the mongo Cookbook that we can do it for a range (Querying for a Date Range), like that:

db.posts.find({"created_on": {"$gte": start, "$lt": end}})

But is it possible for a specific date? This doesn't work:

db.posts.find({"created_on": new Date(2012, 7, 14) })
That should work if the dates you saved in the DB are without time (just year, month, day). Chances are that the dates you saved were new Date(), which includes the time components. To query those times you need to create a date range that includes all moments in a day.

db.posts.find({
  // query today up to tonight
  created_on: {
    $gte: new Date(2012, 7, 14),
    $lt: new Date(2012, 7, 15)
  }
})
MongoDB
11,973,304
204
Is there a way to specify a condition of "where document doesn't contain field"? For example, I want to only find the first of these 2 because it doesn't have the "price" field.

{"fruit":"apple", "color":"red"}
{"fruit":"banana", "color":"yellow", "price":"2.00"}
Try the $exists operator: db.mycollection.find({ "price" : { "$exists" : false } }) and see its documentation.
MongoDB
8,567,469
204
I am trying to change the type of a field from within the mongo shell. I am doing this...

db.meta.update(
  {'fields.properties.default': { $type : 1 }},
  {'fields.properties.default': { $type : 2 }}
)

But it's not working!
The only way to change the $type of the data is to perform an update on the data where the data has the correct type. In this case, it looks like you're trying to change the $type from 1 (double) to 2 (string). So simply load the document from the DB, perform the cast (new String(x)) and then save the document again. If you need to do this programmatically and entirely from the shell, you can use the find(...).forEach(function(x) {}) syntax.

In response to the second comment below: change the field bad from a number to a string in collection foo.

db.foo.find( { 'bad' : { $type : 1 } } ).forEach( function (x) {
  x.bad = new String(x.bad); // convert field to string
  db.foo.save(x);
});
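On reasonably recent servers (MongoDB 4.2+), an aggregation-pipeline update can do the cast server-side without round-tripping each document; a sketch:

// Convert all double values of "bad" to strings in one server-side pass
db.foo.updateMany(
  { bad: { $type: 1 } },                      // doubles only
  [ { $set: { bad: { $toString: "$bad" } } } ]
)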
MongoDB
4,973,095
204
How can I populate "components" in the example document: { "__v": 1, "_id": "5252875356f64d6d28000001", "pages": [ { "__v": 1, "_id": "5252875a56f64d6d28000002", "page": { "components": [ "525287a01877a68528000001" ] } } ], "author": "Book Author", "title": "Book Title" } This is my JS where I get document by Mongoose: Project.findById(id).populate('pages').exec(function(err, project) { res.json(project); });
Mongoose 4.5 supports this:

Project.find(query)
  .populate({
    path: 'pages',
    populate: {
      path: 'components',
      model: 'Component'
    }
  })
  .exec(function(err, docs) {});

And you can join more than one deep level.

Edit 03/17/2021: This is the library's implementation; what it does behind the scenes is make another query to fetch things for you and then join in memory. Although this works, we really should not rely on it. It will make your db design look like SQL tables. This is a costly operation and does not scale well. Please try to design your documents so that joins are reduced.
MongoDB
19,222,520
201
I was surprised to find that the following example code only updates a single document:

> db.test.save({"_id":1, "foo":"bar"});
> db.test.save({"_id":2, "foo":"bar"});
> db.test.update({"foo":"bar"}, {"$set":{"test":"success!"}});
> db.test.find({"test":"success!"}).count();
1

I know I can loop through and keep updating until they're all changed, but that seems terribly inefficient. Is there a better way?
Multi update was added recently, so is only available in the development releases (1.1.3). From the shell you do a multi update by passing true as the fourth argument to update(), where the third argument is the upsert argument:

db.test.update({foo: "bar"}, {$set: {test: "success!"}}, false, true);

For versions of mongodb 2.2+ you need to set the option multi to true to update multiple documents at once.

db.test.update({foo: "bar"}, {$set: {test: "success!"}}, {multi: true})

For versions of mongodb 3.2+ you can also use the new method updateMany() to update multiple documents at once, without the need of the separate multi option.

db.test.updateMany({foo: "bar"}, {$set: {test: "success!"}})
MongoDB
1,740,023
200
Every day, I receive a stock of documents (an update). What I want to do is insert each item that does not already exist. I also want to keep track of the first time I inserted them, and the last time I saw them in an update. I don't want to have duplicate documents. I don't want to remove a document which has previously been saved, but is not in my update. 95% (estimated) of the records are unmodified from day to day. I am using the Python driver (pymongo). What I currently do is (pseudo-code):

for each document in update:
    existing_document = collection.find_one(document)
    if not existing_document:
        document['insertion_date'] = now
    else:
        document = existing_document
    document['last_update_date'] = now
    my_collection.save(document)

My problem is that it is very slow (40 mins for less than 100 000 records, and I have millions of them in the update). I am pretty sure there is something builtin for doing this, but the document for update() is mmmhhh.... a bit terse.... (http://www.mongodb.org/display/DOCS/Updating ) Can someone advise how to do it faster?
Sounds like you want to do an upsert. MongoDB has built-in support for this. Pass an extra parameter to your update() call: {upsert:true}. For example:

key = {'key':'value'}
data = {'key2':'value2', 'key3':'value3'}
coll.update(key, data, upsert=True)  # in Python, upsert must be passed as a keyword argument

This replaces your if-find-else-update block entirely. It will insert if the key doesn't exist and will update if it does.

Before: {"key":"value", "key2":"Ohai."}
After: {"key":"value", "key2":"value2", "key3":"value3"}

You can also specify what data you want to write:

data = {"$set":{"key2":"value2"}}

Now your selected document will update the value of key2 only and leave everything else untouched.
MongoDB
2,801,008
196
So I've been learning Spring in the couples of week, been following this tutorial Building a RESTful Web Service All was well until I tried to integrate it to mongodb. So I follow this tutorial. Accessing Data with MongoDB But my practice is partially still using the first one. So my project directory structure is like this. src/ ├── main/ │ └── java/ | ├── model/ | | └── User.java | ├── rest/ | | ├── Application.java | | ├── IndexController.java | | └── UsersController.java | └── service/ | └── UserService.java └── resources/ └── application.properties This is my model/User.java file package main.java.model; import org.springframework.data.annotation.Id; import org.springframework.data.mongodb.core.mapping.Document; @Document(collection="user") public class User { private int age; private String country; @Id private String id; private String name; public User() { super(); } public String getId() { return id; } public void setId(String id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public int getAge() { return age; } public void setAge(int age) { this.age = age; } public String getCountry() { return country; } public void setCountry(String country) { this.country = country; } } This is my rest/UsersController.java file package main.java.rest; import java.util.List; import main.java.service.UserService; import main.java.model.User; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; @RestController @RequestMapping(value = "/users") public class UsersController { @Autowired UserService userService; @RequestMapping(method = RequestMethod.GET) public List<User> getAllUsers() { return userService.findAll(); } } This is my service/UserService.java file package main.java.service; import java.util.List; import main.java.model.User; import org.springframework.data.mongodb.repository.MongoRepository; public interface UserService extends MongoRepository<User, String> { public List<User> findAll(); } I could compile them (I'm using gradle for compilation because I'm following the tutorial), but when I run the jar file it was throwing this error. APPLICATION FAILED TO START Description: Field userService in main.java.rest.UsersController required a bean of type 'main.java.service.UserService' that could not be found. Action: Consider defining a bean of type 'main.java.service.UserService' in your configuration. Not sure what is wrong I start googling around and found that I need to include Beans.xml file and register the userService in it. I did that but it's not working. I'm really new to this so I really have no clue on what's going on.
Solved it. So by default, all packages that fall under the @SpringBootApplication declaration will be scanned. Assuming my main class ExampleApplication that has the @SpringBootApplication declaration is declared inside com.example.something, then all components that fall under com.example.something are scanned while com.example.applicant will not be scanned. So, there are two ways to do it based on this question. Use

@SpringBootApplication(scanBasePackages={"com.example.something", "com.example.application"})

That way, the application will scan all the specified components, but I thought: what if the scale gets bigger? So I used the second approach, by restructuring my packages, and it worked! Now my packages structure became like this.

src/
├── main/
│   └── java/
│       ├── com.example/
│       │   └── Application.java
│       ├── com.example.model/
│       │   └── User.java
│       ├── com.example.controller/
│       │   ├── IndexController.java
│       │   └── UsersController.java
│       └── com.example.service/
│           └── UserService.java
└── resources/
    └── application.properties
MongoDB
42,907,553
195
I'm a little bit confused by the findAndModify method in MongoDB. What's the advantage of it over the update method? For me, it seems that it just returns the item first and then updates it. But why do I need to return the item first? I read the MongoDB: the definitive guide and it says that it is handy for manipulating queues and performing other operations that need get-and-set style atomicity. But I didn't understand how it achieves this. Can somebody explain this to me?
If you fetch an item and then update it, there may be an update by another thread between those two steps. If you update an item first and then fetch it, there may be another update in-between and you will get back a different item than what you updated. Doing it "atomically" means you are guaranteed that you are getting back the exact same item you are updating - i.e. no other operation can happen in between.
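A classic illustration is popping a job off a work queue: the query, the sort and the update happen as one atomic step, so two workers can never claim the same job. A shell sketch (the jobs collection and its fields are hypothetical):

db.jobs.findAndModify({
  query: { status: "pending" },                 // only unclaimed jobs
  sort: { created: 1 },                         // oldest first
  update: { $set: { status: "processing" } },   // claim it
  new: true                                     // return the claimed (updated) job
})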
MongoDB
10,778,493
195
I'm using mongoose in a script that is not meant to run continuously, and I'm facing what seems to be a very simple issue yet I can't find an answer; simply put once I make a call to any mongoose function that sends requests to mongodb my nodejs instance never stops and I have to kill it manually with, say, Ctrl+c or Program.exit(). The code looks roughly like this:

var mongoose = require('mongoose');

// if my program ends after this line, it shuts down as expected; my guess is that
// the connection is not really done here but only on the first real request?
mongoose.connect('mongodb://localhost:27017/somedb');

// define some models

// if I include this line for example, node never stops afterwards
var MyModel = mongoose.model('MyModel', MySchema);

I tried adding calls to mongoose.disconnect() but to no result. Aside from that, everything works fine (finding, saving, ...). This is the exact same issue as this person, sadly he did not receive any answer: https://groups.google.com/group/mongoose-orm/browse_thread/thread/c72cc1c51c76e661 Thanks

EDIT: accepted the answer below as it is technically correct, but if anyone ever hits this problem again, it seems that mongoose and/or the mongodb driver does not actually close the connection when you ask it to if there are still queries running. It does not even remember the disconnect call at all; it does not do it once queries are finished running; it just discards your call with no exception thrown or anything of the sort, and never actually closes the connection. So there you have it: make sure that every query has been processed before calling disconnect() if you want it to actually work.
You can close the connection with mongoose.connection.close()
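In line with the warning in the question's edit, make sure all queries have finished before closing; a minimal sketch ("MyModel" stands in for whatever model the script uses):

var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost:27017/somedb');

MyModel.find({}, function (err, docs) {
  // ... work with docs ...
  mongoose.connection.close(function () {
    console.log('connection closed, process can now exit');
  });
});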
MongoDB
8,813,838
195
For example, I have these documents:

{ "addr": "address1", "book": "book1" },
{ "addr": "address2", "book": "book1" },
{ "addr": "address1", "book": "book5" },
{ "addr": "address3", "book": "book9" },
{ "addr": "address2", "book": "book5" },
{ "addr": "address2", "book": "book1" },
{ "addr": "address1", "book": "book1" },
{ "addr": "address15", "book": "book1" },
{ "addr": "address9", "book": "book99" },
{ "addr": "address90", "book": "book33" },
{ "addr": "address4", "book": "book3" },
{ "addr": "address5", "book": "book1" },
{ "addr": "address77", "book": "book11" },
{ "addr": "address1", "book": "book1" }

and so on. How can I make a request which will describe the top N addresses and the top M books per address? Example of expected result:

address1 | book_1: 5 | book_2: 10 | book_3: 50 | total: 65
______________________
address2 | book_1: 10 | book_2: 10 | ... | book_M: 10 | total: M*10
...
______________________
addressN | book_1: 20 | book_2: 20 | ... | book_M: 20 | total: M*20
TLDR Summary

In modern MongoDB releases you can brute force this with $slice just off the basic aggregation result. For "large" results, run parallel queries instead for each grouping (a demonstration listing is at the end of the answer), or wait for SERVER-9377 to resolve, which would allow a "limit" to the number of items to $push to an array.

db.books.aggregate([
    { "$group": {
        "_id": { "addr": "$addr", "book": "$book" },
        "bookCount": { "$sum": 1 }
    }},
    { "$group": {
        "_id": "$_id.addr",
        "books": {
            "$push": { "book": "$_id.book", "count": "$bookCount" }
        },
        "count": { "$sum": "$bookCount" }
    }},
    { "$sort": { "count": -1 } },
    { "$limit": 2 },
    { "$project": {
        "books": { "$slice": [ "$books", 2 ] },
        "count": 1
    }}
])

MongoDB 3.6 Preview

Still not resolving SERVER-9377, but in this release $lookup allows a new "non-correlated" option which takes a "pipeline" expression as an argument instead of the "localFields" and "foreignFields" options. This then allows a "self-join" with another pipeline expression, in which we can apply $limit in order to return the "top-n" results.

db.books.aggregate([
    { "$group": {
        "_id": "$addr",
        "count": { "$sum": 1 }
    }},
    { "$sort": { "count": -1 } },
    { "$limit": 2 },
    { "$lookup": {
        "from": "books",
        "let": { "addr": "$_id" },
        "pipeline": [
            { "$match": {
                "$expr": { "$eq": [ "$addr", "$$addr"] }
            }},
            { "$group": {
                "_id": "$book",
                "count": { "$sum": 1 }
            }},
            { "$sort": { "count": -1 } },
            { "$limit": 2 }
        ],
        "as": "books"
    }}
])

The other addition here is of course the ability to interpolate the variable through $expr using $match to select the matching items in the "join", but the general premise is a "pipeline within a pipeline" where the inner content can be filtered by matches from the parent. Since they are both "pipelines" themselves we can $limit each result separately. This would be the next best option to running parallel queries, and actually would be better if the $match were allowed and able to use an index in the "sub-pipeline" processing. So while this does not use the "limit to $push" as the referenced issue asks, it actually delivers something that should work better.

Original Content

You seem to have stumbled upon the top "N" problem. In a way your problem is fairly easy to solve though not with the exact limiting that you ask for:

db.books.aggregate([
    { "$group": {
        "_id": { "addr": "$addr", "book": "$book" },
        "bookCount": { "$sum": 1 }
    }},
    { "$group": {
        "_id": "$_id.addr",
        "books": {
            "$push": { "book": "$_id.book", "count": "$bookCount" }
        },
        "count": { "$sum": "$bookCount" }
    }},
    { "$sort": { "count": -1 } },
    { "$limit": 2 }
])

Now that will give you a result like this:

{
    "result" : [
        {
            "_id" : "address1",
            "books" : [
                { "book" : "book4", "count" : 1 },
                { "book" : "book5", "count" : 1 },
                { "book" : "book1", "count" : 3 }
            ],
            "count" : 5
        },
        {
            "_id" : "address2",
            "books" : [
                { "book" : "book5", "count" : 1 },
                { "book" : "book1", "count" : 2 }
            ],
            "count" : 3
        }
    ],
    "ok" : 1
}

So this differs from what you are asking in that, while we do get the top results for the address values, the underlying "books" selection is not limited to only a required amount of results. This turns out to be very difficult to do, but it can be done, though the complexity just increases with the number of items you need to match.
To keep it simple we can keep this at 2 matches at most:

db.books.aggregate([
    { "$group": {
        "_id": { "addr": "$addr", "book": "$book" },
        "bookCount": { "$sum": 1 }
    }},
    { "$group": {
        "_id": "$_id.addr",
        "books": {
            "$push": { "book": "$_id.book", "count": "$bookCount" }
        },
        "count": { "$sum": "$bookCount" }
    }},
    { "$sort": { "count": -1 } },
    { "$limit": 2 },
    { "$unwind": "$books" },
    { "$sort": { "count": 1, "books.count": -1 } },
    { "$group": {
        "_id": "$_id",
        "books": { "$push": "$books" },
        "count": { "$first": "$count" }
    }},
    { "$project": {
        "_id": { "_id": "$_id", "books": "$books", "count": "$count" },
        "newBooks": "$books"
    }},
    { "$unwind": "$newBooks" },
    { "$group": {
        "_id": "$_id",
        "num1": { "$first": "$newBooks" }
    }},
    { "$project": {
        "_id": "$_id",
        "newBooks": "$_id.books",
        "num1": 1
    }},
    { "$unwind": "$newBooks" },
    { "$project": {
        "_id": "$_id",
        "num1": 1,
        "newBooks": 1,
        "seen": { "$eq": [ "$num1", "$newBooks" ] }
    }},
    { "$match": { "seen": false } },
    { "$group": {
        "_id": "$_id._id",
        "num1": { "$first": "$num1" },
        "num2": { "$first": "$newBooks" },
        "count": { "$first": "$_id.count" }
    }},
    { "$project": {
        "num1": 1,
        "num2": 1,
        "count": 1,
        "type": { "$cond": [ 1, [true,false], 0 ] }
    }},
    { "$unwind": "$type" },
    { "$project": {
        "books": { "$cond": [ "$type", "$num1", "$num2" ] },
        "count": 1
    }},
    { "$group": {
        "_id": "$_id",
        "count": { "$first": "$count" },
        "books": { "$push": "$books" }
    }},
    { "$sort": { "count": -1 } }
])

So that will actually give you the top 2 "books" from the top two "address" entries. But for my money, stay with the first form and then simply "slice" the elements of the array that are returned to take the first "N" elements.

Demonstration Code

The demonstration code is appropriate for usage with current LTS versions of NodeJS from v8.x and v10.x releases. That's mostly for the async/await syntax, but there is nothing really within the general flow that has any such restriction, and it adapts with little alteration to plain promises or even back to a plain callback implementation.
index.js

const { MongoClient } = require('mongodb');
const fs = require('mz/fs');

const uri = 'mongodb://localhost:27017';

const log = data => console.log(JSON.stringify(data, undefined, 2));

(async function() {

  try {
    const client = await MongoClient.connect(uri);
    const db = client.db('bookDemo');
    const books = db.collection('books');

    let { version } = await db.command({ buildInfo: 1 });
    version = parseFloat(version.match(new RegExp(/(?:(?!-).)*/))[0]);

    // Clear and load books
    await books.deleteMany({});

    await books.insertMany(
      (await fs.readFile('books.json'))
        .toString()
        .replace(/\n$/,"")
        .split("\n")
        .map(JSON.parse)
    );

    if ( version >= 3.6 ) {

      // Non-correlated pipeline with limits
      let result = await books.aggregate([
        { "$group": {
          "_id": "$addr",
          "count": { "$sum": 1 }
        }},
        { "$sort": { "count": -1 } },
        { "$limit": 2 },
        { "$lookup": {
          "from": "books",
          "as": "books",
          "let": { "addr": "$_id" },
          "pipeline": [
            { "$match": {
              "$expr": { "$eq": [ "$addr", "$$addr" ] }
            }},
            { "$group": {
              "_id": "$book",
              "count": { "$sum": 1 },
            }},
            { "$sort": { "count": -1 } },
            { "$limit": 2 }
          ]
        }}
      ]).toArray();

      log({ result });
    }

    // Serial result processing with parallel fetch

    // First get top addr items
    let topaddr = await books.aggregate([
      { "$group": {
        "_id": "$addr",
        "count": { "$sum": 1 }
      }},
      { "$sort": { "count": -1 } },
      { "$limit": 2 }
    ]).toArray();

    // Run parallel top books for each addr
    let topbooks = await Promise.all(
      topaddr.map(({ _id: addr }) =>
        books.aggregate([
          { "$match": { addr } },
          { "$group": {
            "_id": "$book",
            "count": { "$sum": 1 }
          }},
          { "$sort": { "count": -1 } },
          { "$limit": 2 }
        ]).toArray()
      )
    );

    // Merge output
    topaddr = topaddr.map((d,i) => ({ ...d, books: topbooks[i] }));

    log({ topaddr });

    client.close();

  } catch(e) {
    console.error(e)
  } finally {
    process.exit()
  }

})()

books.json

{ "addr": "address1", "book": "book1" }
{ "addr": "address2", "book": "book1" }
{ "addr": "address1", "book": "book5" }
{ "addr": "address3", "book": "book9" }
{ "addr": "address2", "book": "book5" }
{ "addr": "address2", "book": "book1" }
{ "addr": "address1", "book": "book1" }
{ "addr": "address15", "book": "book1" }
{ "addr": "address9", "book": "book99" }
{ "addr": "address90", "book": "book33" }
{ "addr": "address4", "book": "book3" }
{ "addr": "address5", "book": "book1" }
{ "addr": "address77", "book": "book11" }
{ "addr": "address1", "book": "book1" }
MongoDB
22,932,364
194
How to I get mongo to use a mounted drive on ec2? I really do not understand. I attached a volume on ec2 formatted the drive as root and start as root and yet as root I cant access? I am running on ubuntu 12.04. No other mongo is running I see that mongo made a 'db' dir in /data i.e. /data/db cd / ls -al drwxr-xr-x 4 root root 4096 Mar 5 16:28 data cd /data ls -al total 28 drwxr-xr-x 4 root root 4096 Mar 5 16:28 . drwxr-xr-x 24 root root 4096 Mar 5 16:28 .. drwxr-xr-x 2 root root 4096 Mar 5 16:28 db drwx------ 2 root root 16384 Mar 5 16:20 lost+found sudo mkfs.ext3 /dev/xvdh sudo mkdir /data sudo su - -c 'echo "/dev/xvdh %s auto noatime 0 0" | sudo tee -a /etc/fstab' sudo mount /data sudo service mongodb start mongodb start/running, process 17169 sudo ps -ef | grep mongod ubuntu 15763 15634 0 16:32 pts/2 00:00:00 tail -f mongodb.log ubuntu 18049 15766 0 16:43 pts/3 00:00:00 grep --color=auto mongod Tue Mar 5 16:33:15 [initandlisten] MongoDB starting : pid=15890 port=27017 dbpath=/data 64-bit host=aws-mongo-server-east-staging-20130305161917 Tue Mar 5 16:33:15 [initandlisten] db version v2.2.3, pdfile version 4.5 Tue Mar 5 16:33:15 [initandlisten] git version: f570771a5d8a3846eb7586eaffcf4c2f4a96bf08 Tue Mar 5 16:33:15 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49 Tue Mar 5 16:33:15 [initandlisten] options: { bind_ip: "10.157.60.27", config: "/etc/mongodb.conf", dbpath: "/data", logappend: "true", logpath: "/var/log/mongodb/mongodb.log", replSet: "heythat" } Tue Mar 5 16:33:15 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating Tue Mar 5 16:33:15 dbexit: Tue Mar 5 16:33:15 [initandlisten] shutdown: going to close listening sockets... Tue Mar 5 16:33:15 [initandlisten] shutdown: going to flush diaglog... Tue Mar 5 16:33:15 [initandlisten] shutdown: going to close sockets... Tue Mar 5 16:33:15 [initandlisten] shutdown: waiting for fs preallocator... Tue Mar 5 16:33:15 [initandlisten] shutdown: lock for final commit... Tue Mar 5 16:33:15 [initandlisten] shutdown: final commit... Tue Mar 5 16:33:15 [initandlisten] shutdown: closing all files... Tue Mar 5 16:33:15 [initandlisten] closeAllFiles() finished Tue Mar 5 16:33:15 [initandlisten] shutdown: removing fs lock... Tue Mar 5 16:33:15 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor Tue Mar 5 16:33:15 dbexit: really exiting now Below is if I restart when I remove a lock file.... 
Tue Mar 5 16:59:15 [initandlisten] MongoDB starting : pid=21091 port=27017 dbpath=/data 64-bit host=aws-mongo-server-east-staging-20130305161917 Tue Mar 5 16:59:15 [initandlisten] db version v2.2.3, pdfile version 4.5 Tue Mar 5 16:59:15 [initandlisten] git version: f570771a5d8a3846eb7586eaffcf4c2f4a96bf08 Tue Mar 5 16:59:15 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49 Tue Mar 5 16:59:15 [initandlisten] options: { bind_ip: "10.157.60.27", config: "/etc/mongodb.conf", dbpath: "/data", logappend: "true", logpath: "/var/log/mongodb/mongodb.log", replSet: "heythat" } Tue Mar 5 16:59:15 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating Tue Mar 5 16:59:15 dbexit: Tue Mar 5 16:59:15 [initandlisten] shutdown: going to close listening sockets... Tue Mar 5 16:59:15 [initandlisten] shutdown: going to flush diaglog... Tue Mar 5 16:59:15 [initandlisten] shutdown: going to close sockets... Tue Mar 5 16:59:15 [initandlisten] shutdown: waiting for fs preallocator... Tue Mar 5 16:59:15 [initandlisten] shutdown: lock for final commit... Tue Mar 5 16:59:15 [initandlisten] shutdown: final commit... Tue Mar 5 16:59:15 [initandlisten] shutdown: closing all files... Tue Mar 5 16:59:15 [initandlisten] closeAllFiles() finished Tue Mar 5 16:59:15 [initandlisten] shutdown: removing fs lock... Tue Mar 5 16:59:15 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor Tue Mar 5 16:59:15 dbexit: really exiting now
I use this method to solve the problem:

sudo chown -R mongodb:mongodb /data/db
MongoDB
15,229,412
193
I need to be able to start/stop MongoDB on the cli. It is quite simple to start:

./mongod

But to stop mongo DB, I need to open the mongo shell first and then type two commands:

$ ./mongo
use admin
db.shutdownServer()

So I don't know how to stop mongo DB in one line. Any help?
Starting and Stopping MongoDB is covered in the MongoDB manual. It explains the various options of stopping MongoDB through the shell, cli, drivers etc. It also details the risks of incorrectly stopping MongoDB (such as data corruption) and talks about the different kill signals. Additionally, if you have installed MongoDB using a package manager for Ubuntu or Debian then you can stop mongodb (currently mongod in ubuntu) as follows:

Upstart: sudo service mongod stop
Sysvinit: sudo /etc/init.d/mongod stop

Or on Mac OS X:

Find the PID of the mongod process using $ top
Kill the process by $ kill <PID> (the Mongo docs have more info on this)

Or on Red Hat based systems:

service mongod stop

Or on Windows if you have installed as a service named MongoDB:

net stop MongoDB

And if not installed as a service (as of Windows 7+) you can run:

taskkill /f /im mongod.exe

To learn more about the problems of an unclean shutdown, how to best avoid such a scenario and what to do in the event of an unclean shutdown, please see: Recover Data after an Unexpected Shutdown.
MongoDB
11,774,887
192
When I run this sql in phpMyAdmin

SELECT @@SQL_MODE, @@GLOBAL.SQL_MODE;

it shows

@@SQL_MODE: STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
@@GLOBAL.SQL_MODE: STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

How to disable strict mode on MariaDB using phpMyAdmin?
Edit the /etc/my.cnf file via SSH and add:

sql_mode=NO_ENGINE_SUBSTITUTION

Then restart MariaDB and it will fix the issue.

*edit - if you have an error while restarting MySQL, try to add "[mysqld]" above that line in my.cnf
MariaDB
57,381,392
11
I'm doing unit/integration tests. SQLite doesn't support RIGHT JOIN and FULL OUTER JOIN. Is there any way to work with MySQL (or MariaDB) completely stored in memory? MySQL has MEMORY table engine, however this may generate inconsistency in my tests. I need some alternative to :memory: from SQLite but with the same features as MySQL. My problem is performance. SQLite database in-memory speeds up my testing process, however some queries aren't compatible with SQLite. I also do not find it good practice to do the tests in SQLite if the production database is MariaDB.
MariaDB has the MEMORY storage engine:

It is best-used for read-only caches of data from other tables, or for temporary work areas.

That sounds exactly right for quick setup and teardown of a database during automated testing.
MariaDB
51,523,432
11
I'm using Sequelize version 4.3.0 on nodejs(v6.11.0) application having Mariadb (mysql Ver 15.1 Distrib 10.0.29-MariaDB, for debian-linux-gnu (i686) using readline 5.2 ) on Ubuntu 16.04. when application starts and calls function: Sequelize.sync(); Then sequelize connection manager throws following error: Unhandled rejection SequelizeConnectionError: Client does not support authentication protocol requested by server; consider upgrading MariaDB client at Utils.Promise.tap.then.catch.err (/home/dariksoft/cars/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:146:17) at tryCatcher (/home/dariksoft/cars/node_modules/bluebird/js/release/util.js:16:23) at Promise._settlePromiseFromHandler (/home/dariksoft/cars/node_modules/bluebird/js/release/promise.js:512:31) at Promise._settlePromise (/home/dariksoft/cars/node_modules/bluebird/js/release/promise.js:569:18) at Promise._settlePromise0 (/home/dariksoft/cars/node_modules/bluebird/js/release/promise.js:614:10) at Promise._settlePromises (/home/dariksoft/cars/node_modules/bluebird/js/release/promise.js:689:18) at Async._drainQueue (/home/dariksoft/cars/node_modules/bluebird/js/release/async.js:133:16) at Async._drainQueues (/home/dariksoft/cars/node_modules/bluebird/js/release/async.js:143:10) at Immediate.Async.drainQueues (/home/dariksoft/cars/node_modules/bluebird/js/release/async.js:17:14) at runCallback (timers.js:672:20) at tryOnImmediate (timers.js:645:5) at processImmediate [as _immediateCallback] (timers.js:617:5) I updated mariadb-server and mariadb-client but the problem already exists! Anyone can help me to solve this problem ?
I found the answer: login to the mysql command line and write the following commands:

use mysql;
update user set authentication_string=password(''), plugin='mysql_native_password' where user='root';
MariaDB
45,051,927
11
Is there a notable difference in query performance if the index is set on a datetime type column instead of a boolean type column (and querying is done on that column)? In my current design I got 2 columns:

is_active TINYINT(1), indexed
deleted_at DATETIME

query is

SELECT * FROM table WHERE is_active = 1;

Would it be any slower if I made an index on the deleted_at column instead, and ran queries like this?

SELECT * FROM table WHERE deleted_at is null;
Here is a MariaDB (10.0.19) benchmark with 10M rows (using the sequence plugin):

drop table if exists test;
CREATE TABLE `test` (
  `id` MEDIUMINT UNSIGNED NOT NULL,
  `is_active` TINYINT UNSIGNED NOT NULL,
  `deleted_at` TIMESTAMP NULL,
  PRIMARY KEY (`id`),
  INDEX `is_active` (`is_active`),
  INDEX `deleted_at` (`deleted_at`)
) ENGINE=InnoDB
  select seq id
       , rand(1)<0.5 as is_active
       , case when rand(1)<0.5
           then null
           else '2017-03-18' - interval floor(rand(2)*1000000) second
         end as deleted_at
  from seq_1_to_10000000;

To measure the time I use set profiling=1 and run show profile after executing a query. From the profiling result I take the value of Sending data since everything else is altogether less than one msec.

TINYINT index:

SELECT COUNT(*) FROM test WHERE is_active = 1;

Runtime: ~ 738 msec

TIMESTAMP index:

SELECT COUNT(*) FROM test WHERE deleted_at is null;

Runtime: ~ 748 msec

Index size:

select database_name, table_name, index_name, stat_value*@@innodb_page_size
from mysql.innodb_index_stats
where database_name = 'tmp'
  and table_name = 'test'
  and stat_name = 'size';

Result:

database_name | table_name | index_name | stat_value*@@innodb_page_size
-----------------------------------------------------------------------
tmp           | test       | PRIMARY    | 275513344
tmp           | test       | deleted_at | 170639360
tmp           | test       | is_active  |  97107968

Note that while TIMESTAMP (4 Bytes) is 4 times as long as TINYINT (1 Byte), the index size is not even twice as large. But the index size can be significant if it doesn't fit into memory. So when I change innodb_buffer_pool_size from 1G to 50M I get the following numbers:

TINYINT: ~ 960 msec
TIMESTAMP: ~ 1500 msec

Update

To address the question more directly I did some changes to the data:

Instead of TIMESTAMP I use DATETIME
Since entries are usually rarely deleted I use rand(1)<0.99 (1% deleted) instead of rand(1)<0.5 (50% deleted)
Table size changed from 10M to 1M rows.
SELECT COUNT(*) changed to SELECT *

Index size:

index_name | stat_value*@@innodb_page_size
------------------------------------------
PRIMARY    | 25739264
deleted_at | 12075008
is_active  | 11026432

Since 99% of deleted_at values are NULL there is no significant difference in index size, though a non empty DATETIME requires 8 Bytes (MariaDB).

SELECT * FROM test WHERE is_active = 1;       -- 782 msec
SELECT * FROM test WHERE deleted_at is null;  -- 829 msec

Dropping both indexes, both queries execute in about 350 msec. And dropping the is_active column, the deleted_at is null query executes in 280 msec.

Note that this is still not a realistic scenario. You will unlikely want to select 990K rows out of 1M and deliver them to the user. You will probably also have more columns (maybe including text) in the table. But it shows that you probably don't need the is_active column (if it doesn't add additional information), and that any index is in the best case useless for selecting non deleted entries.

However an index can be useful to select deleted rows:

SELECT * FROM test WHERE is_active = 0;

Executes in 10 msec with index and in 170 msec without index.

SELECT * FROM test WHERE deleted_at is not null;

Executes in 11 msec with index and in 167 msec without index.

Dropping the is_active column it executes in 4 msec with index and in 150 msec without index.

So if this scenario somehow fits your data the conclusion would be: Drop the is_active column and don't create an index on the deleted_at column if you are rarely selecting deleted entries. Or adjust the benchmark to your needs and make your own conclusion.
MariaDB
42,875,220
11
I have a query that runs in about 20 seconds on a MySQL 5.1 server but takes almost 15 minutes on a MariaDB 5.5 server. Usual suspects like key_buffer_size and tmp_table_size and max_heap_table_size are all equal (128M). Most settings are equal as far as I can see (query_cache,etc) The query: SELECT products.id, concat(publications.company_name,' [',publications.quote,'] ', products.name) as n, products.impressions, products.contacts, is_channel, sl.i, count(*) FROM products LEFT JOIN publications ON products.publications_id = publications.id LEFT OUTER JOIN ( SELECT adspace.id AS i, slots.products_id FROM adspace LEFT JOIN slots ON adspace.slots_id = slots.id AND adspace.end > '2016-01-25 10:28:49' WHERE adspace.active = 1) AS sl ON sl.products_id = products.id WHERE 1 = 1 AND publications.active=1 GROUP BY products.id ORDER BY n ASC; The only difference is in the explain fase: Old server (MySQL 5.1) +----+-------------+--------------+--------+---------------+---------+---------+-----------------------------------------+--------+---------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------------+--------+---------------+---------+---------+-----------------------------------------+--------+---------------------------------+ | 1 | PRIMARY | products | ALL | NULL | NULL | NULL | NULL | 6568 | Using temporary; Using filesort | | 1 | PRIMARY | publications | eq_ref | PRIMARY | PRIMARY | 4 | db.products.publications_id | 1 | Using where | | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 94478 | | | 2 | DERIVED | adspace | ALL | NULL | NULL | NULL | NULL | 101454 | Using where | | 2 | DERIVED | slots | eq_ref | PRIMARY | PRIMARY | 4 | db.adspace.slots_id | 1 | | +----+-------------+--------------+--------+---------------+---------+---------+-----------------------------------------+--------+---------------------------------+ New server (MariaDB 5.5) +------+-------------+--------------+--------+---------------+---------+---------+-----------------------------------------+--------+---------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +------+-------------+--------------+--------+---------------+---------+---------+-----------------------------------------+--------+---------------------------------+ | 1 | SIMPLE | products | ALL | test_idx | NULL | NULL | NULL | 6557 | Using temporary; Using filesort | | 1 | SIMPLE | publications | eq_ref | PRIMARY | PRIMARY | 4 | db.products.publications_id | 1 | Using where | | 1 | SIMPLE | adspace | ALL | NULL | NULL | NULL | NULL | 100938 | Using where | | 1 | SIMPLE | slots | eq_ref | PRIMARY | PRIMARY | 4 | db.adspace.slots_id | 1 | Using where | +------+-------------+--------------+--------+---------------+---------+---------+-----------------------------------------+--------+---------------------------------+ An index was added to the products table on the new server to speed things up, to no avail. 
Engine variables: Old server: mysql> show variables like '%engine%'; +---------------------------+--------+ | Variable_name | Value | +---------------------------+--------+ | engine_condition_pushdown | ON | | storage_engine | MyISAM | +---------------------------+--------+ mysql> show variables like '%buffer_pool%'; +-------------------------+---------+ | Variable_name | Value | +-------------------------+---------+ | innodb_buffer_pool_size | 8388608 | +-------------------------+---------+ New server: MariaDB [db]> show variables like '%engine%'; +---------------------------+--------+ | Variable_name | Value | +---------------------------+--------+ | default_storage_engine | InnoDB | | engine_condition_pushdown | OFF | | storage_engine | InnoDB | +---------------------------+--------+ MariaDB [db]> show variables like '%buffer_pool%'; +---------------------------------------+-----------+ | Variable_name | Value | +---------------------------------------+-----------+ | innodb_blocking_buffer_pool_restore | OFF | | innodb_buffer_pool_instances | 1 | | innodb_buffer_pool_populate | OFF | | innodb_buffer_pool_restore_at_startup | 0 | | innodb_buffer_pool_shm_checksum | ON | | innodb_buffer_pool_shm_key | 0 | | innodb_buffer_pool_size | 134217728 | +---------------------------------------+-----------+ All tables used in the query are MyISAM (both old and new server) Profiling showed that the old query spend around 16 seconds in 'copying to tmp table' and the new server around 800 seconds in this fase. New server all has SSD disks for storage and old servers have normal disks. Edit: I also have a MySQL 5.5 server and there the query only take around 10 seconds. Also with all the same settings as far as I can see. I tried to summarise it in a table: Location: Customer Own Customer MySQL Type: MySQL MySQL MariaDB Mysql Version: 5.1.56-community-log 5.5.39-1-log (Debian) 5.5.44-MariaDB-log HDD: Normal Normal SSD Type: Virtual Real Virtual Query time: ~15s ~10s ~15min DB engine: MyISAM InnoDB InnoDB Table Engine: MyISAM MyISAM MyISAM I don't want to rewrite the query (although it could use some work) but I want to find the difference between the 2 machines, my guess is a setting that isn't ideal in MariaDB but I can't find it.
From the explanation above it can be seen that the Derived Table Merge Optimization is used. That unfortunately in your case means that instead of only one full table scan over adspace, some ~6k of them are done. A possible solution is to disable the optimization before the query by issuing set optimizer_switch='derived_merge=off';. A backward compatible alternative would be adding GROUP BY adspace.id, slots.products_id to the subquery (if it does not change the results - safest is grouping over PKs of all joined tables), which forbids the merge by having different semantics. There is one reported optimizer bug about that - your case might help with it.
MariaDB
35,011,477
11
I'm trying to set up a database schema on MariaDB which originally comes from PostgreSQL. What is the correct syntax in MariaDB for:

ALTER DATABASE mydbname OWNER TO someuser

and similar for granting privileges:

GRANT ALL PRIVILEGES ON DATABASE mydbname TO someotheruser

Both work on PostgreSQL and are valid SQL-99 syntax. MariaDB (the phpMyAdmin frontend) gives me:

#1064 - You have an error in your SQL syntax;
Privileges are often not part of the SQL standard because every database does this in its own way. MySQL and MariaDB do not have database owners like Postgres. They do have a privilege system to allow or deny accounts certain rights. The second would look like:

GRANT ALL PRIVILEGES ON database.table TO 'user'@'host' [IDENTIFIED BY 'password']

where the IDENTIFIED part is optional. If you want to grant access to all tables, as most do, you can use the asterisk. Setting GRANT ALL on a specific database effectively prevents the 'user' from accessing other schema objects. The same can be accomplished by creating a GRANT USAGE ON ...
MariaDB
26,561,218
11