question | answer | tag | question_id | score
---|---|---|---|---
I wonder whether there is a solution for (or a need for) an ORM with a graph database (e.g. Neo4j). I'm tracking relationships (A is related to B, which is related to A via C, etc., thus constructing a large graph) of entities (including additional attributes for those entities) and need to store them in a DB, and I think a graph database would fit this task perfectly.
Now, with SQL-like DBs, I use SQLAlchemy's ORM to store my objects, especially because I can retrieve objects from the DB and work with them in a Pythonic style (use their methods, etc.).
Is there any object-mapping solution for Neo4j or another graph DB, so that I can store and retrieve Python objects into and from the graph DB and work with them easily?
Or would you write some functions or adapters, as in the Python sqlite documentation (http://docs.python.org/library/sqlite3.html#letting-your-object-adapt-itself), to retrieve and store objects?
| Shameless plug... there is also my own ORM which you may want to check out: https://github.com/robinedwards/neomodel
It's built on top of py2neo, using Cypher and REST API calls under the hood, i.e. no dependency on Gremlin.
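A minimal sketch of what neomodel usage looks like (the class, property and credential values here are made up for illustration; newer neomodel versions use the Bolt URL shown, while older ones pointed at the REST endpoint):
from neomodel import StructuredNode, StringProperty, RelationshipTo, config

# point neomodel at your Neo4j instance (hypothetical credentials)
config.DATABASE_URL = 'bolt://neo4j:secret@localhost:7687'

class Person(StructuredNode):
    name = StringProperty(unique_index=True)
    knows = RelationshipTo('Person', 'KNOWS')

# nodes behave like ordinary Python objects
alice = Person(name='Alice').save()
bob = Person(name='Bob').save()
alice.knows.connect(bob)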
| Neo4j | 8,356,626 | 20 |
I have a general question about modeling in a graph database that I just can't seem to wrap my head around.
How do you model this type of relationship: "Newton invented Calculus"?
In a simple graph, you could model it like this:
Newton (node) -> invented (relationship) -> Calculus (node)
...so you'd have a bunch of "invented" graph relationships as you added more people and inventions.
The problem is, you start needing to add a bunch of properties to the relationship:
invention_date
influential_concepts
influential_people
books_inventor_wrote
...and you'll want to start creating relationships between those properties and other nodes, such as:
influential_people: relationship to person nodes
books_inventor_wrote: relationship to book nodes
So now it seems like the "real-world relationships" ("invented") should actually be a node in the graph, and the graph should look like this:
Newton (node) -> (relationship) -> Invention of Calculus (node) -> (relationship) -> Calculus (node)
And to complicate things more, other people also participated in the invention of Calculus, so the graph now becomes something like:
Newton (node) ->
(relationship) ->
Newton's Calculus Invention (node) ->
(relationship) ->
Invention of Calculus (node) ->
(relationship) ->
Calculus (node)
Leibniz (node) ->
(relationship) ->
Leibniz's Calculus Invention (node) ->
(relationship) ->
Invention of Calculus (node) ->
(relationship) ->
Calculus (node)
So I ask the question because it seems like you don't want to set properties on the actual graph database "relationship" objects, because you may want to at some point treat them as nodes in the graph.
Is this correct?
I have been studying the Freebase Metaweb Architecture, and they seem to be treating everything as a node. For example, Freebase has the idea of a Mediator/CVT, where you can create a "Performance" node that links an "Actor" node to a "Film" node, like here: http://www.freebase.com/edit/topic/en/the_last_samurai. Not quite sure if this is the same issue though.
What are some guiding principles you use to figure out if the "real-world relationship" should actually be a graph node rather than a graph relationship?
If there are any good books on this topic I would love to know. Thanks!
| Some of these things, such as invention_date, can be stored as properties on the edges, since in most graph databases edges can have properties in the same way that vertices can. For example, you could do something like this (code follows TinkerPop's Blueprints):
Graph graph = new Neo4jGraph("/tmp/my_graph");
Vertex newton = graph.addVertex(null);
newton.setProperty("given_name", "Isaac");
newton.setProperty("surname", "Newton");
newton.setProperty("birth_year", 1643); // use Gregorian dates...
newton.setProperty("type", "PERSON");
Vertex calculus = graph.addVertex(null);
calculus.setProperty("type", "KNOWLEDGE");
Edge newton_calculus = graph.addEdge(null, newton, calculus, "DISCOVERED");
newton_calculus.setProperty("year", 1666);
Now, let's expand it a little bit and add in Leibniz:
Vertex leibniz = graph.addVertex(null);
leibniz.setProperty("given_name", "Gottfried");
leibniz.setProperty("surname", "Leibniz");
leibniz.setProperty("birth_year", 1646);
leibniz.setProperty("type", "PERSON");
Edge leibniz_calculus = graph.addEdge(null, leibniz, calculus, "DISCOVERED");
leibniz_calculus.setProperty("year", 1674);
Adding in the books:
Vertex principia = graph.addVertex(null);
principia.setProperty("title", "Philosophiæ Naturalis Principia Mathematica");
principia.setProperty("year_first_published", 1687);
Edge newton_principia = graph.addEdge(null, newton, principia, "AUTHOR");
Edge principia_calculus = graph.addEdge(null, principia, calculus, "SUBJECT");
To find out all of the books that Newton wrote on things he discovered, we can construct a graph traversal. We start with Newton, follow the out links from him to things he discovered, then traverse links in reverse to get books on that subject, and again go in reverse on a link to get the author. If the author is Newton, then go back to the book and return the result. This query is written in Gremlin, a Groovy-based domain-specific language for graph traversals:
newton.out("DISCOVERED").in("SUBJECT").as("book").in("AUTHOR").filter{it == newton}.back("book").title.unique()
Thus, I hope I've shown a little how a clever traversal can be used to avoid issues with creating intermediate nodes to represent edges. In a small database it won't matter much, but in a large database you're going to suffer large performance hits doing that.
Yes, it is sad that you can't associate edges with other edges in a graph, but that's a limitation of the data structures of these databases. Sometimes it makes sense to make everything a node; for example, in Mediator/CVT a performance has a bit more concreteness to it. Individuals may wish to address only Tom Cruise's performance in "The Last Samurai" in a review. However, for most graph databases I've found that applying some graph traversals can get me what I want out of the database.
| Neo4j | 7,536,142 | 20 |
How can I inject a properties file containing a Map to be used as an additional constructor arg?
With the Map being loaded from a properties file,
the bean is currently set up using:
<bean id="graphDbService" class="org.neo4j.kernel.EmbeddedGraphDatabase"
init-method="enableRemoteShell" destroy-method="shutdown">
<constructor-arg index="0" value= "data/neo4j-db"/>
<constructor-arg index="1" value=? />
</bean>
Java Equivalent:
Map<String,String> configuration = EmbeddedGraphDatabase.loadConfigurations( "neo4j_config.props" );
GraphDatabaseService graphDb = new EmbeddedGraphDatabase( "data/neo4j-db", configuration );
Thanks
| Something like this:
<bean id="configuration" class="org.neo4j.kernel.EmbeddedGraphDatabase"
factory-method="loadConfigurations">
<constructor-arg value="neo4j_config.props"/>
</bean>
<bean id="graphDbService" class="org.neo4j.kernel.EmbeddedGraphDatabase"
init-method="enableRemoteShell" destroy-method="shutdown">
<constructor-arg index="0" value="data/neo4j-db"/>
<constructor-arg index="1" ref="configuration" />
</bean>
This takes advantage of the ability to create beans using arbitrary static factory methods, in this case using loadConfigurations() as a factory method to create the configuration bean, which is then injected into the proper constructor of EmbeddedGraphDatabase.
| Neo4j | 3,466,437 | 20 |
Is there a .NET version/binding for Neo4j?
It looks like exactly what I want, but I'm working in C# on .NET.
Thanks
| I think your best bet at the moment is to use the REST server. There's a blog post with a proof-of-concept .NET client: Neo4j .NET Client over HTTP using REST and json.
Update: Now there are actually two different .NET Neo4j REST clients:
Neo4RestNet
Neo4jRestSharp
| Neo4j | 2,720,271 | 20 |
According to https://neo4j-contrib.github.io/neo4j-apoc-procedures/, one only needs to download the binary jar from http://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/3.1.0.3 and place it into the folder "Neo4j CE 3.1.1\plugins".
I did so. However, I was unable to run call apoc.help("apoc") from http://localhost:7474/browser/.
| I'm using a Red Hat-based Linux, specifically Oracle Linux 7, and here is how I got it working:
Download the apoc-<version>.jar into the /var/lib/neo4j/plugins directory
chown neo4j:neo4j apoc-<version>.jar
chmod 755 apoc-<version>.jar
Open the neo4j.conf at /etc/neo4j/neo4j.conf and replace the line #dbms.security.procedures.whitelist=apoc.coll.*,apoc.load.* with dbms.security.procedures.whitelist=apoc.coll.*,apoc.load.*,apoc.* and save it.
Restart the Neo4j service by issuing the command systemctl restart neo4j
Note: Make sure that you have the right version of the apoc jar downloaded. I'm using Neo4j version 3.5.5 and the apoc jar version I'm using is apoc-3.5.0.3-all.jar. Also make sure that you have dbms.directories.plugins=/var/lib/neo4j/plugins uncommented in /etc/neo4j/neo4j.conf.
| Neo4j | 42,740,355 | 19 |
When you add a node to Neo4j and access your graph via the Neo4j Browser, the node that was created is displayed (as a circle) and the name property is output as the primary property for the node. You can tell which nodes are which by the name field, without having to click on them. If you do not specify a name property, the node is just a blank circle.
I'm wondering if there is a way to specify the default "label" when visually viewing a graph via the Browser, so that I don't have to use the "name" property in order to know which Nodes are which?
| This is quite simple to achieve.
At the top of the panel you see the label of the node (the type of node), for example :User.
At the bottom of that panel, you should be able to see the label (User) along with color and size options.
At the right corner there should be an arrow "<". Click this to expand your options.
There should be an option to select the caption, which is the property you want to display by default instead of name.
| Neo4j | 37,495,220 | 19 |
What is the best way to clean up the graph, removing all nodes and relationships, via Cypher?
At http://neo4j.com/docs/stable/query-delete.html#delete-delete-a-node-and-connected-relationships the example
MATCH (n)
OPTIONAL MATCH (n)-[r]-()
DELETE n,r
has the note:
This query isn’t for deleting large amounts of data
So, is the following better?
MATCH ()-[r]-() DELETE r
and
MATCH (n) DELETE n
Or is there another way that is better for large graphs?
| As you've mentioned, the easiest way is to stop Neo4j, drop the data/graph.db folder and restart it.
Deleting a large graph via Cypher will always be slower, but still doable if you use a proper transaction size to prevent memory issues (remember transactions are built up in memory first before they get committed). Typically 50-100k atomic operations per transaction is a good idea. You can add a limit to your deletion statement to control tx sizes and report back how many nodes have been deleted. Rerun this statement until a value of 0 is returned:
MATCH (n)
OPTIONAL MATCH (n)-[r]-()
WITH n,r LIMIT 50000
DELETE n,r
RETURN count(n) as deletedNodesCount
| Neo4j | 29,711,757 | 19 |
As far as I understand it the IDs given by Neo4j (ID(node)) are unstable and behave somewhat like row numbers in SQL. Since IDs are mostly used for relations in SQL and these are easily modeled in Neo4j, there doesn't seem to be much use for IDs, but then how do you solve retrieval of specific nodes? Having a REST API which is supposed to have unique routes for each node (e.g. /api/concept/23) seems like a pretty standard case for web applications.
But despite it being so fundamental, the only viable ways I found were either via
language specific frameworks
as an unconnected node which maintains the increments:
// get unique id
MERGE (id:UniqueId{name:'Person'})
ON CREATE SET id.count = 1
ON MATCH SET id.count = id.count + 1
WITH id.count AS uid
// create Person node
CREATE (p:Person{id:uid,firstName:'Gabriel',lastName:'Smith'})
RETURN p AS person
Source: http://www.neo4j.org/graphgist?8012859
Is there really not a simpler way and if not, is there a particular reason for it? Is my approach an anti-pattern in the context of Neo4j?
| Neo4j internal ids are a bit more stable than SQL row ids, as they will never change during a transaction, for example.
And indeed, exposing them for external usage is not recommended. I know there are some intentions among the Neo4j internals team to implement such a feature.
Basically people tend to use two solutions for this:
Using a UUID generator at the application level, like for PHP: https://packagist.org/packages/rhumsaa/uuid, and adding a label/uuid unique constraint on all nodes (see the Cypher sketch after this list).
Using a very handy Neo4j plugin like https://github.com/graphaware/neo4j-uuid that will add uuid properties on the fly, which removes the burden of handling it at the application level and makes it easier to manage the persistence state of your node objects.
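For the first option, a minimal sketch of such a uniqueness constraint in Cypher (Neo4j 2.x/3.x syntax; the Person label is just an example):
CREATE CONSTRAINT ON (p:Person) ASSERT p.uuid IS UNIQUE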
| Neo4j | 29,434,020 | 19 |
SSDs are commonplace now; Amazon EBS is backed by SSDs, and hence most of the cloud databases now also run on SSDs (Heroku PostgreSQL, etc.). Databases and related architectures were traditionally designed with the idea that random access is bad - this is no longer the case with SSDs.
How do SSDs effect the following?
Database design - DBs are designed to minimize disk seeks (WAL, B-trees). How do SSDs change the internals and tuning of a DB design?
Application development - The working assumption has always been that (a) you want to serve user requests from memory, not the DB, and (b) that access to the DB is IO bound. With SSDs, retrieving data from the DB can be fast enough, and DB access is often network bound. Does this reduce the need for in-memory databases? Obviously you still want to pre-compute expensive operations, but you can potentially just store them in a DB.
Specialized databases - There are quite a few DBs that do things that relational DBs are supposed to be bad at (partially because of random data access). One such example is graph DBs (Neo4j) that store nodes and adjacency lists on disk in a compact way. Are these databases as useful if we can deploy an RDBMS on SSDs and not worry about random access?
| First, SSDs don't make random access free. Just cheaper. In particular, random writes remain very expensive, though that's mitigated in small random writes by a durable write-back cache.
WAL would be very expensive on SSDs if the SSD truly flushed it to the underlying media - but it doesn't. It accumulates it in write-back cache and periodically flushes it in whole erase-block sized chunks. So WAL actually works really well on SSDs, as there's never any need for a read/modify/write cycle for a partial erase-block write.
I'm sure there are opportunities to be had in tree structure storage for indexes on SSDs. That's not something we've really explored in PostgreSQL yet.
Most of the SSD-based DB servers I work with remain thoroughly disk I/O bound for normal operation. SSDs are fast, but not magic. Even PCI-E integrated SSDs can't compete with RAM, and big workloads tend to quickly saturate the SSD's write-back cache and queues.
Similarly, walking an adjacency list in an RDBMS is still far from free in computational terms, the on-disk representation is less compact than in a graph DB, etc. There's a lot to be gained from specialization where you need it.
To truly look at what ultra-fast storage does to DBs you need to go a step further and look at PCIe RAM-based storage devices that're insanely, ridiculously fast.
BTW, in a great many ways an SSD isn't that different to a SCSI HBA with a big battery-backed write cache. These have been around for a long time. An SSD will tend to have better random reads, but it's otherwise pretty similar.
| Neo4j | 26,640,769 | 19 |
I was wondering how
WHERE id(n) = id
compares to
START n = node(id)
as most of the time I do not select nodes by id (at least in number of code appearances) and therefore would like to always do it in the MATCH
| The two statements are identical. START is the syntax to be used in Neo4j 1.x. From Neo4j 2.0 the MATCH variant should be preferred; maybe START will get deprecated in some future release.
| Neo4j | 21,651,479 | 19 |
I am working on Windows. I have created a text file containing a Cypher query using Notepad. How can I run the query in the file using Neo4jShell or the Neo4j web interface console?
| On Debian/Ubuntu or any *nix installations, use the following from terminal:
$ neo4j-shell -c < path-to-cypher-query-file.cql
Note that each cypher query in the file must end in a semicolon and must be separated by a blank line from the other query. Also, the .cql ending (file format) is not mandatory.
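The question mentions Windows; assuming a default zip install there, the bundled shell batch file should behave the same way (the install path and the -file option below are an assumption based on neo4j-shell's documented options, not something verified in the original answer):
C:\neo4j\bin\Neo4jShell.bat -file path-to-cypher-query-file.cql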
| Neo4j | 17,462,306 | 19 |
I'm new to the MongoDB Compass tool and am trying to update a field in my collection. Can someone please suggest where the update query must be written? I could find no options or panes in the tool to write custom queries, be it selection or update.
In the Default Window only the selection/projection/restriction options are found.
Any help is much appreciated.
| In the latest version, there is a "_MONGOSH" embedded shell in the bottom left corner of the window.
Thanks to @Boštjan Pišler for the hint about this new feature.
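For example, a typical update run from that embedded shell might look like this (collection and field names are made up for illustration):
db.users.updateOne(
  { name: "Alice" },               // filter: which document to update
  { $set: { status: "active" } }   // update: fields to change
)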
Old answer:
I had the same issue; it looks like a simple feature to implement (since document updates are possible), but... AFAIK there is no such option in Compass - you can do it through the mongodb shell (CLI client).
| MongoDB | 49,110,169 | 120 |
I'm preparing a database creation script in Node.js and Mongoose.
How can I check if the database already exists, and if so, drop (delete) it using Mongoose?
I could not find a way to drop it with Mongoose.
| There is no method for dropping a collection in Mongoose; the best you can do is remove the contents of one:
Model.remove({}, function(err) {
console.log('collection removed')
});
But there is a way to access the mongodb native javascript driver, which can be used for this
mongoose.connection.collections['collectionName'].drop( function(err) {
console.log('collection dropped');
});
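For the original question (dropping the whole database rather than a collection), the underlying native connection also exposes dropDatabase - a minimal sketch, assuming the connection is already open:
mongoose.connection.db.dropDatabase(function(err) {
  console.log('database dropped');
});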
Warning
Make a backup before trying this in case anything goes wrong!
| MongoDB | 10,081,452 | 120 |
I have a collection that contains documents with the below schema. I want to filter/find all documents that contain the gender female and aggregate the sum of BRAINSCORE. I tried the below statement and it shows an invalid pipeline error.
db['!all'].aggregate({ $and: [ {'GENDER' : 'F'} , {'DOB' : { $gte : 19400801, $lte : 20131231 }} ] }, { $group : { _id : "$GENDER", totalscore : { $sum : "$BRAINSCORE" } } } )
Schema:
{
"_id" : ObjectId("53f63fc8f2b643f6ebb8a1a9"),
"DOB" : 19690112,
"GENDER" : "F",
"BRAINSCORE" : 65
},
{
"_id" : ObjectId("53f63fc8f2b643f6ebb8a1a2"),
"DOB" : 19950116,
"GENDER" : "F",
"BRAINSCORE" : 44
},
{
"_id" : ObjectId("53f63fc8f2b643f6ebb8a902"),
"DOB" : 19430216,
"GENDER" : "M",
"BRAINSCORE" : 71
}
| You have to use $match:
db['!all'].aggregate([
  { $match: {
      'GENDER': 'F',
      'DOB': { $gte: 19400801, $lte: 20131231 }
  }},
  { $group: {
      _id: "$GENDER",
      totalscore: { $sum: "$BRAINSCORE" }
  }}
])
Outputs:
{ "_id" : "F", "totalscore" : 109 }
| MongoDB | 25,436,630 | 119 |
I cannot manually or automatically populate the creator field on a newly saved object... the only way I can find is to re-query for the objects I already have, which I would hate to do.
This is the setup:
var userSchema = new mongoose.Schema({
name: String,
});
var User = db.model('User', userSchema);
var bookSchema = new mongoose.Schema({
_creator: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
description: String,
});
var Book = db.model('Book', bookSchema);
This is where I am pulling my hair out:
var user = new User();
user.save(function(err) {
var book = new Book({
_creator: user,
});
book.save(function(err){
console.log(book._creator); // is just an object id
book._creator = user; // still only attaches the object id due to Mongoose magic
console.log(book._creator); // Again: is just an object id
// I really want book._creator to be a user without having to go back to the db ... any suggestions?
});
});
EDIT: the latest Mongoose fixed this issue and added populate functionality; see the new accepted answer.
| You should be able to use the Model's populate function to do this: http://mongoosejs.com/docs/api.html#model_Model.populate In the save handler for book, instead of:
book._creator = user;
you'd do something like:
Book.populate(book, {path:"_creator"}, function(err, book) { ... });
Probably too late an answer to help you, but I was stuck on this recently, and it might be useful for others.
| MongoDB | 13,525,480 | 119 |
I followed the MongoDB docs to set up my first MongoDB instance.
When I start MongoDB using the command
C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe
I get the following error
exception in initAndListen: 29 Data directory C:\data\db\ not found., terminating
shutdown: going to close listening sockets...
shutdown: going to flush diaglog...
now exiting
shutting down with code:100
| MongoDB needs a folder to store the database. Create a C:\data\db\ directory:
mkdir C:\data\db
and then start MongoDB:
C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe
Sometimes the C:\data\db folder already exists due to a previous installation. So if mongod.exe does not work for this reason, you may delete all the contents from the C:\data\db folder and execute mongod.exe again.
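Alternatively, mongod can be pointed at a different data folder with the --dbpath flag (the path below is just an example):
"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --dbpath "C:\mongo-data"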
| MongoDB | 41,420,466 | 118 |
I need to search an ObjectId with python using pymongo but I always get an error.
import pymongo
from pymongo import MongoClient
from pymongo import ObjectId
gate = collection.find({'_id': ObjectId(modem["dis_imei"])})
Any ideas how to search?
| I use pymongo 2.4.1.
from bson.objectid import ObjectId
[i for i in dbm.neo_nodes.find({"_id": ObjectId(obj_id_to_find)})]
| MongoDB | 16,073,865 | 118 |
I'm using Mongoose, MongoDB, and Node.js.
I would like to define a schema where one of its fields is a date\timestamp.
I would like to use this field in order to return all of the records that have been updated in the last 5 minutes.
Due to the fact that in Mongoose I can't use the Timestamp() method, I understand that my only option is to use the following JavaScript method:
time : { type: Number, default: (new Date()).getTime() }
It's probably not the most efficient way for querying a humongous database.
Is there a more efficient way of implementing this?
Is there a way to implement this with Mongoose and be able to use a MongoDB timestamp?
| Edit - 20 March 2016
Mongoose now supports timestamps for collections.
Please consider the answer of @bobbyz below. Maybe this is what you are looking for.
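For reference, a minimal sketch of that built-in option (createdAt and updatedAt are Mongoose's default field names for it):
var recordSchema = new mongoose.Schema({ name: String }, { timestamps: true });
// documents now automatically get createdAt and updatedAt Date fields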
Original answer
Mongoose supports a Date type (which is basically a timestamp):
time : { type : Date, default: Date.now }
With the above field definition, any time you save a document with an unset time field, Mongoose will fill in this field with the current time.
Source: http://mongoosejs.com/docs/guide.html
| MongoDB | 10,006,218 | 118 |
I'm trying to connect to my mongoDB server via the connection string given to me by mongo:
"mongodb+srv://david:[email protected]/test?retryWrites=true"
In my code I am calling the connection through mongoose like this (obviously putting in my password):
const mongoose = require('mongoose');
const db = 'mongodb+srv://david:<password>@cluster0-re3gq.mongodb.net/test?retryWrites=true'
mongoose
.connect(db, {
useNewUrlParser: true,
useCreateIndex: true
})
.then(() => console.log('MongoDB connected...'))
.catch(err => console.log(err));
When I run the code I am getting the following error
"MongoError: bad auth Authentication failed."
Any ideas of what that could mean?
| I had the same problem, and in my case, the answer was as simple as removing the angle brackets "<"and ">" around <password>. I had been trying: my_login_id:<my_password>, when it should have been my_login_id:my_password.
| MongoDB | 55,695,565 | 117 |
How does one use Mongo Compass and search by ObjectID? I've been searching for the documentation for this but haven't been successful with anything. I have tried:
{ "_id" : "58f8085dc1840e050034d98f" }
{ "$oid" : "58f8085dc1840e050034d98f" }
{ "id" : "58f8085dc1840e050034d98f" }
None of those seem to work and it's getting quite frustrating. Also, sidenote - is it possible to set the skip/limit when displaying documents in Compass?
| UPDATE: Newer versions of Compass now support querying by ObjectId similar to how it would be queried via the mongo shell (the $oid syntax will not work in these newer versions):
{_id: ObjectId('58f8085dc1840e050034d98f')}
If you're using an older version before 1.10.x, enter the following into the query box:
{"_id":{"$oid":"58f8085dc1840e050034d98f"}}
It's also worth pointing out that in the UI you can click on one of the _ids and it will auto-populate the query box with the query based on what you clicked. You can also shift+click on multiple fields to create compound (and-ed) query criteria, or you can click and drag to select a range.
Skip and limit are supported: versions >= 1.8.x support skip and limit when browsing under the Documents tab. Click the "Options" button on the right side of the query bar. See the Query Bar docs for illustration and details.
The Schema tab only supports limit, as this will do a sampling of documents and skip doesn't really make sense in that context.
In order to click on the _ids you need to be on the Schema tab. If your _ids are of type ObjectId, the visualization of the distribution will appear as a date range and you can drag over one or more lines to populate the query based on _id. If your _ids are some other type, some portion of them will display individually and you can click, drag, or shift-click over them.
| MongoDB | 43,525,523 | 117 |
Where is this error coming from? I am not using ensureIndex or createIndex in my Node.js application anywhere. I am using the Yarn package manager.
Here is my code in index.js:
import express from 'express';
import path from 'path';
import bodyParser from 'body-parser';
import mongoose from 'mongoose';
import Promise from 'bluebird';
import dotenv from 'dotenv';

dotenv.config();
mongoose.Promise = Promise;
mongoose.connect('mongodb://localhost:27017/bookworm', { useNewUrlParser: true });
const app = express();
| The issue is that Mongoose still uses collection.ensureIndex and should be updated in the near future. To get rid of the message you can downgrade by using version 5.2.8 in your package.json (and delete any caches; as a last resort, uninstall it then install it with npm install mongoose@5.2.8):
"mongoose": "^5.2.8"
EDIT:
As of this edit, Mongoose is now at v5.4.13. Per their docs, these are the fixes for the deprecation warnings...
mongoose.set('useNewUrlParser', true);
mongoose.set('useFindAndModify', false);
mongoose.set('useCreateIndex', true);
Replace update() with updateOne(), updateMany(), or replaceOne()
Replace remove() with deleteOne() or deleteMany().
Replace count() with countDocuments(), unless you want to count how many documents are in the whole collection (no filter). In the latter case, use estimatedDocumentCount().
| MongoDB | 51,960,171 | 116 |
I'm installing MongoDB on an Ubuntu 14.04 machine, using the instructions at:
https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
So I run:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
And then:
echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
Followed by:
sudo apt-get update
I then get the following warning at the end of the update:
W: GPG error: http://repo.mongodb.org trusty/mongodb-org/3.2 Release:
The following signatures were invalid: BADSIG D68FA50FEA312927 MongoDB
3.2 Release Signing Key
If I ignore the warning and try to run:
sudo apt-get install -y mongodb-org
I get:
WARNING: The following packages cannot be authenticated!
mongodb-org-shell mongodb-org-server mongodb-org-mongos
mongodb-org-tools mongodb-org E: There are problems and -y was used
without --force-yes
Any ideas on how to resolve? Thanks!
| Update all expired keys from Ubuntu key server in one command:
sudo apt-key list | \
grep "expired: " | \
sed -ne 's|pub .*/\([^ ]*\) .*|\1|gp' | \
xargs -n1 sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys
Command explanation:
sudo apt-key list - lists all keys installed in the system;
grep "expired: " - leave only lines with expired keys;
sed -ne 's|pub .*/\([^ ]*\) .*|\1|gp' - extracts keys;
xargs -n1 sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys - updates keys from Ubuntu key server by found expired ones.
Source
| MongoDB | 34,733,340 | 116 |
I would like to drop into the mongo shell in the terminal on my MacBook. However, I'm interested in connecting to a Mongo instance that is running in the cloud (compose.io instance via Heroku addon). I have the name, password, host, port, and database name from the MongoDB URI:
mongodb://username:password@somewhere.mongolayer.com:10011/my_database
I have installed mongodb on my MacBook using Homebrew not because I want Mongo running on my Mac, but just to get access to the mongo shell program in order to connect to this remote database.
However, I can't find the right command to get me the full shell access I would like. Using instructions found here http://docs.mongodb.org/manual/reference/program/mongo/ (search for "remote") I am able to get what looks like a connection, but without giving my username or password I am not fully connected. Running db.auth(username, password) returns 1 (as opposed to "auth fails" when I provide incorrect username and password), but I continue to get an "unauthorized" error message when issuing the show dbs command.
| You are probably connecting fine but don't have sufficient privileges to run show dbs.
You don't need to run the db.auth if you pass the auth in the command line:
mongo somewhere.mongolayer.com:10011/my_database -u username -p password
Once you connect are you able to see collections?
> show collections
If so all is well and you just don't have admin privileges to the database and can't run the show dbs
| MongoDB | 26,813,912 | 116 |
If I have a mongo instance running, how can I check what port numbers it is listening on from the shell? I thought that db.serverStatus() would do it, but I don't see it. I see this:
"connections" : {
"current" : 3,
"available" : 816
Which is close... but no. Suggestions? I've read the docs and can't seem to find any command that will do this.
| You can do this from the Operating System shell by running:
sudo lsof -iTCP -sTCP:LISTEN | grep mongo
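If you'd rather stay inside the mongo shell (as the question originally framed it), one option - assuming you have the privileges to run it - is db.serverCmdLineOpts():
> db.serverCmdLineOpts()
// the "parsed" section of the result shows the configured port, if one was explicitly set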
| MongoDB | 9,346,431 | 116 |
I'm trying to select a document by id
I've tried:
collection.update({ "_id": { "$oid": + theidID } }
collection.update({ "_id": theidID }
collection.update({ "_id.$oid": theidID }}
Also tried:
collection.update({ _id: new ObjectID(theidID ) }
This gives me an error 500...
var mongo = require('mongodb')
var BSON = mongo.BSONPure;
var o_id = new BSON.ObjectID(theidID );
collection.update({ _id: o_id }
None of these work. How do I select by _id?
| var mongo = require('mongodb');
var o_id = new mongo.ObjectID(theidID);
collection.update({'_id': o_id});
| MongoDB | 4,902,569 | 116 |
Bit of an odd one on query performance... I need to run a query which does a total count of documents, and can also return a result set that can be limited and offset.
So, I have 57 documents in total, and the user wants 10 documents offset by 20.
I can think of 2 ways of doing this. The first is to query for all 57 documents (returned as an array), then use array.slice to return the documents they want. The second option is to run 2 queries: the first one using mongo's native 'count' method, then a second query using mongo's native $limit and $skip aggregators.
Which do you think would scale better? Doing it all in one query, or running two separate ones?
Edit:
// 1 query
var limit = 10;
var offset = 20;
Animals.find({}, function (err, animals) {
if (err) {
return next(err);
}
res.send({count: animals.length, animals: animals.slice(offset, limit + offset)});
});
// 2 queries
Animals.find({}, null, {limit: 10, skip: 20}, function (err, animals) {
if (err) {
return next(err);
}
Animals.count({}, function (err, count) {
if (err) {
return next(err);
}
res.send({count: count, animals: animals});
});
});
| I suggest you use 2 queries:
db.collection.count() will return the total number of items. This value is stored somewhere in Mongo and it is not calculated.
db.collection.find().skip(20).limit(10) - here I assume you would use a sort by some field, so do not forget to add an index on this field. This query will be fast too.
I think that you shouldn't query all items and then perform skip and take, because later when you have big data you will have problems with data transferring and processing.
| MongoDB | 13,935,733 | 115 |
I am kind of new to Mac as well as MongoDB.
I have a weird doubt about accessing a database created using MongoDB on a Mac.
I know that in Windows there is a folder called c:\data\db, where my database files are stored.
How and where is the database stored on a Mac?
I remember doing something like
sudo mkdir -p /data/db
sudo chown `id -u` /data/db
to create such a folder on the Mac, but I didn't find any database files in this folder, even though I created a database.
Where are the database files saved on mac?
Any help would be really appreciated.
| If MongoDB is installed on macOS via Homebrew, the default data directory depends on the type of processor in the system.
| | Intel Processor | Apple Silicon Processor (M1, M2, etc) |
|---|---|---|
| Data Directory | /usr/local/var/mongodb | /opt/homebrew/var/mongodb |
| Configuration file | /usr/local/etc/mongod.conf | /opt/homebrew/etc/mongod.conf |
| Log directory | /usr/local/var/log/mongodb | /opt/homebrew/var/log/mongodb |
Run brew --prefix to see where Homebrew installed these files.
See the MongoDB "Install on macOS" documentation for additional details.
| MongoDB | 13,827,915 | 115 |
I want to combine two OR queries with AND in Mongoose, like in this SQL statement:
SELECT * FROM ... WHERE (a = 1 OR b = 1) AND (c = 1 OR d = 1)
I tried this in a NodeJS module which only gets the model object from the main application:
/********** Main application ***********/
var query = MyModel.find({});
myModule1.addCondition(query);
myModule2.addCondition(query);
query.exec(...)
/************ myModule1 ***************/
exports.addCondition = function(query) {
query.or({a: 1}, {b: 1});
}
/************ myModule2 ***************/
exports.addCondition = function(query) {
query.or({c: 1}, {d: 1});
}
But this doesn't work; all OR conditions get joined together like in this SQL statement:
SELECT * FROM ... WHERE a = 1 OR b = 1 OR c = 1 OR d = 1
How can I combine the two conditions of myModule1 and myModule2 with AND in Mongoose?
| It's probably easiest to create your query object directly as:
Test.find({
$and: [
{ $or: [{a: 1}, {b: 1}] },
{ $or: [{c: 1}, {d: 1}] }
]
}, function (err, results) {
...
}
But you can also use the Query#and helper that's available in recent 3.x Mongoose releases:
Test.find()
.and([
{ $or: [{a: 1}, {b: 1}] },
{ $or: [{c: 1}, {d: 1}] }
])
.exec(function (err, results) {
...
});
| MongoDB | 13,272,824 | 115 |
It seems Mongo does not allow insertion of keys with a dot (.) or dollar sign ($); however, when I imported a JSON file that contained a dot in it using the mongoimport tool, it worked fine. The driver is complaining about trying to insert that element.
This is what the document looks like in the database:
{
"_id": {
"$oid": "..."
},
"make": "saab",
"models": {
"9.7x": [
2007,
2008,
2009,
2010
]
}
}
Am I doing this all wrong and should not be using hash maps like that with external data (i.e. the models), or can I escape the dot somehow? Maybe I am thinking too much in a JavaScript-like way.
| MongoDB doesn't support keys with a dot in them, so you're going to have to preprocess your JSON file to remove/replace them before importing it, or you'll be setting yourself up for all sorts of problems.
There isn't a standard workaround to this issue; the best approach is too dependent upon the specifics of the situation. But I'd avoid any key encoder/decoder approach if possible, as you'll continue to pay the inconvenience of that in perpetuity, whereas a JSON restructure would presumably be a one-time cost.
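A minimal sketch of such preprocessing (the choice of '_' as the replacement character is arbitrary):
// recursively replace '.' in object keys before inserting/importing
function sanitizeKeys(value) {
  if (Array.isArray(value)) return value.map(sanitizeKeys);
  if (value !== null && typeof value === 'object') {
    var out = {};
    Object.keys(value).forEach(function (key) {
      out[key.replace(/\./g, '_')] = sanitizeKeys(value[key]);
    });
    return out;
  }
  return value;
}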
| MongoDB | 12,397,118 | 115 |
I am using Mongoose (Node); what is the best way to output id instead of _id?
| Given you're using Mongoose, you can use 'virtuals', which are essentially fake fields that Mongoose creates. They're not stored in the DB, they just get populated at run time:
// Duplicate the ID field.
Schema.virtual('id').get(function(){
return this._id.toHexString();
});
// Ensure virtual fields are serialised.
Schema.set('toJSON', {
virtuals: true
});
Any time toJSON is called on the Model you create from this Schema, it will include an 'id' field that matches the _id field Mongo generates. Likewise you can set the behaviour for toObject in the same way.
See:
http://mongoosejs.com/docs/api.html
http://mongoosejs.com/docs/guide.html#toJSON
http://mongoosejs.com/docs/guide.html#toObject
You can abstract this into a BaseSchema all your models then extend/invoke to keep the logic in one place. I wrote the above while creating an Ember/Node/Mongoose app, since Ember really prefers to have an 'id' field to work with.
| MongoDB | 7,034,848 | 115 |
The three types of NoSQL databases I've read about are key-value, column-oriented, and document-oriented.
Key-value is pretty straightforward - a key with a plain value.
I've seen document-oriented databases described as like key-value, but the value can be a structure, like a JSON object. Each "document" can have all, some, or none of the same keys as another.
Column oriented seems to be very much like document oriented in that you don't specify a structure.
So what is the difference between these two, and why would you use one over the other?
I've specifically looked at MongoDB and Cassandra. I basically need a dynamic structure that can change, but not affect other values. At the same time I need to be able to search/filter specific keys and run reports. With CAP, AP is the most important to me. The data can "eventually" be synced across nodes, just as long as there is no conflict or loss of data. Each user would get their own "table".
| The main difference is that document stores (e.g. MongoDB and CouchDB) allow arbitrarily complex documents, i.e. subdocuments within subdocuments, lists with documents, etc. whereas column stores (e.g. Cassandra and HBase) only allow a fixed format, e.g. strict one-level or two-level dictionaries.
| MongoDB | 7,565,012 | 114 |
When I try to run it, I get an error like the one in the title.
This is my code:
const URI = process.env.MONGODB_URL;
mongoose.connect(URI, {
useCreateIndex: true,
useFindAndModify: false,
useNewUrlParser: true,
useUnifiedTopology: true
}, err => {
if(err) throw err;
console.log('Connected to MongoDB!!!')
})
I set the MONGODB_URL in .env :
MONGODB_URL = mongodb+srv://username:<password>@cluster0.accdl.mongodb.net/website?retryWrites=true&w=majority
How to fix it?
| From the Mongoose 6.0 docs:
useNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options. Mongoose 6 always behaves as if useNewUrlParser, useUnifiedTopology, and useCreateIndex are true, and useFindAndModify is false. Please remove these options from your code.
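So under Mongoose 6 the original connection reduces to something like this minimal sketch (the callback swapped for the promise form; error handling kept equivalent to the question's code):
mongoose.connect(URI)
  .then(() => console.log('Connected to MongoDB!!!'))
  .catch(err => console.error(err));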
| MongoDB | 68,958,221 | 113 |
I have Category model:
Category:
...
articles: [{type:ObjectId, ref:'Article'}]
Article model contains ref to Account model.
Article:
...
account: {type:ObjectId, ref:'Account'}
So, with populated articles Category model will be:
{ //category
articles: //this field is populated
[ { account: 52386c14fbb3e9ef28000001, // I want this field to be populated
date: Fri Sep 20 2013 00:00:00 GMT+0400 (MSK),
title: 'Article 1' } ],
title: 'Category 1' }
The question is: how do you populate a subfield (account) of a populated field (articles)? Here is how I do it now:
globals.models.Category
.find
issue : req.params.id
null
sort:
order: 1
.populate("articles") # this populates only article field, article.account is not populated
.exec (err, categories) ->
console.log categories
I know it was discussed here: Mongoose: Populate a populated field, but no real solution was found.
| First, update Mongoose from 3 to 4 and then use the simplest way for deep population in Mongoose, as shown below:
Suppose you have a Blog schema having userId as a ref Id, and in User you have some review as a ref Id for the schema Review. So basically, you have three schemas:
Blog
User
Review
And, you have to query from blog, which user owns this blog & the user review.
So you can query your result as :
BlogModel
.find()
.populate({
path : 'userId',
populate : {
path : 'reviewId'
}
})
.exec(function (err, res) {
})
| MongoDB | 18,867,628 | 113 |
Specifically, I want to print the results of a MongoDB find() to a file. The JSON object is too large, so I'm unable to view the entire object within the shell window size.
| The shell provides some nice but hidden features because it's an interactive environment.
When you run commands from a javascript file via mongo commands.js you won't get quite identical behavior.
There are two ways around this.
(1) fake out the shell and make it think you are in interactive mode
$ mongo dbname << EOF > output.json
db.collection.find().pretty()
EOF
or
(2) use Javascript to translate the result of a find() into a printable JSON
mongo dbname command.js > output.json
where command.js contains this (or its equivalent):
printjson( db.collection.find().toArray() )
This will pretty print the array of results, including [ ] - if you don't want that you can iterate over the array and printjson() each element.
By the way if you are running just a single Javascript statement you don't have to put it in a file and instead you can use:
$ mongo --quiet dbname --eval 'printjson(db.collection.find().toArray())' > output.json
| MongoDB | 13,104,800 | 113 |
In the following example, assume the document is in the db.people collection.
How do I remove the 3rd element of the interests array by its index?
{
"_id" : ObjectId("4d1cb5de451600000000497a"),
"name" : "dannie",
"interests" : [
"guitar",
"programming",
"gadgets",
"reading"
]
}
This is my current solution:
var interests = db.people.findOne({"name":"dannie"}).interests;
interests.splice(2,1)
db.people.update({"name":"dannie"}, {"$set" : {"interests" : interests}});
Is there a more direct way?
| There is no straight way of pulling/removing by array index. In fact, this is an open issue, http://jira.mongodb.org/browse/SERVER-1014 - you may vote for it.
The workaround is using $unset and then $pull:
db.lists.update({}, {$unset : {"interests.3" : 1 }})
db.lists.update({}, {$pull : {"interests" : null}})
Update: as mentioned in some of the comments, this approach is not atomic and can cause race conditions if other clients read and/or write between the two operations. If we need the operation to be atomic, we could:
Read the document from the database
Update the document and remove the item in the array
Replace the document in the database. To ensure the document has not changed since we read it, we can use the update-if-current pattern described in the mongo docs (a rough shell sketch follows).
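A rough sketch of those three steps in the mongo shell, using the current interests array as the match condition so the replace only succeeds if nothing changed in between (collection and field names follow the example above):
var doc = db.people.findOne({ name: "dannie" });
var oldInterests = doc.interests.slice();   // keep the original array for the guard
doc.interests.splice(2, 1);                 // remove the 3rd element locally
// only write back if interests is still unchanged on the server
db.people.update({ _id: doc._id, interests: oldInterests },
                 { $set: { interests: doc.interests } });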
| MongoDB | 4,588,303 | 113 |
Is there an easy way to get the ID (ObjectID) of the last inserted document of a mongoDB instance using the Java driver?
| I just realized you can do this:
BasicDBObject doc = new BasicDBObject( "name", "Matt" );
collection.insert( doc );
ObjectId id = (ObjectId)doc.get( "_id" );
| MongoDB | 3,338,999 | 113 |
I've just arrived at Node.js and see that there are many libs to use with MongoDB; the most popular seem to be these two: mongoose and mongodb. Can I get the pros and cons of those extensions? Are there better alternatives to these two?
Edit: Found a new library that also seems interesting, node-mongolian: "Mongolian DeadBeef is an awesome Mongo DB node.js driver that attempts to closely approximate the mongodb shell." (readme.md)
https://github.com/marcello3d/node-mongolian
This is just to add more resources for new people that view this; basically, Mongolian is like an ODM...
| Mongoose is higher level and uses the MongoDB driver (it's a dependency, check the package.json), so you'll be using that either way given those options. The question you should be asking yourself is, "Do I want to use the raw driver, or do I need an object-document modeling tool?" If you're looking for an object modeling (ODM, a counterpart to ORMs from the SQL world) tool to skip some lower level work, you want Mongoose.
If you want a driver, because you intend to break a lot of rules that an ODM might enforce, go with MongoDB. If you want a fast driver, and can live with some missing features, give Mongolian DeadBeef a try: https://github.com/marcello3d/node-mongolian
| MongoDB | 9,232,562 | 112 |
I would like to know if there's a command to drop every database from my MongoDB.
I know that if I want to drop only one database, I just need to type the name of the database like the code below, but I don't want to have to specify it.
| You can create a JavaScript loop that does the job and then execute it in the mongo console:
var dbs = db.getMongo().getDBNames()
for(var i in dbs){
db = db.getMongo().getDB( dbs[i] );
print( "dropping db " + db.getName() );
db.dropDatabase();
}
save it to dropall.js and then execute:
mongo dropall.js
| MongoDB | 6,376,436 | 112 |
I want to set one of my fields as the primary key. I am using MongoDB as my NoSQL database.
| The _id field is reserved for the primary key in MongoDB, and it should be a unique value. If you don't set anything to _id it will automatically be filled with a "MongoDB Id Object" (an ObjectId). But you can put any unique info into that field.
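For example, a minimal sketch using your own unique value (an email address here, purely for illustration) as the primary key:
db.users.insert({ _id: "alice@example.com", name: "Alice" })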
Additional info: http://www.mongodb.org/display/DOCS/BSON
Hope it helps.
| MongoDB | 3,298,963 | 112 |
I'm 2 days old with Mongo and I have a SQL background, so bear with me. As with MySQL, it is very convenient to be in the MySQL command line and output the results of a query to a file on the machine. I am trying to understand how I can do the same with Mongo, while being in the shell.
I can easily get the output of a query I want by being outside of the shell and executing the following command:
mongo localhost:27017/dbname --eval "printjson(db.collectionName.findOne())" > sample.json
The above way is fine, but it requires me to exit the mongo shell or open a new terminal tab to execute this command. It would be very convenient if I could simply do this while still being inside the shell.
P.S.: this question is an offshoot of a question I posted on SO.
| AFAIK, there is no interactive option for output to a file; there is a previous SO question related to this: Printing mongodb shell output to File.
However, you can log the whole shell session if you invoke the shell with the tee command:
$ mongo | tee file.txt
MongoDB shell version: 2.4.2
connecting to: test
> printjson({this: 'is a test'})
{ "this" : "is a test" }
> printjson({this: 'is another test'})
{ "this" : "is another test" }
> exit
bye
Then you'll get a file with this content:
MongoDB shell version: 2.4.2
connecting to: test
> printjson({this: 'is a test'})
{ "this" : "is a test" }
> printjson({this: 'is another test'})
{ "this" : "is another test" }
> exit
bye
To remove all the commands and keep only the json output, you can use a command similar to:
tail -n +3 file.txt | egrep -v "^>|^bye" > output.json
Then you'll get:
{ "this" : "is a test" }
{ "this" : "is another test" }
| MongoDB | 22,565,231 | 109 |
I'm running Mongo 1.8.2 and trying to see how to cleanly shut it down on a Mac.
On our Ubuntu servers I can shut down Mongo cleanly from the mongo shell with:
> use admin
> db.shutdownServer()
but on my Mac, it does not kill the mongod process. The output shows that it 'should be' shut down, but when I ps -ef | grep mongo it shows me an active process. Also, I can still open a mongo shell and query my dbs like it was never shut down.
The output from my db.shutdownServer() locally is:
MongoDB shell version: 1.8.2
connecting to: test
> use admin
switched to db admin
> db.shutdownServer()
Tue Dec 13 11:44:21 DBClientCursor::init call() failed
Tue Dec 13 11:44:21 query failed : admin.$cmd { shutdown: 1.0 } to: 127.0.0.1
server should be down...
Tue Dec 13 11:44:21 trying reconnect to 127.0.0.1
Tue Dec 13 11:44:21 reconnect 127.0.0.1 failed couldn't connect to server 127.0.0.1
Tue Dec 13 11:44:21 Error: error doing query: unknown shell/collection.js:150
I know I can just kill the process, but I'd like to do it more cleanly.
| It's probably because launchctl is managing your mongod instance. If you want to start and shut down the mongod instance yourself, unload that first:
launchctl unload -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
Then start mongod manually:
mongod -f path/to/mongod.conf --fork
You can find your mongod.conf location from ~/Library/LaunchAgents/org.mongodb.mongod.plist.
After that, db.shutdownServer() would work just fine.
Added Feb 22 2014:
If you have mongodb installed via homebrew, homebrew actually has a handy brew services command. To show current running services:
brew services list
To start mongodb:
brew services start mongodb-community
To stop mongodb if it's already running:
brew services stop mongodb-community
Update*
As edufinn pointed out in the comment, brew services is now available as user-defined command and can be installed with following command: brew tap gapple/services.
| MongoDB | 8,495,293 | 109 |
Background
I'm prototyping a conversion from our RDBMS database to MongoDB. While denormalizing, it seems as if I have two choices: one which leads to many (millions of) smaller documents, or one which leads to fewer (hundreds of thousands of) larger documents.
If I could distill it down to a simple analog, it would be the difference between a collection with fewer Customer documents like this (in Java):
class Customer {
private String name;
private Address address;
// each CreditCard has hundreds of Payment instances
private Set<CreditCard> creditCards;
}
or a collection with many, many Payment documents like this:
class Payment {
private Customer customer;
private CreditCard creditCard;
private Date payDate;
private float payAmount;
}
Question
Is MongoDB designed to prefer many, many small documents or fewer large documents? Does the answer mostly depend on what queries I plan on running? (i.e. How many credit cards does customer X have? vs What was the average amount all customers paid last month?)
I've looked around a lot but I didn't stumble into any MongoDB schema best practices that would help me answer my question.
| You'll definitely need to optimize for the queries you're doing.
Here's my best guess based on your description.
You'll probably want to know all Credit Cards for each Customer, so keep an array of those within the Customer Object. You'll also probably want to have a Customer reference for each Payment. This will keep the Payment document relatively small.
The Payment object will automatically have its own ID and index. You'll probably want to add an index on the Customer reference as well.
This will allow you to quickly search for Payments by Customer without storing the whole customer object every time.
If you want to answer questions like "What was the average amount all customers paid last month?", you're instead going to want a map/reduce for any sizeable dataset. You're not getting this response in "real-time". You'll find that storing a "reference" to Customer is probably good enough for these map/reduces.
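To make that concrete, here is a rough mongo-shell sketch of such a map/reduce (the collection name, field names and date range are illustrative, following the Payment class above):
db.payments.mapReduce(
  function () { emit(null, { sum: this.payAmount, count: 1 }); },  // map: one global bucket
  function (key, values) {                                          // reduce: sum the partials
    var out = { sum: 0, count: 0 };
    values.forEach(function (v) { out.sum += v.sum; out.count += v.count; });
    return out;
  },
  {
    query: { payDate: { $gte: ISODate("2010-05-01"), $lt: ISODate("2010-06-01") } },
    finalize: function (key, v) { return v.count ? v.sum / v.count : 0; },
    out: { inline: 1 }
  }
)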
So to answer your question directly: Is MongoDB designed to prefer many, many small documents or fewer large documents?
MongoDB is designed to find indexed entries very quickly. MongoDB is very good at finding a few needles in a large haystack. MongoDB is not very good at finding most of the needles in the haystack. So build your data around your most common use cases and write map/reduce jobs for the rarer use cases.
| MongoDB | 3,038,703 | 109 |
The main collection is retailer, which contains an array of stores. Each store contains an array of offers (that you can buy in this store). This offers array has an array of sizes. (See the example below.)
Now I try to find all offers which are available in the size L.
{
"_id" : ObjectId("56f277b1279871c20b8b4567"),
"stores" : [
{
"_id" : ObjectId("56f277b5279871c20b8b4783"),
"offers" : [
{
"_id" : ObjectId("56f277b1279871c20b8b4567"),
"size": [
"XS",
"S",
"M"
]
},
{
"_id" : ObjectId("56f277b1279871c20b8b4567"),
"size": [
"S",
"L",
"XL"
]
}
]
        }
    ]
}
I've tried this query: db.getCollection('retailers').find({'stores.offers.size': 'L'})
I expect some output like this:
{
"_id" : ObjectId("56f277b1279871c20b8b4567"),
"stores" : [
{
"_id" : ObjectId("56f277b5279871c20b8b4783"),
"offers" : [
{
"_id" : ObjectId("56f277b1279871c20b8b4567"),
"size": [
"S",
"L",
"XL"
]
}
]
        }
    ]
}
But the output of my query also contains the non-matching offer with the sizes XS, S and M.
How can I force MongoDB to return only the offers which matched my query?
Greetings and thanks.
| So the query you have actually selects the "document" just like it should. But what you are looking for is to "filter the arrays" contained, so that the elements returned only match the condition of the query.
The real answer is, of course, that unless you are really saving a lot of bandwidth by filtering out such detail, you should not even try, or at least not beyond the first positional match.
MongoDB has a positional $ operator which will return an array element at the matched index from a query condition. However, this only returns the "first" matched index of the "outer" most array element.
db.getCollection('retailers').find(
{ 'stores.offers.size': 'L'},
{ 'stores.$': 1 }
)
In this case, it means the "stores" array position only. So if there were multiple "stores" entries, then only "one" of the elements that contained your matched condition would be returned. But, that does nothing for the inner array of "offers", and as such every "offer" within the matchd "stores" array would still be returned.
MongoDB has no way of "filtering" this in a standard query, so the following does not work:
db.getCollection('retailers').find(
{ 'stores.offers.size': 'L'},
{ 'stores.$.offers.$': 1 }
)
The only tool MongoDB actually has to do this level of manipulation is the aggregation framework. But the analysis should show you why you "probably" should not do this, and should instead just filter the array in code.
Here is how you can achieve this, per version.
First, with MongoDB 3.2.x, using the $filter operation:
db.getCollection('retailers').aggregate([
{ "$match": { "stores.offers.size": "L" } },
{ "$project": {
"stores": {
"$filter": {
"input": {
"$map": {
"input": "$stores",
"as": "store",
"in": {
"_id": "$$store._id",
"offers": {
"$filter": {
"input": "$$store.offers",
"as": "offer",
"cond": {
"$setIsSubset": [ ["L"], "$$offer.size" ]
}
}
}
}
}
},
"as": "store",
"cond": { "$ne": [ "$$store.offers", [] ]}
}
}
}}
])
Then with MongoDB 2.6.x and above with $map and $setDifference:
db.getCollection('retailers').aggregate([
{ "$match": { "stores.offers.size": "L" } },
{ "$project": {
"stores": {
"$setDifference": [
{ "$map": {
"input": {
"$map": {
"input": "$stores",
"as": "store",
"in": {
"_id": "$$store._id",
"offers": {
"$setDifference": [
{ "$map": {
"input": "$$store.offers",
"as": "offer",
"in": {
"$cond": {
"if": { "$setIsSubset": [ ["L"], "$$offer.size" ] },
"then": "$$offer",
"else": false
}
}
}},
[false]
]
}
}
}
},
"as": "store",
"in": {
"$cond": {
"if": { "$ne": [ "$$store.offers", [] ] },
"then": "$$store",
"else": false
}
}
}},
[false]
]
}
}}
])
And finally, in any version above MongoDB 2.2.x, where the aggregation framework was introduced:
db.getCollection('retailers').aggregate([
{ "$match": { "stores.offers.size": "L" } },
{ "$unwind": "$stores" },
{ "$unwind": "$stores.offers" },
{ "$match": { "stores.offers.size": "L" } },
{ "$group": {
"_id": {
"_id": "$_id",
"storeId": "$stores._id",
},
"offers": { "$push": "$stores.offers" }
}},
{ "$group": {
"_id": "$_id._id",
"stores": {
"$push": {
"_id": "$_id.storeId",
"offers": "$offers"
}
}
}}
])
Lets break down the explanations.
MongoDB 3.2.x and greater
So generally speaking, $filter is the way to go here since it is designed with the purpose in mind. Since there are multiple levels of the array, you need to apply this at each level. So first you are diving into each "offers" within "stores" to examime and $filter that content.
The simple comparison here is "Does the "size" array contain the element I am looking for". In this logical context, the short thing to do is use the $setIsSubset operation to compare an array ("set") of ["L"] to the target array. Where that condition is true ( it contains "L" ) then the array element for "offers" is retained and returned in the result.
In the higher level $filter, you are then looking to see if the result from that previous $filter returned an empty array [] for "offers". If it is not empty, then the element is returned or otherwise it is removed.
MongoDB 2.6.x
This is very similar to the modern process except that since there is no $filter in this version you can use $map to inspect each element and then use $setDifference to filter out any elements that were returned as false.
So $map is going to return the whole array, but the $cond operation just decides whether to return the element or instead a false value. In the comparison of $setDifference to a single element "set" of [false] all false elements in the returned array would be removed.
In all other ways, the logic is the same as above.
MongoDB 2.2.x and up
So below MongoDB 2.6 the only tool for working with arrays is $unwind, and for that reason you should not use the aggregation framework "just" for this purpose.
The process indeed appears simple, by simply "taking apart" each array, filtering out the things you don't need then putting it back together. The main care is in the "two" $group stages, with the "first" to re-build the inner array, and the next to re-build the outer array. There are distinct _id values at all levels, so these just need to be included at every level of grouping.
But the problem is that $unwind is very costly. Though it still has a purpose, its main usage intent is not to do this sort of filtering per document. In fact in modern releases its only usage should be when an element of the array(s) needs to become part of the "grouping key" itself.
Conclusion
So it's not a simple process to get matches at multiple levels of an array like this, and in fact it can be extremely costly if implemented incorrectly.
Only the two modern listings should ever be used for this purpose, as they employ a "single" pipeline stage in addition to the "query" $match in order to do the "filtering". The resulting effect is little more overhead than the standard forms of .find().
In general though, those listings still have an amount of complexity to them, and indeed unless you are really drastically reducing the content returned by such filtering in a way that makes a significant improvement in bandwidth used between the server and client, then you are better off filtering the result of the initial query and basic projection.
db.getCollection('retailers').find(
{ 'stores.offers.size': 'L'},
{ 'stores.$': 1 }
).forEach(function(doc) {
// Technically this is only "one" store. So omit the projection
// if you wanted more than "one" match
doc.stores = doc.stores.filter(function(store) {
store.offers = store.offers.filter(function(offer) {
return offer.size.indexOf("L") != -1;
});
return store.offers.length != 0;
});
printjson(doc);
})
So working with the returned object "post" query processing is far less obtuse than using the aggregation pipeline to do this. And as stated the only "real" difference would be that you are discarding the other elements on the "server" as opposed to removing them "per document" when received, which may save a little bandwidth.
But unless you are doing this in a modern release with only $match and $project, then the "cost" of processing on the server will greatly outweigh the "gain" of reducing that network overhead by stripping the unmatched elements first.
In all cases, you get the same result:
{
"_id" : ObjectId("56f277b1279871c20b8b4567"),
"stores" : [
{
"_id" : ObjectId("56f277b5279871c20b8b4783"),
"offers" : [
{
"_id" : ObjectId("56f277b1279871c20b8b4567"),
"size" : [
"S",
"L",
"XL"
]
}
]
}
]
}
| MongoDB | 36,229,123 | 108 |
I am using MongoDB 2.2.2 on a 32-bit Windows 7 machine. I have a complex aggregation query in a .js file. I need to execute this file on the shell and direct the output to a CSV file. I ensure that the query returns a "flat" json (no nested keys), so it is inherently convertible to a neat csv.
I know about load() and eval(). eval() requires me to paste the whole query into the shell and allows only printjson() inside the script, while I need csv. And the second way, load(): it prints the output on the screen, and again in json format.
Is there a way Mongo can do this conversion from json to csv? (I need a csv file to prepare charts on the data.) I am thinking:
1. Either mongo has a built-in command for this that I can't find right now.
2. Mongo can't do it for me; I can at most send the json output to a file which I then need to convert to csv myself.
3. Mongo can send the json output to a temporary collection, the contents of which can be easily mongoexported to csv format. But I think only map-reduce queries support output collections. Is that right? I need it for an aggregation query.
Thanks for any help :)
| I know this question is old but I spent an hour trying to export a complex query to csv and I wanted to share my thoughts. First I couldn't get any of the json to csv converters to work (although this one looked promising). What I ended up doing was manually writing the csv file in my mongo script.
This is a simple version but essentially what I did:
print("name,id,email");
db.User.find().forEach(function(user){
print(user.name+","+user._id.valueOf()+","+user.email);
});
Then I just piped the query output to stdout:
mongo test export.js > out.csv
where test is the name of the database I use.
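One caveat worth adding: this naive approach breaks as soon as a value itself contains a comma or a quote. A minimal sketch of per-field escaping against the same User collection (the csvEscape helper is illustrative, not a built-in):
function csvEscape(value) {
    // Quote each value and double any embedded quotes (RFC 4180 style)
    return '"' + String(value).replace(/"/g, '""') + '"';
}
print(["name", "id", "email"].map(csvEscape).join(","));
db.User.find().forEach(function(user){
    print([user.name, user._id.valueOf(), user.email].map(csvEscape).join(","));
});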
| MongoDB | 14,478,304 | 108 |
This error happens when I tried to update upsert item:
Updating the path 'x' would create a conflict at 'x'
| Field should appear either in $set, or in $setOnInsert. Not in both.
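A minimal sketch of both the failing and the corrected update, assuming a hypothetical items collection:
// Throws "Updating the path 'x' would create a conflict at 'x'",
// because 'x' appears in both $set and $setOnInsert
db.items.updateOne(
    { _id: 1 },
    { $set: { x: 2 }, $setOnInsert: { x: 1 } },
    { upsert: true }
)
// Works: 'x' appears in only one of the two operators
db.items.updateOne(
    { _id: 1 },
    { $set: { x: 2 } },
    { upsert: true }
)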
| MongoDB | 50,947,772 | 107 |
I'm sure I'm missing something very basic in MongoDB queries, can't seem to get this simple condition.
Consider this collection
> db.tests.find()
{ "_id" : ObjectId("..."), "name" : "Test1" , "deleted" : true}
{ "_id" : ObjectId("..."), "name" : "Test2" , "deleted" : false}
{ "_id" : ObjectId("..."), "name" : "Test3" }
I would simply like to query all the items that are "not deleted"
I know how to find the item that has a "deleted" flag set to true:
> db.tests.find({deleted:true})
{ "_id" : ObjectId("..."), "name" : "Test1" , "deleted" : true}
But how do I find all items that are NOT "deleted" (i.e. negate the above query; in other words, any items that either don't have a "deleted" field, or have it with the value false)?
What I tried by guessing (please don't laugh...)
> db.tests.find({$not : {deleted: true}})
(returns no results)
> db.tests.find({$not : {$eq:{deleted:true}}})
error: { "$err" : "invalid operator: $eq", "code" : 10068 }
> db.tests.find({deleted:{$not: true}})
error: { "$err" : "invalid use of $not", "code" : 13041 }
> db.tests.find({deleted:{$not: {$eq:true}}})
error: { "$err" : "invalid use of $not", "code" : 13034 }
What am I missing?
| db.tests.find({deleted: {$ne: true}})
Where $ne stands for "not equal". (Documentation on mongodb operators)
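Against the sample collection above, this matches both the document where the flag is explicitly false and the one where the field is missing:
> db.tests.find({deleted: {$ne: true}})
{ "_id" : ObjectId("..."), "name" : "Test2" , "deleted" : false}
{ "_id" : ObjectId("..."), "name" : "Test3" }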
| MongoDB | 18,837,486 | 107 |
I'd like to find a user in MongoDB by looking for a username value.
The problem with:
username: 'peter'
is that I don't find it if the username is "Peter", or "PeTER", or something like that.
So I want to do it like SQL:
SELECT * FROM users WHERE username LIKE 'peter'
Hope you guys get what I'm asking for?
Short: 'field LIKE value' in mongoose.js/mongodb
| For those that were looking for a solution here it is:
var name = 'Peter';
model.findOne({name: new RegExp('^'+name+'$', "i")}, function(err, doc) {
//Do your action here..
});
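One caution worth adding: if name comes from user input, any regex metacharacters in it will change the meaning of the pattern (or make the RegExp constructor throw). A sketch with a hypothetical escaping helper (not part of mongoose):
function escapeRegExp(text) {
    // Backslash-escape every regex metacharacter in the input
    return text.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
var name = 'Peter';
model.findOne({name: new RegExp('^' + escapeRegExp(name) + '$', "i")}, function(err, doc) {
    //Do your action here..
});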
| MongoDB | 9,824,010 | 107 |
Is there a way to see a list of indexes on a collection in the mongodb shell? I read through http://www.mongodb.org/display/DOCS/Indexes but I don't see anything.
| From the shell:
db.test.getIndexes()
For shell help you should try:
help;
db.help();
db.test.help();
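For reference, on a collection that only has the default _id index, the output looks roughly like this (the exact fields vary by server version):
> db.test.getIndexes()
[
    {
        "v" : 1,
        "key" : { "_id" : 1 },
        "name" : "_id_",
        "ns" : "test.test"
    }
]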
| MongoDB | 2,789,865 | 107 |
I am new to MongoDB. I am trying to install MongoDB 3.0 on Ubuntu 13.0 LTS, which is a VM on a Windows 7 host. I have installed MongoDB successfully (packages etc.), but when I execute the command sudo service mongod start, I get the following error in the "/var/log/mongodb/mongod.log" log file. Can anyone help me understand this error? There is nothing on the internet related to this.
2015-04-23T00:12:00.876-0400 I CONTROL ***** SERVER RESTARTED *****
2015-04-23T00:12:00.931-0400 E NETWORK [initandlisten] Failed to unlink socket file /tmp/mongodb-27017.sock errno:1 Operation not permitted
2015-04-23T00:12:00.931-0400 I - [initandlisten] Fatal Assertion 28578
2015-04-23T00:12:00.931-0400 I - [initandlisten]
| I have fixed this issue myself, by deleting the mongodb-27017.sock file. I ran the service after deleting this file, which worked fine. However, I am still not sure of the root cause of the issue. The output of the command ls -lat /tmp/mongodb-27017.sock is now
srwx------ 1 mongodb nogroup 0 Apr 23 06:24 /tmp/mongodb-27017.sock
The best option is not to delete the lock file
Instead, check the file user and group user. Set both to the current user:
First run: whoami
Then run: sudo chown <output of the above command> /tmp/mongodb-27017.sock
Next run: sudo service mongod restart && sudo mongod
| MongoDB | 29,813,648 | 106 |
I am making a database for video games, each containing elements like name, genre, and an image of the game. Is it possible to put images into a json object for the db? If not is there a way around this?
| I can think of doing it in two ways:
1.
Storing the file in the file system in any directory (say dir1) and renaming it in a way that ensures that the name is unique for every file (may be a timestamp) (say xyz123.jpg), and then storing this name in some database. Then while generating the JSON you pull this filename and generate a complete URL (which will be http://example.com/dir1/xyz123.jpg) and insert it in the JSON.
2.
Base64 encoding. It's basically a way of encoding arbitrary binary data in ASCII text. It takes 4 characters per 3 bytes of data, plus potentially a bit of padding at the end. Essentially each 6 bits of the input is encoded in a 64-character alphabet. The "standard" alphabet uses A-Z, a-z, 0-9 and + and /, with = as a padding character. There are URL-safe variants. This approach will allow you to put your image directly into MongoDB: encode the image while storing it and decode it while fetching. It has some drawbacks of its own:
base64 encoding makes file sizes roughly 33% larger than their original binary representations, which means more data down the wire (this might be exceptionally painful on mobile networks)
data URIs aren’t supported on IE6 or IE7.
base64 encoded data may possibly take longer to process than binary data.
Source
Converting Image to DATA URI
A.) Canvas
Load the image into an Image-Object, paint it to a canvas and convert the canvas back to a dataURL.
function convertToDataURLviaCanvas(url, callback, outputFormat){
var img = new Image();
img.crossOrigin = 'Anonymous';
img.onload = function(){
var canvas = document.createElement('CANVAS');
var ctx = canvas.getContext('2d');
var dataURL;
canvas.height = this.height;
canvas.width = this.width;
ctx.drawImage(this, 0, 0);
dataURL = canvas.toDataURL(outputFormat);
callback(dataURL);
canvas = null;
};
img.src = url;
}
Usage
convertToDataURLviaCanvas('http://bit.ly/18g0VNp', function(base64Img){
// Base64DataURL
});
Supported input formats
image/png, image/jpeg, image/jpg, image/gif, image/bmp, image/tiff, image/x-icon, image/svg+xml, image/webp, image/xxx
B.) FileReader
Load the image as blob via XMLHttpRequest and use the FileReader API to convert it to a data URL.
function convertFileToBase64viaFileReader(url, callback){
var xhr = new XMLHttpRequest();
xhr.responseType = 'blob';
xhr.onload = function() {
var reader = new FileReader();
reader.onloadend = function () {
callback(reader.result);
}
reader.readAsDataURL(xhr.response);
};
xhr.open('GET', url);
xhr.send();
}
This approach
lacks in browser support
has better compression
works for other file types as well.
Usage
convertFileToBase64viaFileReader('http://bit.ly/18g0VNp', function(base64Img){
// Base64DataURL
});
Source
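Tying this back to the original question: once you have the data URI, it is just a string, so it can be stored as an ordinary field on the game document. A sketch against a hypothetical games collection:
convertToDataURLviaCanvas('http://bit.ly/18g0VNp', function(base64Img){
    // base64Img is a plain string, so it fits in a normal document field
    var game = { name: 'Tetris', genre: 'Puzzle', image: base64Img };
    // persist via your driver of choice, e.g. in the mongo shell:
    // db.games.insert(game)
});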
| MongoDB | 34,485,420 | 104 |
users
{
"_id":"12345",
"admin":1
},
{
"_id":"123456789",
"admin":0
}
posts
{
"content":"Some content",
"owner_id":"12345",
"via":"facebook"
},
{
"content":"Some other content",
"owner_id":"123456789",
"via":"facebook"
}
Here is a sample from my mongodb. I want to get all the posts which have the "via" attribute equal to "facebook" and posted by an admin ("admin":1). I couldn't figure out how to write this query. Since mongodb is not a relational database, I couldn't do a join operation. What could be the solution?
| You can use $lookup (multiple times) to get the records from multiple collections:
Example:
If you have more collections (I have 3 collections for this demo; you can have more than 3) and I want to get the data from the 3 collections in a single object:
The collections are as follows:
db.doc1.find().pretty();
{
"_id" : ObjectId("5901a4c63541b7d5d3293766"),
"firstName" : "shubham",
"lastName" : "verma"
}
db.doc2.find().pretty();
{
"_id" : ObjectId("5901a5f83541b7d5d3293768"),
"userId" : ObjectId("5901a4c63541b7d5d3293766"),
"address" : "Gurgaon",
"mob" : "9876543211"
}
db.doc3.find().pretty();
{
"_id" : ObjectId("5901b0f6d318b072ceea44fb"),
"userId" : ObjectId("5901a4c63541b7d5d3293766"),
"fbURLs" : "http://www.facebook.com",
"twitterURLs" : "http://www.twitter.com"
}
Now your query will be as below:
db.doc1.aggregate([
{ $match: { _id: ObjectId("5901a4c63541b7d5d3293766") } },
{
$lookup:
{
from: "doc2",
localField: "_id",
foreignField: "userId",
as: "address"
}
},
{
$unwind: "$address"
},
{
$project: {
__v: 0,
"address.__v": 0,
"address._id": 0,
"address.userId": 0,
"address.mob": 0
}
},
{
$lookup:
{
from: "doc3",
localField: "_id",
foreignField: "userId",
as: "social"
}
},
{
$unwind: "$social"
},
{
$project: {
__v: 0,
"social.__v": 0,
"social._id": 0,
"social.userId": 0
}
}
]).pretty();
Then your result will be:
{
"_id" : ObjectId("5901a4c63541b7d5d3293766"),
"firstName" : "shubham",
"lastName" : "verma",
"address" : {
"address" : "Gurgaon"
},
"social" : {
"fbURLs" : "http://www.facebook.com",
"twitterURLs" : "http://www.twitter.com"
}
}
If you want all records from each collection, then you should remove the following $project stages from the query:
{
$project: {
__v: 0,
"address.__v": 0,
"address._id": 0,
"address.userId": 0,
"address.mob": 0
}
}
{
$project: {
"social.__v": 0,
"social._id": 0,
"social.userId": 0
}
}
After removing the above code you will get the complete records:
{
"_id" : ObjectId("5901a4c63541b7d5d3293766"),
"firstName" : "shubham",
"lastName" : "verma",
"address" : {
"_id" : ObjectId("5901a5f83541b7d5d3293768"),
"userId" : ObjectId("5901a4c63541b7d5d3293766"),
"address" : "Gurgaon",
"mob" : "9876543211"
},
"social" : {
"_id" : ObjectId("5901b0f6d318b072ceea44fb"),
"userId" : ObjectId("5901a4c63541b7d5d3293766"),
"fbURLs" : "http://www.facebook.com",
"twitterURLs" : "http://www.twitter.com"
}
}
| MongoDB | 6,502,541 | 104 |
I am using pymongo to query for all items in a region (actually it is to query for all venues in a region on a map). I used db.command(SON()) before to search in a spherical region, which can return a dictionary, and in the dictionary there is a key called results which contains the venues. Now I need to search in a square area, and I was advised to use db.places.find; however, this returns a pymongo.cursor.Cursor instance and I have no idea how to extract the venue results from it.
Does anyone know whether I should convert the cursor into a dict and extract the results out, or use another method to query for items in a square region?
BTW, db is a pymongo.database.Database instance.
The code is:
>>> import pymongo
>>> db = pymongo.MongoClient(host).PSRC
>>> resp = db.places.find({"loc": {"$within": {"$box": [[ll_lng,ll_lat], [ur_lng,ur_lat]]}}})
>>> for doc in resp:
>>> print(doc)
I have values of ll_lng, ll_lat, ur_lng and ur_lat; I used these values, but this code prints nothing.
| The find method returns a Cursor instance, which allows you to iterate over all matching documents.
To get the first document that matches the given criteria, you need to use find_one. The result of find_one is a dictionary.
You can always use the list constructor to return a list of all the documents in the collection but bear in mind that this will load all the data in memory and may not be what you want.
You should do that if you need to reuse the cursor and have a good reason not to use rewind()
Demo using find:
>>> import pymongo
>>> conn = pymongo.MongoClient()
>>> db = conn.test #test is my database
>>> col = db.spam #Here spam is my collection
>>> cur = col.find()
>>> cur
<pymongo.cursor.Cursor object at 0xb6d447ec>
>>> for doc in cur:
... print(doc) # or do something with the document
...
{'a': 1, '_id': ObjectId('54ff30faadd8f30feb90268f'), 'b': 2}
{'a': 1, 'c': 3, '_id': ObjectId('54ff32a2add8f30feb902690'), 'b': 2}
Demo using find_one:
>>> col.find_one()
{'a': 1, '_id': ObjectId('54ff30faadd8f30feb90268f'), 'b': 2}
| MongoDB | 28,968,660 | 103 |
I created a dump with mongodump on computer A (ubuntu 12.04 server). I moved it to computer B (ubuntu 12.04 server) and typed:
mongorestore --db db_name --drop db_dump_path
It failed and it reported:
connected to: 127.0.0.1
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Aborted
I've successfully accomplished this operation before and this strange behavior has never occurred. What do I need to do to fix this?
| On my distro "locale-gen" was not installed and it turned out all I had to do was set the LC_ALL environment variable.
so the following command fixed it:
export LC_ALL="en_US.UTF-8"
hopefully it will help someone else...
| MongoDB | 19,100,708 | 103 |
At the moment I use save to add a single document. Suppose I have an array of documents that I wish to store as single objects. Is there a way of adding them all with a single function call and then getting a single callback when it is done? I could add all the documents individually but managing the callbacks to work out when everything is done would be problematic.
| Mongoose does now support passing multiple document structures to Model.create. To quote their API example, it supports being passed either an array or a varargs list of objects with a callback at the end:
Candy.create({ type: 'jelly bean' }, { type: 'snickers' }, function (err, jellybean, snickers) {
if (err) // ...
});
Or
var array = [{ type: 'jelly bean' }, { type: 'snickers' }];
Candy.create(array, function (err, jellybean, snickers) {
if (err) // ...
});
Edit: As many have noted, this does not perform a true bulk insert - it simply hides the complexity of calling save multiple times yourself. There are answers and comments below explaining how to use the actual Mongo driver to achieve a bulk insert in the interest of performance.
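As a side note for readers on newer Mongoose versions (4.4+): Model.insertMany() performs a true bulk insert by sending the documents in a single command rather than one save per document. A sketch reusing the Candy model above:
var array = [{ type: 'jelly bean' }, { type: 'snickers' }];
Candy.insertMany(array, function (err, docs) {
    if (err) return console.error(err);
    // docs contains the inserted documents
});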
| MongoDB | 10,266,512 | 103 |
The two types of objects seem to be so close to one another that having both feels redundant. What is the point of having both schemas and models?
| EDIT: Although this has been useful for many people, as mentioned in the comments it answers the "how" rather than the why. Thankfully, the why of the question has been answered elsewhere also, with this answer to another question. This has been linked in the comments for some time but I realise that many may not get that far when reading.
Often the easiest way to answer this type of question is with an example. In this case, someone has already done it for me :)
Take a look here:
http://rawberg.com/blog/nodejs/mongoose-orm-nested-models/
EDIT: The original post (as mentioned in the comments) seems to no longer exist, so I am reproducing it below. Should it ever return, or if it has just moved, please let me know.
It gives a decent description of using schemas within models in mongoose and why you would want to do it, and also shows you how to push tasks via the model while the schema is all about the structure etc.
Original Post:
Let’s start with a simple example of embedding a schema inside a model.
var TaskSchema = new Schema({
name: String,
priority: Number
});
TaskSchema.virtual('nameandpriority')
.get( function () {
return this.name + '(' + this.priority + ')';
});
TaskSchema.method('isHighPriority', function() {
if(this.priority === 1) {
return true;
} else {
return false;
}
});
var ListSchema = new Schema({
name: String,
tasks: [TaskSchema]
});
mongoose.model('List', ListSchema);
var List = mongoose.model('List');
var sampleList = new List({name:'Sample List'});
I created a new TaskSchema object with basic info a task might have. A Mongoose virtual attribute is set up to conveniently combine the name and priority of the Task. I only specified a getter here but virtual setters are supported as well.
I also defined a simple task method called isHighPriority to demonstrate how methods work with this setup.
In the ListSchema definition you’ll notice how the tasks key is configured to hold an array of TaskSchema objects. The tasks key will become an instance of DocumentArray which provides special methods for dealing with embedded Mongo documents.
For now I only passed the ListSchema object into mongoose.model and left the TaskSchema out. Technically it's not necessary to turn the TaskSchema into a formal model since we won’t be saving it in its own collection. Later on I’ll show you how it doesn’t harm anything if you do, and it can help to organize all your models in the same way, especially when they start spanning multiple files.
With the List model setup let’s add a couple tasks to it and save them to Mongo.
var List = mongoose.model('List');
var sampleList = new List({name:'Sample List'});
sampleList.tasks.push(
{name:'task one', priority:1},
{name:'task two', priority:5}
);
sampleList.save(function(err) {
if (err) {
console.log('error adding new list');
console.log(err);
} else {
console.log('new list successfully saved');
}
});
The tasks attribute on the instance of our List model (sampleList) works like a regular JavaScript array and we can add new tasks to it using push. The important thing to notice is the tasks are added as regular JavaScript objects. It’s a subtle distinction that may not be immediately intuitive.
You can verify from the Mongo shell that the new list and tasks were saved to mongo.
db.lists.find()
{ "tasks" : [
{
"_id" : ObjectId("4dd1cbeed77909f507000002"),
"priority" : 1,
"name" : "task one"
},
{
"_id" : ObjectId("4dd1cbeed77909f507000003"),
"priority" : 5,
"name" : "task two"
}
], "_id" : ObjectId("4dd1cbeed77909f507000001"), "name" : "Sample List" }
Now we can use the ObjectId to pull up the Sample List and iterate through its tasks.
List.findById('4dd1cbeed77909f507000001', function(err, list) {
console.log(list.name + ' retrieved');
list.tasks.forEach(function(task, index, array) {
console.log(task.name);
console.log(task.nameandpriority);
console.log(task.isHighPriority());
});
});
If you run that last bit of code you’ll get an error saying the embedded document doesn’t have a method isHighPriority. In the current version of Mongoose you can’t access methods on embedded schemas directly. There’s an open ticket to fix it and after posing the question to the Mongoose Google Group, manimal45 posted a helpful work-around to use for now.
List.findById('4dd1cbeed77909f507000001', function(err, list) {
console.log(list.name + ' retrieved');
list.tasks.forEach(function(task, index, array) {
console.log(task.name);
console.log(task.nameandpriority);
console.log(task._schema.methods.isHighPriority.apply(task));
});
});
If you run that code you should see the following output on the command line.
Sample List retrieved
task one
task one (1)
true
task two
task two (5)
false
With that work-around in mind let’s turn the TaskSchema into a Mongoose model.
mongoose.model('Task', TaskSchema);
var Task = mongoose.model('Task');
var ListSchema = new Schema({
name: String,
tasks: [Task.schema]
});
mongoose.model('List', ListSchema);
var List = mongoose.model('List');
The TaskSchema definition is the same as before so I left it out. Once it's turned into a model we can still access its underlying Schema object using dot notation.
Let’s create a new list and embed two Task model instances within it.
var demoList = new List({name:'Demo List'});
var taskThree = new Task({name:'task three', priority:10});
var taskFour = new Task({name:'task four', priority:11});
demoList.tasks.push(taskThree.toObject(), taskFour.toObject());
demoList.save(function(err) {
if (err) {
console.log('error adding new list');
console.log(err);
} else {
console.log('new list successfully saved');
}
});
As we’re embedding the Task model instances into the List we’re calling toObject on them to convert their data into plain JavaScript objects that the List.tasks DocumentArray is expecting. When you save model instances this way your embedded documents will contain ObjectIds.
The complete code example is available as a gist. Hopefully these work-arounds help smooth things over as Mongoose continues to develop. I’m still pretty new to Mongoose and MongoDB so please feel free to share better solutions and tips in the comments. Happy data modeling!
| MongoDB | 9,127,174 | 103 |
I've been searching the web looking for best practices for configuring MongoOptions for the MongoDB Java driver and I haven't come up with much other than the API. This search started after I ran into the "com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection" error, and by increasing the connections/multiplier I was able to solve that problem. I'm looking for links to, or your own, best practices in configuring these options for production.
The options for the 2.4 driver include:
http://api.mongodb.org/java/2.4/com/mongodb/MongoOptions.html
autoConnectRetry
connectionsPerHost
connectTimeout
maxWaitTime
socketTimeout
threadsAllowedToBlockForConnectionMultiplier
The newer drivers have more options and I would be interested in hearing about those as well.
| Updated to 2.9 :
autoConnectRetry simply means the driver will automatically attempt to reconnect to the server(s) after unexpected disconnects. In production environments you usually want this set to true.
connectionsPerHost are the amount of physical connections a single Mongo instance (it's a singleton so you usually have one per application) can establish to a mongod/mongos process. At the time of writing the java driver will establish this amount of connections eventually even if the actual query throughput is low (in other words you will see the "conn" statistic in mongostat rise until it hits this number per app server).
There is no need to set this higher than 100 in most cases but this setting is one of those "test it and see" things. Do note that you will have to make sure you set this low enough so that the total amount of connections to your server do not exceed
db.serverStatus().connections.available
In production we currently have this at 40.
connectTimeout. As the name suggests, the number of milliseconds the driver will wait before a connection attempt is aborted. Set the timeout to something long (15-30 seconds) unless there's a realistic, expected chance this will be in the way of otherwise successful connection attempts. Normally if a connection attempt takes longer than a couple of seconds your network infrastructure isn't capable of high throughput.
maxWaitTime. Number of ms a thread will wait for a connection to become available on the connection pool, and raises an exception if this does not happen in time. Keep default.
socketTimeout. Standard socket timeout value. Set to 60 seconds (60000).
threadsAllowedToBlockForConnectionMultiplier. Multiplier for connectionsPerHost that denotes the number of threads that are allowed to wait for connections to become available if the pool is currently exhausted. This is the setting that will cause the "com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection" exception. It will throw this exception once this thread queue exceeds the threadsAllowedToBlockForConnectionMultiplier value. For example, if the connectionsPerHost is 10 and this value is 5 up to 50 threads can block before the aforementioned exception is thrown.
If you expect big peaks in throughput that could cause large queues temporarily increase this value. We have it at 1500 at the moment for exactly that reason. If your query load consistently outpaces the server you should just improve your hardware/scaling situation accordingly.
readPreference. (UPDATED, 2.8+) Used to determine the default read preference and replaces "slaveOk". Set up a ReadPreference through one of the class factory methods. A full description of the most common settings can be found at the end of this post
w. (UPDATED, 2.6+) This value determines the "safety" of the write. When this value is -1 the write will not report any errors regardless of network or database errors. WriteConcern.NONE is the appropriate predefined WriteConcern for this. If w is 0 then network errors will make the write fail but mongo errors will not. This is typically referred to as "fire and forget" writes and should be used when performance is more important than consistency and durability. Use WriteConcern.NORMAL for this mode.
If you set w to 1 or higher the write is considered safe. Safe writes perform the write and follow it up by a request to the server to make sure the write succeeded or retrieve an error value if it did not (in other words, it sends a getLastError() command after your write). Note that until this getLastError() command is completed the connection is reserved. As a result of that and the additional command the throughput will be significantly lower than writes with w <= 0. With a w value of exactly 1 MongoDB guarantees the write succeeded (or verifiably failed) on the instance you sent the write to.
In the case of replica sets you can use higher values for w which tell MongoDB to send the write to at least "w" members of the replica set before returning (or more accurately, wait for the replication of your write to "w" members). You can also set w to the string "majority" which tells MongoDB to perform the write to the majority of replica set members (WriteConcern.MAJORITY). Typically you should set this to 1 unless you need raw performance (-1 or 0) or replicated writes (>1). Values higher than 1 have a considerable impact on write throughput.
fsync. Durability option that forces mongo to flush to disk after each write when enabled. I've never had any durability issues related to a write backlog so we have this on false (the default) in production.
j *(NEW 2.7+)*. Boolean that when set to true forces MongoDB to wait for a successful journaling group commit before returning. If you have journaling enabled you can enable this for additional durability. Refer to http://www.mongodb.org/display/DOCS/Journaling to see what journaling gets you (and thus why you might want to enable this flag).
ReadPreference
The ReadPreference class allows you to configure to what mongod instances queries are routed if you are working with replica sets. The following options are available :
ReadPreference.primary() : All reads go to the repset primary member only. Use this if you require all queries to return consistent (the most recently written) data. This is the default.
ReadPreference.primaryPreferred() : All reads go to the repset primary member if possible but may query secondary members if the primary node is not available. As such if the primary becomes unavailable reads become eventually consistent, but only if the primary is unavailable.
ReadPreference.secondary() : All reads go to secondary repset members and the primary member is used for writes only. Use this only if you can live with eventually consistent reads. Additional repset members can be used to scale up read performance although there are limits to the amount of (voting) members a repset can have.
ReadPreference.secondaryPreferred() : All reads go to secondary repset members if any of them are available. The primary member is used exclusively for writes unless all secondary members become unavailable. Other than the fallback to the primary member for reads this is the same as ReadPreference.secondary().
ReadPreference.nearest() : Reads go to the nearest repset member available to the database client. Use only if eventually consistent reads are acceptable. The nearest member is the member with the lowest latency between the client and the various repset members. Since busy members will eventually have higher latencies this should also automatically balance read load although in my experience secondary(Preferred) seems to do so better if member latencies are relatively consistent.
Note : All of the above have tag enabled versions of the same method which return TaggableReadPreference instances instead. A full description of replica set tags can be found here : Replica Set Tags
| MongoDB | 6,520,439 | 103 |
I am trying to distribute a set of connected applications running in several linked containers that includes a mongo database that is required to:
be distributed containing some seed data;
allow users to add additional data.
Ideally the data will also be persisted in a linked data volume container.
I can get the data into the mongo container using a mongo base instance that doesn't mount any volumes (dockerhub image: psychemedia/mongo_nomount - this is essentially the base mongo Dockerfile without the VOLUME /data/db statement) and a Dockerfile config along the lines of:
ADD . /files
WORKDIR /files
RUN mkdir -p /data/db && mongod --fork --logpath=/tmp/mongodb.log && sleep 20 && \
mongoimport --db testdb --collection testcoll --type csv --headerline --file ./testdata.csv #&& mongod --shutdown
where ./testdata.csv is in the same directory (./mongo-with-data) as the Dockerfile.
My docker-compose config file includes the following:
mongo:
#image: mongo
build: ./mongo-with-data
ports:
- "27017:27017"
#Ideally we should be able to mount this against a host directory
#volumes:
# - ./db/mongo/:/data/db
#volumes_from:
# - devmongodata
#devmongodata:
# command: echo created
# image: busybox
# volumes:
# - /data/db
Whenever I try to mount a VOLUME it seems as if the original seeded data - which is stored in /data/db - is deleted. I guess that when a volume is mounted to /data/db it replaces whatever is there currently.
That said, the docker userguide suggests that "Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization." So I expected the data to persist if I placed the VOLUME command after the seeding RUN command?
So what am I doing wrong?
The long view is that I want to automate the build of several linked containers, and then distribute a Vagrantfile/docker-compose YAML file that will fire up a set of linked apps, that includes a pre-seeded mongo database with a (partially pre-populated) persistent data container.
| I do this using another docker container whose only purpose is to seed mongo, then exit. I suspect this is the same idea as ebaxt's, but when I was looking for an answer to this, I just wanted to see a quick-and-dirty, yet straightforward, example. So here is mine:
docker-compose.yml
mongodb:
image: mongo
ports:
- "27017:27017"
mongo-seed:
build: ./mongo-seed
depends_on:
- mongodb
# my webserver which uses mongo (not shown in example)
webserver:
build: ./webserver
ports:
- "80:80"
depends_on:
- mongodb
mongo-seed/Dockerfile
FROM mongo
COPY init.json /init.json
CMD mongoimport --host mongodb --db reach-engine --collection MyDummyCollection --type json --file /init.json --jsonArray
mongo-seed/init.json
[
{
"name": "Joe Smith",
"email": "[email protected]",
"age": 40,
"admin": false
},
{
"name": "Jen Ford",
"email": "[email protected]",
"age": 45,
"admin": true
}
]
| MongoDB | 31,210,973 | 102 |
I have a hard time believing this question hasn't been asked and answered somewhere already, but I can't find any trace of it.
I have a MongoDB aggregation query that needs to group by a boolean: the existence of another field.
For example let's start with this collection:
> db.test.find()
{ "_id" : ObjectId("53fbede62827b89e4f86c12e"),
"field" : ObjectId("53fbede62827b89e4f86c12d"), "name" : "Erik" }
{ "_id" : ObjectId("53fbee002827b89e4f86c12f"), "name" : "Erik" }
{ "_id" : ObjectId("53fbee092827b89e4f86c131"),
"field" : ObjectId("53fbee092827b89e4f86c130"), "name" : "John" }
{ "_id" : ObjectId("53fbee122827b89e4f86c132"), "name" : "Ben" }
2 documents have "field", and 2 don't.
Note that each value of "field" may be different; we just want to group on its existence (or non-nullness works for me too, I don't have any null values stored).
I've tried using $project, but $exists doesn't exist there, and $cond and $ifNull haven't helped me. The field always appears to exist, even when it doesn't:
> db.test.aggregate(
{$project:{fieldExists:{$cond:[{$eq:["$field", null]}, false, true]}}},
{$group:{_id:"$fieldExists", count:{$sum:1}}}
)
{ "_id" : true, "count" : 4 }
I would expect the following much simpler aggregate to work, but for some reason $exists isn't supported in this way:
> db.test.aggregate({$group:{_id:{$exists:"$field"}, count:{$sum:1}}})
assert: command failed: {
"errmsg" : "exception: invalid operator '$exists'",
"code" : 15999,
"ok" : 0
} : aggregate failed
Error: command failed: {
"errmsg" : "exception: invalid operator '$exists'",
"code" : 15999,
"ok" : 0
} : aggregate failed
at Error (<anonymous>)
at doassert (src/mongo/shell/assert.js:11:14)
at Function.assert.commandWorked (src/mongo/shell/assert.js:244:5)
at DBCollection.aggregate (src/mongo/shell/collection.js:1149:12)
at (shell):1:9
2014-08-25T19:19:42.344-0700 Error: command failed: {
"errmsg" : "exception: invalid operator '$exists'",
"code" : 15999,
"ok" : 0
} : aggregate failed at src/mongo/shell/assert.js:13
Does anyone know how to get the desired result from a collection like this?
Expected result:
{ "_id" : true, "count" : 2 }
{ "_id" : false, "count" : 2 }
| I solved the same problem just last night, this way:
> db.test.aggregate({$group:{_id:{$gt:["$field", null]}, count:{$sum:1}}})
{ "_id" : true, "count" : 2 }
{ "_id" : false, "count" : 2 }
See http://docs.mongodb.org/manual/reference/bson-types/#bson-types-comparison-order for a full explanation of how this works.
Added from the comment section:
To check if the value doesn't exist or is null use { $lte: ["$field", null] }
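To make the trick concrete: per the BSON comparison order, null (and a missing field, which compares like null here) sorts below every other type, so $gt against null acts as an existence test. The same idea written with an explicit $project stage gives the same counts:
db.test.aggregate([
    { $project: { fieldExists: { $gt: ["$field", null] } } },
    { $group: { _id: "$fieldExists", count: { $sum: 1 } } }
])
// { "_id" : true, "count" : 2 }
// { "_id" : false, "count" : 2 }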
| MongoDB | 25,497,150 | 102 |
I'm using MongoDB to be my database. i have a data:
{
_id : '123'
friends: [
{name: 'allen', emails: [{email: '11111', using: 'true'}]}
]
}
Now I want to modify the email inside a friend's emails array, for the user whose _id is '123'.
I wrote it like this:
db.users.update ({_id: '123'}, {$set: {"friends.0.emails.$.email" : '2222'} })
It's easy, but it's wrong when the emails array has two or more entries.
So, my question is:
How can I modify data in a nested field when there are two or more levels of nested arrays? Thanks.
| You need to use the Dot Notation for the arrays.
That is, you should replace the $ with the zero-based index of the element you're trying to update.
For example:
db.users.update ({_id: '123'}, { '$set': {"friends.0.emails.0.email" : '2222'} });
will update the first email of the first friend, and
db.users.update ({_id: '123'}, { '$set': {"friends.0.emails.1.email" : '2222'} })
will update the second email of the first friend.
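As an aside for readers on modern servers: since MongoDB 3.6 you can update the matching element without knowing its index, using filtered positional operators with arrayFilters (a sketch, not part of the original answer):
db.users.updateOne(
    { _id: '123' },
    { '$set': { 'friends.$[].emails.$[e].email': '2222' } },
    { arrayFilters: [ { 'e.email': '11111' } ] }
)
// updates the email '11111' inside every friend's emails array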
| MongoDB | 19,603,542 | 102 |
Env:
MongoDB (3.2.0) with Mongoose
Collection:
users
Text Index creation:
BasicDBObject keys = new BasicDBObject();
keys.put("name","text");
BasicDBObject options = new BasicDBObject();
options.put("name", "userTextSearch");
options.put("unique", Boolean.FALSE);
options.put("background", Boolean.TRUE);
userCollection.createIndex(keys, options); // using MongoTemplate
Document:
{"name":"LEONEL"}
Queries:
db.users.find( { "$text" : { "$search" : "LEONEL" } } ) => FOUND
db.users.find( { "$text" : { "$search" : "leonel" } } ) => FOUND (search caseSensitive is false)
db.users.find( { "$text" : { "$search" : "LEONÉL" } } ) => FOUND (search with diacriticSensitive is false)
db.users.find( { "$text" : { "$search" : "LEONE" } } ) => FOUND (Partial search)
db.users.find( { "$text" : { "$search" : "LEO" } } ) => NOT FOUND (Partial search)
db.users.find( { "$text" : { "$search" : "L" } } ) => NOT FOUND (Partial search)
Any idea why I get 0 results when using "LEO" or "L" as the query?
Regex with Text Index Search is not allowed.
db.getCollection('users')
.find( { "$text" : { "$search" : "/LEO/i",
"$caseSensitive": false,
"$diacriticSensitive": false }} )
.count() // 0 results
db.getCollection('users')
.find( { "$text" : { "$search" : "LEO",
"$caseSensitive": false,
"$diacriticSensitive": false }} )
.count() // 0 results
MongoDB Documentation:
Text Search
$text
Text Indexes
Improve Text Indexes to support partial word match
| As of MongoDB 3.4, the text search feature is designed to support case-insensitive searches on text content with language-specific rules for stopwords and stemming. Stemming rules for supported languages are based on standard algorithms which generally handle common verbs and nouns but are unaware of proper nouns.
There is no explicit support for partial or fuzzy matches, but terms that stem to a similar result may appear to be working as such. For example: "taste", "tastes", and "tasteful" all stem to "tast". Try the Snowball Stemming Demo page to experiment with more words and stemming algorithms.
Your results that match are all variations on the same word "LEONEL", and vary only by case and diacritic. Unless "LEONEL" can be stemmed to something shorter by the rules of your selected language, these are the only types of variations that will match.
If you want to do efficient partial matches you'll need to take a different approach. For some helpful ideas see:
Efficient Techniques for Fuzzy and Partial matching in MongoDB by John Page
Efficient Partial Keyword Searches by James Tan
There is a relevant improvement request you can watch/upvote in the MongoDB issue tracker: SERVER-15090: Improve Text Indexes to support partial word match.
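As a quick illustration of the partial-match workaround: an anchored regular expression matches prefixes that $text cannot (a sketch; note that only the case-sensitive anchored form can use an ordinary index on name efficiently):
db.users.find({ "name": /^LEO/ })    // case-sensitive prefix match, can use an index on name
db.users.find({ "name": /^LEO/i })   // also matches "LEONEL", but cannot seek the index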
| MongoDB | 44,833,817 | 101 |
I'm pretty new to Mongoose and MongoDB in general so I'm having a difficult time figuring out if something like this is possible:
Item = new Schema({
id: Schema.ObjectId,
dateCreated: { type: Date, default: Date.now },
title: { type: String, default: 'No Title' },
description: { type: String, default: 'No Description' },
tags: [ { type: Schema.ObjectId, ref: 'ItemTag' }]
});
ItemTag = new Schema({
id: Schema.ObjectId,
tagId: { type: Schema.ObjectId, ref: 'Tag' },
tagName: { type: String }
});
var query = Models.Item.find({});
query
.desc('dateCreated')
.populate('tags')
.where('tags.tagName').in(['funny', 'politics'])
.run(function(err, docs){
// docs is always empty
});
Is there a better way do this?
Edit
Apologies for any confusion. What I'm trying to do is get all Items that contain either the funny tag or politics tag.
Edit
Document without where clause:
[{
_id: 4fe90264e5caa33f04000012,
dislikes: 0,
likes: 0,
source: '/uploads/loldog.jpg',
comments: [],
tags: [{
itemId: 4fe90264e5caa33f04000012,
tagName: 'movies',
tagId: 4fe64219007e20e644000007,
_id: 4fe90270e5caa33f04000015,
dateCreated: Tue, 26 Jun 2012 00:29:36 GMT,
rating: 0,
dislikes: 0,
likes: 0
},
{
itemId: 4fe90264e5caa33f04000012,
tagName: 'funny',
tagId: 4fe64219007e20e644000002,
_id: 4fe90270e5caa33f04000017,
dateCreated: Tue, 26 Jun 2012 00:29:36 GMT,
rating: 0,
dislikes: 0,
likes: 0
}],
viewCount: 0,
rating: 0,
type: 'image',
description: null,
title: 'dogggg',
dateCreated: Tue, 26 Jun 2012 00:29:24 GMT
}, ... ]
With the where clause, I get an empty array.
| With a modern MongoDB greater than 3.2 you can use $lookup as an alternative to .populate() in most cases. This also has the advantage of actually doing the join "on the server" as opposed to what .populate() does, which is actually "multiple queries" to "emulate" a join.
So .populate() is not really a "join" in the sense of how a relational database does it. The $lookup operator on the other hand, actually does the work on the server, and is more or less analogous to a "LEFT JOIN":
Item.aggregate(
[
{ "$lookup": {
"from": ItemTags.collection.name,
"localField": "tags",
"foreignField": "_id",
"as": "tags"
}},
{ "$unwind": "$tags" },
{ "$match": { "tags.tagName": { "$in": [ "funny", "politics" ] } } },
{ "$group": {
"_id": "$_id",
"dateCreated": { "$first": "$dateCreated" },
"title": { "$first": "$title" },
"description": { "$first": "$description" },
"tags": { "$push": "$tags" }
}}
],
function(err, result) {
// "tags" is now filtered by condition and "joined"
}
)
N.B. The .collection.name here actually evaluates to the "string" that is the actual name of the MongoDB collection as assigned to the model. Since mongoose "pluralizes" collection names by default and $lookup needs the actual MongoDB collection name as an argument ( since it's a server operation ), then this is a handy trick to use in mongoose code, as opposed to "hard coding" the collection name directly.
Whilst we could also use $filter on arrays to remove the unwanted items, this is actually the most efficient form due to Aggregation Pipeline Optimization for the special condition of a $lookup followed by both an $unwind and a $match condition.
This actually results in the three pipeline stages being rolled into one:
{ "$lookup" : {
"from" : "itemtags",
"as" : "tags",
"localField" : "tags",
"foreignField" : "_id",
"unwinding" : {
"preserveNullAndEmptyArrays" : false
},
"matching" : {
"tagName" : {
"$in" : [
"funny",
"politics"
]
}
}
}}
This is highly optimal as the actual operation "filters the collection to join first", then it returns the results and "unwinds" the array. Both methods are employed so the results do not break the BSON limit of 16MB, which is a constraint that the client does not have.
The only problem is that it seems "counter-intuitive" in some ways, particularly when you want the results in an array, but that is what the $group is for here, as it reconstructs to the original document form.
It's also unfortunate that we simply cannot at this time actually write $lookup in the same eventual syntax the server uses. IMHO, this is an oversight to be corrected. But for now, simply using the sequence will work and is the most viable option with the best performance and scalability.
Addendum - MongoDB 3.6 and upwards
Though the pattern shown here is fairly optimized due to how the other stages get rolled into the $lookup, it does have one failing in that the "LEFT JOIN" which is normally inherent to both $lookup and the actions of populate() is negated by the "optimal" usage of $unwind here which does not preserve empty arrays. You can add the preserveNullAndEmptyArrays option, but this negates the "optimized" sequence described above and essentially leaves all three stages intact which would normally be combined in the optimization.
MongoDB 3.6 expands with a "more expressive" form of $lookup allowing a "sub-pipeline" expression, which not only meets the goal of retaining the "LEFT JOIN" but also allows an optimal query to reduce the results returned, with a much simplified syntax:
Item.aggregate([
{ "$lookup": {
"from": ItemTags.collection.name,
"let": { "tags": "$tags" },
"pipeline": [
{ "$match": {
"tags": { "$in": [ "politics", "funny" ] },
"$expr": { "$in": [ "$_id", "$$tags" ] }
}}
]
}}
])
The $expr used in order to match the declared "local" value with the "foreign" value is actually what MongoDB does "internally" now with the original $lookup syntax. By expressing in this form we can tailor the initial $match expression within the "sub-pipeline" ourselves.
In fact, as a true "aggregation pipeline" you can do just about anything you can do with an aggregation pipeline within this "sub-pipeline" expression, including "nesting" the levels of $lookup to other related collections.
Further usage is a bit beyond the scope of what the question here asks, but in relation to even "nested population" then the new usage pattern of $lookup allows this to be much the same, and a "lot" more powerful in its full usage.
Working Example
The following gives an example using a static method on the model. Once that static method is implemented the call simply becomes:
Item.lookup(
{
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
},
callback
)
Or enhancing to be a bit more modern even becomes:
let results = await Item.lookup({
path: 'tags',
query: { 'tagName' : { '$in': [ 'funny', 'politics' ] } }
})
Making it very similar to .populate() in structure, but it's actually doing the join on the server instead. For completeness, the usage here casts the returned data back to mongoose document instances according to both the parent and child cases.
It's fairly trivial and easy to adapt or just use as is for most common cases.
N.B The use of async here is just for brevity of running the enclosed example. The actual implementation is free of this dependency.
const async = require('async'),
mongoose = require('mongoose'),
Schema = mongoose.Schema;
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
mongoose.connect('mongodb://localhost/looktest');
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
dateCreated: { type: Date, default: Date.now },
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
});
itemSchema.statics.lookup = function(opt,callback) {
let rel =
mongoose.model(this.schema.path(opt.path).caster.options.ref);
let group = { "$group": { } };
this.schema.eachPath(p =>
group.$group[p] = (p === "_id") ? "$_id" :
(p === opt.path) ? { "$push": `$${p}` } : { "$first": `$${p}` });
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": opt.path,
"localField": opt.path,
"foreignField": "_id"
}},
{ "$unwind": `$${opt.path}` },
{ "$match": opt.query },
group
];
this.aggregate(pipeline,(err,result) => {
if (err) callback(err);
result = result.map(m => {
m[opt.path] = m[opt.path].map(r => rel(r));
return this(m);
});
callback(err,result);
});
}
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
function log(body) {
console.log(JSON.stringify(body, undefined, 2))
}
async.series(
[
// Clean data
(callback) => async.each(mongoose.models,(model,callback) =>
model.remove({},callback),callback),
// Create tags and items
(callback) =>
async.waterfall(
[
(callback) =>
ItemTag.create([{ "tagName": "movies" }, { "tagName": "funny" }],
callback),
(tags, callback) =>
Item.create({ "title": "Something","description": "An item",
"tags": tags },callback)
],
callback
),
// Query with our static
(callback) =>
Item.lookup(
{
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
},
callback
)
],
(err,results) => {
if (err) throw err;
let result = results.pop();
log(result);
mongoose.disconnect();
}
)
Or a little more modern for Node 8.x and above with async/await and no additional dependencies:
const { Schema } = mongoose = require('mongoose');
const uri = 'mongodb://localhost/looktest';
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
dateCreated: { type: Date, default: Date.now },
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
});
itemSchema.statics.lookup = function(opt) {
let rel =
mongoose.model(this.schema.path(opt.path).caster.options.ref);
let group = { "$group": { } };
this.schema.eachPath(p =>
group.$group[p] = (p === "_id") ? "$_id" :
(p === opt.path) ? { "$push": `$${p}` } : { "$first": `$${p}` });
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": opt.path,
"localField": opt.path,
"foreignField": "_id"
}},
{ "$unwind": `$${opt.path}` },
{ "$match": opt.query },
group
];
return this.aggregate(pipeline).exec().then(r => r.map(m =>
this({ ...m, [opt.path]: m[opt.path].map(r => rel(r)) })
));
}
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
const log = body => console.log(JSON.stringify(body, undefined, 2));
(async function() {
try {
const conn = await mongoose.connect(uri);
// Clean data
await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));
// Create tags and items
const tags = await ItemTag.create(
["movies", "funny"].map(tagName =>({ tagName }))
);
const item = await Item.create({
"title": "Something",
"description": "An item",
tags
});
// Query with our static
const result = (await Item.lookup({
path: 'tags',
query: { 'tags.tagName' : { '$in': [ 'funny', 'politics' ] } }
})).pop();
log(result);
mongoose.disconnect();
} catch (e) {
console.error(e);
} finally {
process.exit()
}
})()
And from MongoDB 3.6 and upward, even without the $unwind and $group building:
const { Schema, Types: { ObjectId } } = mongoose = require('mongoose');
const uri = 'mongodb://localhost/looktest';
mongoose.Promise = global.Promise;
mongoose.set('debug', true);
const itemTagSchema = new Schema({
tagName: String
});
const itemSchema = new Schema({
title: String,
description: String,
tags: [{ type: Schema.Types.ObjectId, ref: 'ItemTag' }]
},{ timestamps: true });
itemSchema.statics.lookup = function({ path, query }) {
let rel =
mongoose.model(this.schema.path(path).caster.options.ref);
// MongoDB 3.6 and up $lookup with sub-pipeline
let pipeline = [
{ "$lookup": {
"from": rel.collection.name,
"as": path,
"let": { [path]: `$${path}` },
"pipeline": [
{ "$match": {
...query,
"$expr": { "$in": [ "$_id", `$$${path}` ] }
}}
]
}}
];
return this.aggregate(pipeline).exec().then(r => r.map(m =>
this({ ...m, [path]: m[path].map(r => rel(r)) })
));
};
const Item = mongoose.model('Item', itemSchema);
const ItemTag = mongoose.model('ItemTag', itemTagSchema);
const log = body => console.log(JSON.stringify(body, undefined, 2));
(async function() {
try {
const conn = await mongoose.connect(uri);
// Clean data
await Promise.all(Object.entries(conn.models).map(([k,m]) => m.remove()));
// Create tags and items
const tags = await ItemTag.insertMany(
["movies", "funny"].map(tagName => ({ tagName }))
);
const item = await Item.create({
"title": "Something",
"description": "An item",
tags
});
// Query with our static
let result = (await Item.lookup({
path: 'tags',
query: { 'tagName': { '$in': [ 'funny', 'politics' ] } }
})).pop();
log(result);
await mongoose.disconnect();
} catch(e) {
console.error(e)
} finally {
process.exit()
}
})()
| MongoDB | 11,303,294 | 101 |
Can anyone give example use cases of when you would benefit from using Redis and MongoDB in conjunction with each other?
| Redis and MongoDB can be used together with good results. A company well-known for running MongoDB and Redis (along with MySQL and Sphinx) is Craiglist. See this presentation from Jeremy Zawodny.
MongoDB is interesting for persistent, document oriented, data indexed in various ways. Redis is more interesting for volatile data, or latency sensitive semi-persistent data.
Here are a few examples of concrete usage of Redis on top of MongoDB.
Pre-2.2 MongoDB does not have yet an expiration mechanism. Capped collections cannot really be used to implement a real TTL. Redis has a TTL-based expiration mechanism, making it convenient to store volatile data. For instance, user sessions are commonly stored in Redis, while user data will be stored and indexed in MongoDB. Note that MongoDB 2.2 has introduced a low accuracy expiration mechanism at the collection level (to be used for purging data for instance).
Redis provides a convenient set datatype and its associated operations (union, intersection, difference on multiple sets, etc ...). It is quite easy to implement a basic faceted search or tagging engine on top of this feature, which is an interesting addition to MongoDB more traditional indexing capabilities.
Redis supports efficient blocking pop operations on lists. This can be used to implement an ad-hoc distributed queuing system. It is more flexible than MongoDB tailable cursors IMO, since a backend application can listen to several queues with a timeout, transfer items to another queue atomically, etc ... If the application requires some queuing, it makes sense to store the queue in Redis, and keep the persistent functional data in MongoDB.
Redis also offers a pub/sub mechanism. In a distributed application, an event propagation system may be useful. This is again an excellent use case for Redis, while the persistent data are kept in MongoDB.
Because it is much easier to design a data model with MongoDB than with Redis (Redis is more low-level), it is interesting to benefit from the flexibility of MongoDB for main persistent data, and from the extra features provided by Redis (low latency, item expiration, queues, pub/sub, atomic blocks, etc ...). It is indeed a good combination.
Please note you should never run a Redis and MongoDB server on the same machine. MongoDB's memory is designed to be swapped out; Redis's is not. If MongoDB triggers some swapping activity, the performance of Redis will be catastrophic. They should be isolated on different nodes.
| MongoDB | 10,696,463 | 101 |
I found this error when trying to run MongoDB. I installed it via Homebrew. Please assist.
Agungs-MacBook-Pro:~ agungmahaputra$ mongod
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] MongoDB starting : pid=5189 port=27017 dbpath=/data/db 64-bit host=Agungs-MacBook-Pro.local
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] db version v3.6.0
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] git version: a57d8e71e6998a2d0afde7edc11bd23e5661c915
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2n 7 Dec 2017
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] allocator: system
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] modules: none
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] build environment:
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] distarch: x86_64
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] target_arch: x86_64
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] options: {}
2017-12-26T15:31:15.911+0700 E STORAGE [initandlisten] Failed to set up listener: SocketException: Address already in use
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] now exiting
2017-12-26T15:31:15.911+0700 I CONTROL [initandlisten] shutting down with code:48
Agungs-MacBook-Pro:~ agungmahaputra$
| You can kill the previous mongod instance and start the new one.
To kill the previous mongod instance, first search for a list of tasks running on your machine by typing,
sudo lsof -iTCP -sTCP:LISTEN -n -P
Search for mongod COMMAND and its PID and type,
sudo kill <mongo_command_pid>
Now start your mongod instance by typing,
mongod
You can see MongoDB running successfully.
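Alternatively, if you want to keep the already-running instance and start a second one, give the new instance its own port and data directory (the paths here are examples):
mongod --port 27018 --dbpath /data/db2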
| MongoDB | 47,975,929 | 99 |
In MongoDB, is it possible to dump a database and restore the content to a different database? For example like this:
mongodump --db db1 --out dumpdir
mongorestore --db db2 --dir dumpdir
But it doesn't work. Here's the error message:
building a list of collections to restore from dumpdir dir
don't know what to do with subdirectory "dumpdir/db1", skipping...
done
| You need to actually point at the "database name" container directory "within" the output directory from the previous dump:
mongorestore -d db2 dumpdir/db1
Usually just <path> is fine as a positional argument rather than with --dir, which would only be needed when the path is "out of position", i.e. in the middle of the arguments list.
P.S. For an archive backup file (tested with mongorestore v3.4.10):
mongorestore --gzip --archive=${BACKUP_FILE_GZ} --nsFrom "${DB_NAME}.*" --nsTo "${DB_NAME_RESTORE}.*"
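For completeness, a matching dump command that produces such an archive looks like this (the database name and file name are placeholders):
mongodump --db db1 --gzip --archive=db1.backup.gz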
| MongoDB | 36,321,899 | 99 |
What is the command to show the current db in the MongoDB shell?
(I failed to find it on Google)
| Found it by guessing :) Simply:
db
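If you need the name as a string inside a script, db.getName() returns the same value:
db.getName()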
| MongoDB | 16,004,182 | 99 |
I am using a case insensitive search in Mongo, something similar to https://stackoverflow.com/q/5500823/1028488.
i.e. I am using a regex with the i option. But I am having trouble restricting the regex to just that word; it performs more like a LIKE in SQL.
eg: if I use query like
{"SearchWord" : { '$regex' : 'win', $options: '-i' }}, it shows me results for win, window & winter. How do i restrict it to jsut show win?
I tried /^win$/ but it's saying invalid JSON... Please suggest away.
| You can use $options: 'i' for a case-insensitive search. Here are some possible examples for the string matches you may need.
Exact case insensitive string
db.collection.find({name:{'$regex' : '^string$', '$options' : 'i'}})
Contains string
db.collection.find({name:{'$regex' : 'string', '$options' : 'i'}})
Starts with string
db.collection.find({name:{'$regex' : '^string', '$options' : 'i'}})
Ends with string
db.collection.find({name:{'$regex' : 'string$', '$options' : 'i'}})
Doesn't contain string
db.collection.find({name:{'$regex' : '^((?!string).)*$', '$options' : 'i'}})
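Applied to the example from the question, an exact case-insensitive match that returns win but not window or winter would be:
db.collection.find({"SearchWord": {'$regex' : '^win$', '$options' : 'i'}})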
Keep this as a bookmark, and a reference for any other alterations you may need.
http://www.cheatography.com/davechild/cheat-sheets/regular-expressions/
| MongoDB | 8,246,019 | 99 |
So I have an embedded document that tracks group memberships. Each embedded document has an ID pointing to the group in another collection, a start date, and an optional expire date.
I want to query for current members of a group. "Current" means the start time is less than the current time, and the expire time is greater than the current time OR null.
This conditional query is totally blocking me up. I could do it by running two queries and merging the results, but that seems ugly and requires loading in all results at once. Or I could default the expire time to some arbitrary date in the far future, but that seems even uglier and potentially brittle. In SQL I'd just express it with "(expires >= Now()) OR (expires IS NULL)" -- but I don't know how to do that in Mongo.
Any ideas? Thanks very much in advance.
| Just thought I'd update in-case anyone stumbles across this page in the future. As of 1.5.3, mongo now supports a real $or operator: http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24or
Your query of "(expires >= Now()) OR (expires IS NULL)" can now be rendered as:
{$or: [{expires: {$gte: new Date()}}, {expires: null}]}
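Putting the whole "current member" condition together (assuming a flat collection where each membership document has start and expires fields) would look something like:
db.memberships.find({
    "start": {"$lte": new Date()},
    "$or": [
        {"expires": {"$gte": new Date()}},
        {"expires": null}
    ]
})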
| MongoDB | 2,008,032 | 99 |
I'm receiving the following warning from mongodb about THP
2015-03-06T21:01:15.526-0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-03-06T21:01:15.526-0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
But I did manage to turn THP off manually
frederick@UbuntuVirtual:~$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
frederick@UbuntuVirtual:~$ cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
I did the trick by adding transparent_hugepage=never to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and adding
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
to /etc/rc.local
How on earth can I avoid the warning?
| Official MongoDB documentation gives several solutions for this issue. You can also try this solution, which worked for me:
Note: Try the official documentation's directives if your MongoDB version is greater than 3.0.
Open /etc/init.d/mongod file.
(if there is no such file, you might check the /etc/init.d/mongod or /etc/init/mongod.conf files; credit: the comments below)
Add the lines below immediately after chown $DAEMONUSER /var/run/mongodb.pid and before end script.
Restart mongod (service mongod restart).
Here are the lines to add to /etc/init.d/mongod:
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
That's it!
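After a restart you can verify the setting took effect with the same checks from the question; both files should now show [never]:
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag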
| MongoDB | 28,911,634 | 98 |
I'm trying to find all documents that do not contain at least one document with a specific field value. For example here is a sample collection:
{ _id : 1,
docs : [
{ foo : 1,
bar : 2},
{ foo : 3,
bar : 3}
]
},
{ _id : 2,
docs : [
{ foo : 2,
bar : 2},
{ foo : 3,
bar : 3}
]
}
I want to find every record where the docs array does not contain any document with foo = 1. In the example above, only the second document should be returned.
I have tried the following, but it only tells me if there are any that don't match (which returns document 1).
db.collection.find({"docs": { $not: {$elemMatch: {foo: 1 } } } })
UPDATE: The query above actually does work. As many times happens, my data was wrong, not my code.
I have also looked at the $nin operator but the examples only show when the array contains a list of primitive values, not an additional document. When I've tried to do this with something like the following, it looks for the EXACT document rather than just the foo field I want.
db.collection.find({"docs": { $nin: {'foo':1 } } })
Is there any way to accomplish this with the basic operators?
| Using $nin will work, but you have the syntax wrong. It should be:
db.collection.find({'docs.foo': {$nin: [1]}})
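As the question's update notes, the $not/$elemMatch form is equivalent:
db.collection.find({"docs": { $not: {$elemMatch: {foo: 1 } } } })
Note that both queries also match documents where docs is missing or empty, since such documents cannot contain an element with foo = 1 either.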
| MongoDB | 16,221,599 | 98 |
I am following the tutorials at docs.mongodb.org, I have completed the first tutorial which was to install mongodb on a Windows machine. I am now at the second stage which is getting started with mongodb development.
I am stuck at the first stage of this section which instructs me to type mongo into a system prompt. When I do this I simply get an error message saying the following:
'mongo' is not recognized as an internal or external command, operable program or batch file
I know this is probably something quite simple that I am doing wrong, does anyone have any ideas?
| You need to add Mongo's bin folder to the "Path" Environment Variable
Here's how on Windows 10:
Find Mongo's bin folder.
If you're not sure where it is, it's probably in C:\Program Files\MongoDB\Server\3.4\ (3.4 was the latest stable version at the time; this will probably be different for you).
This folder contains mongo.exe and mongod.exe. Adding this folder to the Path variable is telling Windows to search in this folder for executables matching your command when you run something in cmd. The search starts with the current working dir, and if it doesn't find your exe, goes on to search all the paths in Path till it finds it or it doesn't and it gives you that error you saw.
Copy the path to the bin folder. It should be C:\Program Files\MongoDB\Server\3.4\bin\ (Or whatever version you're using)
Press win, type env, Windows will suggest "Edit the System Environment Variables", click that.
On the Advanced tab, click "Environment Variables"
Highlight the "Path" variable, click "Edit":
This will bring up the "Edit environment variable" window, click "New"
This will start a new line in the list of folders:
Paste your path to the bin folder. Make sure it ends with a \ like so:
Press "OK", "OK", "OK"
Restart anything where you want to use the new path: CMD windows, Visual Studio terminals, etc. (Thanks to @id.ot for their comment)
Now you should be able to run mongod and mongo from anywhere in a command window.
| MongoDB | 15,053,893 | 98 |
Using this modified example from the Rails guides, how does one model a relational "has_many :through" association using mongoid?
The challenge is that mongoid does not support has_many :through as ActiveRecord does.
# doctor checking out patient
class Physician < ActiveRecord::Base
has_many :appointments
has_many :patients, :through => :appointments
has_many :meeting_notes, :through => :appointments
end
# notes taken during the appointment
class MeetingNote < ActiveRecord::Base
has_many :appointments
has_many :patients, :through => :appointments
has_many :physicians, :through => :appointments
end
# the patient
class Patient < ActiveRecord::Base
has_many :appointments
has_many :physicians, :through => :appointments
has_many :meeting_notes, :through => :appointments
end
# the appointment
class Appointment < ActiveRecord::Base
belongs_to :physician
belongs_to :patient
belongs_to :meeting_note
# has timestamp attribute
end
| Mongoid doesn't have has_many :through or an equivalent feature. It would not be so useful with MongoDB because it does not support join queries, so even if you could reference a related collection via another, it would still require multiple queries.
https://github.com/mongoid/mongoid/issues/544
Normally if you have a many-many relationship in a RDBMS you would model that differently in MongoDB using a field containing an array of 'foreign' keys on either side. For example:
class Physician
include Mongoid::Document
has_and_belongs_to_many :patients
end
class Patient
include Mongoid::Document
has_and_belongs_to_many :physicians
end
In other words you would eliminate the join table and it would have a similar effect to has_many :through in terms of access to the 'other side'. But in your case thats probably not appropriate because your join table is an Appointment class which carries some extra information, not just the association.
How you model this depends to some extent on the queries that you need to run but it seems as though you will need to add the Appointment model and define associations to Patient and Physician something like this:
class Physician
include Mongoid::Document
has_many :appointments
end
class Appointment
include Mongoid::Document
belongs_to :physician
belongs_to :patient
end
class Patient
include Mongoid::Document
has_many :appointments
end
With relationships in MongoDB you always have to make a choice between embedded or associated documents. In your model I would guess that MeetingNotes are a good candidate for an embedded relationship.
class Appointment
include Mongoid::Document
embeds_many :meeting_notes
end
class MeetingNote
include Mongoid::Document
embedded_in :appointment
end
This means that you can retrieve the notes together with an appointment all together, whereas you would need multiple queries if this was an association. You just have to bear in mind the 16MB size limit for a single document which might come into play if you have a very large number of meeting notes.
| MongoDB | 7,000,605 | 98 |
What are some GUIs to use with Mongo, and what features do they offer? I'm looking for facts here, not opinions on which interface is best.
| Official List from MongoDB
http://www.mongodb.org/display/DOCS/Admin+UIs
Web Based
For PHP, I'd recommend Rock Mongo. Solid, lots of great features, easy setup.
http://rockmongo.com/
If you don't want to install anything ... you can use MongoHQ's web interface (even if your MongoDB isn't hosted on MongoHQ).
https://mongohq.com/home
Mac OS X
While MongoHub had been a decent option for a while, its bugs make it virtually unusable at this point ...
There is a more up-to-date (and less buggy) fork of the MongoHub project available: https://github.com/fotonauts/MongoHub-Mac you can download a binary here.
Windows
By far, the best UI (for Windows) currently out there is MongoVUE.
http://blog.mongovue.com/
WARNING/UPDATE: MongoVUE seems to be abandoned.
Looks great, lots of features, and if you are new it will really help you get going ...
http://blog.mongovue.com/features/
Here's a Q&A with the author too if you are interested ...
http://learnmongo.com/posts/qa-ishann-kumar-creator-of-mongovue/
| MongoDB | 4,269,688 | 98 |
I am currently trying out this tutorial for node express with mongodb
https://medium.com/@sunnykay/docker-development-workflow-node-express-mongo-4bb3b1f7eb1e
The first part, building from the docker-compose.yml, works fine.
It works totally fine when built locally, so I tried to tag the image and push it to my Docker Hub to learn and try more.
This is originally what's in the yml file, following the tutorial:
version: "2"
services:
web:
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
this works like a charm when I use docker-compose build and docker-compose up
so I tried to push it to my dockerhub and I also tag it as node-test
I then changed the yml file into
version: "2"
services:
web:
image: "et4891/node-test"
volumes:
- ./:/app
ports:
- "3000:3000"
Then I removed all the images I had previously, to make sure this also works... but when I run docker-compose build I see this error message: web uses an image, skipping, and nothing happens.
I tried googling the error but couldn't find much.
Can someone please give me a hand?
| I found out I was being stupid.
I didn't need to run docker-compose build; I can just run docker-compose up directly, since it will pull the image down. The build step is only for building the image locally.
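So the full workflow, using the image name from the question, is roughly:
docker build -t et4891/node-test .
docker push et4891/node-test
docker-compose pull   # fetches the image referenced in docker-compose.yml
docker-compose up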
| MongoDB | 47,615,495 | 97 |
I am using Mongoose aggregation (MongoDB version 3.2).
I have a field users which is an array. I want to $project first item in this array to a new field user.
I tried
{ $project: {
user: '$users[0]',
otherField: 1
}},
{ $project: {
user: '$users.0',
otherField: 1
}},
{ $project: {
user: { $first: '$users'},
otherField: 1
}},
But neither works.
How can I do it correctly? Thanks
| Update:
Starting from v4.4 there is a dedicated operator $first:
{ $project: {
user: { $first: "$users" },
otherField: 1
}},
It's syntactic sugar for the $arrayElemAt expression from the original answer below.
Original answer:
You can use arrayElemAt:
{ $project: {
user: { $arrayElemAt: [ "$users", 0 ] },
otherField: 1
}},
| MongoDB | 39,196,537 | 96 |
| I read the MongoDB documentation and ran a simple test, and this is all I could see:
$push keeps inserting into the array but $addToSet doesn't.
Visually it looks the same to me; I don't know the difference.
Could anybody explain the difference to me?
One more thing: if the answer could be in Spanish or in simple English, I'd appreciate it.
| $addToSet does not add the item to the given field if it already contains it; on the other hand, $push will add the given object to the field whether it exists there or not.
{_id: "docId", items: [1, 2]}
db.items.update({_id:"docId"}, {$addToSet:{items: 2}}); // This won't update the document as it already contains 2
db.items.update({_id:"docId"}, {$push: {items:2}}); // this will update the document. new document {_id: "docId", items:[1,2,2]}
| MongoDB | 27,248,556 | 96 |
I recently installed mongodb-2.6.0 with Homebrew.
After it installed successfully, I tried to connect using the mongo command. I am receiving the following errors, which do not allow me to connect:
Failed to connect to 127.0.0.1:27017, reason: errno:61 Connection refused
Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
| It can happen when the mongodb service is not running on the mac. To start it, I tried
brew services start mongodb
and it worked.
Edit: According to the discussion on this PR on homebrew: https://github.com/Homebrew/homebrew/issues/30628
brew services is deprecated, I looked around on SO and found these answers now answer the question: What is the correct way to start a mongod service on linux / OS X?
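If you prefer not to use brew services, you can also launch it manually against Homebrew's config file (the exact path may vary with your install):
mongod --config /usr/local/etc/mongod.conf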
| MongoDB | 23,418,134 | 96 |
| I am trying to migrate from SQLAlchemy (SQLite) to using MongoDB. I would like schema verification. I am looking at MongoKit, but I want something which is similar to mappers, so that it would save from the object's properties, and not a dict.
I would like a mapper so that I can use existing objects without modifying them.
| Another option is MongoEngine. The ORM for MongoEngine is very similar to the ORM used by Django.
Example (from the tutorial):
class Post(Document):
title = StringField(max_length=120, required=True)
author = ReferenceField(User)
class TextPost(Post):
content = StringField()
class ImagePost(Post):
image_path = StringField()
class LinkPost(Post):
link_url = StringField()
| MongoDB | 2,781,682 | 96 |
I'm used to using relational databases like MySQL or PostgreSQL, and combined with MVC frameworks such as Symfony, RoR or Django, and I think it works great.
But lately I've heard a lot about MongoDB which is a non-relational database, or, to quote the official definition,
a scalable, high-performance, open source, schema-free, document-oriented database.
I'm really interested in being on the cutting edge and want to be aware of all the options I'll have for my next project and choose the best technologies out there.
In which cases using MongoDB (or similar databases) is better than using a "classic" relational databases?
And what are the advantages of MongoDB vs MySQL in general?
Or at least, why is it so different?
If you have pointers to documentation and/or examples, it would be of great help too.
| Here are some of the advantages of MongoDB for building web applications:
A document-based data model. The basic unit of storage is analogous to JSON, Python dictionaries, Ruby hashes, etc. This is a rich data structure capable of holding arrays and other documents. This means you can often represent in a single entity a construct that would require several tables to properly represent in a relational db (see the example document after this list). This is especially useful if your data is immutable.
Deep query-ability. MongoDB supports dynamic queries on documents using a document-based query language that's nearly as powerful as SQL.
No schema migrations. Since MongoDB is schema-free, your code defines your schema.
A clear path to horizontal scalability.
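For illustration, here is a hypothetical blog-post document; in a relational schema the tags and comments would each need their own table:
{
    "title": "My first post",
    "author": "alice",
    "tags": ["mongodb", "nosql"],
    "comments": [
        { "user": "bob", "text": "Nice post!" }
    ]
}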
You'll need to read more about it and play with it to get a better idea. Here's an online demo:
http://try.mongodb.org/
| MongoDB | 2,117,372 | 96 |
I am using mongoose findOneAndUpdate but still getting the error,
DeprecationWarning: collection.findAndModify is deprecated. Use findOneAndUpdate, findOneAndReplace or findOneAndDelete instead.
But I am not even using findAndModify, why is it converting my query to findAndModify?
| You need to set the useFindAndModify option in the query to false, as mentioned in the docs.
(search keyword Currently supported options are)
'useFindAndModify': true by default. Set to false to make
findOneAndUpdate() and findOneAndRemove() use native
findOneAndUpdate() rather than findAndModify().
And if you look at the Mongoose definition file, you can see it mentions that this issues a findAndModify update command.
/**
* Issues a mongodb findAndModify update command.
* Finds a matching document, updates it according to the update arg,
passing any options,
* and returns the found document (if any) to the callback. The query
executes immediately
* if callback is passed else a Query object is returned.
*/
findOneAndUpdate(): DocumentQuery<T | null, T>;
The Mongoose docs (click here) were recently updated for these deprecations, and mention:
Mongoose's findOneAndUpdate() long pre-dates the MongoDB driver's
findOneAndUpdate() function, so it uses the MongoDB driver's
findAndModify() function instead.
There are three (or more) ways to avoid the use of findAndModify:
At Global level: Set the option to false.
// Make Mongoose use `findOneAndUpdate()`. Note that this option is `true`
// by default, you need to set it to false.
mongoose.set('useFindAndModify', false);
At connection level: we can configure using the connection options:
mongoose.connect(uri, { useFindAndModify: false });
At Query level:
await ModelName.findOneAndUpdate({matchQuery},
{$set: updateData}, {useFindAndModify: false});
| MongoDB | 52,572,852 | 95 |
So as you all know, find() returns an array of results, with findOne() returning just a simple object.
With Angular, this makes a huge difference. Instead of going {{myresult[0].name}}, I can simply just write {{myresult.name}}.
I have found that the $lookup method in the aggregate pipeline returns an array of results instead of just a single object.
For example, I have two colletions:
users collection:
[{
"firstName": "John",
"lastName": "Smith",
"country": 123
}, {
"firstName": "Luke",
"lastName": "Jones",
"country": 321
}]
countries collection:
[{
"name": "Australia",
"code": "AU",
"_id": 123
}, {
"name": "New Zealand",
"code": "NZ",
"_id": 321
}]
My aggregate $lookup:
db.users.aggregate([{
$project: {
"fullName": {
$concat: ["$firstName", " ", "$lastName"]
},
"country": "$country"
}
}, {
$lookup: {
from: "countries",
localField: "country",
foreignField: "_id",
as: "country"
}
}])
The results from the query:
[{
"fullName": "John Smith",
"country": [{
"name": "Australia",
"code": "AU",
"_id": 123
}]
}, {
"fullName": "Luke Jones",
"country": [{
"name": "New Zealand",
"code": "NZ",
"_id": 321
}]
}]
As you can see by the above results, each country is an array instead of a single object like "country": {....}.
How can I have my $lookup return a single object instead of an array since it will only ever match a single document?
| You're almost there; you need to add another $project stage to your pipeline and use the $arrayElemAt operator to return the single element in the array.
db.users.aggregate(
[
{ "$project": {
"fullName": {
"$concat": [ "$firstName", " ", "$lastName"]
},
"country": "$country"
}},
{ "$lookup": {
"from": "countries",
"localField": "country",
"foreignField": "_id",
"as": "countryInfo"
}},
{ "$project": {
"fullName": 1,
"country": 1,
"countryInfo": { "$arrayElemAt": [ "$countryInfo", 0 ] }
}}
]
)
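An alternative to the second $project is to $unwind the joined array instead (field names as above):
{ "$unwind": "$countryInfo" }
Just note that, unlike $arrayElemAt, a plain $unwind drops documents whose countryInfo array came back empty, unless you set its preserveNullAndEmptyArrays option.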
| MongoDB | 37,691,727 | 95 |
Let's say I insert the document.
post = { some dictionary }
mongo_id = mycollection.insert(post)
Now, let's say I want to add a field and update it. How do I do that? This doesn't seem to work.....
post = mycollection.find_one({"_id":mongo_id})
post['newfield'] = "abc"
mycollection.save(post)
| In pymongo you can update with:
mycollection.update({'_id':mongo_id}, {"$set": post}, upsert=False)
Upsert parameter will insert instead of updating if the post is not found in the database.
Documentation is available at the MongoDB site.
UPDATE: For version 3+ use update_one instead of update:
mycollection.update_one({'_id':mongo_id}, {"$set": post}, upsert=False)
| MongoDB | 4,372,797 | 95 |
| I have a MongoDB collection with millions of rows and I'm trying to optimize my queries. I'm currently using the aggregation framework to retrieve data and group them as I want. My typical aggregation query is something like: $match > $group > $group > $project
However, I noticed that the last stages only take a few ms; the beginning is the slowest part.
I tried to perform a query with only the $match filter, and then to perform the same query with collection.find. The aggregation query takes ~80ms while the find query takes 0 or 1ms.
I have indexes on pretty much every field, so I guess this isn't the problem. Any idea on what could be going wrong? Or is it just a "normal" drawback of the aggregation framework?
I could use find queries instead of aggregation queries, however I would have to perform a lot of processing after the request and this process can be done quickly with $group etc. so I would rather keep the aggregation framework.
Thanks,
EDIT :
Here is my criteria :
{
"action" : "click",
"timestamp" : {
"$gt" : ISODate("2015-01-01T00:00:00Z"),
"$lt" : ISODate("2015-02-011T00:00:00Z")
},
"itemId" : "5"
}
| The main purpose of the aggregation framework is to ease the query of a big number of entries and generate a low number of results that hold value to you.
As you have said, you can also use multiple find queries, but remember that you can not create new fields with find queries. On the other hand, the $group stage allows you to define your new fields.
If you would like to achieve the functionality of the aggregation framework, you would most likely have to run an initial find (or chain several ones), pull that information and further manipulate it with a programming language.
The aggregation pipeline might seem to take longer, but at least you know you only have to take into account the performance of one system - MongoDB engine.
Whereas, when it comes to manipulating the data returned from a find query, you would most likely have to further manipulate the data with a programming language, thus increasing the complexity depending on the intricacies of the programming language of choice.
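One practical way to check where the time goes is to ask the server for the query plan of the $match stage, e.g. with the criteria from the question:
db.collection.aggregate(
    [{ "$match": {
        "action": "click",
        "timestamp": { "$gt": ISODate("2015-01-01T00:00:00Z"), "$lt": ISODate("2015-02-11T00:00:00Z") },
        "itemId": "5"
    }}],
    { "explain": true }
)
If the plan shows a collection scan rather than one of your indexes, that would explain the gap you measured.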
| MongoDB | 28,364,319 | 94 |
I have this data in mongodb:
{
"name": "Amey",
"country": "India",
"region": "Dhule,Maharashtra"
}
and I want to retrieve the data while passing a field name as a variable in query.
Following does not work:
var name = req.params.name;
var value = req.params.value;
collection.findOne({name: value}, function(err, item) {
res.send(item);
});
How can I query MongoDB keeping both the field name and its value dynamic?
| You need to set the key of the query object dynamically:
var name = req.params.name;
var value = req.params.value;
var query = {};
query[name] = value;
collection.findOne(query, function (err, item) { ... });
When you do {name: value}, the key is the string 'name' and not the value of the variable name.
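If your Node version supports ES2015 computed property names, the same thing can be written inline:
collection.findOne({ [name]: value }, function (err, item) { ... });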
| MongoDB | 17,039,018 | 94 |
I was trying to run MongoDB:
E:\mongo\bin>mongod
mongod --help for help and startup options
Sun Nov 06 18:48:37
Sun Nov 06 18:48:37 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Sun Nov 06 18:48:37
Sun Nov 06 18:48:37 [initandlisten] MongoDB starting : pid=7108 port=27017 dbpath=/data/db 32-bit host=pykhmer-PC
Sun Nov 06 18:48:37 [initandlisten]
Sun Nov 06 18:48:37 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Sun Nov 06 18:48:37 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Sun Nov 06 18:48:37 [initandlisten] ** with --journal, the limit is lower
Sun Nov 06 18:48:37 [initandlisten]
Sun Nov 06 18:48:37 [initandlisten] db version v2.0.1, pdfile version 4.5
Sun Nov 06 18:48:37 [initandlisten] git version: 3a5cf0e2134a830d38d2d1aae7e88cac31bdd684
Sun Nov 06 18:48:37 [initandlisten] build info: windows (5, 1, 2600, 2, 'Service Pack 3') BOOST_LIB_VERSION=1_42
Sun Nov 06 18:48:37 [initandlisten] options: {}
Sun Nov 06 18:48:37 [initandlisten] exception in initAndListen: 10296 dbpath (/data/db) does not exist, terminating
Sun Nov 06 18:48:37 dbexit:
Sun Nov 06 18:48:37 [initandlisten] shutdown: going to close listening sockets...
Sun Nov 06 18:48:37 [initandlisten] shutdown: going to flush diaglog...
Sun Nov 06 18:48:37 [initandlisten] shutdown: going to close sockets...
Sun Nov 06 18:48:37 [initandlisten] shutdown: waiting for fs preallocator...
Sun Nov 06 18:48:37 [initandlisten] shutdown: closing all files...
Sun Nov 06 18:48:37 [initandlisten] closeAllFiles() finished
Sun Nov 06 18:48:37 dbexit: really exiting now
E:\mongo\bin>mongo
MongoDB shell version: 2.0.1
connecting to: test
Sun Nov 06 18:48:42 Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84
exception: connect failed
E:\mongo>ls
GNU-AGPL-3.0 README THIRD-PARTY-NOTICES bin data
I was looking at http://www.mongodb.org/display/DOCS/Quickstart+Windows
and following the instructions.
Could anyone tell me what is the problem with running MongoDB (I am using Windows 7)?
| After installing MongoDB, you should manually create the data folder.
By default MongoDB will store data in /data/db,
but it won't automatically create that directory. To create it, do:
$ sudo mkdir -p /data/db/
$ sudo chown `id -u` /data/db
You can also tell MongoDB to use a different data directory,
with the --dbpath option.
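Since the question is on Windows (and the directory listing shows an existing E:\mongo\data folder), the equivalent steps would be something like:
md \data\db
rem ...or point mongod at the folder you already have:
E:\mongo\bin\mongod --dbpath E:\mongo\data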
For more detailed information go to MongoDB wiki page.
| MongoDB | 8,029,064 | 94 |
| I'm starting a hobby (non-revenue) project using Ruby on Rails. I've done a fair amount of development in Rails using PostgreSQL, and I can make a pretty good imitation of a normalized schema. However, MongoDB looks shiny and new. What better for trying out something new than a hobby project?
Think back to when you started using Mongodb. What techniques did you learn later that made you say, "If only I knew that when I started!" What plug-ins did you discover that you would have used right from the start, if only you had known? What references would you like to have had bookmarked?
| I would definitely second the recommendation of MongoMapper if you're going to be using MongoDB with Rails. I will warn you, however, that there is (so far) no documentation other than a couple blog posts. If you're not comfortable digging into the source code to see how things work, it's probably not for you yet.
If you're working outside of Rails, I'd recommend staying away from MongoMapper. Because it works MongoDB into something similar to what we expect from a SQL-backed ORM, it doesn't really give you a good idea of the power of, and the different thinking behind, MongoDB. Spend some time playing around with the lower-level Ruby driver, and even in the JavaScript console.
The other thing I'd recommend, especially since you mentioned knowing how to normalize a schema, is not to think of MongoDB as a database for now. The way you organize your data in MongoDB is very different than with a relational database. Try to think about it more as a place to store and retrieve Ruby hashes. You can do some relational things with MongoDB, but I'd recommend sticking with only self-contained documents while you're trying to wrap your head around NoSQL.
As for what links you should look at, I'd highly recommend reading through everything you can on the MongoDB site. Their documentation is very good. Particularly, take a look at the advanced queries, multikey indexes, and MapReduce to get an idea of some of the unique advantages and strengths of a NoSQL database.
| MongoDB | 2,124,274 | 94 |
I'm working on a query to find cities with most zips for each state:
db.zips.distinct("state", db.zips.aggregate([
{ $group:
{ _id: {
state: "$state",
city: "$city"
},
numberOfzipcodes: {
$sum: 1
}
}
},
{ $sort: {
numberOfzipcodes: -1
}
}
])
)
The aggregate part of the query seems to work fine, but when I add the distinct I get an empty result.
Is this because I have state in the _id? Can I do something like distinct("_id.state")?
| You can use $addToSet with the aggregation framework to count distinct objects.
For example:
db.collectionName.aggregate([{
$group: {_id: null, uniqueValues: {$addToSet: "$fieldName"}}
}])
Or extended to get your unique values into a proper list rather than a sub-document inside a null _id record:
db.collectionName.aggregate([
{ $group: {_id: null, myFieldName: {$addToSet: "$myFieldName"}}},
{ $unwind: "$myFieldName" },
{ $project: { _id: 0 }},
])
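To get back to the original goal (the city with the most zips for each state), one possible pipeline sorts and then takes the $first entry per state:
db.zips.aggregate([
    { $group: { _id: { state: "$state", city: "$city" }, numberOfzipcodes: { $sum: 1 } } },
    { $sort: { "_id.state": 1, numberOfzipcodes: -1 } },
    { $group: { _id: "$_id.state", city: { $first: "$_id.city" }, numberOfzipcodes: { $first: "$numberOfzipcodes" } } }
])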
| MongoDB | 16,368,638 | 93 |
I have a process that returns a list of String MongoDB ids,
[512d5793abb900bf3e20d012, 512d5793abb900bf3e20d011]
And I want to fire a single query to Mongo and get the matching documents back in the same order as the list.
What is the shell notation to do this?
| After converting the strings into ObjectIds, you can use the $in operator to get the docs in the list. There isn't any query notation to get the docs back in the order of your list, but see here for some ways to handle that.
var ids = ['512d5793abb900bf3e20d012', '512d5793abb900bf3e20d011'];
var obj_ids = ids.map(function(id) { return ObjectId(id); });
db.test.find({_id: {$in: obj_ids}});
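If you need the documents back in the same order as the input list, one sketch is to reorder them client-side after the fetch (in the shell, ObjectId.valueOf() yields the hex string):
var docs = db.test.find({_id: {$in: obj_ids}}).toArray();
docs.sort(function (a, b) {
    return ids.indexOf(a._id.valueOf()) - ids.indexOf(b._id.valueOf());
});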
| MongoDB | 15,102,532 | 93 |
Replication seems to be a lot simpler than sharding, unless I am missing the benefits of what sharding is actually trying to achieve. Don't they both provide horizontal scaling?
| In the context of scaling MongoDB:
replication creates additional copies of the data and allows for automatic failover to another node. Replication may help with horizontal scaling of reads if you are OK to read data that potentially isn't the latest.
sharding allows for horizontal scaling of data writes by partitioning data across multiple servers using a shard key. It's important to choose a good shard key. For example, a poor choice of shard key could lead to "hot spots" of data only being written on a single shard.
A sharded environment does add more complexity because MongoDB now has to manage distributing data and requests between shards -- additional configuration and routing processes are added to manage those aspects.
Replication and sharding are typically combined to created a sharded cluster where each shard is supported by a replica set.
From a client application point of view you also have some control in relation to the replication/sharding interaction, in particular:
Read preferences
Write concerns
| MongoDB | 11,571,273 | 93 |
Is there a super UNIX like "root" user for MongoDB? I've been looking at http://docs.mongodb.org/manual/reference/user-privileges/ and have tried many combinations, but they all seem to lack in an area or another. Surely there is a role that is above all the ones listed there.
| The best superuser role would be root. The syntax is:
use admin
db.createUser(
{
user: "root",
pwd: "password",
roles: [ "root" ]
})
For more details look at built-in roles.
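You can then authenticate as that user, for example from the command line:
mongo -u root -p password --authenticationDatabase admin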
| MongoDB | 20,117,104 | 92 |
How would I mock out the database in my node.js application, which in this case uses mongodb as the backend for a blog REST API ?
Sure, I could point the application at a specific testing database, but I would still be saving data and testing not only my code but also the database, so I wouldn't actually be doing unit testing but integration testing.
So what should one do? Create database wrappers as a middle layer between application and db and replace the DAL when in testing?
// app.js
var express = require('express');
app = express(),
mongo = require('mongoskin'),
db = mongo.db('localhost:27017/test?auto_reconnect');
app.get('/posts/:slug', function(req, res){
db.collection('posts').findOne({slug: req.params.slug}, function (err, post) {
res.send(JSON.stringify(post), 200);
});
});
app.listen(3000);
// test.js
r = require('requestah')(3000);
describe("Does some testing", function() {
it("Fetches a blogpost by slug", function(done) {
r.get("/posts/aslug", function(res) {
expect(res.statusCode).to.equal(200);
expect(JSON.parse(res.body)["title"]).to.not.equal(null);
return done();
});
});
));
| I don't think database related code can be properly tested without testing it with the database software. That's because the code you're testing is not just javascript but also the database query string. Even though in your case the queries look simple you can't rely on it being that way forever.
So any database emulation layer will necessarily implement the entire database (minus disk storage perhaps). By then you end up doing integration testing with the database emulator even though you call it unit testing. Another downside is that the database emulator may end up having a different set of bugs compared to the database and you may end up having to code for both the database emulator and the database (kind of like the situation with IE vs Firefox vs Chrome etc.).
Therefore, in my opinion, the only way to correctly test your code is to interface it with the real database.
| MongoDB | 12,526,160 | 92 |
We all know that Meteor offers the miniMongo driver which seamlessly allows the client to access the persistent layer (MongoDB).
If any client can access the persistent API how does one secure his application?
What are the security mechanisms that Meteor provides and in what context should they be used?
| When you create a app using meteor command, by default the app includes the following packages:
AUTOPUBLISH
INSECURE
Together, these mimic the effect of each client having full read/write access to the server's database. These are useful prototyping tools (development purposes only), but typically not appropriate for production applications. When you're ready for production release, just remove these packages.
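Removing them is a one-liner each:
meteor remove insecure
meteor remove autopublish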
To add more, Meteor supports Facebook / Twitter / and many more packages to handle authentication, and the coolest is the Accounts-UI package
| MongoDB | 10,099,843 | 92 |
I am working with the new official MongoDB driver for Golang. I have created one complex query to insert data into MongoDB and then sort it according to an element's value. I am using a filter in which I have created the BSON type using:
filter := bson.D{{"autorefid", "100"}}
But it is showing a warning saying:
primitive.E composite literal uses unkeyed fields
The warnings are creating a mess in my code.
| The warnings can be stopped by setting the check flag to false.
$ go doc cmd/vet
By default all checks are performed. If any flags are explicitly set to true, only those tests are run. Conversely, if any flag is
explicitly set to false, only those tests are disabled. Thus
-printf=true runs the printf check, -printf=false runs all checks except the printf check.
Unkeyed composite literals
Flag: -composites
Composite struct literals that do not use the field-keyed syntax.
But the warning is due to not providing the key names when setting the values in the primitive.E struct.
Setting keys for the primitive.E struct will remove the warning messages. For example
filter := bson.D{primitive.E{Key: "autorefid", Value: "100"}}
Package primitive contains types similar to Go primitives for BSON
types that do not have direct Go primitive representations.
type E struct {
Key string
Value interface{}
}
E represents a BSON element for a D. It is usually used inside a D.
For more information have a look at primitive.E
| MongoDB | 54,548,441 | 91 |
I'm new to Node.js; I started learning by following a tutorial on YouTube, and everything went well until I added the MongoDB connect function,
mongo.connect("mongodb://localhost:27017/mydb")
When I run my code in cmd (node start-app), I get the following error:
MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]
Could someone explain which step I missed?
my code :
var express = require("express");
var MongoClient = require('mongodb');
var url = "mongodb://localhost:27017/mydb";
var webService = require("./webService");
var server = express();
MongoClient.connect(url, function (err, db) {
if (err) throw err;
console.log("Database created!");
db.close();
});
server.use(express.urlencoded({ extended: true }));
server.set('views', __dirname);
server.get('/', function (request, response) {
response.sendFile(__dirname + '/MainPage.html');
});
server.get('/Sign', function (request, response) {
response.render(__dirname + '/Sign.ejs');
});
server.post("/signUp", webService.signUp);
server.post("/createUser", webService.createUser);
server.listen(5500);
| You have to install the MongoDB database server on your system first and start it.
Use the below link to install MongoDB
https://docs.mongodb.com/manual/installation/
If you have installed MongoDB, check which state the server is in (started/stopped). Try to connect through the mongo shell client.
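For example, on a typical Linux install (the service name may differ):
sudo service mongod status
sudo service mongod start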
| MongoDB | 50,173,080 | 91 |
| I'm trying to develop a class on top of Mongoose with my custom methods, so I extended Mongoose with my own class; when I invoke the method to create a new car it works, but it trips the warning below. Here I'll let you see what I'm trying to do.
I'm getting this warning
(node:3341) DeprecationWarning: Mongoose: mpromise (mongoose's default promise library) is deprecated, plug in your own promise library instead: http://mongoosejs.com/docs/promises.html
after I do
driver.createCar({
carName: 'jeep',
availableSeats: 4,
}, callback);
driver is an instance of Driver class
const carSchema = new Schema({
carName: String,
availableSeats: Number,
createdOn: { type: Date, default: Date.now },
});
const driverSchema = new Schema({
email: String,
name: String,
city: String,
phoneNumber: String,
cars: [carSchema],
userId: {
type: Schema.Types.ObjectId,
required: true,
},
createdOn: { type: Date, default: Date.now },
});
const DriverModel = mongoose.model('Driver', driverSchema);
class Driver extends DriverModel {
getCurrentDate() {
return moment().format();
}
create(cb) {
// save driver
this.createdOn = this.getCurrentDate();
this.save(cb);
}
remove(cb) {
super.remove({
_id: this._id,
}, cb);
}
createCar(carData, cb) {
this.cars.push(carData);
this.save(cb);
}
getCars() {
return this.cars;
}
}
Any thoughts about what I'm doing wrong?
| Here's what worked for me to clear up the issue, after reading docs:
http://mongoosejs.com/docs/promises.html
The example in the doc is using the bluebird promise library but I chose to go with native ES6 promises.
In the file where I'm calling mongoose.connect:
mongoose.Promise = global.Promise;
mongoose.connect('mongodb://10.7.0.3:27107/data/db');
[EDIT: Thanks to @SylonZero for bringing up a performance flaw in my answer. Since this answer is so greatly viewed, I feel a sense of duty to make this edit and to encourage the use of bluebird instead of native promises. Please read the answer below this one for more educated and experienced details. ]
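With bluebird, the equivalent configuration from the Mongoose docs is:
mongoose.Promise = require('bluebird');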
| MongoDB | 38,138,445 | 91 |
How do I truncate a collection in MongoDB or is there such a thing?
Right now I have to delete 6 large collections all at once, and I'm stopping the server, deleting the database files, and then recreating the database and the collections in it. Is there a way to delete the data and leave the collections as they are? The delete operation takes a very long time. I have millions of entries in the collections.
| To truncate a collection and keep the indexes use
db.<collection>.remove({})
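In newer shells and drivers, where remove() is deprecated, the equivalent is:
db.<collection>.deleteMany({})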
| MongoDB | 16,493,902 | 91 |
Simple question, do arrays keep their order when stored in MongoDB?
| yep MongoDB keeps the order of the array.. just like Javascript engines..
| MongoDB | 9,013,916 | 91 |
I am running a web server on node the code for which is given below
var restify = require('restify');
var server = restify.createServer();
var quotes = [
{ author : 'Audrey Hepburn', text : "Nothing is impossible, the word itself says 'I'm possible'!"},
{ author : 'Walt Disney', text : "You may not realize it when it happens, but a kick in the teeth may be the best thing in the world for you"},
{ author : 'Unknown', text : "Even the greatest was once a beginner. Don't be afraid to take that first step."},
{ author : 'Neale Donald Walsch', text : "You are afraid to die, and you're afraid to live. What a way to exist."}
];
server.get('/', function(req, res) {
res.json(quotes);
});
server.get('/quote/random', function(req, res) {
var id = Math.floor(Math.random() * quotes.length);
var q = quotes[id];
res.json(q);
});
server.get('/quote/:id', function(req, res) {
if(quotes.length <= req.params.id || req.params.id < 0) {
res.statusCode = 404;
return res.send('Error 404: No quote found');
}
var q = quotes[req.params.id];
res.json(q);
});
server.listen(process.env.PORT || 3011);
And then i want to do a get request in the following code
var https = require('http');
/**
* HOW TO Make an HTTP Call - GET
*/
// options for GET
var optionsget = {
host : 'http://localhost',
port : 3010,
path : '/quote/random', // the rest of the url with parameters if needed
method : 'GET' // do GET
};
console.info('Options prepared:');
console.info(optionsget);
console.info('Do the GET call');
// do the GET request
var reqGet = https.request(optionsget, function(res) {
console.log("statusCode: ", res.statusCode);
// uncomment it for header details
// console.log("headers: ", res.headers);
res.on('data', function(d) {
console.info('GET result:\n');
process.stdout.write(d);
console.info('\n\nCall completed');
});
});
reqGet.end();
reqGet.on('error', function(e) {
console.error(e);
});
I am just starting with Node and I don't even know if this is the right way.
I want to test the performance of Express and Restify. I have done an Apache benchmark test for the server code I wrote and found the contradictory result that Restify is better. So I want to test out some more by making calls to remote services and then later reading from and writing to MongoDB. The above code is my starting point. I am getting the error
{ [Error: getaddrinfo ENOTFOUND] code: 'ENOTFOUND', errno: 'ENOTFOUND', syscall: 'getaddrinfo' }
Am I at least heading in the right direction? What is the right way to do the kind of tests that I want? Why did I get the result that Restify is faster than Express? Can anyone guide me to the best starting-point tutorials for applications in Node/Express/Backbone and MongoDB?
| getaddrinfo ENOTFOUND means the client was not able to resolve the given address (the hostname lookup failed).
Please try specifying the host without the http:// prefix:
var optionsget = {
host : 'localhost',
port : 3010,
path : '/quote/random', // the rest of the url with parameters if needed
method : 'GET' // do GET
};
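Also note that in the code from the question the server listens on port 3011 (process.env.PORT || 3011) while the client connects to port 3010. With the host fixed and the ports matching, a minimal call would be:
var http = require('http');
http.get({ host: 'localhost', port: 3011, path: '/quote/random' }, function (res) {
    res.on('data', function (d) {
        process.stdout.write(d);
    });
});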
Regarding learning resources, you won't go wrong if you start with http://www.nodebeginner.org/ and then go through a good book to get more in-depth knowledge; I recommend Professional Node.js, but there are many out there.
| MongoDB | 23,259,697 | 90 |
For some collection with a field { wins: Number }, how could I use MongoDB Aggregation Framework to get the total number of wins across all documents in a collection?
Example:
If I have 3 documents with wins: 5, wins: 8, wins: 12 respectively, how could I use MongoDB Aggregation Framework to return the total number, i.e. total: 25.
| Sum
To get the sum of a grouped field when using the Aggregation Framework of MongoDB, you'll need to use $group and $sum:
db.characters.aggregate([ {
$group: {
_id: null,
total: {
$sum: "$wins"
}
}
} ] )
In this case, if you want to get the sum of all of the wins, you need to refer to the field name using the $ syntax as $wins which just fetches the values of the wins field from the grouped documents and sums them together.
Count
You can sum other values as well by passing in a specific value (as you'd done in your comment). If you had
{ "$sum" : 1 },
that would actually be a count of all of the wins, rather than a total.
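So a plain count of the documents would be:
db.characters.aggregate([ {
    $group: {
        _id: null,
        count: {
            $sum: 1
        }
    }
} ] )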
| MongoDB | 17,044,587 | 90 |
Is there a way to store Enums as string names rather than ordinal values?
Example:
Imagine I've got this enum:
public enum Gender
{
Female,
Male
}
Now if some imaginary User exists with
...
Gender gender = Gender.Male;
...
it'll be stored in the MongoDB database as { ... "Gender" : 1 ... }
but i'd prefer something like this { ... "Gender" : "Male" ... }
Is this possible? Custom mapping, reflection tricks, whatever.
My context: I use strongly typed collections over POCOs (well, I mark ARs and use polymorphism occasionally). I've got a thin data-access abstraction layer in the form of a Unit of Work. So I'm not serializing/deserializing each object, but I can (and do) define some ClassMaps. I use the official MongoDB driver + fluent-mongodb.
| using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
public class Person
{
[JsonConverter(typeof(StringEnumConverter))] // JSON.Net
[BsonRepresentation(BsonType.String)] // Mongo
public Gender Gender { get; set; }
}
| MongoDB | 6,996,399 | 90 |
I have a problem where I want to be able to get all the unique cities for a collection, and my code looks something like this:
var mongoose = require("mongoose"),
Schema = mongoose.Schema;
var PersonSchema = new Schema({
name: String,
born_in_city: String
});
var Person = mongoose.model('Person', PersonSchema);
In native MongoDb I could just do db.person.distinct("born_in_city"), but there doesn't seem to be anything equivalent for Mongoose. Is the only option to iterate over all of the documents myself to do this, or is there a better solution?
To use the underlying node-mongodb-native as suggested by the answerer, I attempted to do this:
mongoose.connection.db.collections(function(err, collections){
collections[0].distinct('born_in_city', function( err, results ){
console.log( err, results );
});
});
However, the results are empty and there's no error. I would also prefer to be able to fetch only the needed collection by name rather than having to filter what collections() returns, if at all possible.
| Just to give an update for Mongoose 3.x:
MyModel.find().distinct('_id', function(error, ids) {
// ids is an array of all ObjectIds
});
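Applied to the model from the question:
Person.distinct('born_in_city', function (error, cities) {
    // cities is an array of the unique born_in_city values
});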
| MongoDB | 6,043,847 | 90 |