question | answer | tag | question_id | score
---|---|---|---|---
I want to use a graph database for a web application (involving a web of Users, Posts, Comments, Votes, Answers, Documents and Document-Merges and some other transitive relationships on Users and Documents). So I started asking myself whether there is something like a design methodology for graph databases, i.e. a kind of analogue to the design principles recommended for relational databases (like the normal forms)?
Example questions (of many questions arising):
Is it a good idea to create a top node Users that has a relationship ("exists") to every User node in the database?
Is it a good idea to build in version management, i.e. create relationships (something like "follows") pointing to updated versions of a Document/Post, such that walking back along this relationship shows the changes the document went through?
etc...
So, do we need a Graph Database Design Cookbook?
| The Gremlin User Group (http://tinkerpop.com/) and Neo4j User Group (https://groups.google.com/forum/?fromgroups#!forum/neo4j) are good places to discuss graph-database modeling.
You can create supernodes such as "Users," but it may be better and more performant to use indexes and create an index entry for each user with a key=element_type, value="user", id=user_node_id.
A "follows" relation is often used for people/friends, as on Facebook and Twitter, so I wouldn't use that name for versioning. You can build a versioning system into Neo4j that timestamps each entry and uses a last-write-wins algorithm, and there are other database systems, like Datomic, that have this built in.
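For illustration, here is a minimal sketch of such a timestamped version chain using the official Neo4j Python driver (the connection details, labels and relationship names are assumptions for illustration, not an established pattern):
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))  # assumed local instance

def add_version(prev_id, new_id, content):
    # Each update creates a new Version node pointing back to the previous
    # one, so walking PREVIOUS edges replays the document's history.
    with driver.session() as session:
        session.run(
            "MATCH (prev:Version {id: $prev_id}) "
            "CREATE (v:Version {id: $new_id, content: $content, ts: timestamp()}) "
            "CREATE (v)-[:PREVIOUS]->(prev)",
            prev_id=prev_id, new_id=new_id, content=content,
        )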
See Lightbulb's model (https://github.com/espeed/lightbulb/blob/master/lightbulb/model.py) for an example blog model in Bulbs/Python (http://bulbflow.com).
| Neo4j | 10,753,331 | 12 |
I have several data sheets in the total size of 40G and would like to represent it in a graph (there could be several nodes per row, and nodes will contain most of the data in the row either in labels or properties).
Could Neo4J handle this? What is the largest DB size (quantity of nodes, size on disk, etc. ) tested so far?
| There are several installations with over 1B-2B relationships. Capacity-wise, http://docs.neo4j.org/chunked/snapshot/capabilities-capacity.html lists the current maximums.
| Neo4j | 8,781,791 | 12 |
I know that there are similar questions around on Stackoverflow but I don't feel they answer the following.
Graph Databases to my understanding store data following mostly this schema:
Table/Collection 1: store nodes with UID
Table/Collection 2: store relations referencing nodes via UID
This allows storing arbitrary types of graphs. Now as I understand triple stores store nothing but triples:
Triple/Collection 1: store triples (2 nodes, 1 relation)
Now I would see the following distinction regarding use cases:
Graph Databases: when you have known, static connections
Triple Stores: when you have loosely connected nodes and are often looking for new connections
I am confused by the fact that people do not seem to be discussing which one to use according to these criteria. Most articles I find talk about arguments like speed or compatibility. But is this not the most relevant point?
Put the other way round:
Imagine having a clearly connected, user-defined graph. Why on earth would you want to store that as triples only, losing all the info about the connections? Or you would have to implement some custom solution storing IDs in the triple subject.
Imagine having loosely collected nodes that you want to query for unknown relations using SPARQL. Graph databases do support that, but I assume they have to build another index for this and would be slower?
EDIT:
I see that "losing info about connections" is the wrong way to put it. If you do as shown in the accepted answer and insert several triples for 2 nodes + 1 relation, then you keep all the info, specifically which exact nodes are connected.
| The main difference between graph databases and triple stores is how they model the graph. In a triple store (or quad store), the data tends to be very atomic. What I mean is that the "nodes" in the graph tend to be primitive data types like string, integer, date, etc. Relationships link primitives together, and so the "unit of discourse" in a triple store is a triple, and not a node or a relationship, typically.
By contrast, other graph databases are often called "property stores" because nodes are data containers that correspond to objects in a domain. A node stands in for an object, and has properties; they act as rich data types specified by the graph modelers, more than just primitive data types. In these graph databases, nodes and relationships are the "unit of discourse".
Let's say I have a person named "Bob" who knows "Susan". In RDF, it would be something like this:
<http://example.org/person/1> :hasName "Bob".
<http://example.org/person/1> foaf:knows <http://example.org/person/2>.
<http://example.org/person/2> :hasName "Susan".
In a graph database like neo4j, it would be this:
(a:Person {name: "Bob"})-[:KNOWS]->(b:Person {name: "Susan"})
Notice that in RDF, it's 3 relationships but only one of those relationships actually expresses semantics between two entities. The other two relationships are just tracking properties of a single higher-level entity (the person). In neo4j, it's 1 relationship between two nodes, with each node having a property. In RDF you'll tend to identify things by URI; in neo4j it's a database object that gets a database ID automatically. That's what I mean about the difference between a more atomic/primitive store (triple stores) and a richer property graph.
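To make that atomicity concrete, here is the same Bob/Susan data built and queried in Python with rdflib (a sketch; the hasName predicate URI is made up to mirror the triples above):
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/person/")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")
HAS_NAME = URIRef("http://example.org/hasName")

g = Graph()
g.add((EX["1"], HAS_NAME, Literal("Bob")))   # one triple per property...
g.add((EX["1"], FOAF.knows, EX["2"]))        # ...and one for the actual relationship
g.add((EX["2"], HAS_NAME, Literal("Susan")))

# A SPARQL query has to reassemble the "rich" view from the atomic triples.
q = """SELECT ?name ?friend WHERE {
    ?p <http://example.org/hasName> ?name .
    ?p <http://xmlns.com/foaf/0.1/knows> ?f .
    ?f <http://example.org/hasName> ?friend .
}"""
for row in g.query(q):
    print(row.name, "knows", row.friend)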
RDF and triple stores are mostly built for the kinds of architectural challenges you'd run into with the semantic web. For example, XML namespacing is built in, on the architectural assumption that you'll be mixing and matching the use of many different vocabularies and namespaces. (That right there is a very "semantic web" assumption.) So in SPARQL and RDF you'll typically see at least the xsd, rdf, and rdfs namespaces used concurrently, and probably also owl, skos, and many others. SPARQL and RDF/RDFS also have many hooks and features that are there explicitly to make things like ontology inference easier. You'll tend to identify things with URIs as a way of "namespacing your identifiers" but also because some people may want to de-reference the URI... again the assumption here is a wide data-sharing arrangement between many parties.
Property stores, by contrast, are keyed towards different use cases, like flexible modeling of data within one model/namespace, mappings between objects and graphs for persistence of enterprise applications, rapid evolvability, and so on. You'll tend to identify things with your own scheme (or an internal database ID). An auto-incrementing integer may not be the best form of ID for any random consumer on the web (and it certainly can't be de-referenced like a URL), but it might well be fine for a company-internal application.
So which is better? The more atomic triple store format, or a rich property graph? Do you need to mix and match many different vocabularies in one query or data model? Do you need to create an OWL ontology or do inference? Do you need to serialize a bunch of java objects in memory to a database? Do you need to do fast traversal of long paths? Those types of questions would guide your selection.
Graphs are graphs, both of them do graphs, and so I don't think there's much difference in terms of what they can represent, or how you go about thinking about a problem in "graph terms". The differences boil down to the architecture under the hood, and what sorts of use cases you think you'll need. I won't tell you one is better than the other, but choose wisely.
| OrientDB | 30,166,007 | 81 |
I am currently in the design phase of an MMO browser game. The game will include tilemaps for some real-time locations (so tile data for each cell) and a general world map. The game engine I prefer uses MongoDB for its persistent data world.
I will also implement a shipping simulation (which I will explain more below), which is basically a Dijkstra module. I had decided to use a graph database hoping it would make things easier, and found Neo4j, as it is quite popular.
I was happy with the MongoDB + Neo4j setup, but then noticed OrientDB, which apparently acts like both MongoDB and Neo4j (best of both worlds?); they even have "versus" pages for MongoDB and Neo4j.
Point is, I heard some horror stories of MongoDB losing data (though I'm not sure it still does) and I don't have such luxury. And for Neo4j, I am not a big fan of the 12K€-per-year "startup friendly" cost, although I'll probably not have a DB of millions of vertexes. OrientDB seems a viable option, as there may also be an opportunity to use a single database solution.
In that case, a logical move might be jumping to OrientDB, but it has a small community and, to be honest, I didn't find many reviews about it. MongoDB and Neo4j are popular, widely used tools; I have concerns that OrientDB might be an adventure.
My first question would be if you have any experience/opinion regarding these databases.
And the second question would be which graph database is better for a shipping simulation. The database is expected to calculate the cheapest route from any vertex to any vertex and traverse it (classic Dijkstra), but also to change weights depending on situations like "country B has an embargo on country A, so any item originating from country A can't pass through B" or "there is a flood at region XYZ, so no land transport is possible", etc. The database is also expected to cache results. I expect no more than 1000 vertexes but many edges.
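For illustration, such situation-dependent weights can live in a plain cost function fed to Dijkstra; a minimal database-independent Python sketch (the embargo rule and edge metadata are hypothetical):
import heapq

def cheapest_route(graph, start, goal, edge_cost):
    # graph: {node: [(neighbor, base_cost, meta), ...]}
    # edge_cost: returns the effective cost, or None if the edge is closed.
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, base, meta in graph.get(node, []):
            cost = edge_cost(base, meta)
            if cost is None:
                continue  # e.g. an embargo or a flooded region closes this edge
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None

# Hypothetical rule: items originating from country A cannot pass through B.
def embargo_cost(base, meta):
    if meta.get("country") == "B" and meta.get("origin") == "A":
        return None
    return base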
Thanks in advance and apologies in advance if questions are a bit ambiguous
PS: I added ArangoDB to the title but, to be honest, I haven't had much chance to take a look at it.
Late edit as of 18-Apr-2016: After evaluating the responses to my questions and our development strategy, I decided to use ArangoDB, as their roadmap is more promising for me: they are apparently not trying to add tons of half-baked hype features.
| Disclaimer: I am the author and owner of OrientDB.
As a developer, in general, I don't like companies that hide costs and let you play with their technology for a while, and as soon as you're tied to it, start asking for money. Once you have invested months developing an application against a non-standard language or API, you're stuck: pay, or migrate the application at huge cost.
You know, OrientDB is FREE for any usage, even commercial. Furthermore, OrientDB supports standards like SQL (with extensions), and its main Java API is TinkerPop Blueprints, the "JDBC" standard for graph databases. OrientDB also supports Gremlin.
The OrientDB project is growing every day, with new contributors and users. The Community Group (a free channel to ask for support) is the most active community in the graph-DB market.
If you have doubts about which graph DB to use, my suggestion is to pick what is closest to your needs, but then use standards as much as you can. That way an eventual switch will have a low impact.
| OrientDB | 26,704,134 | 49 |
I am looking to dip my hands into the world of Multi-Model DBMS, I have no particular use cases, just want to start learning.
I find that there are two prominent ones - OrientDB vs ArangoDB - but I was unable to find any meaningful, unopinionated comparison between them. Can someone shed some light on the differences in features between the two, and any caveats in using one over the other? If I learn one, would I be able to easily transition to the other?
(I tagged FoundationDB as well, but it is proprietary and I probably won't consider it)
This question asks for a general comparison between OrientDB vs ArangoDB for someone looking to learn about Multi-model DBMS, and not an opinionated answer about which is better.
| Disclaimer: I would no longer recommend OrientDB, see my comments below.
I can provide a slightly less biased opinion, having used both ArangoDB and OrientDB. It's still biased as I'm the author of OrientDB's node.js driver - oriento but I don't have a vested interest in either company or product, I've just necessarily used OrientDB more.
ArangoDB and OrientDB are both targeting a similar market and have a lot of similarities:
Both are multi-model, you can use them to store documents, graphs and simple key / values.
Both have support for Gremlin, but it's firmly a second class citizen compared to their own preferred query languages.
Both support server-side "stored procedures" in JavaScript. In both systems this comes via a slightly less than idiomatic JavaScript API, although ArangoDB's is a lot better. This is getting fixed in a forthcoming version of OrientDB.
Both offer REST APIs, both aim to be usable as an "API Server" via JavaScript request handlers. This is a lot more practical in ArangoDB than OrientDB.
Both are distributed under a permissive license.
Both are ACID and have transaction support, but in both the transactions are server-side operations - they're more like atomic batches of commands rather than the kinds of transactions you might be used to in a traditional RDBMS.
However, there are a lot of differences:
ArangoDB has no concept of "links", which are a very useful feature in OrientDB. They allow unidirectional relationships (just like a hyperlink on the web), without the overhead of edges.
ArangoDB is written in C++ (and JavaScript), whereas OrientDB is written in Java. Both have their advantages:
Being written in C++ means ArangoDB uses V8, the same high performance JavaScript engine that powers node.js and Google Chrome. Whereas being written in Java means OrientDB uses Nashorn, which is still fast but not the fastest. This means that ArangoDB can offer a greater level of compatibility with the node.js ecosystem compared to OrientDB.
Being written in Java means that OrientDB runs on more platforms, including e.g. Raspberry PI. It also means that OrientDB can leverage a lot of other technologies written in Java, e.g. OrientDB has superb full text / geospatial search support via Lucene, which is not available to ArangoDB.
OrientDB uses a dialect of SQL as its query language, whereas ArangoDB uses its own custom language called AQL. In theory, AQL is better because it's designed explicitly for the problem; in practice, though, it feels quite similar to SQL but with different keywords, and it is yet another language to learn, while OrientDB's implementation feels a lot more comfortable if you're used to SQL. SQL is declarative whereas AQL is imperative - YMMV here (see the short comparison after this list).
ArangoDB is a "mostly-memory" database, it works best when most of your data fits in RAM. This may or may not be suitable for your needs. OrientDB doesn't have this restriction (but also loves RAM).
OrientDB is fully object oriented - it supports classes with properties and inheritance. This is exceptionally useful because it means that your database structure can map 1-1 to your application structure, with no need for ugly hacks like ActiveRecord. ArangoDB supports something fairly similar via models in Foxx, but it's more like an optional addon rather than a core part of how the database works.
ArangoDB offers a lot of flexibility via Foxx, but it has not been designed by people with strong server-side JS backgrounds and reinvents the wheel a lot of the time. Rather than leveraging frameworks like express for their request handling, they created their own clone of Sinatra, which of course makes it almost the same as express (express is also a Sinatra clone), but subtly different, and means that none of express's middleware or plugins can be reused. Similarly, they embed V8, but not libuv, which means they do not offer the same non blocking APIs as node.js and therefore users cannot be sure about whether a given npm module will work there. This means that non trivial applications cannot use ArangoDB as a replacement for the backend, which negates a lot of the potential usefulness of Foxx.
OrientDB supports first class property level and database level indices. You can query and insert into specific indexes directly for maximum efficiency. I've not seen support for this in ArangoDB.
OrientDB is the more established option, with many high profile users. ArangoDB is newer, less well known, but growing fast.
ArangoDB's documentation is excellent, and they offer official drivers for many different programming languages. OrientDB's documentation is not quite as good, and while there are drivers for most platforms, they're community powered and therefore not always kept up to date with bleeding edge OrientDB features.
If you're using Java (or a Java bridge), you can embed OrientDB directly within your application, as a library. This use case is not possible in ArangoDB.
OrientDB has the concept of users and roles, as well as Record Level Security. This may be a killer feature for you; it is for me. It also supports token-based authentication, so it's possible to use OrientDB as your primary means of authorizing/authenticating users. OrientDB also has LDAP integration. In contrast, ArangoDB supports only a very simple auth option.
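As the short comparison promised above, here is the same simple filter in both query languages (Users/users is a hypothetical class/collection; a sketch, not full syntax coverage):
OrientDB SQL: SELECT FROM Users WHERE age > 21
ArangoDB AQL: FOR u IN users FILTER u.age > 21 RETURN u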
Both systems have their own advantages, so choosing between them comes down to your own situation:
If you're building a small application, and you're a web developer optimizing for developer productivity, it will probably be easier to get up and running quickly with ArangoDB.
If you're building a larger application, which could potentially store many gigabytes or terabytes of data, or have many thousands of concurrent users, or have "enterprise" use cases, or need fine grained security controls, OrientDB is the one for you.
If you're storing RDF or similarly structured linked data, choose OrientDB.
If you're using Java, just choose OrientDB.
Note: This is (my opinion of) the state of play today, things change quickly and I would not underestimate the ruthless efficiency of the awesome team behind ArangoDB, I just think that it's not quite there yet :)
Charles Pick (codemix.com)
| OrientDB | 28,553,942 | 36 |
There is some hype around graph databases. I'm wondering why.
What are the possible problems that one can be confronted with in today's web environment that can be solved using graph databases? And are graph databases suitable for classical applications, i.e. can one be used as a drop-in replacement for a Relational Database? So in fact it's two questions in one.
Related: Has anyone used Graph-based Databases (http://neo4j.org/)?
| Many relational representations of graphs aren't particularly efficient for all operations you might want to perform.
For example, if one wants the connected set of all nodes where edges satisfy a given predicate, starting from a given node, there's no natural way in SQL to express that. Likely you'll either do a query for edges with the predicate, and then have to exclude disconnected edges locally, or have a very verbose conversation with the database server following one set of links to the next in iterated queries.
Graphs aren't a general replacement for relational databases. RDBs deal primarily in sets (tables), while graphs are primarily interesting because of the "shape" of interconnections. With relational DBs you follow links of a predetermined depth (a fixed number of joins) between sets, with results progressively filtered and grouped, while graphs are usually navigated to arbitrary and recursively-defined depth (i.e. not a predetermined number of "joins"). You can abuse either to match the characteristics of the other, but they'll have different strengths.
| OrientDB | 1,159,190 | 21 |
Do you know any open source software that uses Orient DB? Or have you used that product yourself? Any experiences to share?
I have recently looked into OrientDB, and it has a nice and interesting feature set (fast, embeddable in Java, simple API), but it seems that it is not widely used. Is it just because OrientDB is a new player on the field?
| After the total failure of ODBMS (at least from an adoption point of view), it seems obvious to me that the NoSQL movement is perceived by (ex) ODBMS players (like Versant, db4o, Orient) as an opportunity for a resurrection.
This is IMHO exactly the case with OrientDB, which is the result of a rewrite of the Orient ODBMS engine as a document-oriented database (in other words, re-branded to fit in the NoSQL niche market).
But while OrientDB benefits from the experience acquired in the ODBMS field (the author has 10+ years of experience in this field and is a member of the JDO expert group, how surprising), I'm not aware of any projects/customers using it (and I believe they would publish some testimonials if they had many of them). Some possible reasons:
The product is new.
Only a very few people might need a NoSQL solution.
The conjunction of both points means you won't see "mass adoption". At least, this is my opinion.
That being said, I agree that OrientDB looks interesting.
| OrientDB | 3,028,156 | 15 |
I will be constructing an ecommerce site and would like to use a NoSQL database, which will fit well with the plans for the app. But when it comes to which database would fit the job, I'm not sure. After comparing various DBs, the ones that seem best might be mongo, couch, or even orientdb. I have seen arguments for all of them to be used or not used compared to something like MySQL. But between themselves (NoSQL databases), which one would fit well with an ecommerce solution?
Note, for the use case, I won't be having thousands of transactions a second, or similarly high write rates. They will be moderate, sure, but at a level that any established database could handle.
CouchDB: Has master-to-master replication, which I could really use. If not, I will still have to implement the same functionality in code anyway. I need to be able to have a users database sync with the mothership (users will have their own, potentially localhost, database that could sync with the main domain's server). Couch is also fast once your queries have been stored in the db, and I will probably have a higher need for read performance, though not by a lot.
MongoDB: Queries are very easy and user friendly. Also, given that end users may need to query for certain things at a given time that I may not be able to account for ahead of time, this seems like it may be a better fit: I don't have to pre-store my queries in the db. It does support atomic transactions, though only when writing to a single document at a time.
OrientDB: A graph database. much different that most people are used to, but with the needs, it could fit very well too. Orient has the benefits of being schemaless, as well as having support for ACID transactions. There is a lot of customer, and product relationships that a graph database could be great with. Orient also support master to master replication, similar to couchdb.
Don't get me wrong, I can see how to build this traditionally with something like MySQL, but the ease and simplicity of a NoSQL solution is very attractive. In my case, a schemaless solution would be much easier in NoSQL than in MySQL: a given product may have more or fewer items than another, and avoiding recreating a table whenever a new field is added is preferable.
So between these 3 (or even others you think may be better), what features in each could potentially work for, or against me in regards to an ecommerce based site, when dealing with customer transactions?
Edit: The reason I am not using an existing solution, is because with the integrated features I need, there are no solutions available out there. We are also aiming to use this as a full product for our company. There will be a handful of other integrations than just sales. It is also going to be working with a store's POS system.
| Since e-commerce can encompass everything from shopping carts through to membership and recurring subscriptions, it is hard to guess exactly what requirements and complexity you are envisioning.
When constructing an e-commerce site, one of the early considerations should be investigating whether there is already an established e-commerce product or toolkit that could meet your requirements. There are many subtleties to processes like ordering, invoicing, payments, products, and customer relationships even when your use case appears to be straightforward. It may also be possible to separate your application into the catalog management aspects (possibly more custom) versus the billing (potentially third party, perhaps even via a hosted billing/payment API).
Another consideration should be who you are developing the e-commerce site for: is this to scratch your own itch, or for a client? Time, budget, and features for a custom build can be difficult to estimate and schedule... and a niche choice of technology may make it difficult to find/hire additional development expertise.
A third consideration is what your language(s) of choice are for developing your application. Some languages will have more complete/mature/documented drivers and/or framework abstractions for the different databases.
That said, writing an e-commerce system appears to be a rite of passage for many developers ;-).
Edit: a lot has changed since this answer was originally posted in 2012 and you should definitely refer to current product information. For example, MongoDB has had support for Decimal128 values since MongoDB 3.4 (2016) and multi-document transactions since MongoDB 3.6 (2017).
| OrientDB | 12,488,720 | 14 |
I've been reading about OrientDB for a while now, and I'm a bit confused about the "Editions" of the software.
The main version sounds like it's just the document store, but things on the internet make it sound like it's both the document and graph database. http://nosql.mypopescu.com/post/1254869909/correction-orientdb-is-a-document-and-graph-store
What is the difference between that and the graph edition?
Does the graph edition just do graphing with nodes and edges, or is it the document graph database?
Update: What is the key value store version? How does it differ? Can you use alongside the other editions?
| OrientDB is a document-graph DBMS because it has the document-db features but handles relationships using direct links, rather than with JOINs as an RDBMS does.
So you can use the standard edition to map even a graph. The Graph Edition is the standard one with the TinkerPop technology stack bundled, including the Gremlin language.
| OrientDB | 8,479,359 | 14 |
As per the OrientDB docs:
The Drop Property command removes a property from the schema. This doesn't remove the property values in records, but just changes the schema information. Records will continue to have the property values, if any.
This is creating some issues in my code, making the app throw a null pointer exception for dropped properties. Is there any way to drop a property and remove its values from existing records too?
Thanks in advance.
| Found the solution here
To remove the property from existing records, run the following query:
UPDATE <class> REMOVE <PROPERTY> WHERE <CONDITION>
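For example, to drop the values of a property named middleName from every record of a hypothetical Person class:
UPDATE Person REMOVE middleName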
| OrientDB | 29,545,198 | 13 |
I want to use docker-compose to compose together php and several databases (orientdb, neo4j, etc). Then get into the php container and use the shell to execute commands.
Individually, all of my containers work swimmingly, and when I compose them together, they all run. However, I cannot for the life of me figure out how to keep the php container alive so I can get into it for testing.
For simplicity, I'll just use a single database: orient-db.
My docker-compose.yml file:
version: '2'
services:
php:
build: .
links:
- orientdb
orientdb:
image: orientdb:latest
environment:
ORIENTDB_ROOT_PASSWORD: rootpwd
ports:
- "2424:2424"
- "2480:2480"
My "php" Dockerfile:
FROM php:5.6-cli
ADD . /spider
WORKDIR /spider
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
RUN composer install --prefer-source --no-interaction
RUN yes | pecl install xdebug \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini
I have tried (among other things):
docker-compose up in one terminal and then docker attach in another
enabling tty and stdin_open in my compose file
using a /bin/bash command
variations of CMD exec vendor/bin/phpunit -D FOREGROUND
And some references I've tried:
How to keep Docker container running after starting services?
https://github.com/docker/compose/issues/1926
https://github.com/docker/compose/issues/423
| So docker-compose is just a stand-in for the docker-engine client. It maintains feature parity with the client. For diagnosing problems like this, you should drop the use of docker-compose until you get it working with the regular ole client. Based on your comments here and on the other answer, it just sounds like you're not running a container with a daemon process in the foreground. If you want to run an interactive shell in Docker, you have to use the -it flags (-t allocates a tty and -i initiates an interactive session). If you don't run Docker with those switches, your container won't survive you starting an interactive shell, e.g. php -a.
It helps to think of Docker as a fancy way to run a process and not a virtual machine. It's not some "environment" that exists outside of the lifetime of whatever process (and its children) you are running. Normally, PHP is invoked by some server (e.g. Apache, Nginx, etc). What you're implying here is that you need a PHP process to run "permanently" so that you can drop into the container and test some things. Except for the interactive shell, that's not going to be possible, and you need specifically to use the -it switch to keep an interactive shell process alive in your container. The real answer here is that you can't do what you're trying to do here (keep a PHP container running) without some related daemon/server process listening in the foreground. The reason for that is because that's not how PHP works. If you really want to get into a container from your PHP image, just drop into a shell on it:
docker run -it apollo/php /bin/bash
... And you'll start a container from your PHP image, and get a shell on the container (which will die as soon as you exit the shell). But again, just reiterating from my first paragraph, docker-compose is not the way to go here.
| OrientDB | 37,149,001 | 12 |
I'm using OrientDB's graph type. I need the Gremlin syntax for a search equivalent to the SQL LIKE operator:
LIKE 'search%' or LIKE '%search%'
I've checked has and filter (at http://gremlindocs.com/). However, they require the exact value of the property to be passed, which I think doesn't match the logic of a search.
Thanks for anything.
| For Cosmos DB Gremlin support:
g.V().has('foo', TextP.containing('search'))
You can find the documentation in the Microsoft Gremlin Support docs and the TinkerPop Reference.
| OrientDB | 19,085,078 | 12 |
Are there any implementations, APIs, or examples of OrientDB with C#? The reason I am looking at OrientDB is because it's the only one I found that is a combination of graph and document database.
Any suggestions on how I should try this?
My next choice is RavenDB, but I am not sure if it supports joins or linked documents.
Any thoughts...
| OrientDB has an official binary driver for .NET.
Look here: http://orientdb.com/docs/3.0.x/
Example usage of OrientDB-NET.binary:
string release = OClient.CreateDatabasePool("127.0.0.1", 2424, "ModelTestDB", ODatabaseType.Graph, "admin", "admin", 10, "ModelTestDBAlias");
using(ODatabase database = new ODatabase("ModelTestDBAlias"))
{
// prerequisites
database
.Create.Class("TestClass")
.Extends<OVertex>()
.Run();
OVertex createdVertex = database
.Create.Vertex("TestClass")
.Set("foo", "foo string value")
.Set("bar", 12345)
.Run();
}
| OrientDB | 5,421,083 | 10 |
When using MongoDB, are there any special patterns for making e.g. a paged view?
Say, a blog that lists the 10 latest posts, where you can navigate backwards to older posts.
Or does one solve it with an index on e.g. blogpost.publishdate and just skip and limit the result?
| Using skip+limit is not a good way to do paging when performance is an issue, or with large collections; it will get slower and slower as you increase the page number. Using skip requires the server to walk though all the documents (or index values) from 0 to the offset (skip) value.
It is much better to use a range query (+ limit) where you pass in the last page's range value. For example, if you are sorting by "publishdate" you would simply pass the last "publishdate" value as the criterion for the query to get the next page of data.
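A minimal pymongo sketch of this range-based paging (the database/collection and field names are assumed; -1 sorts newest first):
from pymongo import MongoClient

posts = MongoClient().blog.posts  # hypothetical database/collection
PAGE_SIZE = 10

def first_page():
    return list(posts.find().sort("publishdate", -1).limit(PAGE_SIZE))

def next_page(last_publishdate):
    # Range query: everything strictly older than the last item already shown.
    query = {"publishdate": {"$lt": last_publishdate}}
    return list(posts.find(query).sort("publishdate", -1).limit(PAGE_SIZE))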
| MongoDB | 5,049,992 | 88 |
Using the code:
all_reviews = db_handle.find().sort('reviewDate', pymongo.ASCENDING)
print all_reviews.count()
print all_reviews[0]
print all_reviews[2000000]
The count prints 2043484, and it prints all_reviews[0].
However when printing all_reviews[2000000], I get the error:
pymongo.errors.OperationFailure: database error: Runner error: Overflow sort stage buffered data usage of 33554495 bytes exceeds internal limit of 33554432 bytes
How do I handle this?
| You're running into the 32MB limit on an in-memory sort:
https://docs.mongodb.com/manual/reference/limits/#Sort-Operations
Add an index to the sort field. That allows MongoDB to stream documents to you in sorted order, rather than attempting to load them all into memory on the server and sort them in memory before sending them to the client.
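In pymongo, using the reviewDate field from the question, that would be roughly:
db_handle.create_index([("reviewDate", pymongo.ASCENDING)])
all_reviews = db_handle.find().sort('reviewDate', pymongo.ASCENDING)  # now streams from the index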
| MongoDB | 27,023,622 | 88 |
I am trying to fetch some ids that exist in a mongo database with the following code:
client = MongoClient('xx.xx.xx.xx', xxx)
db = client.test_database
db = client['...']
collection = db.test_collection
collection = db["..."]
for cursor in collection.find({ "$and" : [{ "followers" : { "$gt" : 2000 } }, { "followers" : { "$lt" : 3000 } }, { "list_followers" : { "$exists" : False } }] }):
print cursor['screenname']
print cursor['_id']['uid']
id = cursor['_id']['uid']
However, after a short while, I receive this error:
pymongo.errors.CursorNotFound: cursor id '...' not valid at server.
I found this article which refers to that problem. Nevertheless, it is not clear to me which solution to take. Is it possible to use find().batch_size(30)? What exactly does that command do? Can I retrieve all the database ids using batch_size?
| You're getting this error because the cursor is timing out on the server (after 10 minutes of inactivity).
From the pymongo documentation:
Cursors in MongoDB can timeout on the server if they’ve been open for
a long time without any operations being performed on them. This can
lead to an CursorNotFound exception being raised when attempting to
iterate the cursor.
When you call the collection.find method it queries a collection and it returns a cursor to the documents. To get the documents you iterate the cursor. When you iterate over the cursor the driver is actually making requests to the MongoDB server to fetch more data from the server. The amount of data returned in each request is set by the batch_size() method.
From the documentation:
Limits the number of documents returned in one batch. Each batch
requires a round trip to the server. It can be adjusted to optimize
performance and limit data transfer.
Setting the batch_size to a lower value will help you with the timeout errors, but it will increase the number of times you're going to access the MongoDB server to get all the documents.
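For example (a sketch; the value 30 is arbitrary and should be tuned, and handle() stands in for your per-document processing):
cursor = collection.find({"list_followers": {"$exists": False}}).batch_size(30)
for doc in cursor:
    handle(doc)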
The default batch size:
For most queries, the first batch returns 101 documents or just enough
documents to exceed 1 megabyte. Batch size will not exceed the maximum BSON document size (16 MB).
There is no universal "right" batch size. You should test with different values and see what is the appropriate value for your use case, i.e. how many documents you can process in a 10-minute window.
The last resort is to set no_cursor_timeout=True. But then you need to be sure that the cursor is closed after you finish processing the data.
How to avoid it without try/except:
cursor = collection.find(
{"x": 1},
no_cursor_timeout=True
)
for doc in cursor:
# do something with doc
cursor.close()
| MongoDB | 24,199,729 | 88 |
How can I sort a MongoDB collection by a given field, case-insensitively? By default, I get A-Z before a-z.
| Update:
As of now mongodb have case insensitive indexes:
Users.find({})
.collation({locale: "en" })
.sort({name: 1})
.exec()
.then(...)
shell:
db.getCollection('users')
.find({})
.collation({'locale':'en'})
.sort({'firstName':1})
Update: This answer is out of date, 3.4 will have case insensitive indexes. Look to the JIRA for more information https://jira.mongodb.org/browse/SERVER-90
Unfortunately MongoDB does not yet have case insensitive indexes: https://jira.mongodb.org/browse/SERVER-90 and the task has been pushed back.
This means the only way to sort case insensitive currently is to actually create a specific "lower cased" field, copying the value (lower cased of course) of the sort field in question and sorting on that instead.
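A sketch of that workaround in Python with pymongo (the users collection and field names are assumed):
doc = {"name": "Alice"}
doc["name_lower"] = doc["name"].lower()  # maintain the shadow field on every write
users.insert_one(doc)

users.create_index("name_lower")
for d in users.find().sort("name_lower", 1):  # case-insensitive order
    print(d["name"])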
| MongoDB | 22,931,177 | 88 |
In so many introductory examples of using MongoDB, you see code like this:
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect("mongodb://localhost:port/adatabase", function(err, db)
{
/* Some operation... CRUD, etc. */
db.close();
});
If MongoDB is like any other database system, open and close operations are typically expensive time-wise.
So, my question is this: Is it OK to simply do the MongoClient.connect("... once, assign the returned db value to some module global, have various functions in the module do various database-related work (inserting documents into collections, updating documents, etc.) when they're called by other parts of the application (thereby re-using that db value), and then, when the application is done, only then do the close?
In other words, open and close are done once - not every time you need to go and do some database-related operation. And you keep re-using that db object that was returned during the initial open\connect, only to dispose of it at the end, with the close, when you're actually done with all your database-related work.
Obviously, since all the I/O is asynch, before the close you'd make sure that the last database operation completed before issuing the close. Seems like this should be OK, but i wanted to double-check just in case I'm missing something as I'm new to MongoDB. Thanks!
| Yes, that is fine and typical behavior. start your app, connect to db, do operations against the db for a long time, maybe re-connect if the connection ever dies unexpectedly, and then just never close the connection (just rely on the automatic close that happens when your process dies).
| MongoDB | 18,650,890 | 88 |
This is my appointment collection:
{ _id: ObjectId("518ee0bc9be1909012000002"), date: ISODate("2013-05-13T22:00:00Z"), patient:ObjectId("518ee0bc9be1909012000002") }
{ _id: ObjectId("518ee0bc9be1909012000002"), date: ISODate("2013-05-13T22:00:00Z"), patient:ObjectId("518ee0bc9be1909012000002") }
{ _id: ObjectId("518ee0bc9be1909012000002"), date: ISODate("2013-05-13T22:00:00Z"), patient:ObjectId("518ee0bc9be1909012000002") }
I used aggregate to get the following result
{date: ISODate("2013-05-13T22:00:00Z"),
patients:[ObjectId("518ee0bc9be1909012000002"),ObjectId("518ee0bc9be1909012000002"),ObjectId("518ee0bc9be1909012000002")] }
like this:
Appointments.aggregate([
{$group: {_id: '$date', patients: {$push: '$patient'}}},
{$project: {date: '$_id', patients: 1, _id: 0}}
], ...)
How can I populate the patient documents?
I tried this, but it doesn't work: Appointments.find({}).populate("patient").aggregate....
In other words, can I use populate and aggregate in the same statement?
Any help please.
| With the latest version of mongoose (mongoose >= 3.6), you can but it requires a second query, and using populate differently. After your aggregation, do this:
Patients.populate(result, {path: "patient"}, callback);
See more at the Mongoose API and the Mongoose docs.
| MongoDB | 16,680,015 | 88 |
I have a NodeJS application with the Mongoose ODM (Mongoose 3.3.1). I want to retrieve all fields except one from my collection. For example: I have a collection Product which has 6 fields; I want to select all except the field "Image". I used the "exclude" method, but got an error.
This was my code.
var Query = models.Product.find();
Query.exclude('title Image');
if (req.params.id) {
Query.where('_id', req.params.id);
}
Query.exec(function (err, product) {
if (!err) {
return res.send({ 'statusCode': 200, 'statusText': 'OK', 'data': product });
} else {
return res.send(500);
}
});
But this returns error
Express
500 TypeError: Object #<Query> has no method 'exclude'.........
Also I tried var Query = models.Product.find().exclude('title','Image'); and var Query = models.Product.find({}).exclude('title','Image'); but I get the same error. How can I exclude one (or two) particular fields from a collection in Mongoose?
| Use query.select for field selection in the current (3.x) Mongoose builds.
Prefix a field name you want to exclude with a -; so in your case:
Query.select('-Image');
Quick aside: in JavaScript, variables starting with a capital letter should be reserved for constructor functions. So consider renaming Query as query in your code.
| MongoDB | 14,559,200 | 88 |
I have a document:
{ 'profile_set' :
[
{ 'name' : 'nick', 'options' : 0 },
{ 'name' : 'joe', 'options' : 2 },
{ 'name' : 'burt', 'options' : 1 }
]
}
and would like to add a new subdocument to the profile_set array if the name doesn't already exist (regardless of the options).
So in this example if I tried to add:
{'name' : 'matt', 'options' : 0}
it should add it, but adding
{'name' : 'nick', 'options' : 2}
should do nothing because a document already exists with name nick even though the option is different.
Mongo seems to match against the whole element, so when I check whether it's the same, I end up with
profile_set containing [{'name' : 'nick', 'options' : 0}, {'name' : 'nick', 'options' : 2}]
Is there a way to do this with $addToSet, or do I have to issue another command?
| You can qualify your update with a query object that prevents the update if the name is already present in profile_set. In the shell:
db.coll.update(
{_id: id, 'profile_set.name': {$ne: 'nick'}},
{$push: {profile_set: {'name': 'nick', 'options': 2}}})
So this will only perform the $push for a doc with a matching _id and where there isn't a profile_set element where name is 'nick'.
| MongoDB | 14,527,980 | 88 |
I have two keys A and B and their existence in the document is mutually exclusive. I have to group by A when A exists and group by B when B exists. So I am $projecting the required value into a computed key called MyKey on which I'll perform a $group. But it looks like I'm making a mistake with the syntax. I tried writing $project in two ways:
{
$project: {
MyKey: {
$cond: [{ $exists: ["$A", true] }, "$A", "$B"] }
}
}
and
{
$project: {
MyKey: {
$cond: [{ "A": { $exists: true } }, "$A", "$B"] }
}
}
But I keep getting the error:
{
"errmsg": "exception: invalid operator '$exists'",
"code" : 15999,
"ok" : 0
}
What's going wrong?
| Use $ifNull instead of $cond in your $project:
{ $project: {MyKey: {$ifNull: ['$A', '$B'] }}}
If A exists and is not null its value will be used; otherwise the value of B is used.
| MongoDB | 14,213,636 | 88 |
I'm working with Node.js and MongoDB through the Node MongoDB native driver. I need to retrieve some documents, modify them, then save them right back. This is an example:
db.open(function (err, db) {
db.collection('foo', function (err, collection) {
var cursor = collection.find({});
cursor.each(function (err, doc) {
if (doc != null) {
doc.newkey = 'foo'; // Make some changes
db.save(doc); // Update the document
} else {
db.close(); // Closing the connection
}
});
});
});
Because of the asynchronous nature, if the process of updating the documents takes longer, then when the cursor reaches the end of the documents, the database connection is closed. Not all updates are saved to the database.
If the db.close() is omitted, all the documents are correctly updated, but the application hangs, never exits.
I saw a post suggesting using a counter to track the number of updates and, when it falls back to zero, closing the db. But am I doing anything wrong here? What is the best way to handle this kind of situation? Does db.close() have to be used to free up resources? Or does a new db connection need to be opened?
| Here's a potential solution based on the counting approach (I haven't tested it and there's no error trapping, but it should convey the idea).
The basic strategy is: acquire the count of how many records need to be updated, save each record asynchronously with a success callback that decrements the count, and close the DB when the count reaches 0 (when the last update finishes). By using {safe:true} we can ensure that each update is successful.
The mongo server will use one thread per connection, so it's good to either a) close unused connections, or b) pool/reuse them.
db.open(function (err, db) {
db.collection('foo', function (err, collection) {
var cursor = collection.find({});
cursor.count(function(err, count) {
var savesPending = count;
if(count == 0){
db.close();
return;
}
var saveFinished = function(){
savesPending--;
if(savesPending == 0){
db.close();
}
}
cursor.each(function (err, doc) {
if (doc != null) {
doc.newkey = 'foo'; // Make some changes
collection.save(doc, {safe:true}, saveFinished); // save is a collection method
}
});
})
});
});
| MongoDB | 8,373,905 | 88 |
I've been trying to discover how to use MongoDB with Node.js and in the docs it seems the suggested way is to use callbacks. Now, I know that it is just a matter of preference, but I really prefer using promises.
The problem is that I didn't find how to use them with MongoDB. Indeed, I've tried the following:
var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost:27017/example';
MongoClient.connect(url).then(function (err, db) {
console.log(db);
});
And the result is undefined. In that case it seems this is not the way to do so.
Is there any way to use mongo db inside Node with promises instead of callbacks?
| Your approach is almost correct, just a tiny mistake in your argument
var MongoClient = require('mongodb').MongoClient
var url = 'mongodb://localhost:27017/example'
MongoClient.connect(url)
.then(function (db) { // <- db as first argument
console.log(db)
})
.catch(function (err) {})
| MongoDB | 37,911,838 | 87 |
While trying the mongo command in Ubuntu,
I am getting this error:
ritzysystem@ritzysystem-Satellite-L55-A:~$ mongo
MongoDB shell version: 2.6.1
connecting to: test
2014-10-06T12:59:35.802+0530 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2014-10-06T12:59:35.802+0530 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
How can I rectify this? Has anyone had the same problem?
| Run the following commands:
sudo rm /var/lib/mongodb/mongod.lock
sudo service mongod restart
Connection refused to MongoDB errno 111
macOS (there is no service command on macOS; with Homebrew the formula may be named mongodb-community):
rm /usr/local/var/mongodb/mongod.lock
brew services restart mongodb
| MongoDB | 26,211,671 | 87 |
I need to import (restore) a collection generated with mongodump into an existing database and I'd like the records to be merged into the existing collection.
Does mongorestore merge the records in the same collection or it will drop the existing collection before restoring the records?
| mongorestore will only drop the existing collection if you use the --drop argument.
If you don't use --drop, all documents will be inserted into the existing collection, unless a document with the same _id already exists. Documents with the same _id will be skipped, they are not merged. So mongorestore will never delete or modify any of the existing data by default.
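For example, with a hypothetical dump/ directory, mongorestore dump/ inserts into the existing collections (skipping documents whose _id already exists), while mongorestore --drop dump/ drops each collection before restoring it.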
| MongoDB | 4,021,762 | 87 |
I am developing a new website and I want to use GridFS as storage for all user uploads, because it offers a lot of advantages compared to normal filesystem storage.
Benchmarks with GridFS served by nginx indicate that it's not as fast as a normal filesystem served by nginx.
Benchmark with nginx
Is anyone out there, who uses GridFS already in a production environment, or would use it for a new project?
| I use GridFS at work on one of our servers, which is part of a price-comparison website with honorable traffic stats (around 25k visitors per day). The server doesn't have much RAM (2 GB), and even the CPU isn't really fast (Core 2 Duo 1.8 GHz), but the server has plenty of storage space: 10 TB (SATA) in a RAID 0 configuration. The job the server is doing is very simple:
Each product on our price comparer has an image (there are around 10 million products according to our product db), and the server's job is to download the image, resize it, store it in GridFS, and deliver it to the visitor's browser... if it's not already present in the grid... or deliver it to the visitor's browser if it's already stored in the grid. So this could be called a 'traditional CDN schema'.
We have stored and processed 4 million images on this server since it has been up and running. The resize-and-store work is done by a simple PHP script... but for sure, a Python script or something like Java could be faster.
Current data size : 11.23g
Current storage size : 12.5g
Indices : 5
Index size : 849.65m
About reliability: this is very reliable. The server doesn't load up, the index size is OK, and queries are fast.
About speed: for sure, it is not as fast as local file storage, maybe 10% slower, but fast enough to be used in real time even when the image needs to be processed, which in our case is very PHP-dependent. Maintenance and development times have also been reduced: it became so simple to delete a single image or multiple images: just query the db with a simple delete command. Another interesting thing: when we rebooted our old server with local file storage (so millions of files in thousands of folders), it sometimes hung for hours because the system was performing a file integrity check (this really took hours...). We no longer have this problem with GridFS; our images are now stored in big MongoDB chunks (2 GB files).
So... to my mind... yes, GridFS is fast and reliable enough to be used in production.
| MongoDB | 3,413,115 | 87 |
Due to simple setup and low costs I am considering using AWS S3 bucket instead of a NoSQL database to save simple user settings as a JSON (around 30 documents).
I researched the following disadvantages of not using a database which are not relevant for my use case:
Listing of buckets/files will cost you money.
No updates - you cannot update a file, just replace it.
No indexes.
Versioning will cost you $$.
No search
No transactions
No query API (SQL or NoSQL)
Are there any other disavantages of using a S3 bucket instead of a database?
| You are "considering using AWS S3 bucket instead of a NoSQL database", but the fact is that Amazon S3 effectively is a NoSQL database.
It is a very large Key-Value store. The Key is the filename, the Value is the contents of the file.
If your needs are simply "Store a value with this key" and "Retrieve a value with this key", then it would work just fine!
In fact, old orders on Amazon.com (more than a year old) are apparently archived to Amazon S3 since they are read-only (no returns, no changes).
While slower than DynamoDB, Amazon S3 certainly costs significantly less for storage!
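As a sketch of that key-value usage with boto3 (the bucket name and key layout are assumptions):
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-user-settings"  # hypothetical bucket

def save_settings(user_id, settings):
    # "Store a value with this key"
    s3.put_object(Bucket=BUCKET, Key=f"settings/{user_id}.json",
                  Body=json.dumps(settings).encode())

def load_settings(user_id):
    # "Retrieve a value with this key"
    obj = s3.get_object(Bucket=BUCKET, Key=f"settings/{user_id}.json")
    return json.loads(obj["Body"].read())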
| MongoDB | 56,108,144 | 86 |
I am new to MongoDB and have absolutely no knowledge of databases, so I would like to know: what is a cluster in MongoDB, and what is the point of connecting to one? Is it a must to connect to one, or can we just connect to localhost?
| A MongoDB cluster is the term usually used for a sharded cluster in MongoDB. The main purposes of a sharded cluster are:
Scale reads and writes across several nodes
Avoid any single node handling the whole data set. Each node is a member of a shard (which is a replica set, see below for the explanation), and the data is partitioned across all the shards.
This is the representation of a mongodb sharded cluster from the official doc.
If you are starting with MongoDB, I do not recommend sharding your data. Sharded clusters are far more complicated to maintain and operate than replica sets.
You should have a look at a basic replica set. It is fault tolerant and sufficient for simple needs.
The ideas of a replica set are:
All the data is replicated on every node
Only one node (the primary) accepts writes
A replicaset representation from the official doc
For simple apps there is no problem having your MongoDB deployment on the same host as your application. You can even use a single-member replica set, but then you won't be fault tolerant anymore.
I want to run MongoDB in Docker and store the data in a local volume.
But... it failed...
I have the mongo:latest image:
kerydeMacBook-Pro:~ hu$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mongo latest b11eedbc330f 2 weeks ago 317.4 MB
ubuntu latest 6cc0fc2a5ee3 3 weeks ago 187.9 MB
I want to store the mongo data in ~/data, so:
kerydeMacBook-Pro:~ hu$ docker run -p 27017:27017 -v ~/data:/data/db --name mongo -d mongo
f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43
But... it does not work...
docker ps shows no running mongo daemon:
kerydeMacBook-Pro:~ hu$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Trying to get a shell in the container fails:
kerydeMacBook-Pro:~ hu$ docker exec -it f57 bash
Error response from daemon: Container f57 is not running
docker inspect mongo
kerydeMacBook-Pro:~ hu$ docker inspect mongo
[
{
"Id": "f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43",
"Created": "2016-02-15T02:19:01.617824401Z",
"Path": "/entrypoint.sh",
"Args": [
"mongod"
],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 100,
"Error": "",
"StartedAt": "2016-02-15T02:19:01.74102535Z",
"FinishedAt": "2016-02-15T02:19:01.806376434Z"
},
"Mounts": [
{
"Source": "/Users/hushuming/data",
"Destination": "/data/db",
"Mode": "",
"RW": true
},
{
"Name": "365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a",
"Source": "/mnt/sda1/var/lib/docker/volumes/365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a/_data",
"Destination": "/data/configdb",
"Driver": "local",
"Mode": "",
"RW": true
}
],
If I do not set a data volume, the mongo image works!
But when setting the data volume, it doesn't. Who can help me?
| Try and check the docker logs to see what was going on when the container stopped and went into "Exited" mode.
See also if specifying the full path for the volume would help:
docker run -p 27017:27017 -v /home/<user>/data:/data/db ...
The OP adds:
docker logs mongo
exception in initAndListen: 98
Unable to create/open lock file: /data/db/mongod.lock
errno:13 Permission denied
Is a mongod instance already running?
terminating 2016-02-15T06:19:17.638+0000
I CONTROL [initandlisten] dbexit: rc: 100
An errno:13 is what issue 30 is about.
This comment adds:
It's a file ownership/permission issue (not related to this docker image), either using boot2docker with VB or a vagrant box with VB.
Nevertheless, I managed to hack the ownership, remounting the /Users shared volume inside boot2docker to uid 999 and gid 999 (which are what mongo docker image uses) and got it to start:
$ boot2docker ssh
$ sudo umount /Users
$ sudo mount -t vboxsf -o uid=999,gid=999 Users /Users
But... mongod crashes due to filesystem type not being supported (mmap not working on vboxsf)
So the actual solution would be to try a DVC: Data Volume Container, because right now the mongodb doc mentions:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this
operation.
So:
the mounting to OSX will not work for MongoDB because of the way that virtualbox shared folders work.
For a DVC (Data Volume Container), try docker volume create:
docker volume create mongodbdata
Then use it as:
docker run -p 27017:27017 -v mongodbdata:/data/db ...
And see if that works better.
As I mention in the comments:
A docker volume inspect mongodbdata (see docker volume inspect) will give you its path (that you can then backup if you need)
| MongoDB | 35,400,740 | 86 |
We've recently hit more than 2 million records in one of our main collections, and now we have started to suffer major performance issues on that collection.
The documents in the collection have about 8 fields which you can filter by using the UI, and the results are supposed to be sorted by a timestamp field recording when the record was processed.
I've added several compound indexes with the filtered fields and the timestamp,
e.g:
db.events.ensureIndex({somefield: 1, timestamp:-1})
I've also added a couple of indexes for using several filters at once, to hopefully achieve better performance. But some filters still take an awfully long time to perform.
I've made sure, using explain, that the queries do use the indexes I've created, but performance is still not good enough.
I was wondering if sharding is the way to go now... but we will soon start to have about 1 million new records per day in that collection, so I'm not sure if it will scale well.
EDIT: example for a query:
> db.audit.find({'userAgent.deviceType': 'MOBILE', 'user.userName': {$in: ['[email protected]']}}).sort({timestamp: -1}).limit(25).explain()
{
"cursor" : "BtreeCursor user.userName_1_timestamp_-1",
"isMultiKey" : false,
"n" : 0,
"nscannedObjects" : 30060,
"nscanned" : 30060,
"nscannedObjectsAllPlans" : 120241,
"nscannedAllPlans" : 120241,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 1,
"nChunkSkips" : 0,
"millis" : 26495,
"indexBounds" : {
"user.userName" : [
[
"[email protected]",
"[email protected]"
]
],
"timestamp" : [
[
{
"$maxElement" : 1
},
{
"$minElement" : 1
}
]
]
},
"server" : "yarin:27017"
}
Please note that deviceType has only 2 values in my collection.
| This is searching for a needle in a haystack. We'd need some output of explain() for those queries that don't perform well. Unfortunately, even that would fix the problem only for that particular query, so here's a strategy on how to approach this:
Ensure it's not because of insufficient RAM and excessive paging
Enable the DB profiler (using db.setProfilingLevel(1, timeout) where timeout is the threshold for the number of milliseconds the query or command takes; anything slower will be logged) — see the sketch after this list
Inspect the slow queries in db.system.profile and run the queries manually using explain()
Try to identify the slow operations in the explain() output, such as scanAndOrder or large nscanned, etc.
Reason about the selectivity of the query and whether it's possible to improve the query using an index at all. If not, consider disallowing the filter setting for the end-user or give him a warning dialog that the operation might be slow.
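For example, a minimal profiling session in the mongo shell might look something like this (the 100 ms threshold is just an example value — pick whatever counts as "slow" for your workload; the audit collection name comes from your query above):
// Log every operation slower than 100 ms to db.system.profile
db.setProfilingLevel(1, 100)
// Later, inspect the slowest logged operations
db.system.profile.find().sort({ millis: -1 }).limit(5).pretty()
// Re-run a suspect query manually and check the plan
db.audit.find({ "userAgent.deviceType": "MOBILE" }).sort({ timestamp: -1 }).explain()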
A key problem is that you're apparently allowing your users to combine filters at will. Without index intersectioning, that will blow up the number of required indexes dramatically.
Also, blindly throwing an index at every possible query is a very bad strategy. It's important to structure the queries and make sure the indexed fields have sufficient selectivity.
Let's say you have a query for all users with status "active" and some other criteria. But of the 5 million users, 3 million are active and 2 million aren't, so over 5 million entries there's only two different values. Such an index doesn't usually help. It's better to search for the other criteria first, then scan the results. On average, when returning 100 documents, you'll have to scan 167 documents, which won't hurt performance too badly. But it's not that simple. If the primary criterion is the joined_at date of the user and the likelihood of users discontinuing use with time is high, you might end up having to scan thousands of documents before finding a hundred matches.
So the optimization depends very much on the data (not only its structure, but also the data itself), its internal correlations and your query patterns.
Things get worse when the data is too big for the RAM, because then, having an index is great, but scanning (or even simply returning) the results might require fetching a lot of data from disk randomly which takes a lot of time.
The best way to control this is to limit the number of different query types, disallow queries on low selectivity information and try to prevent random access to old data.
If all else fails and if you really need that much flexibility in filters, it might be worthwhile to consider a separate search DB that supports index intersections, fetch the mongo ids from there and then get the results from mongo using $in. But that is fraught with its own perils.
-- EDIT --
The explain you posted is a beautiful example of the problem with scanning low-selectivity fields. Apparently, there are a lot of documents for "[email protected]". Now, finding those documents and sorting them descending by timestamp is pretty fast, because it's supported by high-selectivity indexes. Unfortunately, since there are only two device types, mongo needs to scan 30060 documents to find the first one that matches 'mobile'.
I assume this is some kind of web tracking, and the user's usage pattern makes the query slow (if he switched between mobile and web on a daily basis, the query would be fast).
Making this particular query faster could be done using a compound index that contains the device type, e.g. using
a) ensureIndex({'username': 1, 'userAgent.deviceType' : 1, 'timestamp' :-1})
or
b) ensureIndex({'userAgent.deviceType' : 1, 'username' : 1, 'timestamp' :-1})
Unfortunately, that means that queries like find({"username" : "foo"}).sort({"timestamp" : -1}); can't use the same index anymore, so, as described, the number of indexes will grow very quickly.
I'm afraid there's no very good solution for this using mongodb at this time.
| MongoDB | 19,559,405 | 86 |
What is the difference between _id and id in mongoose? Which is better for referencing?
| From the documentation:
Mongoose assigns each of your schemas an id virtual getter by default
which returns the documents _id field cast to a string, or in the case
of ObjectIds, its hexString.
So, basically, the id getter returns a string representation of the document's _id (which is added to all MongoDB documents by default and have a default type of ObjectId).
Regarding what's better for referencing, that depends entirely on the context (i.e., do you want an ObjectId or a string). For example, if comparing id's, the string is probably better, as ObjectId's won't pass an equality test unless they are the same instance (regardless of what value they represent).
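As a quick illustration (the User model name is assumed):
// _id and id refer to the same value in different types
User.findOne(function (err, user) {
  console.log(user._id);                        // ObjectId("...")
  console.log(user.id);                         // "..." (hex string)
  console.log(user._id.toString() === user.id); // true
});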
| MongoDB | 15,724,272 | 86 |
Just a simple query, for example with a double ref in the model.
Schema / Model
var OrderSchema = new Schema({
user: {
type : Schema.Types.ObjectId,
ref : 'User',
required: true
},
meal: {
type : Schema.Types.ObjectId,
ref : 'Meal',
required: true
},
});
var OrderModel = db.model('Order', OrderSchema);
Query
OrderModel.find()
.populate('user') // works
.populate('meal') // dont works
.exec(function (err, results) {
// callback
});
I already tried something like
.populate('user meal')
.populate(['user', 'meal'])
In fact only one of the populates works.
So, how do I get two populates working?
| You're already using the correct syntax of:
OrderModel.find()
.populate('user')
.populate('meal')
.exec(function (err, results) {
// callback
});
Perhaps the meal ObjectId from the order isn't in the Meals collection?
| MongoDB | 12,821,596 | 86 |
I've installed mongodb 2.0.3, using the mongodb-10gen debian package. Everything went well, except the service which is installed by default does not start up when the computer starts. mongod runs only as the root user; maybe this is the reason, but as far as I know, the service should be running since it was added by the root user.
What may be the solution?
if I run just mongod
Tue Mar 27 13:00:44 [initandlisten] couldn't open /data/db/transaction_processor_dummy_development.ns errno:1 Operation not permitted
If I run sudo service mongodb start it says:
mongodb start/running, process 4861
but there's no process when looking with htop and mongo says:
MongoDB shell version: 2.0.3
connecting to: test
Tue Mar 27 13:02:40 Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84
exception: connect failed
| On my ubuntu server, just run:
sudo rm /var/lib/mongodb/mongod.lock
mongod --repair
sudo service mongodb start
| MongoDB | 9,884,233 | 86 |
When using MongoDB's .stats() function to determine document size, are the values returned in bits or bytes?
| Running the collStats command - db.collection.stats() - returns all sizes in bytes, e.g.
> db.foo.stats()
{
"size" : 715578011834, // total size (bytes)
"avgObjSize" : 2862, // average size (bytes)
}
However, if you want the results in another unit then you can also pass in a scale argument.
For example, to get the results in KB:
> db.foo.stats(1024)
{
"size" : 698806652, // total size (KB)
"avgObjSize" : 2, // average size (KB)
}
Or for MB:
> db.foo.stats(1024 * 1024)
{
"size" : 682428, // total size (MB)
"avgObjSize" : 0, // average size (MB)
}
| MongoDB | 6,082,748 | 86 |
My code was working before initially but I don't know why it just stopped working and gave me this error:
MongooseError: Operation `users.findOne()` buffering timed out after 10000ms
at Timeout.<anonymous> (/Users/nishant/Desktop/Yourfolio/backend/node_modules/mongoose/lib/drivers/node-mongodb-native/collection.js:184:20)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
I am trying to authenticate the user by login with JWT. My client runs fine but in my backend I get this error. My backend code:
import neuron from '@yummyweb/neuronjs'
import bodyParser from 'body-parser'
import cors from 'cors'
import mongoose from 'mongoose'
import emailValidator from 'email-validator'
import passwordValidator from 'password-validator'
import User from './models/User.js'
import Portfolio from './models/Portfolio.js'
import bcrypt from 'bcryptjs'
import jwt from 'jsonwebtoken'
import auth from './utils/auth.js'
// Dot env
import dotenv from 'dotenv'
dotenv.config()
// Custom Password Specifications
// Username Schema
const usernameSchema = new passwordValidator()
usernameSchema.is().min(3).is().max(18).is().not().spaces()
// Password Schema
const passwordSchema = new passwordValidator()
passwordSchema.is().min(8).is().max(100).has().uppercase().has().lowercase().has().digits().is().not().spaces()
const PORT = process.env.PORT || 5000
const neuronjs = neuron()
// Middleware
neuronjs.use(bodyParser())
neuronjs.use(cors())
// Mongoose Connection
mongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true }, () => console.log("MongoDB Connected"))
// API Routes
neuronjs.POST('/api/auth/signup', async (req, res) => {
const { username, email, password, passwordConfirmation } = req.body
// Validation: all fields are filled
if (!username || !email || !password || !passwordConfirmation) {
return res.status(400).json({
"error": "true",
"for": "fields",
"msg": "fill all the fields"
})
}
// Validation: username is valid
if (usernameSchema.validate(username, { list: true }).length !== 0) {
return res.status(400).json({
"error": "true",
"for": "username",
"method_fail": usernameSchema.validate(username, { list: true }),
"msg": "username is invalid"
})
}
// Validation: email is valid
if (!emailValidator.validate(email)) {
return res.status(400).json({
"error": "true",
"for": "email",
"msg": "email is invalid"
})
}
// Validation: password is valid
if (passwordSchema.validate(password, { list: true }).length !== 0) {
return res.status(400).json({
"error": "true",
"for": "password",
"method_fail": passwordSchema.validate(password, { list: true }),
"msg": "password is invalid"
})
}
// Validation: password is confirmed
if (password !== passwordConfirmation) {
return res.status(400).json({
"error": "true",
"for": "confirmation",
"msg": "confirmation password needs to match password"
})
}
// Check for existing user with email
const existingUserWithEmail = await User.findOne({ email })
if (existingUserWithEmail)
return res.status(400).json({ "error": "true", "msg": "a user already exists with this email" })
// Check for existing user with username
const existingUserWithUsername = await User.findOne({ username })
if (existingUserWithUsername)
return res.status(400).json({ "error": "true", "msg": "a user already exists with this username" })
// Generating salt
const salt = bcrypt.genSalt()
.then(salt => {
// Hashing password with bcrypt
const hashedPassword = bcrypt.hash(password, salt)
.then(hash => {
const newUser = new User({
username,
email,
password: hash
})
// Saving the user
newUser.save()
.then(savedUser => {
const newPortfolio = new Portfolio({
user: savedUser._id,
description: "",
socialMediaHandles: {
github: savedUser.username,
dribbble: savedUser.username,
twitter: savedUser.username,
devto: savedUser.username,
linkedin: savedUser.username,
}
})
// Save the portfolio
newPortfolio.save()
// Return the status code and the json
return res.status(200).json({
savedUser
})
})
.catch(err => console.log(err))
})
.catch(err => console.log(err))
})
.catch(err => console.log(err))
})
neuronjs.POST('/api/auth/login', async (req, res) => {
try {
const { username, password } = req.body
// Validate
if (!username || !password) {
return res.status(400).json({ "error": "true", "msg": "fill all the fields", "for": "fields", })
}
const user = await User.findOne({ username })
if (!user) {
return res.status(400).json({ "error": "true", "msg": "no account is registered with this username", "for": "username" })
}
// Compare hashed password with plain text password
const match = await bcrypt.compare(password, user.password)
if (!match) {
return res.status(400).json({ "error": "true", "msg": "invalid credentials", "for": "password" })
}
// Create JWT token
const token = jwt.sign({ id: user._id }, process.env.JWT_SECRET)
return res.json({ token, user: { "id": user._id, "username": user.username, "email": user.email } })
}
catch (e) {
console.log(e)
}
})
// Delete a user and their portfolio
neuronjs.DELETE("/api/users/delete", async (req, res) => {
auth(req, res)
const deletedPortfolio = await Portfolio.findOneAndDelete({ user: req.user })
const deletedUser = await User.findByIdAndDelete(req.user)
res.json(deletedUser)
})
neuronjs.POST("/api/isTokenValid", async (req, res) => {
const token = req.headers["x-auth-token"]
if (!token) return res.json(false)
const verifiedToken = jwt.verify(token, process.env.JWT_SECRET)
if (!verifiedToken) return res.json(false)
const user = await User.findById(verifiedToken.id)
if (!user) return res.json(false)
return res.json(true)
})
// Getting one user
neuronjs.GET("/api/users/user", async (req, res) => {
auth(req, res)
const user = await User.findById(req.user)
res.json({
"username": user.username,
"email": user.email,
"id": user._id
})
})
// Getting the porfolio based on username
neuronjs.GET("/api/portfolio/:username", async (req, res) => {
try {
const existingUser = await User.findOne({ username: req.params.username })
// User exists
if (existingUser) {
const userPortfolio = await Portfolio.findOne({ user: existingUser._id })
return res.status(200).json(userPortfolio)
}
// User does not exist
else return res.status(400).json({ "error": "true", "msg": "user does not exist" })
}
catch (e) {
console.log(e)
return res.status(400).json({ "error": "true", "msg": "user does not exist" })
}
})
// Update Portfolio info
neuronjs.POST("/api/portfolio/update", async (req, res) => {
auth(req, res)
// Find the portfolio
const portfolio = await Portfolio.findOne({ user: req.user })
// Then, update the portfolio
if (portfolio) {
// Call the update method
const updatedPortfolio = await portfolio.updateOne({
user: req.user,
description: req.body.description,
socialMediaHandles: req.body.socialMediaHandles,
greetingText: req.body.greetingText,
navColor: req.body.navColor,
font: req.body.font,
backgroundColor: req.body.backgroundColor,
rssFeed: req.body.rssFeed,
displayName: req.body.displayName,
layout: req.body.layout,
occupation: req.body.occupation
})
return res.status(200).json(portfolio)
}
})
neuronjs.listen(PORT, () => console.log("Server is running on port " + PORT))
The auth.js file function:
import jwt from 'jsonwebtoken'
const auth = (req, res) => {
const token = req.headers["x-auth-token"]
if (!token)
return res.status(401).json({ "error": "true", "msg": "no authentication token" })
const verifiedToken = jwt.verify(token, process.env.JWT_SECRET)
if (!verifiedToken)
return res.status(401).json({ "error": "true", "msg": "token failed" })
req.user = verifiedToken.id
}
export default auth
Any help is much appreciated and I have already tried a few solutions like deleting node_modules and re installing mongoose.
| In my experience this happens when your database is not connected. Try checking the following things -
Is your database connected, and are you pointing to the same URL from your code?
check if your mongoose.connect(...) code is loading.
I faced this issue where I was running node index.js from my terminal and the mongoose connect code was in a different file. After requiring that mongoose code in index.js it was working again.
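For reference, a minimal sketch of awaiting the connection before serving requests (the MONGO_URI env var is an assumption):
const mongoose = require("mongoose");

async function start() {
  // Await the connection so model queries are never buffered
  // against a connection that never came up
  await mongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true });
  console.log("MongoDB connected");
  // ...only now start the HTTP server and register routes
}

start().catch((err) => console.error("Connection failed:", err));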
| MongoDB | 65,408,618 | 85 |
I updated to MacOS 10.15 (Catalina) today. When I run mongod in the terminal it cannot find the /data/db directory:
➜ /Users/william > mongod
2019-10-08T17:02:44.183+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] MongoDB starting : pid=43162 port=27017 dbpath=/data/db 64-bit host=Williams-MacBook-Pro-6.local
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] db version v4.0.3
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] git version: 7ea530946fa7880364d88c8d8b6026bbc9ffa48c
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] allocator: system
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] modules: none
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] build environment:
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] distarch: x86_64
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] target_arch: x86_64
2019-10-08T17:02:44.209+0800 I CONTROL [initandlisten] options: {}
2019-10-08T17:02:44.211+0800 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
2019-10-08T17:02:44.211+0800 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2019-10-08T17:02:44.211+0800 I NETWORK [initandlisten] removing socket file: /tmp/mongodb-27017.sock
2019-10-08T17:02:44.211+0800 I CONTROL [initandlisten] now exiting
2019-10-08T17:02:44.211+0800 I CONTROL [initandlisten] shutting down with code:100
➜ /Users/william >
I tried to install MongoDB with brew:
brew install mongodb
➜ /Users/william > brew install mongodb
Updating Homebrew...
Error: mongodb: unknown version :mountain_lion
Any help?
| This is the main error:
exception in initAndListen: NonExistentPath: Data directory /data/db not found., terminating
Catalina has a surprise change: it won't allow changes to the root directory (this was discussed in a forum thread as well):
% sudo mkdir -p /data/db
mkdir: /data/db: Read-only file system
Unfortunately, this is not spelled out explicitly in Apple's Catalina release notes, other than a brief mention in Catalina features:
macOS Catalina runs in a dedicated, read-only system volume
Since the directory /data/db is coded as MongoDB default, a workaround is to specify a different dbpath that is not located on the root directory. For example:
mongod --dbpath ~/data/db
This will place MongoDB's data in your home directory. Just make sure that the path ~/data/db actually exists.
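For example, to create it first:
mkdir -p ~/data/db
mongod --dbpath ~/data/db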
Alternative method
An alternative method is to follow the instructions at Install MongoDB Community Edition on macOS by leveraging brew:
brew tap mongodb/brew
brew install mongodb-community
This will create some additional files by default:
the configuration file (/usr/local/etc/mongod.conf)
the log directory path (/usr/local/var/log/mongodb)
the data directory path (/usr/local/var/mongodb)
To run mongod you can either:
Run the command manually from the command line (this can be aliased for convenience):
mongod --config /usr/local/etc/mongod.conf
Run MongoDB as a service using brew services. Note that this will run MongoDB as a standalone node (not a replica set), so features that depends on the oplog e.g. changestreams will not work unless you modify the mongod configuration file:
brew services start mongodb-community
| MongoDB | 58,283,257 | 85 |
I have a Linode server running Ubuntu 12.04 LTS and MongoDB instance (service is running and CAN connect locally) that I can't connect to from an outside source.
I have added these two rules to my IP tables, where < ip address > is the server I want to connect FROM (as outlined in this MongoDB reference):
iptables -A INPUT -s < ip-address > -p tcp --destination-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -d < ip-address > -p tcp --source-port 27017 -m state --state ESTABLISHED -j ACCEPT
And I see the rule in my IP table allowing connections on 27017 to and from < ip address >, however when I try to connect from < ip address > to my mongo database using a command like this:
mongo databasedomain/databasename -u username -p password
I get this error:
2014-07-22T23:54:03.093+0000 warning: Failed to connect to databaseserverip:27017, reason: errno:111 Connection refused
2014-07-22T23:54:03.094+0000 Error: couldn't connect to server < ip address >:27017 (databaseserverip), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
Any help is VERY APPRECIATED!!!! Thanks!!!
| Thanks for the help everyone!
Turns out that it was an iptables conflict: two rules listing the port open (which resulted in a closed port).
However, one of the comments by aka and another by manu2013 were problems that I would have run into, if not for the conflict.
So! Always remember to edit the /etc/mongod.conf file and set your bind_ip = 0.0.0.0 in order to make connections externally.
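The relevant lines in the old-style config look something like this (newer YAML-style configs use net.bindIp instead):
# /etc/mongod.conf
bind_ip = 0.0.0.0   # listen on all interfaces; make sure iptables restricts access
port = 27017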
Also, make sure that you don't have conflicting rules in your iptable for the port mongo wants (see link on mongodb's site to set up your iptables properly).
| MongoDB | 24,899,849 | 85 |
I'm using a Node.js program to insert data into a MongoDB database. I have inserted data into a collection named "repl-failOver".
var mongoClient = require("mongodb").MongoClient;
mongoClient.connect("mongodb://localhost:30002/test", function(err, db) {
if (err) throw err;
db.collection("repl-failOver").insert( { "documentNumber" : document++}, function (err, doc) {
if (err) throw err;
console.log(doc);
});
db.close();
});
When I use the Mongo shell and list down the collections in the database using show collections I am able to see the collection "repl-failOver".
How do I run a find command from the mongo shell for this collection?
| Use this syntax:
db['repl-failOver'].find({})
or
db.getCollection('repl-failOver').find({})
You can find more information in the Executing Queries section of the manual:
If the mongo shell does not accept the name of the collection, for
instance if the name contains a space, hyphen, or starts with a
number, you can use an alternate syntax to refer to the collection, as
in the following:
db["3test"].find()
db.getCollection("3test").find()
| MongoDB | 24,711,939 | 85 |
I have gone through several articles and examples, and have yet to find an efficient way to do this SQL query in MongoDB (where there are millions of documents)
First attempt
(e.g. from this almost duplicate question - Mongo equivalent of SQL's SELECT DISTINCT?)
db.myCollection.distinct("myIndexedNonUniqueField").length
Obviously I got this error as my dataset is huge
Thu Aug 02 12:55:24 uncaught exception: distinct failed: {
"errmsg" : "exception: distinct too big, 16mb cap",
"code" : 10044,
"ok" : 0
}
Second attempt
I decided to try and do a group
db.myCollection.group({key: {myIndexedNonUniqueField: 1},
initial: {count: 0},
reduce: function (obj, prev) { prev.count++;} } );
But I got this error message instead:
exception: group() can't handle more than 20000 unique keys
Third attempt
I haven't tried yet but there are several suggestions that involve mapReduce
e.g.
this one how to do distinct and group in mongodb? (not accepted, answer author / OP didn't test it)
this one MongoDB group by Functionalities (seems similar to Second Attempt)
this one http://blog.emmettshear.com/post/2010/02/12/Counting-Uniques-With-MongoDB
this one https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/trDn3jJjqtE
this one http://cookbook.mongodb.org/patterns/unique_items_map_reduce/
Also
It seems there is a pull request on GitHub fixing the .distinct method to mention it should only return a count, but it's still open: https://github.com/mongodb/mongo/pull/34
But at this point I thought it's worth asking here: what is the latest on the subject? Should I move to SQL or another NoSQL DB for distinct counts? Or is there an efficient way?
Update:
This comment on the MongoDB official docs is not encouraging, is this accurate?
http://www.mongodb.org/display/DOCS/Aggregation#comment-430445808
Update2:
Seems the new Aggregation Framework answers the above comment... (MongoDB 2.1/2.2 and above, development preview available, not for production)
http://docs.mongodb.org/manual/applications/aggregation/
| 1) The easiest way to do this is via the aggregation framework. This takes two "$group" commands: the first one groups by distinct values, the second one counts all of the distinct values
pipeline = [
{ $group: { _id: "$myIndexedNonUniqueField"} },
{ $group: { _id: 1, count: { $sum: 1 } } }
];
//
// Run the aggregation command
//
R = db.runCommand(
{
"aggregate": "myCollection" ,
"pipeline": pipeline
}
);
printjson(R);
2) If you want to do this with Map/Reduce you can. This is also a two-phase process: in the first phase we build a new collection with a list of every distinct value for the key. In the second we do a count() on the new collection.
var SOURCE = db.myCollection;
var DEST = db.distinct
DEST.drop();
map = function() {
emit( this.myIndexedNonUniqueField , {count: 1});
}
reduce = function(key, values) {
var count = 0;
values.forEach(function(v) {
count += v['count']; // count each distinct value for lagniappe
});
return {count: count};
};
//
// run map/reduce
//
res = SOURCE.mapReduce( map, reduce,
{ out: 'distinct',
verbose: true
}
);
print( "distinct count= " + res.counts.output );
print( "distinct count=", DEST.count() );
Note that you cannot return the result of the map/reduce inline, because that will potentially overrun the 16MB document size limit. You can save the calculation in a collection and then count() the size of the collection, or you can get the number of results from the return value of mapReduce().
| MongoDB | 11,782,566 | 85 |
Is there any difference between using the field ID or _ID from a MongoDB document?
I am asking this, because I usually use "_id", however I saw this sort({id:-1}) in the documentation: http://www.mongodb.org/display/DOCS/Optimizing+Object+IDs#OptimizingObjectIDs-Sortbyidtosortbyinsertiontime
EDIT
Turns out the docs were wrong.
| I expect it's just a typo in the documentation. The _id field is primary key for every document. It's called _id and is also accessible via id. Attempting to use an id key may result in a illegal ObjectId format error.
That section is just indicating that the automatically generated ObjectIDs start with a timestamp so it's possible to sort your documents automatically. This is pretty cool since the _id is automatically indexed in every collection. See http://www.mongodb.org/display/DOCS/Object+IDs for more information. Specifically under "BSON ObjectID Specification".
A BSON ObjectID is a 12-byte value consisting of a 4-byte timestamp (seconds since epoch), a 3-byte machine id, a 2-byte process id, and a 3-byte counter. Note that the timestamp and counter fields must be stored big endian unlike the rest of BSON.
| MongoDB | 9,694,460 | 85 |
I've installed MongoDB v4.0 for its most amazing feature, transactions, in Node.js with mongodb 3.1 as the driver.
When I try to use a transaction session I've faced this error:
MongoError: Transaction numbers are only allowed on a replica set member or mongos.
What's that and how can I get rid of it?
| Transactions are undoubtedly the most exciting new feature in MongoDB 4.0. But unfortunately, most tools for installing and running MongoDB start a standalone server as opposed to a replica set. If you try to start a session on a standalone server, you'll get this error.
In order to use transactions, you need a MongoDB replica set, and starting a replica set locally for development is an involved process. The new run-rs npm module makes starting replica sets easy. Running run-rs is all you need to start a replica set, run-rs will even install the correct version of MongoDB for you.
Run-rs has no outside dependencies except Node.js and npm. You do not need to have Docker, homebrew, APT, Python, or even MongoDB installed.
Install run-rs globally with npm's -g flag. You can also list run-rs in your package.json file's devDependencies.
npm install run-rs -g
Next, run run-rs with the --version flag. Run-rs will download MongoDB v4.0.0 for you. Don't worry, it won't overwrite your existing MongoDB install.
run-rs -v 4.0.0 --shell
Then use replicaSet=rs in your connection string.
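A minimal sketch of opening a session against the replica set (the host list and the set name "rs" match run-rs defaults; adjust if yours differ):
const { MongoClient } = require("mongodb");

const uri = "mongodb://localhost:27017,localhost:27018,localhost:27019/test?replicaSet=rs";

MongoClient.connect(uri, { useNewUrlParser: true }, (err, client) => {
  if (err) throw err;
  const session = client.startSession();
  session.startTransaction();
  // ...pass { session } to your operations, then commit or abort
});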
You find more details about it here.
| MongoDB | 51,461,952 | 84 |
I have collection foo with documents like:
{site_id: 'xxx', title: {ru: 'a', en: 'b'}, content: {ru: 'a', en: 'b'}}
{site_id: 'xxx', title: {ru: 'c', de: 'd'}, content: {ru: 'c', de: 'd'}}
I need to update multiple fields which are can exists or not:
db.foo.update(
{ site_id: 'xxx'},
{ $set: {'title.de': '', 'content.de': ''}},
{multi: true}
)
But I need something like $set which will not overwrite the value if it exists.
| You can add a query to your update statement:
db.foo.update({'title.de': {$exists : false}}, {$set: {'title.de': ''}})
Update
For your modified question my solution looks like this - would that work for you? (If not, why?)
db.foo.update({site_id: 'xxx', 'title.de': {$exists : false}}, {$set: {'title.de': ''}}, {multi: true})
db.foo.update({site_id: 'xxx', 'content.de': {$exists : false}}, {$set: {'content.de': ''}}, {multi: true})
| MongoDB | 24,824,657 | 84 |
I want to get updated documents. This is my original code and it successfully updates but doesn't return the document.
collection.update({ "code": req.body.code },{$set: req.body.updatedFields}, function(err, results) {
res.send({error: err, affected: results});
db.close();
});
I used the toArray function, but this gave the error "Cannot use a writeConcern without a provided callback":
collection.update({ "code": req.body.code },{$set: req.body.updatedFields}).toArray( function(err, results) {
res.send({error: err, affected: results});
db.close();
});
Any ideas?
| collection.update() will only report the number of documents that were affected to its own callback.
To retrieve the documents while modifying, you can use collection.findOneAndUpdate() instead (formerly .findAndModify()).
collection.findOneAndUpdate(
{ "code": req.body.code },
{ $set: req.body.updatedFields },
{ returnOriginal: false },
function (err, documents) {
res.send({ error: err, affected: documents });
db.close();
}
);
The returnOriginal option (or new with Mongoose) lets you specify which version of a found document (original [default] or updated) is passed to the callback.
returnOriginal was deprecated in version 3.6. Use returnDocument: "before" | "after" for version 3.6 and later.
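On those newer driver versions the same call becomes something like this (a sketch — result.value holds the returned document):
collection.findOneAndUpdate(
  { "code": req.body.code },
  { $set: req.body.updatedFields },
  { returnDocument: "after" },
  function (err, result) {
    res.send({ error: err, affected: result.value });
    db.close();
  }
);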
Disclaimer: This answer currently refers to the Node.js Driver as of version 3.6. As new versions are released, check their documentation for possibly new deprecation warnings and recommended alternatives.
| MongoDB | 24,747,189 | 84 |
I am interested in optimizing a "pagination" solution I'm working on with MongoDB. My problem is straight forward. I usually limit the number of documents returned using the limit() functionality. This forces me to issue a redundant query without the limit() function in order for me to also capture the total number of documents in the query so I can pass to that to the client letting them know they'll have to issue an additional request(s) to retrieve the rest of the documents.
Is there a way to condense this into 1 query? Get the total number of documents but at the same time only retrieve a subset using limit()? Is there a different way to think about this problem than I am approaching it?
| Mongodb 3.4 has introduced $facet aggregation
which processes multiple aggregation pipelines within a single stage
on the same set of input documents.
Using $facet and $group you can find documents with $limit and can get total count.
You can use below aggregation in mongodb 3.4
db.collection.aggregate([
{ "$facet": {
"totalData": [
{ "$match": { }},
{ "$skip": 10 },
{ "$limit": 10 }
],
"totalCount": [
{ "$group": {
"_id": null,
"count": { "$sum": 1 }
}}
]
}}
])
Even you can use $count aggregation which has been introduced in mongodb 3.6.
You can use below aggregation in mongodb 3.6
db.collection.aggregate([
{ "$facet": {
"totalData": [
{ "$match": { }},
{ "$skip": 10 },
{ "$limit": 10 }
],
"totalCount": [
{ "$count": "count" }
]
}}
])
| MongoDB | 21,803,290 | 84 |
I've got a simple app set up that shows a list of Projects. I've removed the autopublish package so that I'm not sending everything to the client.
<template name="projectsIndex">
{{#each projects}}
{{name}}
{{/each}}
</template>
When autopublish was turned on, this would display all the projects:
if Meteor.isClient
Template.projectsIndex.projects = Projects.find()
With it removed, I have to additionally do:
if Meteor.isServer
Meteor.publish "projects", ->
Projects.find()
if Meteor.isClient
Meteor.subscribe "projects"
Template.projectsIndex.projects = Projects.find()
So, is it accurate to say that the client-side find() method only searches records which have been published from the server-side? It's been tripping me up because I felt like I should only be calling find() once.
| Collections, publications and subscriptions are a tricky area of Meteor, that the documentation could discuss in more detail, so as to avoid frequent confusion, which sometimes get amplified by confusing terminology.
Here's Sacha Greif (co-author of DiscoverMeteor) explaining publications and subscriptions in one slide:
To properly understand why you need to call find() more than once, you need to understand how collections, publications and subscriptions work in Meteor:
You define collections in MongoDB. No Meteor involved yet. These collections contain database records (also called "documents" by both Mongo and Meteor, but a "document" is more general than a database record; for instance, an update specification or a query selector are documents too - JavaScript objects containing field: value pairs).
Then you define collections on the Meteor server with
MyCollection = new Mongo.Collection('collection-name-in-mongo')
These collections contain all the data from the MongoDB collections, and you can run MyCollection.find({...}) on them, which will return a cursor (a set of records, with methods to iterate through them and return them).
This cursor is (most of the time) used to publish (send) a set of records (called a "record set"). You can optionally publish only some fields from those records. It is record sets (not collections) that clients subscribe to. Publishing is done by a publish function, which is called every time a new client subscribes, and which can take parameters to manage which records to return (e.g. a user id, to return only that user's documents).
On the client, you have Minimongo collections that partially mirror some of the records from the server. "Partially" because they may contain only some of the fields, and "some of the records" because you usually want to send to the client only the records it needs, to speed up page load, and only those it needs and has permission to access.
Minimongo is essentially an in-memory, non-persistent implementation of Mongo in pure JavaScript. It serves as a local cache that stores just the subset of the database that this client is working with. Queries on the client (find) are served directly out of this cache, without talking to the server.
These Minimongo collections are initially empty. They are filled by
Meteor.subscribe('record-set-name')
calls. Note that the parameter to subscribe isn't a collection name; it's the name of a record set that the server used in the publish call. The subscribe() call subscribes the client to a record set - a subset of records from the server collection (e.g. most recent 100 blog posts), with all or a subset of the fields in each record (e.g. only title and date). How does Minimongo know into which collection to place the incoming records? The name of the collection will be the collection argument used in the publish handler's added, changed, and removed callbacks, or if those are missing (which is the case most of the time), it will be the name of the MongoDB collection on the server.
Modifying records
This is where Meteor makes things very convenient: when you modify a record (document) in the Minimongo collection on the client, Meteor will instantly update all templates that depend on it, and will also send the changes back to the server, which in turn will store the changes in MongoDB and will send them to the appropriate clients that have subscribed to a record set including that document. This is called latency compensation and is one of the seven core principles of Meteor.
Multiple subscriptions
You can have a bunch of subscriptions that pull in different records, but they'll all end up in the same collection on the client if the came from the same collection on the server, based on their _id. This is not explained clearly, but implied by the Meteor docs:
When you subscribe to a record set, it tells the server to send records to the client. The client stores these records in local Minimongo collections, with the same name as the collection argument used in the publish handler's added, changed, and removed callbacks. Meteor will queue incoming attributes until you declare the Mongo.Collection on the client with the matching collection name.
What's not explained is what happens when you don't explicitly use added, changed and removed, or publish handlers at all - which is most of the time. In this most common case, the collection argument is (unsurprisingly) taken from the name of the MongoDB collection you declared on the server at step 1. But what this means is that you can have different publications and subscriptions with different names, and all the records will end up in the same collection on the client. Down to the level of top level fields, Meteor takes care to perform a set union among documents, such that subscriptions can overlap - publish functions that ship different top level fields to the client work side by side and on the client, the document in the collection will be the union of the two sets of fields.
Example: multiple subscriptions filling the same collection on the client
You have a BlogPosts collection, which you declare the same way on both the server and the client, even though it does different things:
BlogPosts = new Mongo.Collection('posts');
On the client, BlogPosts can get records from:
a subscription to the most recent 10 blog posts
// server
Meteor.publish('posts-recent', function publishFunction() {
return BlogPosts.find({}, {sort: {date: -1}, limit: 10});
}
// client
Meteor.subscribe('posts-recent');
a subscription to the current user's posts
// server
Meteor.publish('posts-current-user', function publishFunction() {
return BlogPosts.find({author: this.userId}, {sort: {date: -1}, limit: 10});
// this.userId is provided by Meteor - http://docs.meteor.com/#publish_userId
}
Meteor.publish('posts-by-user', function publishFunction(who) {
return BlogPosts.find({authorId: who._id}, {sort: {date: -1}, limit: 10});
}
// client
Meteor.subscribe('posts-current-user');
Meteor.subscribe('posts-by-user', someUser);
a subscription to the most popular posts
etc.
All these documents come from the posts collection in MongoDB, via the BlogPosts collection on the server, and end up in the BlogPosts collection on the client.
Now we can understand why you need to call find() more than once - the second time being on the client, because documents from all subscriptions will end up in the same collection, and you need to fetch only those you care about. For example, to get the most recent posts on the client, you simply mirror the query from the server:
var recentPosts = BlogPosts.find({}, {sort: {date: -1}, limit: 10});
This will return a cursor to all documents/records that the client has received so far, both the top posts and the user's posts. (thanks Geoffrey).
| MongoDB | 19,826,804 | 84 |
I believe this question is similar to this one but the terminology is different. From the Mongoose 4 documentation:
We may also define our own custom document instance methods too.
// define a schema
var animalSchema = new Schema({ name: String, type: String });
// assign a function to the "methods" object of our animalSchema
animalSchema.methods.findSimilarTypes = function (cb) {
return this.model('Animal').find({ type: this.type }, cb);
}
Now all of our animal instances have a findSimilarTypes method available to it.
And then:
Adding static methods to a Model is simple as well. Continuing with our animalSchema:
// assign a function to the "statics" object of our animalSchema
animalSchema.statics.findByName = function (name, cb) {
return this.find({ name: new RegExp(name, 'i') }, cb);
}
var Animal = mongoose.model('Animal', animalSchema);
Animal.findByName('fido', function (err, animals) {
console.log(animals);
});
It seems with static methods each of the animal instances would have the findByName method available to it as well. What are the statics and methods objects in a Schema? What is the difference and why would I use one over the other?
| statics are the methods defined on the Model. methods are defined on the document (instance).
You might use a static method like Animal.findByName:
const fido = await Animal.findByName('fido');
// fido => { name: 'fido', type: 'dog' }
And you might use an instance method like fido.findSimilarTypes:
const dogs = await fido.findSimilarTypes();
// dogs => [ {name:'fido',type:'dog} , {name:'sheeba',type:'dog'} ]
But you wouldn't do Animals.findSimilarTypes() because Animals is a model, it has no "type". findSimilarTypes needs a this.type which wouldn't exist in Animals model, only a document instance would contain that property, as defined in the model.
Similarly you wouldn't¹ do fido.findByName because findByName would need to search through all documents and fido is just a document.
¹Well, technically you can, because instance does have access to the collection (this.constructor or this.model('Animal')) but it wouldn't make sense (at least in this case) to have an instance method that doesn't use any properties from the instance. (thanks to @AaronDufour for pointing this out)
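For completeness, a sketch of the footnote's point — an instance method reaching the model through this.constructor:
// Equivalent to this.model('Animal') from inside a document instance
animalSchema.methods.findSimilarTypes = function (cb) {
  return this.constructor.find({ type: this.type }, cb);
};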
| MongoDB | 29,664,499 | 83 |
I have the following problem retrieving data from MongoDB using mongoose.
Here is my Schema:
const BookSchema = new Schema(
{
_id:Number,
title:String,
authors:[String],
subjects:[String]
}
);
As you can see I have 2 arrays embedded in the object; let's say the content of authors can be something like this: authors:["Alex Ferguson", "Didier Drogba", "Cristiano Ronaldo", "Alex"]
What I'm trying to achieve is to get all the "Alex" entries in the array.
So far, I've been able to get the values if they match the value completely. However if I try to get the ones containing Alex the answer is always [].
What I want to know is how I can do this using find() without performing a map-reduce to create a view or a collection and then applying find() over that.
The code here works for exact matches
Book.find( {authors:req.query.q} , function(errs, books){
if(errs){
res.send(errs);
}
res.json(books);
});
I tried some things but no luck
{authors:{$elemMatch:req.query.q}}
{authors:{$in:[req.query.q]}}
This one gives me an error, and on top of that it is said to be very inefficient in another post I found here.
{$where:this.authors.indexOf(req.query.q) != -1}
and I also tried {authors:{$regex:"./value/i"}}
The map-reduce works fine; I need to make it work using the other approach to see which one is better.
Any help is greatly appreciated. I'm sure this is easy, but I'm new to NodeJS and Mongo and I haven't been able to figure it out on my own.
| You almost answered this yourself in your tags. MongoDB has a $regex operator which allows a regular expression to be submitted as a query. So you query for strings containing "Alex" you do this:
Books.find(
{ "authors": { "$regex": "Alex", "$options": "i" } },
function(err,docs) {
}
);
You can also do this:
Books.find(
{ "authors": /Alex/i },
function(err,docs) {
}
);
Both are valid and different to how you tried in the correct supported syntax as shown in the documentation.
But of course if you are actually asking "how to get the 'array' results only for those that match 'Alex' somewhere in the string?" then this is a bit different.
Complex matching for more than one array element is the domain of the aggregation framework ( or possibly mapReduce, but that is much slower ), where you need to "filter" the array content.
You start off much the same. The key here is to $unwind to "de-normalize" the array content in order to be able to "filter" properly as individual documents. Then re-construct the array with the "matching" documents.
Books.aggregate(
[
// Match first to reduce documents to those where the array contains the match
{ "$match": {
"authors": { "$regex": "Alex", "$options": i }
}},
// Unwind to "de-normalize" the document per array element
{ "$unwind": "$authors" },
// Now filter those document for the elements that match
{ "$match": {
"authors": { "$regex": "Alex", "$options": i }
}},
// Group back as an array with only the matching elements
{ "$group": {
"_id": "$_id",
"title": { "$first": "$title" },
"authors": { "$push": "$authors" },
"subjects": { "$first": "$subjects" }
}}
],
function(err,results) {
}
)
| MongoDB | 26,814,456 | 83 |
I have two collections. The first collection contains students:
{ "_id" : ObjectId("51780f796ec4051a536015cf"), "name" : "John" }
{ "_id" : ObjectId("51780f796ec4051a536015d0"), "name" : "Sam" }
{ "_id" : ObjectId("51780f796ec4051a536015d1"), "name" : "Chris" }
{ "_id" : ObjectId("51780f796ec4051a536015d2"), "name" : "Joe" }
The second collection contains courses:
{
"_id" : ObjectId("51780fb5c9c41825e3e21fc4"),
"name" : "CS 101",
"students" : [
ObjectId("51780f796ec4051a536015cf"),
ObjectId("51780f796ec4051a536015d0"),
ObjectId("51780f796ec4051a536015d2")
]
}
{
"_id" : ObjectId("51780fb5c9c41825e3e21fc5"),
"name" : "Literature",
"students" : [
ObjectId("51780f796ec4051a536015d0"),
ObjectId("51780f796ec4051a536015d0"),
ObjectId("51780f796ec4051a536015d2")
]
}
{
"_id" : ObjectId("51780fb5c9c41825e3e21fc6"),
"name" : "Physics",
"students" : [
ObjectId("51780f796ec4051a536015cf"),
ObjectId("51780f796ec4051a536015d0")
]
}
Each course document contains a students array which has a list of students registered for the course. When a student views a course on a web page he needs to see whether he has already registered for the course or not. In order to do that, when the courses collection gets queried on the student's behalf, we need to find out if the students array already contains the student's ObjectId. Is there a way to specify, in the projection of a find query, to retrieve the student ObjectId from the students array only if it is there?
I tried to see if I could use the $elemMatch operator, but it is geared towards an array of sub-documents. I understand that I could use the aggregation framework, but it seems that it would be overkill in this case. The aggregation framework would probably not be as fast as a single find query. Is there a way to query the course collection so that the returned document could be in a form similar to this?
{
"_id" : ObjectId("51780fb5c9c41825e3e21fc4"),
"name" : "CS 101",
"students" : [
ObjectId("51780f796ec4051a536015d0"),
]
}
| [edit based on this now being possible in recent versions]
[Updated Answer] You can query the following way to get back the name of class and the student id only if they are already enrolled.
db.course.find({},
     {_id:0, name:1, students:{$elemMatch:{$eq:ObjectId("51780f796ec4051a536015cf")}}})
and you will get back what you expected:
{ "name" : "CS 101", "students" : [ ObjectId("51780f796ec4051a536015cf") ] }
{ "name" : "Literature" }
{ "name" : "Physics", "students" : [ ObjectId("51780f796ec4051a536015cf") ] }
[Original Answer] It's not possible to do what you want to do currently. This is unfortunate because you would be able to do this if the student was stored in the array as an object. In fact, I'm a little surprised you are using just ObjectId() as that will always require you to look up the students if you want to display a list of students enrolled in a particular course (look up list of Id's first then look up names in the students collection - two queries instead of one!)
If you were storing (as an example) an Id and name in the course array like this:
{
"_id" : ObjectId("51780fb5c9c41825e3e21fc6"),
"name" : "Physics",
"students" : [
{id: ObjectId("51780f796ec4051a536015cf"), name: "John"},
{id: ObjectId("51780f796ec4051a536015d0"), name: "Sam"}
]
}
Your query then would simply be:
db.course.find( { },
{ students :
{ $elemMatch :
{ id : ObjectId("51780f796ec4051a536015d0"),
name : "Sam"
}
}
}
);
If that student was only enrolled in CS 101 you'd get back:
{ "name" : "Literature" }
{ "name" : "Physics" }
{
"name" : "CS 101",
"students" : [
{
"id" : ObjectId("51780f796ec4051a536015cf"),
"name" : "John"
}
]
}
| MongoDB | 16,198,429 | 83 |
While using the MongoDB console or shell, sometimes my screen is cluttered with a lot of previous output which I do not need. So, is there a function to clear the console in MongoDB?
Something like an analog of clear in MatLab.
I have tried clc, clear but with no success.
| Judging by Jira they have done it: You need to write cls in the shell.
You can also use a hotkey: CTRL + L.
In macOS: Command + K.
Check more hotkeys for the shell here.
| MongoDB | 13,327,218 | 83 |
I have tried MongoMapper and it is feature complete (offering almost all AR functionality) but i was not very happy with the performance when using large datasets. Has anyone compared with Mongoid? Any performance gains ?
| I have used MongoMapper for awhile but decided to migrate to Mongoid. The reason is hidden issues plus arrogance towards users. I had to jump through hoops to make MongoMapper work with Cucumber (succeeded in the end) and to put in a couple of patches, even though the project was simple, but that's not the point. When I tried to submit a bug fix (due to incompatibility with ActiveRecord), they seemingly got pissed off that I found a problem and I was pushed around. While I was testing, I also encountered a major bug with their query implementation, while their testing was tuned in a way that the tests pass. After my previous experience, I didn't dare to submit it.
They have a significantly lower number of pull requests and bug/feature submissions than Mongoid, i.e. community participation is much lower. Same experience as mine?
I don't know which one has more features right now, but I don't see much future in MongoMapper. I don't mind fixing issues and adding functionality myself, but I do mind situations when they wouldn't fix bugs.
| MongoDB | 1,958,365 | 83 |
When I try to run mongod in the terminal I get this message:
2015-05-14T17:33:04.554+0700 I STORAGE [initandlisten] exception in initAndListen: 29 Data directory /data/db not found., terminating
2015-05-14T17:33:04.554+0700 I CONTROL [initandlisten] dbexit: rc: 100
and running mongo command :
MongoDB shell version: 3.0.3
connecting to: test
2015-05-14T17:34:26.679+0700 W NETWORK Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-05-14T17:34:26.681+0700 E QUERY Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
at connect (src/mongo/shell/mongo.js:179:14)
at (connect):1:6 at src/mongo/shell/mongo.js:179
exception: connect failed
I have tried to change permissions on /var/lib/mongodb and /var/log/mongodb, but it still doesn't work, and I tried to uninstall and install mongodb again, but the same problem still exists.
Can anyone help? Thanks
I'm using Ubuntu 14.04 LTS 64 bit
| MongoDB needs data directory to store data.
Default path is /data/db
When you start MongoDB engine, it searches this directory which is missing in your case. Solution is create this directory and assign rwx permission to user.
If you want to change the path of your data directory then you should specify it while starting mongod server like,
mongod --dbpath /data/<path> --port <port no>
This should help you start your mongod server with custom path and port.
| MongoDB | 30,235,200 | 82 |
Is there an easy way to "$push" all fields of a document?
For example:
Say I have a Mongo collection of books:
{author: "tolstoy", title:"war & peace", price:100, pages:800}
{author: "tolstoy", title:"Ivan Ilyich", price:50, pages:100}
I'd like to group them by author - for each author, list his entire book objects:
{ author: "tolstoy",
books: [
{author: "tolstoy", title:"war & peace", price:100, pages:800}
{author: "tolstoy", title:"Ivan Ilyich", price:50, pages:100}
]
}
I can achieve this by explicitly pushing all fields:
{$group: {
_id: "$author",
books:{$push: {author:"$author", title:"$title", price:"$price", pages:"$pages"}},
}}
But is there any shortcut, something in the lines of:
// Fictional syntax...
{$group: {
_id: "$author",
books:{$push: "$.*"},
}}
| You can use $$ROOT
{ $group : {
_id : "$author",
books: { $push : "$$ROOT" }
}}
Found here: how to use mongodb aggregate and retrieve entire documents
| MongoDB | 22,150,205 | 82 |
In my collection, documents contain keys like status and timestamp. When I want to find the latest ten documents, I write the following query
db.collectionsname.find().sort({"timestamp":-1}).limit(10)
This query gives me the results I want, but when I want to delete the latest ten documents, I wrote the following query
db.collectionsname.remove({"status":0},10).sort({"timestamp":-1})
but it shows the following error
TypeError: Cannot call method 'sort' of undefined
and again I wrote the same query as below
db.collectionsname.remove({"status":0},10)
It deletes only one document. So how can I write a query which deletes the ten latest documents, sorted on timestamp?
| You can't set a limit when using remove or findAndModify. So, if you want to precisely limit the number of documents removed, you'll need to do it in two steps.
db.collectionName.find({}, {_id : 1})
.limit(100)
.sort({timestamp:-1})
.toArray()
.map(function(doc) { return doc._id; }); // Pull out just the _ids
Then pass the returned _ids to the remove method:
db.collectionName.remove({_id: {$in: removeIdsArray}})
FYI: you cannot remove documents from a capped collection.
| MongoDB | 19,065,615 | 82 |
Is there a simple way to reset the data from a meteor deployed app?
So, for example, if I had deployed an app named test.meteor.com — how could I easily reset the data that has been collected by that app?
Locally I run meteor reset, but I am unsure of what to do in production.
| If you have your app with you you could do this in your project directory
meteor deploy test.meteor.com --delete
meteor deploy test.meteor.com
The first deletes the app so its all blank. The second deploys a fresh instance of it back.
| MongoDB | 15,583,107 | 82 |
So, I'm sure I'm missing something simple here, but when I run mongo as a daemon (using mongod --fork or just mongod), I see different database content than if I just run "mongo" on the host machine.
My only assumption is that the data is being stored somewhere other than /data/db when it's running just the shell, and switches to /data/db when I boot the mongod. In that case, how do I get at my data when running mongod?
| I think there is some confusion here.
mongod is the "Mongo Daemon" it's basically the host process for the database. When you start mongod you're basically saying "start the MongoDB process and run it in the background". mongod has several default parameters, such as storing data in /data/db and running on port 27017.
mongo is the command-line shell that connects to a specific instance of mongod. When you run mongo with no parameters it defaults to connecting to the localhost on port 27017. If you run mongo against an invalid machine:port combination then it will fail to connect (and tell you as much).
Ideally, when doing anything other than just "playing around", you'll use the Command Line Parameters for starting mongod. By the same measure you should start the mongo shell with explicit instructions.
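For example, an explicit mongod invocation might look like this (the paths are assumptions — use whatever locations suit your machine):
mongod --dbpath /data/db --port 27017 --logpath /var/log/mongodb/mongod.log --fork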
Based on your description, I think you may be encountering an issue regarding the use of default databases. Try starting mongo with the following (where dbname is your database name)
./mongo localhost:27017/dbname
| MongoDB | 4,883,045 | 82 |
I've seen many answers to this question here, but I still don't get it (maybe because they use more "complex" examples)...
So what I'm trying to do is a schema for a "Customer", which will have two fields with nested "subfields", and others that may repeat. Here is what I mean:
let customerModel = new Schema({
firstName: String,
lastName: String,
company: String,
contactInfo: {
tel: [Number],
email: [String],
address: {
city: String,
street: String,
houseNumber: String
}
}
});
tel and email might be arrays.
And address will not be repeated, but has some subfields as you can see.
How can I make this work?
| const mongoose = require("mongoose");
// Make connection
// https://mongoosejs.com/docs/connections.html#error-handling
mongoose.connect("mongodb://localhost:27017/test", {
useNewUrlParser: true,
useUnifiedTopology: true,
});
// Define schema
// https://mongoosejs.com/docs/models.html#compiling
const AddressSchema = mongoose.Schema({
city: String,
street: String,
houseNumber: String,
});
const ContactInfoSchema = mongoose.Schema({
tel: [Number],
email: [String],
address: {
type: AddressSchema,
required: true,
},
});
const CustomerSchema = mongoose.Schema({
firstName: String,
lastName: String,
company: String,
connectInfo: ContactInfoSchema,
});
const CustomerModel = mongoose.model("Customer", CustomerSchema);
// Create a record
// https://mongoosejs.com/docs/models.html#constructing-documents
const customer = new CustomerModel({
firstName: "Ashish",
lastName: "Suthar",
company: "BitOrbits",
connectInfo: {
tel: [8154080079, 6354492692],
email: ["[email protected]", "[email protected]"],
},
});
// Insert customer object
// https://mongoosejs.com/docs/api.html#model_Model-save
customer.save((err, cust) => {
if (err) return console.error(err);
// This will print inserted record from database
// console.log(cust);
});
// Display any data from CustomerModel
// https://mongoosejs.com/docs/api.html#model_Model.findOne
CustomerModel.findOne({ firstName: "Ashish" }, (err, cust) => {
if (err) return console.error(err);
// To print stored data
console.log(cust.connectInfo.tel[0]); // output 8154080079
});
// Update inner record
// https://mongoosejs.com/docs/api.html#model_Model.update
CustomerModel.updateOne(
{ firstName: "Ashish" },
{
$set: {
"connectInfo.tel.0": 8154099999,
},
}
);
| MongoDB | 39,596,625 | 81 |
I need to retrieve the entire single object hierarchy from the database as a JSON. Actually, the proposal about any other solution to achieve this result would be highly appreciated. I decided to use MongoDB with its $lookup support.
So I have three collections:
party
{ "_id" : "2", "name" : "party2" }
{ "_id" : "5", "name" : "party5" }
{ "_id" : "4", "name" : "party4" }
{ "_id" : "1", "name" : "party1" }
{ "_id" : "3", "name" : "party3" }
address
{ "_id" : "a3", "street" : "Address3", "party_id" : "2" }
{ "_id" : "a6", "street" : "Address6", "party_id" : "5" }
{ "_id" : "a1", "street" : "Address1", "party_id" : "1" }
{ "_id" : "a5", "street" : "Address5", "party_id" : "5" }
{ "_id" : "a2", "street" : "Address2", "party_id" : "1" }
{ "_id" : "a4", "street" : "Address4", "party_id" : "3" }
addressComment
{ "_id" : "ac2", "address_id" : "a1", "comment" : "Comment2" }
{ "_id" : "ac1", "address_id" : "a1", "comment" : "Comment1" }
{ "_id" : "ac5", "address_id" : "a5", "comment" : "Comment6" }
{ "_id" : "ac4", "address_id" : "a3", "comment" : "Comment4" }
{ "_id" : "ac3", "address_id" : "a2", "comment" : "Comment3" }
I need to retrieve all parties with all corresponding addresses and address comments as part of the record. My aggregation:
db.party.aggregate([{
$lookup: {
from: "address",
localField: "_id",
foreignField: "party_id",
as: "address"
}
},
{
$unwind: "$address"
},
{
$lookup: {
from: "addressComment",
localField: "address._id",
foreignField: "address_id",
as: "address.addressComment"
}
}])
The result is pretty weird. Some records are ok. But Party with _id: 4 is missing (there is no address for it). Also, there are two Party _id: 1 in the result set (but with different addresses):
{
"_id": "1",
"name": "party1",
"address": {
"_id": "2",
"street": "Address2",
"party_id": "1",
"addressComment": [{
"_id": "3",
"address_id": "2",
"comment": "Comment3"
}]
}
}{
"_id": "1",
"name": "party1",
"address": {
"_id": "1",
"street": "Address1",
"party_id": "1",
"addressComment": [{
"_id": "1",
"address_id": "1",
"comment": "Comment1"
},
{
"_id": "2",
"address_id": "1",
"comment": "Comment2"
}]
}
}{
"_id": "3",
"name": "party3",
"address": {
"_id": "4",
"street": "Address4",
"party_id": "3",
"addressComment": []
}
}{
"_id": "5",
"name": "party5",
"address": {
"_id": "5",
"street": "Address5",
"party_id": "5",
"addressComment": [{
"_id": "5",
"address_id": "5",
"comment": "Comment5"
}]
}
}{
"_id": "2",
"name": "party2",
"address": {
"_id": "3",
"street": "Address3",
"party_id": "2",
"addressComment": [{
"_id": "4",
"address_id": "3",
"comment": "Comment4"
}]
}
}
Please help me with this. I'm pretty new to MongoDB but I feel it can do what I need from it.
| The cause of your 'troubles' is the second aggregation stage - { $unwind: "$address" }. It removes the record for the party with _id: 4 (because its address array is empty, as you mention) and produces two records each for parties _id: 1 and _id: 5 (because each of them has two addresses).
To prevent the removal of parties without addresses you should set the preserveNullAndEmptyArrays option of the $unwind stage to true.
To prevent duplication of parties across their different addresses you should add a $group aggregation stage to your pipeline. Also, use a $project stage with the $filter operator to exclude empty address records from the output.
db.party.aggregate([{
$lookup: {
from: "address",
localField: "_id",
foreignField: "party_id",
as: "address"
}
}, {
$unwind: {
path: "$address",
preserveNullAndEmptyArrays: true
}
}, {
$lookup: {
from: "addressComment",
localField: "address._id",
foreignField: "address_id",
as: "address.addressComment",
}
}, {
$group: {
_id : "$_id",
name: { $first: "$name" },
address: { $push: "$address" }
}
}, {
$project: {
_id: 1,
name: 1,
address: {
$filter: { input: "$address", as: "a", cond: { $ifNull: ["$$a._id", false] } }
}
}
}]);
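With that pipeline a party without addresses now survives with an empty array; for _id: 4 the output should look roughly like this:
{ "_id" : "4", "name" : "party4", "address" : [ ] }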
| MongoDB | 36,019,713 | 81 |
When using a FindOne() using MongoDB and C#, is there a way to ignore fields not found in the object?
EG, example model.
public class UserModel
{
public ObjectId id { get; set; }
public string Email { get; set; }
}
Now we also store a password in the MongoDB collection, but do not want to bind it to our object above. When we do a Get like so,
var query = Query<UserModel>.EQ(e => e.Email, model.Email);
var entity = usersCollection.FindOne(query);
We get the following error
Element 'Password' does not match any field or property of class
Is there anyway to tell Mongo to ignore fields it cant match with the models?
| Yes. Just decorate your UserModel class with the BsonIgnoreExtraElements attribute:
[BsonIgnoreExtraElements]
public class UserModel
{
public ObjectId id { get; set; }
public string Email { get; set; }
}
As the name suggests, the driver would ignore any extra fields instead of throwing an exception. More information here - Ignoring Extra Elements.
| MongoDB | 23,448,634 | 81 |
I have a Mongo database that I did not create or architect, is there a good way to introspect the db or print out what the structure is to start to get a handle on what types of data are being stored, how the data types are nested, etc?
| Just query the database by running the following commands in the mongo shell:
use mydb //this switches to the database you want to query
show collections //this command will list all collections in the database
db.collectionName.find().pretty() //this will show all documents in the database in a readable format; do the same for each collection in the database
You should then be able to examine the document structure.
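If a collection is large, sampling a single document per collection is often enough to see its structure:
db.collectionName.findOne()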
| MongoDB | 14,713,179 | 81 |
How can I find all the objects in a database where a field of an object contains a substring?
If the field is A in an object of a collection with a string value:
I want to find all the objects in the db "database" where A contains a substring say "abc def".
I tried:
db.database.find({A: {$regex: '/^*(abc def)*$/'}})
but didn't work
UPDATE
A real string (in unicode):
Sujet Commentaire sur Star Wars Episode III - La Revanche des Sith 1
Need to search for all entries with Star Wars
db.test.find({A: {$regex: '^*(star wars)*$'}}) is not working
| Instead of this:
db.database.find({A: {$regex: '/^*(abc def)*$/'}})
You should do this:
db.database.find({A: /abc def/i })
^* is not actually valid syntax, as ^ and $ are anchors and not something that is repeatable; you probably meant ^.* here. But there is no need for ^.* anyway: it simply means "everything up to the following character", and (abc def)* means zero or more occurrences of "abc def", anchored to the end of the string by your $. The "i" at the end is what makes the match case insensitive.
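If the pattern comes from a variable (so the /abc def/i literal form is awkward), the equivalent $regex/$options syntax works too:
db.test.find({A: {$regex: 'star wars', $options: 'i'}})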
| MongoDB | 10,242,501 | 81 |
Campaign.find {client_id:req.param('client_id')}, (error, campaigns) ->
if error
response =
error: error.message
else
for campaign in campaigns
query =
campaign_id: campaign._id
console.log query
CampaignResponse.find query, (err, campaignResponsesCount) ->
console.log campaignResponsesCount
response = campaigns
res.json response
For some reason, this returns no results. However, there are items in CampaignResponse with that specific campaign._id. I'm pretty sure this is an issue with types and casting, but I can't figure out what to do.
Any help?
| A couple tips:
Try running the same query from mongodb at the command line, see if you get any results.
Is the "campaign_id" defined as an ObjectId in your schema? If so, try searching using the ObjectId type.
For example:
var ObjectId = require('mongoose').Types.ObjectId;
var query = { campaign_id: new ObjectId(campaign._id) };
| MongoDB | 7,878,557 | 81 |
I have a large CSV file containing a list of stores, in which one of the field is ZipCode.
I have a separate MongoDB database called ZipCodes, which stores the latitude and longitude for any given zip code.
In SQL Server, I would execute a stored procedure called InsertStore which would do a look up on the ZipCodes table to get corresponding latitude and longitude and insert the data into the Stores table.
Is there something similar to the concept of stored procedures in MongoDB for this?
Basically, for each insert I need to look up the latitude and longitude for that store and save that as well.
I am not too familiar with the concept of Map/Reduce, but would that be relevant here?
Thank you!
| The closest thing to an equivalent of a stored procedure in mongodb is stored javascript. A good introduction to stored javascript is available in this article on Mike Dirolf's blog.
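For illustration, a minimal stored-javascript sketch in the shell (the function name is made up; note that system.js functions and db.eval were deprecated and removed in much later MongoDB versions):
db.system.js.save({
    _id: "addNumbers",
    value: function (x, y) { return x + y; }
});
db.eval("return addNumbers(3, 4);"); // 7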
| MongoDB | 3,876,049 | 81 |
I've googled around but couldn't find a working MongoDB viewer or data browser.
An ideal (for my needs) tool would be a web based viewer with dead simple features (browsing and doing queries).
| You have :
Mongo3
futon4mongo
| MongoDB | 2,411,087 | 81 |
I'm trying to determine the best way to deal with a composite primary key in a mongo db. The main key for interacting with the data in this system is made up of 2 uuids. The combination of uuids is guaranteed to be unique, but neither of the individual uuids is.
I see a couple of ways of managing this:
Use an object for the primary key that is made up of 2 values (as suggested here)
Use a standard auto-generated mongo object id as the primary key, store my key in two separate fields, and then create a composite index on those two fields
Make the primary key a hash of the 2 uuids
Some other awesome solution that I currently am unaware of
What are the performance implications of these approaches?
For option 1, I'm worried about the insert performance due to having non sequential keys. I know this can kill traditional RDBMS systems and I've seen indications that this could be true in MongoDB as well.
For option 2, it seems a little odd to have a primary key that would never be used by the system. Also, it seems that query performance might not be as good as in option 1. In a traditional RDBMS a clustered index gives the best query results. How relevant is this in MongoDB?
For option 3, this would create one single id field, but again it wouldn't be sequential when inserting. Are there any other pros/cons to this approach?
For option 4, well... what is option 4?
Also, there's some discussion of possibly using CouchDB instead of MongoDB at some point in the future. Would using CouchDB suggest a different solution?
MORE INFO: some background about the problem can be found here
| You should go with option 1.
The main reason is that you say you are worried about performance - using the _id index which is always there and already unique will allow you to save having to maintain a second unique index.
For option 1, I'm worried about the insert performance due to having
non sequential keys. I know this can kill traditional RDBMS systems
and I've seen indications that this could be true in MongoDB as well.
Your other options do not avoid this problem, they just shift it from the _id index to the secondary unique index - but now you have two indexes: one that's right-balanced and the other one that's random access.
There is only one reason to question option 1 and that is if you plan to access the documents by just one or just the other UUID value. As long as you are always providing both values and (this part is very important) you always order them the same way in all your queries, then the _id index will be efficiently serving its full purpose.
As an elaboration on why you have to make sure you always order the two UUID values the same way, when comparing subdocuments { a:1, b:2 } is not equal to { b:2, a:1 } - you could have a collection where two documents had those values for _id. So if you store _id with field a first, then you must always keep that order in all of your documents and queries.
The other caution is that index on _id:1 will be usable for query:
db.collection.find({_id:{a:1,b:2}})
but it will not be usable for query
db.collection.find({"_id.a":1, "_id.b":2})
| MongoDB | 23,164,417 | 80 |
I've a mongodb collection in this form:
{id=ObjectId(....),key={dictionary of values}}
where dictionary of values is {'a':'1','b':'2'.....}
Let dictionary of values be 'd'.
I need to update the values of the key in the 'd'.
i.e I want to change 'a':'1' to 'a':'2'
How can do I this in pymongo?
Code goes something like this:
productData is a collection in mongoDB
for p in productData.find():
for k,v in p.iteritems():
value=v['a']
value=value+1
v['a']=value
Now reflect the new value in the productData.
This is what I've tried, but it introduces a new key-value pair instead of updating the existing one:
for p in productData.find():
for k,v in p.iteritems():
value=v['a']
value=value+1
v['a']=value
productData.update({'_id':mongoId},{"$set":{'d.a':'100'}},upsert=False)
| You can use the $set syntax if you want to set the value of a document to an arbitrary value. This will either update the value if the attribute already exists on the document or create it if it doesn't. If you need to set a single value in a dictionary like you describe, you can use the dot notation to access child values.
If p is the object retrieved:
existing = p['d']['a']
For pymongo versions < 3.0
db.ProductData.update({
'_id': p['_id']
},{
'$set': {
'd.a': existing + 1
}
}, upsert=False, multi=False)
For pymongo versions >= 3.0
db.ProductData.update_one({
'_id': p['_id']
},{
'$set': {
'd.a': existing + 1
}
}, upsert=False)
However if you just need to increment the value, this approach could introduce issues when multiple requests could be running concurrently. Instead you should use the $inc syntax:
For pymongo versions < 3.0:
db.ProductData.update({
'_id': p['_id']
},{
'$inc': {
'd.a': 1
}
}, upsert=False, multi=False)
For pymongo versions >= 3.0:
db.ProductData.update_one({
'_id': p['_id']
},{
'$inc': {
'd.a': 1
}
}, upsert=False)
This ensures your increments will always happen.
| MongoDB | 13,710,770 | 80 |
I'm asking this because when I write unit tests, I want to drop the test database, insert some initial data, and also check the data in mongodb during testing. So I need raw operations on mongodb.
How can I do this in mongoose? All I can do now is create the connection; I could not find any documentation for this on mongoose's official site.
var mongoose = require('mongoose');
mongoose.connect('mongo://localhost/shuzu_test');
// get the connection
var conn = mongoose.connection;
But how to:
drop the database
create a collection
write some data to a collection
query a collection
drop a collection
| You can run mongodb commands using the native NodeJS driver by using mongoose.connection.db. This accesses the NodeJS MongoDB driver, and you don't need to create a mongoose model.
An insert
mongoose.connection.db.collection('userCollection').insert({
username: 'captain1',
firstName: 'Steve',
lastName: 'Rogers',
});
An update
mongoose.connection.db.collection('userCollection').update(
{someFilterProperty: true},
{$set: {
siteId: new mongoose.mongo.ObjectId('56cb91bdc5946f14678934ba'),
hasNewSiteId: true}},
  {multi: true});
You can send every command specific to that database using the database connection db reference mongoose.connection.db.
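For instance, the other operations from the question follow the same pattern; here is a sketch using standard native-driver methods:
// Drop the whole database
mongoose.connection.db.dropDatabase();

// Drop a single collection
mongoose.connection.db.dropCollection('userCollection');

// Query a collection
mongoose.connection.db.collection('userCollection')
  .find({username: 'captain1'})
  .toArray(function (err, docs) {
    // docs is an array of matching documents
  });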
This is the mongoose API doc: http://mongoosejs.com/docs/api.html#connection_Connection-db
Important: Note some of the options in the NodeJS driver are different than the options in MongoDB shell commands. For example findOneAndUpdate() uses returnOriginal instead of returnNewDocument. See here and here for more on this.
| MongoDB | 10,519,432 | 80 |
How do I get the timestamp from the MongoDB id?
| The timestamp is contained in the first 4 bytes of a mongoDB id (see: http://www.mongodb.org/display/DOCS/Object+IDs).
So your timestamp is:
timestamp = _id.toString().substring(0,8)
and
date = new Date( parseInt( timestamp, 16 ) * 1000 )
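In the mongo shell there is also a built-in helper that does this in one step (the collection name is illustrative):
db.collection.findOne()._id.getTimestamp() // returns an ISODate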
| MongoDB | 6,452,021 | 80 |
I have my json_file.json like this:
[
{
"project": "project_1",
"coord1": 2,
"coord2": 10,
"status": "yes",
"priority": 7
},
{
"project": "project_2",
"coord1": 2,
"coord2": 10,
"status": "yes",
"priority": 7
},
{
"project": "project_3",
"coord1": 2,
"coord2": 10,
"status": "yes",
"priority": 7
}
]
When I run the following command to import this into mongodb:
mongoimport --db my_db --collection my_collection --file json_file.json
I get the following error:
Failed: error unmarshaling bytes on document #0: JSON decoder out of sync - data changing underfoot?
If I add the --jsonArray flag to the command I import like this:
imported 3 documents
instead of one document with the json format as shown in the original file.
How can I import json into mongodb with the original format in the file shown above?
| The mongoimport tool has an option:
--jsonArray treat input source as a JSON array
Or it is possible to import from file containing same data format as the result of db.collection.find() command. Here is example from university.mongodb.com courseware some content from grades.json:
{ "_id" : { "$oid" : "50906d7fa3c412bb040eb577" }, "student_id" : 0, "type" : "exam", "score" : 54.6535436362647 }
{ "_id" : { "$oid" : "50906d7fa3c412bb040eb578" }, "student_id" : 0, "type" : "quiz", "score" : 31.95004496742112 }
{ "_id" : { "$oid" : "50906d7fa3c412bb040eb579" }, "student_id" : 0, "type" : "homework", "score" : 14.8504576811645 }
As you can see, no array used and no comma delimiters between documents either.
I discovered recently that this complies with the JSON Lines text format,
the same format used by the apache.spark.sql.DataFrameReader.json() method.
Side note:
$ python -m json.tool --sort-keys --json-lines < data.jsonl
also can handle this format
see demo and details here
| MongoDB | 30,380,751 | 79 |
I know that MongoDB supports the syntax find{array.0.field:"value"}, but I specifically want to do this for the last element in the array, which means I don't know the index. Is there some kind of operator for this, or am I out of luck?
EDIT: To clarify, I want find() to only return documents where a field in the last element of an array matches a specific value.
| In 3.2 this is possible. First project so that myField contains only the last element, and then match on myField.
db.collection.aggregate([
{ $project: { _id: 1, myField: { $slice: [ "$myField", -1 ] } } },
{ $match: { myField: "myValue" } }
]);
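Since 3.2 also provides $arrayElemAt, a sketch of an equivalent pipeline that projects the last element itself rather than a one-element array:
db.collection.aggregate([
  { $project: { _id: 1, myField: { $arrayElemAt: [ "$myField", -1 ] } } },
  { $match: { myField: "myValue" } }
]);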
| MongoDB | 28,680,295 | 79 |
Trying to create a MongoDB data source with icCube. The idea is to return the size of an array as a new field. Something like :
$project:
{
"people": 1,
"Count myFieldArray" : {$size : "$myFieldArray" }
}
But I'm getting for some records the following error :
The argument to $size must be an Array, but was of type: EOO
Is there a way that size is 0 if the field is empty or not an array (getting rid of the error) ?
| You can use the $ifNull operator here. It seems the field is either not an array or not present by the given error:
{ "$project": {
"people": 1,
"Count": {
"$size": { "$ifNull": [ "$myFieldArray", [] ] }
}
}}
Also you might want to check for the $type in your $match in case these do exist but are not an array.
| MongoDB | 24,201,120 | 79 |
I'm aware of the $addToSet method for MongoDB, but I can't find a "remove" equivalent anywhere in the docs.
What's the best way to achieve this? Trying to achieve something like the following:
obj = {
name: 'object1',
tags: ['fus', 'ro', 'dah']
}
db.collection.update({
name: 'object1'
}, {
$removeFromSet: {
tags: 'dah'
}
});
| I think you are looking for $pull, which "removes all instances of a value from an existing array".
db.collection.update(
{name: 'object1'},
{$pull: { tags: 'dah'}});
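After that update, the example document from the question would look like this:
{ name: 'object1', tags: ['fus', 'ro'] }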
| MongoDB | 18,395,412 | 79 |
I'm new to MongoDB. I currently have a dump of a mongo db (i.e. directory of .bson files) and am trying to import that into mongo.
I installed mongo as per the instructions on http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/.
I'm currently trying to test starting a local mongo instance by running mongod --dbpath /path/to/my/mongodata (which is an empty directory).
I get the following in stdout:
Thu Sep 20 09:46:01 [initandlisten] MongoDB starting : pid=1065 port=27017 dbpath=/path/to/my/mongodata/ 64-bit host=dhcp-18-111-28-92.dyn.mit.edu
Thu Sep 20 09:46:01 [initandlisten]
Thu Sep 20 09:46:01 [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 266 processes, 2560 files. Number of processes should be at least 1280 : 0.5 times number of files.
Thu Sep 20 09:46:01 [initandlisten] db version v2.2.0, pdfile version 4.5
Thu Sep 20 09:46:01 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
Thu Sep 20 09:46:01 [initandlisten] build info: Darwin bs-osx-106-x86-64-1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49
Thu Sep 20 09:46:01 [initandlisten] options: { dbpath: "/path/to/my/mongodata/" }
Thu Sep 20 09:46:01 [initandlisten] journal dir=/path/to/my/mongodata/journal
Thu Sep 20 09:46:01 [initandlisten] recover : no journal files present, no recovery needed
Thu Sep 20 09:46:01 [websvr] admin web console waiting for connections on port 28017
Thu Sep 20 09:46:01 [initandlisten] waiting for connections on port 27017
At this point, it just hangs there and does nothing. Seems like it's waiting for something to happen on localhost, but I don't know mongo well enough to understand what's going on. Any help?
| There is nothing wrong: you have started the server, and it is running and listening on port 27017. Now you can start to interact with the server. For example, just open a new terminal tab and run mongo, which will open mongo's interactive console and connect to the default server (localhost:27017).
If you want to run mongod as a background process (to get back the console) you can use the --fork command option. This requires you to specify some form of logging.
Eg. mongod --dbpath /path/to/my/mongodata --fork --logpath /path/to/my/mongod.log
If you want to restore a BSON export (a dump) you will probably use the mongorestore command.
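For example, to restore a dump directory produced by mongodump (paths are illustrative):
mongorestore --db mydb /path/to/dump/mydb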
| MongoDB | 12,514,119 | 79 |
The list of MongoDB GUI client apps on the official site is outdated: some clients are not supported, some are heavily bound to .NET and not runnable on Linux. And all of them lack the ability to edit stored documents (i.e. provide read-only access).
I need a GUI client that:
Works on Linux (but not web);
Is free;
Supports documents editing.
Is there an app which satisfies these requirements?
| Robomongo - cross-platform MongoDB GUI client.
Update: Mac OS X and Linux (as Debian/Ubuntu and RHEL/CentOS packages) versions released.
Update: Robomongo officially changed its name and released two different products, Studio 3T and Robo 3T. The old Robomongo is now called Robo 3T; Studio 3T is aimed at professionals.
Update: from October 2022 Robo 3T is named Studio 3T Free.
| MongoDB | 10,227,664 | 79 |
Anyone has experiences with MongoKit, MongoEngine or Flask-MongoAlchemy for Flask?
Which one do you prefer? Positive or negative experiences?. Too many options for a Flask-Newbie.
| I have invested a lot of time evaluating the popular Python ORMs for MongoDB. This was an exhaustive exercise, as I really wanted to pick one.
My conclusion is that an ORM removes the fun out of MongoDB. None feels natural, they impose restrictions similar to the ones which made me move away from relational databases in the first place.
Again, I really wanted to use an ORM, but now I am convinced that using pymongo directly is the way to go. Now, I follow a pattern which embraces MongoDB, pymongo, and Python.
A Resource Oriented Architecture leads to very natural representations. For instance, take the following User resource:
from werkzeug.wrappers import Response
from werkzeug.exceptions import NotFound
import pymongo

Users = pymongo.Connection("localhost", 27017)["mydb"]["users"]
class User(Resource):
def GET(self, request, username):
spec = {
"_id": username,
"_meta.active": True
}
# this is a simple call to pymongo - really, do
# we need anything else?
doc = Users.find_one(spec)
if not doc:
return NotFound(username)
payload, mimetype = representation(doc, request.accept)
return Response(payload, mimetype=mimetype, status=200)
def PUT(self, request, username):
spec = {
"_id": username,
"_meta.active": True
}
operation = {
"$set": request.json,
}
        # find_and_modify returns the updated document when new=True
        # (Collection.update does not have a "new" option)
        doc = Users.find_and_modify(spec, operation, new=True)
if not doc:
return NotFound(username)
payload, mimetype = representation(doc, request.accept)
return Response(payload, mimetype=mimetype, status=200)
The Resource base class looks like
class Resource(object):
def GET(self, request, **kwargs):
return NotImplemented()
def HEAD(self, request, **kwargs):
return NotImplemented()
def POST(self, request, **kwargs):
return NotImplemented()
def DELETE(self, request, **kwargs):
return NotImplemented()
def PUT(self, request, **kwargs):
return NotImplemented()
def __call__(self, request, **kwargs):
handler = getattr(self, request.method)
return handler(request, **kwargs)
Notice that I use the WSGI spec directly, and leverage Werkzeug where possible (by the way, I think that Flask adds an unnecessary complication to Werkzeug).
The function representation takes the request's Accept headers, and produces a suitable representation (for example, application/json, or text/html). It is not difficult to implement. It also adds the Last-Modified header.
Of course, your input needs to be sanitized, and the code, as presented, will not work (I mean it as an example, but it is not difficult to understand my point).
Again, I tried everything, but this architecture made my code flexible, simple, and extensible.
| MongoDB | 9,447,629 | 79 |
What's the best practice (or tool) for updating/migrating Mongoose schemas as the application evolves?
| It's funny though, MongoDB was born to respond to the schema problems in RDBMS. You don't have to migrate anything, all you have to do is set the default value in the schema definition if the field is required.
new Schema({
name: { type: string }
})
to:
new Schema({
name: { type: string },
birthplace: { type: string, required: true, default: 'neverborn' }
});
| MongoDB | 7,617,002 | 79 |
I'm running an update on my MongoDB from Python. I have this line:
self.word_counts[source].update({'date':posttime},{"$inc" : words},{'upsert':True})
But it throws this error:
raise TypeError("upsert must be an instance of bool")
But True looks like an instance of bool to me!
How should I correctly write this update?
| The third argument to PyMongo's update() is upsert and must be passed a boolean, not a dictionary. Change your code to:
self.word_counts[source].update({'date':posttime}, {"$inc" : words}, True)
Or pass upsert=True as a keyword argument:
self.word_counts[source].update({'date':posttime}, {"$inc" : words}, upsert=True)
Your mistake was likely caused by reading about update() in the MongoDB docs. The JavaScript version of update takes an object as its third argument containing optional parameters like upsert and multi. But since Python allows passing keyword arguments to a function (unlike JavaScript which only has positional arguments), this is unnecessary and PyMongo takes these options as optional function parameters instead.
| MongoDB | 5,055,797 | 79 |
What are different ways to insert a document(record) into MongoDB using Mongoose?
My current attempt:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var notificationsSchema = mongoose.Schema({
"datetime" : {
type: Date,
default: Date.now
},
"ownerId":{
type:String
},
"customerId" : {
type:String
},
"title" : {
type:String
},
"message" : {
type:String
}
});
var notifications = module.exports = mongoose.model('notifications', notificationsSchema);
module.exports.saveNotification = function(notificationObj, callback){
//notifications.insert(notificationObj); won't work
//notifications.save(notificationObj); won't work
notifications.create(notificationObj); //work but created duplicated document
}
Any idea why insert and save doesn't work in my case? I tried create, it inserted 2 document instead of 1. That's strange.
| The .save() is an instance method of the model, while the .create() is called directly from the Model as a method call, being static in nature, and takes the object as a first parameter.
var mongoose = require('mongoose');
var notificationSchema = mongoose.Schema({
"datetime" : {
type: Date,
default: Date.now
},
"ownerId":{
type:String
},
"customerId" : {
type:String
},
"title" : {
type:String
},
"message" : {
type:String
}
});
var Notification = mongoose.model('Notification', notificationSchema);
function saveNotification1(data) {
var notification = new Notification(data);
notification.save(function (err) {
if (err) return handleError(err);
// saved!
})
}
function saveNotification2(data) {
Notification.create(data, function (err, small) {
if (err) return handleError(err);
// saved!
})
}
Export whatever functions you would want outside.
More at the Mongoose Docs, or consider reading the reference of the Model prototype in Mongoose.
| MongoDB | 38,290,684 | 78 |
Let us have a MongoDB collection which has three docs..
db.collection.find()
{ _id:'...', user: 'A', title: 'Physics', Bank: 'Bank_A' }
{ _id:'...', user: 'A', title: 'Chemistry', Bank: 'Bank_B' }
{ _id:'...', user: 'B', title: 'Chemistry', Bank: 'Bank_A' }
We have a doc,
doc = { user: 'B', title: 'Chemistry', Bank:'Bank_A' }
If we use
db.collection.insert(doc)
here, this duplicate doc will get inserted in database.
{ _id:'...', user: 'A', title: 'Physics', Bank: 'Bank_A' }
{ _id:'...', user: 'A', title: 'Chemistry', Bank: 'Bank_B' }
{ _id:'...', user: 'B', title: 'Chemistry', Bank: 'Bank_A' }
{ _id:'...', user: 'B', title: 'Chemistry', Bank: 'Bank_A' }
How this duplicate can be stopped. On which field should indexing be done or any other approach?
| Don't use insert.
Use update with upsert=true. Update will look for the document that matches your query, then it will modify the fields you want and then, you can tell it upsert:True if you want to insert if no document matches your query.
db.collection.update(
<query>,
<update>,
{
upsert: <boolean>,
multi: <boolean>,
writeConcern: <document>
}
)
So, for your example, you could use something like this:
db.collection.update(doc, doc, {upsert:true})
| MongoDB | 24,122,981 | 78 |
To modify a field in an existing entry in mongoose, what is the difference between using
model = new Model([...])
model.field = 'new value';
model.save();
and this
Model.update({[...]}, {$set: {field: 'new value'});
The reason I'm asking this question is because of someone's suggestion to an issue I posted yesterday: NodeJS and Mongo - Unexpected behaviors when multiple users send requests simultaneously. The person suggested to use update instead of save, and I'm not yet completely sure why it would make a difference.
Thanks!
| Two concepts first. Your application is the Client, Mongodb is the Server.
The main difference is that with .save() you already have an object in your client side code or had to retrieve the data from the server before you are writing it back, and you are writing back the whole thing.
On the other hand .update() does not require the data to be loaded to the client from the server. All of the interaction happens server side without retrieving to the client.So .update() can be very efficient in this way when you are adding content to existing documents.
In addition, there is the multi parameter to .update() that allows the actions to be performed on more than one document that matches the query condition.
There are some things in convenience methods that you lose when using .update() as a call, but the benefits for certain operations is the "trade-off" you have to bear. For more information on this, and the options available, see the documentation.
In short .save() is a client side interface, .update() is server side.
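A minimal side-by-side sketch (model and field names are illustrative):
// .save(): load the document to the client, modify it, write the whole thing back
Model.findById(id, function (err, doc) {
  doc.field = 'new value';
  doc.save();
});

// .update(): the modification happens entirely on the server
Model.update({_id: id}, {$set: {field: 'new value'}}, function (err, raw) {
  // raw is the server's response
});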
| MongoDB | 22,278,761 | 78 |
How do I show the current user that I'm logged into the mongo shell as? This is useful to know because it is possible to change the user that you are logged in as—e.g. db.auth("newuser", "password")—while in the interactive shell. One can easily lose track.
Update
Using the accepted answer as a base, I changed the prompt to include user, connection, and db:
Edit .mongorc.js in your home directory.
function prompt() {
var username = "anon";
var user = db.runCommand({connectionStatus : 1}).authInfo.authenticatedUsers[0];
var host = db.getMongo().toString().split(" ")[2];
var current_db = db.getName();
if (!!user) {
username = user.user;
}
return username + "@" + host + ":" + current_db + "> ";
}
Result:
MongoDB shell version: 2.4.8
connecting to: test
[email protected]:test> use admin
switched to db admin
[email protected]:admin> db.auth("a_user", "a_password")
1
[email protected]:admin>
| The connectionStatus command shows authenticated users (if any, among some other data):
db.runCommand({connectionStatus : 1})
Which results in something like bellow:
{
"authInfo" : {
"authenticatedUsers" : [
{
"user" : "aa",
"userSource" : "test"
}
]
},
"ok" : 1
}
So if you are connecting from the shell, this is basically the current user
You can also add the user name to prompt by overriding the prompt function in .mongorc.js file, under OS user home directory. Roughly:
prompt = function() {
user = db.runCommand({connectionStatus : 1}).authInfo.authenticatedUsers[0]
if (user) {
return "user: " + user.user + ">"
}
return ">"
}
An example:
$ mongo -u "cc" -p "dd"
MongoDB shell version: 2.4.8
connecting to: test
user: cc>db.auth("aa", "bb")
1
user: aa>
| MongoDB | 21,414,608 | 78 |
For example I have the following data in MongoDB:
{ "_id" : ObjectId("524091f99c49c4c3f66b0e46"), "hour" : 10, "incoming", 100}
{ "_id" : ObjectId("5240a045dbeff33c7333aa51"), "hour" : 11, "incoming", 200}
{ "_id" : ObjectId("5240a2ecda0d37f35c618aca"), "hour" : 12, "incoming", 300}
Now I want to query "SUM the number of incoming between 11 - 12" (the result should be 500), how could I do this using Mongo Shell?
| As llovet suggested, the aggregation framework is the way to go. Here's what your query would look like:
db.CollectionNameGoesHere.aggregate({ $match: {
$and: [
{ hour: { $gte: 11 } },
{ hour: { $lte: 12 } }
]
} },
{ $group: { _id : null, sum : { $sum: "$incoming" } } });
You can also shape the resulting document to only contain the sum by adding a $project operator at the end of the pipeline, like so:
{ $project: { _id: 0, sum: 1 } }
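As a side note, the two range conditions can also be combined into a single expression on the field, which reads a little more naturally:
{ $match: { hour: { $gte: 11, $lte: 12 } } }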
| MongoDB | 18,969,916 | 78 |
My Ubuntu computer had crashed, and when I restarted it MongoDB wasn't working. I tried the following commands, and got the following output:
$ mongo
Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
exception: connect failed
$ service mongodb status
mongodb stop/waiting
$ service mongodb restart
stop: Unknown instance:
start: Rejected send message, 1 matched rules; type="method_call",
sender=":1.57" (uid=1000 pid=2227 comm="start mongodb ")
interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)"
requested_reply="0"
destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
$ tail /var/log/mongodb/mongodb.log
[initandlisten] exception in initAndListen: 12596 old lock file, terminating
dbexit:
[initandlisten] shutdown: going to close listening sockets...
[initandlisten] shutdown: going to flush diaglog...
[initandlisten] shutdown: going to close sockets...
[initandlisten] shutdown: waiting for fs preallocator...
[initandlisten] shutdown: closing all files...
[initandlisten] closeAllFiles() finished
dbexit: really exiting now
(Output reformatted to match website layout.)
What happened? How can I fix it?
| The log file is telling you that you have an "old lock file". MongoDB keeps a lock file while it's running. It creates this file when it is started, and deletes it when it's stopped. When the computer crashes (or MongoDB crashes, e.g. via kill), this file is not deleted, and thus the database does not start. The existence of this file indicates unclean shutdown of MongoDB.
Two things can be done:
If this is a development machine and you haven't been using your database (and neither have your programs), you can remove the file manually. For MongoDB 2.2.2 running on Ubuntu 12.10, it's in /var/lib/mongodb/mongod.lock. For other versions, the file could be in a different path or it could be named mongo.lock.
The safer route is to follow MongoDB's Durability and Repair guide. In summary, for a machine with the above configuration, you should execute the following commands:
sudo -u mongodb mongod --repair --dbpath /var/lib/mongodb/
sudo service mongod start
| MongoDB | 13,700,261 | 78 |
My database of choice is MongoDB. I'm writing a data-layer API to abstract implementation details from client applications - that is, I'm essentially providing a single public interface (an object which acts as an IDL).
I'm testing my logic as I go in a TDD manner. Before each unit test, an @Before method is called to create a database singleton, after which, when the test completes, an @After method is called to drop the database. This helps to promote independence among unit tests.
Nearly all unit tests, i.e. performing a contextual query, require some kind of insertion logic to occur before hand. My public interface provides an insert method - yet, it seems incorrect to use this method as precursor logic to each unit test.
Really I need some kind of mocking mechanism, yet, I haven't had much experience with mocking frameworks, and it seems that Google returns nothing re a mocking framework one might use with MongoDB.
What do others do in these situations? That is, how do people unit test code that interacts with a database?
Also, my public interface connects to a database defined in a external configuration file - it seems incorrect to use this connection for my unit testing - again, a situation that would benefit from some kind of mocking?
| Technically tests that talk to a database (nosql or otherwise) are not unit tests, as the tests are testing interactions with an external system, and not just testing an isolated unit of code. However tests that talk to a database are often extremely useful, and are often fast enough to run with the other unit tests.
Usually I have a Service interface (eg UserService) which encapsulates all the logic for dealing with the database. Code that relies on UserService can use a mocked version of UserService and is easily tested.
When testing the implementation of the Service that talks to Mongo, (eg MongoUserService) it is easiest to write some java code that will start/stop a mongo process on the local machine, and have your MongoUserService connect to that, see this question for some notes.
You could try to mock the functionality of the database while testing MongoUserService, but generally that is too error prone, and doesn't test what you really want to test, which is interaction with a real database. So when writing tests for MongoUserService, you set up a database state for each test. Look at DbUnit for an example of a framework for doing so with a database.
| MongoDB | 7,413,985 | 78 |
how can i set a callback for the error handling if mongoose isn't able to connect to my DB?
i know of
connection.on('open', function () { ... });
but is there something like
connection.on('error', function (err) { ... });
?
| When you connect you can pick up the error in the callback:
mongoose.connect('mongodb://localhost/dbname', function(err) {
if (err) throw err;
});
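Mongoose's connection also emits an 'error' event, so the pattern from the question does exist and catches errors raised after the initial connection:
mongoose.connection.on('error', function (err) {
  console.error('MongoDB connection error:', err);
});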
| MongoDB | 6,676,499 | 78 |
How do I connect to mongodb with node.js?
I have the node-mongodb-native driver.
There's apparently 0 documentation.
Is it something like this?
var mongo = require('mongodb/lib/mongodb');
var Db= new mongo.Db( dbname, new mongo.Server( 'mongolab.com', 27017, {}), {});
Where do I put the username and the password?
Also how do I insert something?
Thanks.
| Per the source:
After connecting:
Db.authenticate(user, password, function(err, res) {
// callback
});
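For the insert part of the question, a minimal sketch against the driver API of that era (the collection name and document are illustrative):
db.collection('test', function(err, collection) {
  collection.insert({hello: 'world'}, {safe: true}, function(err, result) {
    // result contains the inserted document(s)
  });
});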
| MongoDB | 4,688,693 | 78 |
I have a list of documents, each with lat and lon properties (among others).
{ 'lat': 1, 'lon': 2, someotherdata [...] }
{ 'lat': 4, 'lon': 1, someotherdata [...] }
[...]
I want to modify it so that it looks like this:
{ 'coords': {'lat': 1, 'lon': 2}, someotherdata [...]}
{ 'coords': {'lat': 4, 'lon': 1}, someotherdata [...]}
[...]
So far I've got this:
db.events.update({}, {$set : {'coords': {'lat': db.events.lat, 'lon': db.events.lon}}}, false, true)
But it treats the db.events.lat and db.events.lon as strings. How can I reference the document's properties?
Cheers.
| Update: If all you have to do is change the structure of a document without changing the values, see gipset's answer for a nice solution.
According to a (now unavailable) comment on the Update documentation page, you cannot reference the current document's properties from within an update().
You'll have to iterate through all the documents and update them like this:
db.events.find().snapshot().forEach(
function (e) {
// update document, using its own properties
e.coords = { lat: e.lat, lon: e.lon };
// remove old properties
delete e.lat;
delete e.lon;
// save the updated document
db.events.save(e);
}
)
Such a function can also be used in a map-reduce job or a server-side db.eval() job, depending on your needs.
| MongoDB | 3,788,256 | 78 |
I'm using:
Python 3.4.2
PyMongo 3.0.2
mongolab running mongod 2.6.9
uWSGI 2.0.10
CherryPy 3.7.0
nginx 1.6.2
uWSGI start params:
--socket 127.0.0.1:8081 --daemonize --enable-threads --threads 2 --processes 2
I setup my MongoClient ONE time:
self.mongo_client = MongoClient('mongodb://user:[email protected]:port/mydb')
self.db = self.mongo_client['mydb']
I try and save a JSON dict to MongoDB:
result = self.db.jobs.insert_one(job_dict)
It works via a unit test that executes the same code path to mongodb. However when I execute via CherryPy and uWSGI using an HTTP POST, I get this:
pymongo.errors.ServerSelectionTimeoutError: No servers found yet
Why am I seeing this behavior when run via CherryPy and uWSGI? Is this perhaps the new thread model in PyMongo 3?
Update:
If I run without uWSGI and nginx by using the CherryPy built-in server, the insert_one() works.
Update 1/25 4:53pm EST:
After adding some debug in PyMongo, it appears that topology._update_servers() knows that the server_type = 2 for server 'myserver-a.mongolab.com'. However server_description.known_servers() has the server_type = 0 for server 'myserver.mongolab.com'
This leads to the following stack trace:
result = self.db.jobs.insert_one(job_dict)
File "/usr/local/lib/python3.4/site-packages/pymongo/collection.py", line 466, in insert_one
with self._socket_for_writes() as sock_info:
File "/usr/local/lib/python3.4/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.4/site-packages/pymongo/mongo_client.py", line 663, in _get_socket
server = self._get_topology().select_server(selector)
File "/usr/local/lib/python3.4/site-packages/pymongo/topology.py", line 121, in select_server
address))
File "/usr/local/lib/python3.4/site-packages/pymongo/topology.py", line 97, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: No servers found yet
| We're investigating this problem, tracked in PYTHON-961. You may be able to work around the issue by passing connect=False when creating instances of MongoClient. That defers background connection until the first database operation is attempted, avoiding what I suspect is a race condition between spin up of MongoClient's monitor thread and multiprocess forking.
| MongoDB | 31,030,307 | 77 |
I'm on osx6.8 and need to install an earlier version of Mongodb, how do I install an earlier version with HomeBrew?
The below didn't work :(
dream-2:app2 star$ brew install mongodb-2.6.10
Error: No available formula for mongodb-2.6.10
Searching formulae...
Searching taps...
dream-2:app2 star$
Edit:
I'm getting a message to explain how this post is unique compared to another one, well, the answer to the other question is super long and complex and it's specific to postgresql and doesn't really answer my question.
|
Note: In September 2019 mongodb was removed from homebrew core, so these instructions have been updated to use mongodb-community instead, installed from the external tap.
If your current installation is still the pre-September mongodb package then you will need to replace mongodb-community with just mongodb on the lines marked with #*# below.
Another option is to simply upgrade away from the deprecated package now.
I already have the latest version of mongo installed, thanks to.
brew tap mongodb/brew
brew install mongodb-community
But I want to switch to the old version sometimes. First, install it:
brew search mongo
brew install mongodb@3.2
Let's stop the current mongodb, if it is running:
brew services stop mongodb/brew/mongodb-community #*#
# or if you had started it manually
killall mongod
Now I want 3.2 on my PATH instead of the latest:
brew unlink mongodb-community #*#
brew link --force mongodb@3.2
(Apparently it needs --force because it is keg-only.)
Now I have 3.2 on my PATH, I can start the test DB:
mongod --version
brew services start mongodb/brew/mongodb-community
# or start your own mongod from the command-line
When I am finished, I can do the reverse to switch back to the latest version:
brew services stop mongodb/brew/mongodb-community
brew unlink mongodb@3.2
brew link mongodb-community #*#
brew services start mongodb/brew/mongodb-community #*#
And restart again.
| MongoDB | 30,379,127 | 77 |
I am writing a NodeJS server with ExpressJS, PassportJS, MongoDB and MongooseJS. I just managed to get PassportJS to use user data obtained via Mongoose to authenticate.
But to make it work, I had to use a "findById" function like below.
var UserModel = db.model('User',UserSchema);
UserModel.findById(id, function (err, user) { < SOME CODE > } );
UserModel is a Mongoose model. I declare the schema, UserSchema earlier. So I suppose UserModel.findById() is a method of the Mongoose model?
Question
What does findById do and is there documentation on it? I googled around a bit but didn't find anything.
| findById is a convenience method on the model that's provided by Mongoose to find a document by its _id. The documentation for it can be found here.
Example:
// Search by ObjectId
var id = "56e6dd2eb4494ed008d595bd";
UserModel.findById(id, function (err, user) { ... } );
Functionally, it's the same as calling:
UserModel.findOne({_id: id}, function (err, user) { ... });
Note that Mongoose will cast the provided id value to the type of _id as defined in the schema (defaulting to ObjectId).
| MongoDB | 12,483,632 | 77 |
MongoDB is fast, but only when your working set or index can fit into RAM. So if my server has 16G of RAM, does that mean the sizes of all my collections need to be less than or equal to 16G? How does one say "ok this is my working set, the rest can be "archived?"
| "Working set" is basically the amount of data AND indexes that will be active/in use by your system.
So for example, suppose you have 1 year's worth of data. For simplicity, each month relates to 1GB of data giving 12GB in total, and to cover each month's worth of data you have 1GB worth of indexes again totalling 12GB for the year.
If you are always accessing the last 12 month's worth of data, then your working set is: 12GB (data) + 12GB (indexes) = 24GB.
However, if you actually only access the last 3 month's worth of data, then your working set is: 3GB (data) + 3GB (indexes) = 6GB. In this scenario, if you had 8GB RAM and then you started regularly accessing the past 6 month's worth of data, then your working set would start to exceed past your available RAM and have a performance impact.
But generally, if you have enough RAM to cover the amount of data/indexes you expect to be frequently accessing then you will be fine.
Edit: Response to question in comments
I'm not sure I quite follow, but I'll have a go at answering. Firstly, the calculation for working set is a "ball park figure". Secondly, if you have a (e.g.) 1GB index on user_id, then only the portion of that index that is commonly accessed needs to be in RAM (e.g. suppose 50% of users are inactive, then 0.5GB of the index will be more frequently required/needed in RAM). In general, the more RAM you have, the better, especially as the working set is likely to grow over time due to increased usage. This is where sharding comes in - split the data over multiple nodes and you can cost-effectively scale out. Your working set is then divided over multiple machines, meaning more of it can be kept in RAM. Need more RAM? Add another machine to shard onto.
| MongoDB | 6,453,584 | 77 |
Could anybody tell me what is the pros and cons of mongodb, especially comparing with the relational database? including ACID, scalability, throughput, main memory usage, insert/query performance and index size etc.
| Some general points on MongoDB
Pros:
schema-less. If you have a flexible schema, this is ideal for a document store like
MongoDB. This is difficult to implement in a performant manner in RDBMS
ease of scale-out. Scale reads by using replica sets. Scale writes by using sharding (auto balancing). Just fire up another machine and away you go. Adding more machines = adding more RAM over which to distribute your working set.
cost. Depends on which RDBMS of course, but MongoDB is free and can run on Linux, ideal for running on cheaper commodity kit.
you can choose what level of consistency you want depending on the value of the data (e.g. faster performance = fire and forget inserts to MongoDB, slower performance = wait til insert has been replicated to multiple nodes before returning)
Cons:
Data size in MongoDB is typically higher due to e.g. each document has field names stored it
less flexibity with querying (e.g. no JOINs)
no support for transactions - certain atomic operations are supported, at a single document level
at the moment Map/Reduce (e.g. to do aggregations/data analysis) is OK, but not blisteringly fast. So if that's required, something like Hadoop may need to be added into the mix
less up to date information available/fast evolving product
I recently blogged my thoughts on MongoDB as someone coming from SQL Server background, so you might be interested in that (above are just some of the main points).
If you're looking for a "Is MongoDB better than RDBMS" answer - then IMHO there is no answer. NoSQL technologies like MongoDB provide an alternative, that complements RDBMS technologies. One may be better suited to a particular purpose than the other, so it's all about making a call on what is best for you for a given requirement.
| MongoDB | 5,244,437 | 77 |
The records in my database are
{"_id":"1","fn":"sagar","ln":"Varpe"}
{"_id":"1","fn":"sag","score":"10"}
{"_id":"1","ln":"ln1","score":"10"}
{"_id":"1","ln":"ln2"}
I need to design a MongoDB query to find all records that have a given key.
For example, if I pass ln as a parameter to the query, it should return all records in which ln is a key. The results would be
{"_id":"1","fn":"sagar","ln":"Varpe"}
{"_id":"1","ln":"ln1","score":"10"}
{"_id":"1","ln":"ln2"}
| To find if a key/field exists in your document use the $exists operator.
Via the MongoDB shell ...
db.things.find( { ln : { $exists : true } } );
| MongoDB | 4,582,354 | 77 |
I know that MongoDB accepts and retrieves records as JSON/BSON objects, but how does it actually store these files on disk? Are they stored as a collection of individual *.json files or as one large file? I have a hunch as to the latter, since the MongoDB docs state that it works best on systems with ext4/xfs, which are better at handling large files. Can anyone confirm?
| A given mongo database is broken up into a series of BSON files on disk, with increasing size up to 2GB. BSON is its own format, built specifically for MongoDB.
These slides should answer all of your questions:
http://www.slideshare.net/mdirolf/inside-mongodb-the-internals-of-an-opensource-database
| MongoDB | 4,127,386 | 77 |
What's the difference between insert(), insertOne(), and insertMany() methods on MongoDB. In what situation should I use each one?
I read the docs, but it's not clear when use each one.
|
What's the difference between insert(), insertOne() and insertMany() methods on MongoDB
db.collection.insert() as mentioned in the documentation inserts a document or documents into a collection and returns
a WriteResult object for single inserts and a BulkWriteResult object for bulk inserts.
> var d = db.collection.insert({"b": 3})
> d
WriteResult({ "nInserted" : 1 })
> var d2 = db.collection.insert([{"b": 3}, {'c': 4}])
> d2
BulkWriteResult({
"writeErrors" : [ ],
"writeConcernErrors" : [ ],
"nInserted" : 2,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
db.collection.insertOne() as mentioned in the documentation inserts a document into a collection and returns a document which look like this:
> var document = db.collection.insertOne({"a": 3})
> document
{
"acknowledged" : true,
"insertedId" : ObjectId("571a218011a82a1d94c02333")
}
db.collection.insertMany() inserts multiple documents into a collection and returns a document that looks like this:
> var res = db.collection.insertMany([{"b": 3}, {'c': 4}])
> res
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("571a22a911a82a1d94c02337"),
ObjectId("571a22a911a82a1d94c02338")
]
}
In what situation should I use each one?
The insert() method is deprecated in the major drivers, so you should use
the .insertOne() method whenever you want to insert a single document into your collection and the .insertMany() method when you want to insert multiple documents into your collection. Of course this is not mentioned in the documentation, but the fact is that nobody really writes an application in the shell. The same thing applies to updateOne, updateMany, deleteOne, deleteMany, findOneAndDelete, findOneAndUpdate and findOneAndReplace. See Write Operations Overview.
| MongoDB | 36,792,649 | 76 |
I am using nodejs with the node-mongodb-native driver (http://mongodb.github.io/node-mongodb-native/).
I have documents with a date property stored as ISODate type.
Through nodejs, I am using this query:
db.collection("log").find({
localHitDate: {
'$gte': '2013-12-12T16:00:00.000Z',
'$lt': '2013-12-12T18:00:00.000Z'
}
})
It returns nothing. To make it work I need to do the following instead:
db.collection("log").find({
localHitDate: {
'$gte': ISODate('2013-12-12T16:00:00.000Z'),
'$lt': ISODate('2013-12-12T18:00:00.000Z')
}
})
But ISODate is not recognized in my nodejs code.
So how can I make a query against mongo date fields through my nodejs program?
Thank you
| You can use new Date('2013-12-12T16:00:00.000Z') in node.js;
new is a must, because Date() alone is already used to return a date string.
ISODate is a concept in mongodb; you can use it in the mongodb console, but its equivalent differs between programming languages.
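Putting that together, a minimal sketch of the original query from Node.js (same collection name as in the question):
db.collection("log").find({
  localHitDate: {
    '$gte': new Date('2013-12-12T16:00:00.000Z'),
    '$lt': new Date('2013-12-12T18:00:00.000Z')
  }
}).toArray(function (err, docs) {
  // docs now holds the log entries in that two-hour window
});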
| MongoDB | 20,561,381 | 76 |
To what extent are 'lost data' criticisms still valid of MongoDB? I'm referring to the following:
1. MongoDB issues writes in unsafe ways by default in order to
win benchmarks
If you don't issue getLastError(), MongoDB doesn't wait for any
confirmation from the database that the command was processed.
This introduces at least two classes of problems:
In a concurrent environment (connection pools, etc), you may
have a subsequent read fail after a write has "finished";
there is no barrier condition to know at what point the
database will recognize a write commitment
Any unknown number of save operations can be dropped on the floor
due to queueing in various places, things outstanding in the TCP
buffer, etc., when your connection drops or the db were to be KILL'd,
segfault, hardware crash, you name it
2. MongoDB can lose data in many startling ways
Here is a list of ways we personally experienced records go missing:
They just disappeared sometimes. Cause unknown.
Recovery on corrupt database was not successful,
pre transaction log.
Replication between master and slave had gaps in the oplogs,
causing slaves to be missing records the master had. Yes,
there is no checksum, and yes, the replication status had the
slaves current
Replication just stops sometimes, without error. Monitor
your replication status!
...[other criticisms]
If still valid, these criticisms would be worrying to some extent. The article primarily references v1.6 and v1.8, but since then v2 has been released. Are the shortcomings discussed in the article still outstanding as of the current release?
| Note on Context:
This question was asked in 2012, but still sees traffic and votes to this day. The original answer was specifically to refute a particular post that was popular at the time of the question. Things have changed (and continue to change) massively since this answer was written. MongoDB has certainly become far more durable and reliable than it was in 2012 when even things like basic journaling were relatively new. I get downvotes and comments on this answer because people feel I don't address the current (for a given value of current) general answer to the titular question (not the detail): "are lost data criticisms still valid?". I have attempted to clarify in updates below, but there is basically no perfect answer to this question, it depends on your perspective, what your expectations are/were, what version you are using, what configuration, whether you feel upset about the default settings etc.
Original Answer:
That particular post was debunked, point by point by the MongoDB CTO and co-founder, Eliot Horowitz, here:
http://news.ycombinator.com/item?id=3202959
There is also a good summary here:
http://www.betabeat.com/2011/11/10/the-trolls-come-out-for-10gen/
The short version is, it looks like this was basically someone trolling for attention (successfully), with no solid evidence or corroboration. There have been genuine incidents in the past, which have been dealt with as the product evolved (see the introduction of journaling in 1.8 for example) or as more specific bugs were found and fixed.
Disclaimer: I do work for MongoDB (formerly 10gen), and love the fact that philnate got here and refuted this independently first - that probably says more about the product than anything else :)
Update: August 19th 2013
I've seen quite a bit of activity on this answer recently, which I assume is related to the announcement of the bug in SERVER-10478 - it is most certainly an edge case, but I would still recommend anyone using sharding with large documents to upgrade ASAP to v2.2.6 and v2.4.6 which include the fix for this issue.
Update: March 24th 2017
I no longer work for MongoDB, but stand behind this answer nonetheless. Given that this answer continues to get up (and down) votes and receives a lot of views I would like to point people at this post which shows the progress MongoDB has made since this question was posed. The database now passes the Jepsen tests, and has integrated the tests into its build process, there are plenty of far more mature systems that do not pass. Anyone still beating the data loss drum in 2017 really hasn't been paying attention.
Update: May 24th 2020
Jepsen has re-analyzed MongoDB 4.2.6 given that MongoDB now offers "full ACID transactions" and while it gets quite technical in parts, I highly recommend reading the article if data loss in MongoDB is a concern for you (I would recommend checking out any database you use that Jepsen tests, you might be surprised at their weak spots). The report summarizes the weaknesses in the default read and write concerns, talks about how reliable non-transaction reads and writes are with appropriate read and write concerns, addresses flaws in the documentation, and then provides significant details about the issues encountered when testing the new ACID transactions (and associated read/write concerns).
So, can you still lose data with MongoDB? Yes, especially with default settings, but that is true of most databases. Things are vastly better than they were back when this question was answered, and the capabilities are there for more reliability and durability, and they seem to work (transactions aside). My advice is to learn what the limitations of the configuration are that you operate and to then determine whether the data loss risk is acceptable or not for your product/business/use case.
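For example, if durability matters more than latency for a given write, you can override the defaults per operation with an explicit write concern — a sketch; tune w and wtimeout to your own replica set:
db.collection.insertOne(
  { event: "important" },
  { writeConcern: { w: "majority", j: true, wtimeout: 5000 } }
)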
| MongoDB | 10,560,834 | 76 |
I have a document structured like this:
{
_id:"43434",
heroes : [
{ nickname : "test", items : ["", "", ""] },
{ nickname : "test2", items : ["", "", ""] },
]
}
Can I $set the second element of the items array of the embedded object in array heros with nickname "test" ?
Result:
{
_id:"43434",
heroes : [
{ nickname : "test", items : ["", "new_value", ""] }, // modified here
{ nickname : "test2", items : ["", "", ""] },
]
}
| You need to make use of 2 concepts: mongodb's positional operator and simply using the numeric index for the entry you want to update.
The positional operator allows you to use a condition like this:
{"heroes.nickname": "test"}
and then reference the found array entry like so:
{"heroes.$ // <- the dollar represents the first matching array key index
As you want to update the 2nd array entry in "items", and array keys are 0 indexed - that's the key 1.
So:
> db.denis.insert({_id:"43434", heroes : [{ nickname : "test", items : ["", "", ""] }, { nickname : "test2", items : ["", "", ""] }]});
> db.denis.update(
{"heroes.nickname": "test"},
{$set: {
"heroes.$.items.1": "new_value"
}}
)
> db.denis.find()
{
"_id" : "43434",
"heroes" : [
{"nickname" : "test", "items" : ["", "new_value", "" ]},
{"nickname" : "test2", "items" : ["", "", "" ]}
]
}
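One caveat: the positional $ operator only targets the first array element matched by the query. On MongoDB 3.6 or newer, arrayFilters make the match explicit — a sketch of the equivalent update:
db.denis.update(
  { _id: "43434" },
  { $set: { "heroes.$[h].items.1": "new_value" } },
  { arrayFilters: [ { "h.nickname": "test" } ] }
)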
| MongoDB | 10,432,677 | 76 |
I've been using mongo and script files like this:
$ mongo getSimilar.js
I would like to pass an argument to the file:
$ mongo getSimilar.js apples
And then in the script file pick up the argument passed in.
var arg = $1;
print(arg);
| Use --eval and use shell scripting to modify the command passed in.
mongo --eval "print('apples');"
Or make global variables (credit to Tad Marshall):
$ cat addthem.js
printjson( param1 + param2 );
$ ./mongo --nodb --quiet --eval "var param1=7, param2=8" addthem.js
15
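If you need to forward a positional shell argument, a small wrapper script that injects it as a global works — a sketch, reusing the question's file name:
$ cat runquery.sh
#!/bin/sh
mongo --quiet --eval "var arg = '$1';" getSimilar.js
$ cat getSimilar.js
print(arg);
$ ./runquery.sh apples
apples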
| MongoDB | 10,114,355 | 76 |
Let's say I have the following documents
Article { Comment: embedMany }
Comment { Reply: embedMany }
Reply { email: string, ip: string }
I want to make a query that selects distinct Reply.ip where Reply.email = xxx
Something like this, only it doesn't work..
db.Article.find("Comment.Reply.email" : "xxx").distinct("Comment.Reply.ip")
JSON export:
{
"_id":{
"$oid":"4e71be36c6eed629c61cea2c"
},
"name":"test",
"Comment":[
{
"name":"comment test",
"Reply":[
{
"ip":"192.168.2.1",
"email":"yyy"
},
{
"ip":"127.0.0.1",
"email":"zzz"
}
]
},
{
"name":"comment 2 test",
"Reply":[
{
"ip":"128.168.1.1",
"email":"xxx"
},
{
"ip":"192.168.1.1",
"email":"xxx"
}
]
}
]
}
I run : db.Article.distinct("Comment.Reply.ip",{"Comment.Reply.email" : "xxx"})
I expect : ["128.168.1.1", "192.168.1.1"]
I get : ["127.0.0.1", "128.168.1.1", "192.168.1.1", "192.168.2.1"]
| Distinct query in mongo with condition works like this
db.Article.distinct("Comment.Reply.ip",{"Comment.Reply.email" : "xxx"})
not other way around
EDIT:
I understand the problem now, inorder to match/filter subdocuments we need to use $elemMatch operator, like this
db.Article.distinct("Comment.Reply.ip",{Comment: {$elemMatch: {"Reply.email" : "xxx"}}})
but this will not work if the sub-document contains sub-arrays (in your case, you have an array of replies). There is an existing open issue, $elemMatch on subArray, and it is planned for mongo 2.1. You can check out the link for more info.
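Until that is fixed, the aggregation framework (MongoDB 2.2+) can unwind the nested arrays and filter before collecting the distinct values — a sketch for the example document above:
db.Article.aggregate([
  { $unwind: "$Comment" },
  { $unwind: "$Comment.Reply" },
  { $match: { "Comment.Reply.email": "xxx" } },
  { $group: { _id: "$Comment.Reply.ip" } }
])
// yields { "_id" : "128.168.1.1" } and { "_id" : "192.168.1.1" }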
| MongoDB | 7,419,986 | 76 |