Dataset columns:

    question     string  (lengths 11 – 28.2k)
    answer       string  (lengths 26 – 27.7k)
    tag          string  (130 classes)
    question_id  int64   (935 – 78.4M)
    score        int64   (10 – 5.49k)
I had working Let's Encrypt certificates some months ago (with the old letsencrypt client). The server I am using is nginx. Certbot is creating the .well-known folder, but not the acme-challenge folder.

Now I tried to create new certificates via

    ~/certbot-auto certonly --webroot -w /var/www/webroot -d domain.com -d www.domain.com -d git.domain.com

But I always get errors like this:

    IMPORTANT NOTES:
     - The following errors were reported by the server:

       Domain: git.domain.com
       Type:   unauthorized
       Detail: Invalid response from
       http://git.domain.com/.well-known/acme-challenge/ZLsZwCsBU5LQn6mnzDBaD6MHHlhV3FP7ozenxaw4fow:
       "<.!DOCTYPE html> <.html lang='en'> <.head prefix='og: http://ogp.me/ns#'> <.meta charset='utf-8'> <.meta content='IE=edge' http-equiv"

       Domain: www.domain.com
       Type:   unauthorized
       Detail: Invalid response from
       http://www.domain.com/.well-known/acme-challenge/7vHwDXstyiY0wgECcR5zuS2jE57m8I3utszEkwj_mWw:
       "<.html> <.head><.title>404 Not Found</title></head> <.body bgcolor="white"> <.center><.h1>404 Not Found</h1></center>

(Of course the dots inside the HTML tags are not really there.)

I have looked for a solution, but didn't find one yet. Does anybody know why certbot is not creating the folders? Thanks in advance!
The problem was the nginx configuration. I replaced my long configuration files with the simplest config possible:

    server {
        listen 80;
        server_name domain.com www.domain.com git.domain.com;
        root /var/www/domain/;
    }

Then I was able to issue new certificates. The problem with my long configuration files was (as far as I can tell) that I had these lines:

    location ~ /.well-known {
        allow all;
    }

But they should be:

    location ~ /.well-known/acme-challenge/ {
        allow all;
    }

Now the renewal works, too.
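Before re-running certbot, a quick manual sketch to confirm nginx actually serves files from the webroot (the file name here is a made-up placeholder, not a real challenge token):

    mkdir -p /var/www/domain/.well-known/acme-challenge
    echo ok > /var/www/domain/.well-known/acme-challenge/test
    curl http://domain.com/.well-known/acme-challenge/test   # should print "ok"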
NGINX
38,382,739
29
I've got this in my nginx config:

    location ~ /\. {
        deny all;
    }

    location /.well-known/ {
        allow all;
    }

But I still can't access http://example.com/.well-known/acme-challenge/taUUGC822PcdnCnW_aADOzObZqFm3NNM5PEzLNFJXRU. How do I allow access to just that one dot directory?
You have a regex location and a prefix location. The regex location takes precedence unless ^~ is used with the prefix location. Try:

    location ~ /\. {
        deny all;
    }

    location ^~ /.well-known/ {
        # allow all;
    }

See this document for details.
NGINX
34,259,548
29
I'm trying to install an intermediate certificate on Nginx (Laravel Forge). Right now the certificate is properly installed; just the intermediate is missing. I've seen that I need to concatenate the current certificate with the intermediate. What is the best/safest way to add the intermediate certificate? Also, if the install of the intermediate fails, can I just roll back to the previous certificate and reboot nginx? (The website is live, so I can't have too long a downtime.)
Nginx expects all server section certificates in a file that you refer to with ssl_certificate. Just put all vendor's intermediate certificates and your domain's certificate in a file. It'll look like this:

    -----BEGIN CERTIFICATE-----
    MII...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MII...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MII...
    -----END CERTIFICATE-----

To make sure everything is okay and to avoid downtime, I would suggest setting up Nginx locally, adding 127.0.0.1 yourdomain.com to /etc/hosts, and trying to open it from major browsers. When you've verified that everything is correct, you can replicate it to the production server. When you're done, it is a good idea to use some SSL checker tool to verify (e.g. this one). Because pre-installed CA certificates may vary depending on browser and platform, you can easily overlook a misconfiguration checking from one OS or a limited set of browsers.

Edit

As @Martin pointed out, the order of certificates in the file is important. RFC 4346 for TLS 1.1 states:

    This is a sequence (chain) of X.509v3 certificates. The sender's certificate must come first in the list. Each following certificate must directly certify the one preceding it.

Thus the order is:

1. Your domain's certificate
2. Vendor's intermediate certificate that certifies (1)
3. Vendor's intermediate certificate that certifies (2)
...
n. Vendor's root certificate that certifies (n-1). Optional, because it should be contained in client's CA store.
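A minimal sketch of building and checking the bundle from the shell (file names are assumptions; substitute your own):

    cat yourdomain.crt intermediate.crt > bundle.crt
    # verify the chain locally before deploying
    openssl verify -untrusted intermediate.crt yourdomain.crt
    # after reloading nginx, inspect the chain the server actually sends
    openssl s_client -connect yourdomain.com:443 -servername yourdomain.com < /dev/null | head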
NGINX
25,750,890
29
I want to write non-blocking applications. I use apache2, but I was reading about nginx and its advantage with respect to apache processes. I am considering changing out apache for nginx. My question is: is it possible to write non-blocking web applications with php and nginx? Or is it a better idea to try to do this with python, using some reverse proxy like uwsgi or gunicorn with nginx? Or is the solution to learn nodejs?
Writing non-blocking applications in php is possible, but it's probably not the best environment to do so, as it wasn't created keeping that in mind! You get pretty decent control over your child processes using the process control library PCNTL, but it obviously won't ever offer you the same ease of use that other environments can give you!

I don't know python very well, but personally I'd recommend you go with nodejs! It's a fairly new technology, that's true, but everything is non-blocking there and it's meant to be that way! Basically what you have is a single thread (which you can extend however you want in the newer versions), and literally everything (unless you tell it to do differently) is going to be event-driven, leaving space to proceed on the process queue as expected!

Nodejs is really easy to learn: if you ever stumbled upon web applications, you know javascript anyways! It is still not hugely documented, but there are many ready-to-use modules you can download and use straight away!
NGINX
16,313,224
29
We have two servers, A and B. Server A is accessed worldwide. It has nginx installed. That's what I have in conf:

    location /test {
        proxy_pass http://localserver.com;
    }

What it should do is translate the address http://globalserver.com/test (that is, server A) to the internal server address http://localserver.com. However, it appends the location path; that is, it tries to look for http://localserver.com/test, which is not available at all. How can I make the proxy pass to the correct address, throwing out the last part in the location?
That should work. Nginx should strip the '/test' path on the upstream local server, so what I can say is that that is not the cause. To make it a bit better, try this:

    location /test/ {
        proxy_pass http://localserver.com/;
    }

The two slashes I added in the first two lines will avoid mistakenly matching '/testABC' and sending the wrong request to the upstream local server, for example.

Do you have a proxy_redirect line in the same location block? If your upstream local server has redirects, then a mistake on that line will cause an issue like you described.

[UPDATE] Found the root cause why the original config didn't work and mine works: nginx does NOT replace the URI path part if the proxy_pass directive does not have a URI path itself. So my fix of adding a slash (a slash is treated as a URI path) at the end triggers the URI path replacement.

Reference: http://wiki.nginx.org/HttpProxyModule#proxy_pass

    If it is necessary to transmit URI in the unprocessed form then directive proxy_pass should be used without URI part

    location /some/path/ {
        proxy_pass http://127.0.0.1;
    }
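To make the rewriting rule concrete, a small sketch of how the two forms map an incoming request (host names as in the question):

    # with a URI part on proxy_pass: the matched prefix is replaced
    location /test/ {
        proxy_pass http://localserver.com/;   # /test/foo  ->  http://localserver.com/foo
    }

    # without a URI part: the request URI is passed through unchanged
    location /test/ {
        proxy_pass http://localserver.com;    # /test/foo  ->  http://localserver.com/test/foo
    }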
NGINX
15,353,935
29
My nginx server is actually proxying my node backend (which listens on port 3000) with a simple:

    location /api/ {
        proxy_pass http://upstream_1;
    }

where upstream_1 is my node cluster defined in nginx.conf (on port 3000). I'm going to have to add SSL over these http connections, so I have the following question: do I only need to configure nginx to enable SSL? Will it automatically decrypt the request and pass it unencrypted to Node, which will be able to handle it normally? Or do I need to configure Nodejs to support SSL as well?
If you're using nginx to handle SSL, then your node server will just be using http.

    upstream nodejs {
        server 127.0.0.1:4545 max_fails=0;
    }

    server {
        listen 443;
        ssl on;
        ssl_certificate newlocalhost.crt;
        ssl_certificate_key newlocalhost.key;

        server_name nodejs.newlocalhost.com;
        add_header Strict-Transport-Security max-age=500;

        location / {
            proxy_pass http://nodejs;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
NGINX
10,375,659
29
I just installed Passenger 3.0.11 and nginx and got this error:

    Starting nginx: /opt/nginx/sbin/nginx: error while loading shared libraries: libpcre.so.0: cannot open shared object file: No such file or directory
I got the same error, and I fixed the problem by running sudo ldconfig.
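If ldconfig alone doesn't fix it, it can help to check which libraries the binary actually fails to resolve (a standard diagnostic, not from the original answer):

    ldd /opt/nginx/sbin/nginx | grep pcre
    # "not found" means the dynamic linker can't locate libpcre.so.0 on the library path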
NGINX
8,501,163
29
I am trying to create a docker-compose setup with nginx, flask, and react. I started my react app with create-react-app (https://github.com/facebook/create-react-app) and haven't changed anything in it yet. My Dockerfile for the react app is:

    FROM node:10

    WORKDIR /usr/src/app

    # Install app dependencies
    # A wildcard is used to ensure both package.json AND package-lock.json are copied
    COPY package*.json ./
    RUN npm install --verbose

    # Bundle app source
    COPY . .

    EXPOSE 3000
    CMD ["npm", "start"]

The compose script is:

    version: '3.1'

    services:
      nginx:
        image: nginx:1.15
        container_name: nginx
        volumes:
          - ../:/var/www
          - ./nginx-dev.conf:/etc/nginx/conf.d/default.conf
        ports:
          - 80:80
        networks:
          - my-network
        depends_on:
          - flask
          - react
      react:
        build:
          context: ../react-app/
          dockerfile: ./Dockerfile
        container_name: react
        volumes:
          - ../react-app:/usr/src/app
        networks:
          my-network:
            aliases:
              - react-app
        expose:
          - 3000
        ports:
          - "3000:3000"
      flask:
        ...

    networks:
      my-network:

The flask and nginx containers start fine; the output for react is:

    react    |
    react    | > [email protected] start /usr/src/app
    react    | > react-scripts start
    react    |
    react    | ℹ 「wds」: Project is running at http://my-ip-address/
    react    | ℹ 「wds」: webpack output is served from
    react    | ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
    react    | ℹ 「wds」: 404s will fallback to /
    react    | Starting the development server...
    react    |
    react    |
    react    | npm verb lifecycle [email protected]~start: unsafe-perm in lifecycle true
    react    | npm verb lifecycle [email protected]~start: PATH: /usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/usr/src/app/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    react    | npm verb lifecycle [email protected]~start: CWD: /usr/src/app
    react    | npm info lifecycle [email protected]~poststart: [email protected]
    react    | npm verb exit [ 0, true ]
    react    | npm timing npm Completed in 1727ms
    react    | npm info ok
    react exited with code 0
Adding

    stdin_open: true

to the React component of my docker-compose file fixed my issue. Example:

    version: '3.1'

    services:
      react:
        build:
          context: ../react-app/
          dockerfile: ./Dockerfile
        container_name: react
        volumes:
          - ../react-app:/usr/src/app
        networks:
          my-network:
            aliases:
              - react-app
        expose:
          - 3000
        ports:
          - "3000:3000"
        stdin_open: true
NGINX
60,895,246
28
(I know others have asked this question before, but I'm not able to solve the problem using the solutions proposed in other posts, so I figured I would try to post my configuration files and see if someone could help.)

I want to create a container for nginx, and use proxy_pass to pass requests to the container with the running web application. I can't figure out how to communicate between the two containers. When I try to run docker stack deploy -c docker-compose.yml somename, only the web container starts. The nginx container fails to start, and is stuck in a loop of trying to restart. These are the log messages I get:

    2017/08/16 14:56:10 [emerg] 1#1: host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2
    nginx: [emerg] host not found in upstream "web:8000" in /etc/nginx/conf.d/nginx.conf:2

I found an answer saying that as long as you use the same name as under services in the docker-compose.yml file, nginx will find the variable. However, that doesn't seem to help in my case. How does communication like this between different containers work? Where is the 'web' variable defined? Most examples I've seen use version: "2" in the docker-compose.yml file; should this make a difference?

My docker-compose.yml:

    version: "3"
    services:
      web:
        image: user/repo:web
        deploy:
          resources:
            limits:
              cpus: "0.1"
              memory: 50M
          restart_policy:
            condition: on-failure
        ports:
          - "8000:80"
        networks:
          - webnet
      nginx:
        image: user/repo:nginx
        ports:
          - 80:80
        links:
          - web:web
        depends_on:
          - web
    networks:
      webnet:

Nginx config:

    upstream docker-web {
        server web:8000;
    }

    server {
        listen 80;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        location / {
            proxy_pass http://docker-web;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
I figured out how to fix the problem. I got some help to fix the docker-compose.yml, so it looks like this:

docker-compose.yml:

    version: "3"
    services:
      web:
        image: user/repo:web
        deploy:
          resources:
            limits:
              cpus: "0.1"
              memory: 50M
          restart_policy:
            condition: on-failure
        ports:
          - "8000:80"
        networks:
          main:
            aliases:
              - web
      nginx:
        image: user/repo:nginx
        ports:
          - 80:80
        links:
          - web:web
        depends_on:
          - web
        networks:
          main:
            aliases:
              - nginx
    networks:
      main:

After this the nginx container actually ran, but it was still not capable of connecting to the web container. I found out I was able to use both curl web and curl web -p 8000 to get the page from web, from inside the nginx container. Then I changed the upstream in my nginx.conf from this:

    upstream docker-web {
        server web:8000;
    }

to this:

    upstream docker-web {
        server web;
    }
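A hedged note on why the final change works: inside the overlay network, the service name resolves to the container itself, which listens on its internal port (80 here, since ports: "8000:80" only maps the host port). A quick way to check this from inside the nginx container, assuming curl is available in the image:

    docker exec -it nginx sh -c 'curl -s http://web | head'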
NGINX
45,717,835
28
I have a Python REST service and I want to serve it using HTTP/2. My current server setup is nginx -> Gunicorn. In other words, nginx (port 443 and 80, which redirects to port 443) is running as a reverse proxy and forwards requests to Gunicorn (port 8000, no SSL). nginx is running in HTTP/2 mode and I can verify that by using Chrome and inspecting the 'protocol' column after sending a simple GET to the server. However, Gunicorn reports that the requests it receives are HTTP/1.0. Also, I couldn't find it in this list: https://github.com/http2/http2-spec/wiki/Implementations

So, my questions are:

Is it possible to serve a Python (Flask) application with HTTP/2? If yes, which servers support it?

In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP/2?

The reason I want to use HTTP/2 is that in some cases I need to perform thousands of requests all together and I was interested to see if the multiplexed requests feature of HTTP/2 can speed things up. With HTTP/1.0 and Python Requests as the client, each request takes ~80ms, which is unacceptable. The other solution would be to just bulk/batch my REST resources and send multiple with a single request. Yes, this idea sounds just fine, but I am really interested to see if HTTP/2 could speed things up.

Finally, I should mention that for the client side I use Python Requests with the Hyper http2 adapter.
Is it possible to serve a Python (Flask) application with HTTP/2?

Yes, by the information you provide, you are doing it just fine.

In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP/2?

Now I'm going to tread on thin ice and give opinions. The way HTTP/2 has been deployed so far is by having an edge server that talks HTTP/2 (like ShimmerCat or NginX). That server terminates TLS and HTTP/2, and from there on uses HTTP/1, HTTP/1.1 or FastCGI to talk to the inner application.

Can, at least theoretically, an edge server talk HTTP/2 to a web application? Yes, but HTTP/2 is complex and for inner applications, it doesn't pay off very well. That's because most web application frameworks are built for handling requests for content, and that's done well enough with HTTP/1 or FastCGI. Although there are exceptions, web applications have little use for the subtleties of HTTP/2: multiplexing, prioritization, all the myriad of security precautions, and so on. The resulting separation of concerns is in my opinion a good thing.

Your 80 ms response time may have little to do with the HTTP protocol you are using, but if those 80 ms are mostly spent waiting for input/output, then of course running things in parallel is a good thing. Gunicorn will use a thread or a process to handle each request (unless you have gone the extra mile to configure the greenlets backend), so consider if letting Gunicorn spawn thousands of tasks is viable in your case. If the content of your requests allows it, maybe you can create temporary files and serve them with an HTTP/2 edge server.
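For reference, the edge-server side of this setup is a one-line change in nginx (1.9.5 or later); certificate paths below are placeholders:

    server {
        listen 443 ssl http2;
        ssl_certificate     /etc/ssl/your.crt;
        ssl_certificate_key /etc/ssl/your.key;

        location / {
            proxy_pass http://127.0.0.1:8000;   # Gunicorn still speaks HTTP/1.x behind the proxy
        }
    }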
NGINX
38,878,880
28
I have installed the Gitlab CE version. I can find nginx bundled in Gitlab. However, I cannot find a way to restart nginx separately. I have tried sudo service nginx restart but it gives:

    * Restarting nginx nginx    [fail]

I have checked all the documentation but cannot find a solution. I am trying to add a vhost to the bundled nginx according to this tutorial, but I'm stuck at that step. Is there another way to add a vhost to the bundled nginx with Gitlab? Or how can I check whether my nginx conf works?

Edit: The 502 error I have solved. I tried to use NON-bundled nginx according to this doc, but after I modified gitlab.rb and ran sudo gitlab-ctl reconfigure, I got the error "502 Whoops, GitLab is taking too much time to respond." Here is my gitlab.conf for nginx:

    upstream gitlab {
        server unix://var/opt/gitlab/gitlab-git-http-server/sockets/gitlab.socket fail_timeout=0;
    }

    server {
        listen *:80;
        server_name blcu.tk;
        server_tokens off;
        root /opt/gitlab/embedded/service/gitlab-rails/public;
        client_max_body_size 250m;

        access_log /var/log/gitlab/nginx/gitlab_access.log;
        error_log  /var/log/gitlab/nginx/gitlab_error.log;

        # Ensure Passenger uses the bundled Ruby version
        passenger_ruby /opt/gitlab/embedded/bin/ruby;

        # Correct the $PATH variable to included packaged executables
        passenger_env_var PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";

        # Make sure Passenger runs as the correct user and group to
        # prevent permission issues
        passenger_user git;
        passenger_group git;

        # Enable Passenger & keep at least one instance running at all times
        passenger_enabled on;
        passenger_min_instances 1;

        location / {
            try_files $uri $uri/index.html $uri.html @gitlab;
        }

        location @gitlab {
            # If you use https make sure you disable gzip compression
            # to be safe against BREACH attack
            proxy_read_timeout 300;     # Some requests take more than 30 seconds.
            proxy_connect_timeout 300;  # Some requests take more than 30 seconds.
            proxy_redirect off;

            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Frame-Options SAMEORIGIN;

            proxy_pass http://gitlab;
        }

        location ~ ^/(assets)/ {
            root /opt/gitlab/embedded/service/gitlab-rails/public;
            # gzip_static on; # to serve pre-gzipped version
            expires max;
            add_header Cache-Control public;
        }

        error_page 502 /502.html;
    }
To restart only one component of GitLab Omnibus you can execute sudo gitlab-ctl restart <component>. Therefore, to restart Nginx:

    sudo gitlab-ctl restart nginx

As a further note, this same concept is possible with nearly all of the gitlab-ctl commands. For example, sudo gitlab-ctl tail allows you to see all GitLab logs. Applying this concept, sudo gitlab-ctl tail nginx will tail only Nginx logs.
NGINX
32,969,612
28
I managed to deploy Meteor on my infrastructure (Webfactions). The application seems to work fine, but I get the following error in the browser console when my application starts:

    WebSocket connection to 'ws://.../websocket' failed: Error during WebSocket handshake: Unexpected response code: 400
WebSockets are fast and you don't have to (and shouldn't) disable them. The real cause of this error is that Webfactions uses nginx, and nginx was improperly configured. Here's how to correctly configure nginx to proxy WebSocket requests, by setting proxy_set_header Upgrade $http_upgrade; and proxy_set_header Connection $connection_upgrade;:

    # we're in the http context here
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    # the Meteor / Node.js app server
    server {
        server_name yourdomain.com;

        access_log /etc/nginx/logs/yourapp.access;
        error_log  /etc/nginx/logs/yourapp.error error;

        location / {
            proxy_pass http://localhost:3000;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;  # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
            proxy_http_version 1.1;       # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version

            # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }

This is an improved nginx configuration based on David Weldon's nginx config. Andrew Mao has reached a very similar configuration. Remember to also set the HTTP_FORWARDED_COUNT environment variable to the number of proxies in front of the app (usually 1).
NGINX
17,014,969
28
I compiled nginx on Ubuntu myself. I start my nginx with the -c nginx.conf parameter. In my nginx.conf file, I try to turn off the error log, but failed:

    error_log /dev/null crit;

I still got the error message:

    nginx: [alert] could not open error log file: open() "/usr/nginx/logs/error.log" failed (2: No such file or directory)

How can I turn off this log or change its location?
The syntax for disabling the error log is OK, but the docs state that a default logfile is used before the config is read (which seems reasonable, because how else would it tell you that you have an error in your config?). Try creating this file by hand with the correct permissions for the user that runs nginx, or try starting the server as root.
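A minimal sketch of that workaround, assuming the compile-time default path from the error message above (adjust the user to whatever nginx runs as):

    sudo mkdir -p /usr/nginx/logs
    sudo touch /usr/nginx/logs/error.log
    sudo chown -R www-data /usr/nginx/logs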
NGINX
13,371,925
28
I need to keep alive my connection between nginx and upstream nodejs. I just compiled and installed nginx 1.2.0. My configuration file:

    upstream backend {
        ip_hash;
        server dev:3001;
        server dev:3002;
        server dev:3003;
        server dev:3004;
        keepalive 128;
    }

    server {
        listen 9000;
        server_name dev;

        location / {
            proxy_pass http://backend;
            error_page 404 = 404.png;
        }
    }

My program (dev:3001 - 3004) detects that the connection was closed by nginx after each response. (document)
The documentation states that for HTTP keepalive, you should also set:

    proxy_http_version 1.1;
    proxy_set_header Connection "";
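Applied to the config in the question, the location block would look something like this (a sketch, not from the original answer):

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;        # keepalive to upstreams requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection: close header
        error_page 404 = 404.png;
    }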
NGINX
10,395,807
28
I'm trying to cache static content which is basically inside the paths below in the virtual server configuration. For some reason files are not being cached. I see several folders and files inside the cache dir, but it's always something like 20MB, no higher, no lower. If it were caching images, for example, it would take at least 500MB of space.

Here is the nginx.conf cache part:

    ** nginx.conf **
    proxy_cache_path /usr/share/nginx/www/cache levels=1:2 keys_zone=static$
    proxy_temp_path /usr/share/nginx/www/tmp;
    proxy_read_timeout 300s;

Here's the default virtual server:

    ** sites-available/default **
    server {
        listen 80;
        root /usr/share/nginx/www;
        server_name myserver;

        access_log /var/log/nginx/myserver.log main;
        error_log  /var/log/nginx/error.log;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location ~* ^/(thumbs|images|css|js|pubimg)/(.*)$ {
            proxy_pass http://backend;
            proxy_cache static;
            proxy_cache_min_uses 1;
            proxy_cache_valid 200 301 302 120m;
            proxy_cache_valid 404 1m;
            expires max;
        }

        location / {
            proxy_pass http://backend;
        }
    }
Make sure your backend does not return a Set-Cookie header. If Nginx sees it, it disables caching. If this is your case, the best option is to fix your backend. When fixing the backend is not an option, it's possible to instruct Nginx to ignore the Set-Cookie header:

    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";

See the documentation. proxy_ignore_headers will ensure that the caching takes place. proxy_hide_header will ensure the Cookie payload is not included in the cached payload. This is important to avoid leaking cookies via the NGINX cache.
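One way to see whether responses are actually being cached is to expose the cache status in a response header ($upstream_cache_status is a standard nginx variable; this is a debugging aid, not part of the original answer):

    add_header X-Cache-Status $upstream_cache_status;
    # then: curl -I http://myserver/images/foo.png   ->  X-Cache-Status: MISS / HIT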
NGINX
9,230,812
28
I'm developing a small application with C# in .NET and I want to have a small, lightweight database which does not use many resources. Could you please list some of the best known lightweight databases?
14/06/2016: Yep... still getting upvotes :-/

17/03/2014: I'm still receiving upvotes for this; be mindful of the date this was originally answered. Though the main three items listed are still entirely viable, the list will tend towards becoming stale. There are further database technologies available that are not listed.

You have a couple of immediately recognisable and free options:

- SQL Server Express LocalDB
- SQL Server Compact Edition
- SQLite

The SQL Server Compact download comes with the ADO.NET provider that you will need to reference in code. The SQLite download might not have it, so here is a link: http://sqlite.phxsoftware.com/

All three use SQL, though likely with a few limitations/quirks. Management Studio works with Compact and LocalDB, whereas with SQLite you will need another UI tool such as SQLite Administrator: http://sqliteadmin.orbmu2k.de/

There are NoSQL alternatives, such as:

- Sterling
- RavenDb

Personally I would avoid using MS Access in the face of other free options. You cannot go wrong with LocalDB, Compact, or SQLite. They are all lovely small databases that run relatively quickly in little RAM - personal preference as to the religious aspects about liking a Microsoft product I suppose :-)

I use Sterling for Windows Phone programming as it is built to use Isolated Storage. I have only seen articles on RavenDb, but I can tell you that it is a JSON-based document storage framework.

Not to confuse the situation (go with SQLite, SQL Server Express LocalDB, or SQL Server Compact Edition), but there are other embedded/local databases out there, some relational, others object-oriented:

- Embedded Firebird
- db4o
- VistaDb
- SharpHSQL
- Berkeley DB
- Eloquera
- SiaqoDb

Not all of these are free. SQL / LINQ / in-proc support differs across them all. This list is just for curiosity.

There is now also Karvonite; however, the code gallery link is broken. When it's live again I'll be looking into this one for WP7 development.
MongoDB
6,749,556
189
I am trying to create a new folder in the root directory. I tried all kinds of examples:

    sudo mkdir /data/db
    sudo mkdir -p /data/db

I keep getting:

    mkdir: /data: Read-only file system
If you have a Mac and updated to Catalina or a more recent version, then the root folder is no longer writable. I just changed the directory somewhere else. I've been using this command for now:

    mongod --dbpath=/Users/user/data/db
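Assuming the same home-directory layout, the data directory has to exist before mongod can use it:

    mkdir -p ~/data/db
    mongod --dbpath ~/data/db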
MongoDB
58,034,955
187
I have gone through many blogs and sites about configuring Elasticsearch for MongoDB to index collections in MongoDB, but none of them were straightforward. Please explain to me a step by step process for installing elasticsearch, which should include:

- configuration
- run in the browser

I am using Node.js with express.js, so please help accordingly.
This answer should be enough to get you set up to follow this tutorial on Building a functional search component with MongoDB, Elasticsearch, and AngularJS. If you're looking to use faceted search with data from an API then Matthiasn's BirdWatch Repo is something you might want to look at. So here's how you can set up a single node Elasticsearch "cluster" to index MongoDB for use in a NodeJS, Express app on a fresh EC2 Ubuntu 14.04 instance.

Make sure everything is up to date:

    sudo apt-get update

Install NodeJS:

    sudo apt-get install nodejs
    sudo apt-get install npm

Install MongoDB - These steps are straight from the MongoDB docs. Choose whatever version you're comfortable with. I'm sticking with v2.4.9 because it seems to be the most recent version MongoDB-River supports without issues.

Import the MongoDB public GPG key:

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

Update your sources list:

    echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list

Get the 10gen package:

    sudo apt-get install mongodb-10gen

Then pick your version if you don't want the most recent. If you are setting your environment up on a Windows 7 or 8 machine, stay away from v2.6 until they work some bugs out with running it as a service:

    apt-get install mongodb-10gen=2.4.9

Prevent the version of your MongoDB installation from being bumped up when you update:

    echo "mongodb-10gen hold" | sudo dpkg --set-selections

Start the MongoDB service:

    sudo service mongodb start

Your database files default to /var/lib/mongo and your log files to /var/log/mongo. Create a database through the mongo shell and push some dummy data into it:

    mongo YOUR_DATABASE_NAME
    db.createCollection(YOUR_COLLECTION_NAME)
    for (var i = 1; i <= 25; i++) db.YOUR_COLLECTION_NAME.insert( { x : i } )

Now to convert the standalone MongoDB into a replica set. First shut down the process:

    mongo YOUR_DATABASE_NAME
    use admin
    db.shutdownServer()

Now we're running MongoDB as a service, so we don't pass in the "--replSet rs0" option in the command line argument when we restart the mongod process. Instead, we put it in the mongod.conf file:

    vi /etc/mongod.conf

Add these lines, subbing for your db and log paths:

    replSet=rs0
    dbpath=YOUR_PATH_TO_DATA/DB
    logpath=YOUR_PATH_TO_LOG/MONGO.LOG

Now open up the mongo shell again to initialize the replica set:

    mongo DATABASE_NAME
    config = { "_id" : "rs0", "members" : [ { "_id" : 0, "host" : "127.0.0.1:27017" } ] }
    rs.initiate(config)
    rs.slaveOk() // allows read operations to run on secondary members.

Now install Elasticsearch. I'm just following this helpful Gist.

Make sure Java is installed:

    sudo apt-get install openjdk-7-jre-headless -y

Stick with v1.1.x for now until the Mongo-River plugin bug gets fixed in v1.2.1:

    wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.deb
    sudo dpkg -i elasticsearch-1.1.1.deb
    curl -L http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master | tar -xz
    sudo mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/
    sudo rm -Rf *servicewrapper*
    sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install
    sudo ln -s `readlink -f /usr/local/share/elasticsearch/bin/service/elasticsearch` /usr/local/bin/rcelasticsearch

Make sure /etc/elasticsearch/elasticsearch.yml has the following config options enabled if you're only developing on a single node for now:

    cluster.name: "MY_CLUSTER_NAME"
    node.local: true

Start the Elasticsearch service:

    sudo service elasticsearch start

Verify it's working:

    curl http://localhost:9200

If you see something like this then you're good:

    {
      "status" : 200,
      "name" : "Chi Demon",
      "version" : {
        "number" : "1.1.2",
        "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
        "build_timestamp" : "2014-05-22T12:27:39Z",
        "build_snapshot" : false,
        "lucene_version" : "4.7"
      },
      "tagline" : "You Know, for Search"
    }

Now install the Elasticsearch plugins so it can play with MongoDB:

    bin/plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/1.6.0
    bin/plugin --install elasticsearch/elasticsearch-mapper-attachments/1.6.0

These two plugins aren't necessary but they're good for testing queries and visualizing changes to your indexes:

    bin/plugin --install mobz/elasticsearch-head
    bin/plugin --install lukas-vlcek/bigdesk

Restart Elasticsearch:

    sudo service elasticsearch restart

Finally index a collection from MongoDB:

    curl -XPUT localhost:9200/_river/DATABASE_NAME/_meta -d '{
      "type": "mongodb",
      "mongodb": {
        "servers": [
          { "host": "127.0.0.1", "port": 27017 }
        ],
        "db": "DATABASE_NAME",
        "collection": "ACTUAL_COLLECTION_NAME",
        "options": { "secondary_read_preference": true },
        "gridfs": false
      },
      "index": {
        "name": "ARBITRARY INDEX NAME",
        "type": "ARBITRARY TYPE NAME"
      }
    }'

Check that your index is in Elasticsearch:

    curl -XGET http://localhost:9200/_aliases

Check your cluster health:

    curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

It's probably yellow with some unassigned shards. We have to tell Elasticsearch what we want to work with:

    curl -XPUT 'localhost:9200/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'

Check cluster health again. It should be green now:

    curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

Go play.
MongoDB
23,846,971
187
I'm doing a Node.js project that contains sub-projects. One sub-project will have one MongoDB database and Mongoose will be used for wrapping and querying the db. But the problem is:

Mongoose doesn't allow using multiple databases in a single mongoose instance, as the models are built on one connection.

To use multiple mongoose instances, Node.js doesn't allow multiple module instances, as it has a caching system in require(). I know how to disable module caching in Node.js, but I think that is not a good solution, as it is only needed for mongoose.

I've tried to use createConnection() and openSet() in mongoose, but that was not the solution.

I've tried to deep-copy the mongoose instance (http://blog.imaginea.com/deep-copy-in-javascript/) to pass new mongoose instances to the sub-project, but it's throwing RangeError: Maximum call stack size exceeded.

I want to know if there is any way to use multiple databases with mongoose, or any workaround for this problem. Because I think mongoose is quite easy and fast. Or any other modules as recommendations?
According to the fine manual, createConnection() can be used to connect to multiple databases. However, you need to create separate models for each connection/database:

    var conn = mongoose.createConnection('mongodb://localhost/testA');
    var conn2 = mongoose.createConnection('mongodb://localhost/testB');

    // stored in 'testA' database
    var ModelA = conn.model('Model', new mongoose.Schema({
        title : { type : String, default : 'model in testA database' }
    }));

    // stored in 'testB' database
    var ModelB = conn2.model('Model', new mongoose.Schema({
        title : { type : String, default : 'model in testB database' }
    }));

I'm pretty sure that you can share the schema between them, but you have to check to make sure.
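Sharing one schema across both connections would look something like this (a sketch; models are bound per connection, so each model still lives in its own database):

    var schema = new mongoose.Schema({ title: String });
    var ModelA = conn.model('Model', schema);   // documents go to testA
    var ModelB = conn2.model('Model', schema);  // documents go to testB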
MongoDB
19,474,712
187
I am using MongoDB with Node.js. I have a collection which contains a date and other fields. The date is a JavaScript Date object. How can I sort this collection by date?
Just a slight modification to @JohnnyHK's answer:

    collection.find().sort({datefield: -1}, function(err, cursor){...});

In many use cases we wish to have the latest records returned (like for latest updates/inserts).
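With the plain Node.js driver, the callback usually goes on toArray rather than sort; an equivalent (hedged) form would be:

    collection.find().sort({ datefield: -1 }).toArray(function (err, docs) {
        // docs[0] is the most recent document
    });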
MongoDB
13,847,766
185
I'm getting the following error:

    alex@alex-K43U:/$ mongo
    MongoDB shell version: 2.2.0
    connecting to: test
    Thu Oct 11 11:46:53 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91
    exception: connect failed
    alex@alex-K43U:/$

This is what happens when I try to start mongodb:

    * Starting database mongodb    [fail]

I already tried mongo --repair. I made chown and chmod to var, lib, and data/db and log mongodb. Not sure what else to do. Any suggestions?

mongodb.log:

    ***** SERVER RESTARTED *****
    Thu Oct 11 08:29:40
    Thu Oct 11 08:29:40 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Thu Oct 11 08:29:40
    Thu Oct 11 08:29:41 [initandlisten] MongoDB starting : pid=1052 port=27017 dbpath=/var/lib/mongodb 32-bit host=alex-K43U
    Thu Oct 11 08:29:41 [initandlisten]
    Thu Oct 11 08:29:41 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Thu Oct 11 08:29:41 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Thu Oct 11 08:29:41 [initandlisten] **       with --journal, the limit is lower
    Thu Oct 11 08:29:41 [initandlisten]
    Thu Oct 11 08:29:41 [initandlisten] db version v2.2.0, pdfile version 4.5
    Thu Oct 11 08:29:41 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
    Thu Oct 11 08:29:41 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Thu Oct 11 08:29:41 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" }
    Thu Oct 11 08:29:41 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal"

    **************
    Unclean shutdown detected.
    Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
    *************

    Thu Oct 11 08:29:41 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
    Thu Oct 11 08:29:41 dbexit:
    Thu Oct 11 08:29:41 [initandlisten] shutdown: going to close listening sockets...
    Thu Oct 11 08:29:41 [initandlisten] shutdown: going to flush diaglog...
    Thu Oct 11 08:29:41 [initandlisten] shutdown: going to close sockets...
    Thu Oct 11 08:29:41 [initandlisten] shutdown: waiting for fs preallocator...
    Thu Oct 11 08:29:41 [initandlisten] shutdown: closing all files...
    Thu Oct 11 08:29:41 [initandlisten] closeAllFiles() finished
    Thu Oct 11 08:29:41 dbexit: really exiting now

EDIT: I removed the lock, then did mongod --repair and got this error:

    Thu Oct 11 12:05:37 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied
    Is a mongod instance already running?, terminating

so I did it with sudo:

    alex@alex-K43U:~$ sudo mongod --repair
    Thu Oct 11 12:05:42
    Thu Oct 11 12:05:42 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
    Thu Oct 11 12:05:42
    Thu Oct 11 12:05:42 [initandlisten] MongoDB starting : pid=5129 port=27017 dbpath=/data/db/ 32-bit host=alex-K43U
    Thu Oct 11 12:05:42 [initandlisten]
    Thu Oct 11 12:05:42 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
    Thu Oct 11 12:05:42 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
    Thu Oct 11 12:05:42 [initandlisten] **       with --journal, the limit is lower
    Thu Oct 11 12:05:42 [initandlisten]
    Thu Oct 11 12:05:42 [initandlisten] db version v2.2.0, pdfile version 4.5
    Thu Oct 11 12:05:42 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
    Thu Oct 11 12:05:42 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
    Thu Oct 11 12:05:42 [initandlisten] options: { repair: true }
    Thu Oct 11 12:05:42 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal"
    Thu Oct 11 12:05:42 [initandlisten] finished checking dbs
    Thu Oct 11 12:05:42 dbexit:
    Thu Oct 11 12:05:42 [initandlisten] shutdown: going to close listening sockets...
    Thu Oct 11 12:05:42 [initandlisten] shutdown: going to flush diaglog...
    Thu Oct 11 12:05:42 [initandlisten] shutdown: going to close sockets...
    Thu Oct 11 12:05:42 [initandlisten] shutdown: waiting for fs preallocator...
    Thu Oct 11 12:05:42 [initandlisten] shutdown: closing all files...
    Thu Oct 11 12:05:42 [initandlisten] closeAllFiles() finished
    Thu Oct 11 12:05:42 [initandlisten] shutdown: removing fs lock...
    Thu Oct 11 12:05:42 dbexit: really exiting now

But still having the same problem.
Step 1: Remove the lock file.

    sudo rm /var/lib/mongodb/mongod.lock

Step 2: Repair mongodb.

    sudo mongod --repair

Step 3: Start mongodb.

    sudo start mongodb

or

    sudo service mongodb start

Step 4: Check the status of mongodb.

    sudo status mongodb

or

    sudo service mongodb status

Step 5: Start the mongo console.

    mongo
MongoDB
12,831,939
183
A fairly common requirement in database applications is to track changes to one or more specific entities in a database. I've heard this called row versioning, a log table or a history table (I'm sure there are other names for it). There are a number of ways to approach it in an RDBMS--you can write all changes from all source tables to a single table (more of a log) or have a separate history table for each source table. You also have the option to either manage the logging in application code or via database triggers. I'm trying to think through what a solution to the same problem would look like in a NoSQL/document database (specifically MongoDB), and how it would be solved in a uniform way. Would it be as simple as creating version numbers for documents, and never overwriting them? Creating separate collections for "real" vs. "logged" documents? How would this affect querying and performance? Anyway, is this a common scenario with NoSQL databases, and if so, is there a common solution?
Good question, I was looking into this myself as well.

Create a new version on each change

I came across the Versioning module of the Mongoid driver for Ruby. I haven't used it myself, but from what I could find, it adds a version number to each document. Older versions are embedded in the document itself. The major drawback is that the entire document is duplicated on each change, which will result in a lot of duplicate content being stored when you're dealing with large documents. This approach is fine though when you're dealing with small-sized documents and/or don't update documents very often.

Only store changes in a new version

Another approach would be to store only the changed fields in a new version. Then you can 'flatten' your history to reconstruct any version of the document. This is rather complex though, as you need to track changes in your model and store updates and deletes in a way that your application can reconstruct the up-to-date document. This might be tricky, as you're dealing with structured documents rather than flat SQL tables.

Store changes within the document

Each field can also have an individual history. Reconstructing documents to a given version is much easier this way. In your application you don't have to explicitly track changes, but just create a new version of the property when you change its value. A document could look something like this:

    {
      _id: "4c6b9456f61f000000007ba6",
      title: [
        { version: 1, value: "Hello world" },
        { version: 6, value: "Foo" }
      ],
      body: [
        { version: 1, value: "Is this thing on?" },
        { version: 2, value: "What should I write?" },
        { version: 6, value: "This is the new body" }
      ],
      tags: [
        { version: 1, value: [ "test", "trivial" ] },
        { version: 6, value: [ "foo", "test" ] }
      ],
      comments: [
        {
          author: "joe", // Unversioned field
          body: [
            { version: 3, value: "Something cool" }
          ]
        },
        {
          author: "xxx",
          body: [
            { version: 4, value: "Spam" },
            { version: 5, deleted: true }
          ]
        },
        {
          author: "jim",
          body: [
            { version: 7, value: "Not bad" },
            { version: 8, value: "Not bad at all" }
          ]
        }
      ]
    }

Marking part of the document as deleted in a version is still somewhat awkward though. You could introduce a state field for parts that can be deleted/restored from your application:

    {
      author: "xxx",
      body: [
        { version: 4, value: "Spam" }
      ],
      state: [
        { version: 4, deleted: false },
        { version: 5, deleted: true }
      ]
    }

With each of these approaches you can store an up-to-date and flattened version in one collection and the history data in a separate collection. This should improve query times if you're only interested in the latest version of a document. But when you need both the latest version and historical data, you'll need to perform two queries, rather than one. So the choice of using a single collection vs. two separate collections should depend on how often your application needs the historical versions.

Most of this answer is just a brain dump of my thoughts, I haven't actually tried any of this yet. Looking back on it, the first option is probably the easiest and best solution, unless the overhead of duplicate data is very significant for your application. The second option is quite complex and probably isn't worth the effort. The third option is basically an optimization of option two and should be easier to implement, but probably isn't worth the implementation effort unless you really can't go with option one.

Looking forward to feedback on this, and other people's solutions to the problem :)
MongoDB
3,507,624
182
What is the difference between save and insert in MongoDB? Both look the same:

    db.users.save({username:"google",password:"google123"})
    db.users.insert({username:"google",password:"google123"})
Save vs Insert:

In your given examples, the behavior is essentially the same. save behaves differently if it is passed an "_id" parameter. For save, if the document contains _id, it will upsert, querying the collection on the _id field. If not, it will insert.

If a document does not exist with the specified _id value, the save() method performs an insert with the specified fields in the document. If a document exists with the specified _id value, the save() method performs an update, replacing all fields in the existing record with the fields from the document.

Save vs Update:

update modifies an existing document matched by your query params. If there is no such matching document, that's when upsert comes into the picture.

- upsert: false — nothing happens when no such document exists
- upsert: true — a new doc gets created, with contents equal to the query params plus the update params

save: doesn't allow any query params. If _id exists and there is a matching doc with the same _id, it replaces it. When no _id is specified / there is no matching document, it inserts the document as a new one.
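A quick mongo shell illustration of the _id branch (legacy save(); the values are made up):

    db.users.save({ _id: 1, username: "google" })  // no doc with _id 1 yet -> insert
    db.users.save({ _id: 1, username: "bing" })    // doc with _id 1 exists -> full replace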
MongoDB
16,209,681
180
My response back from MongoDB, after querying an aggregated function on a document using Python, returns a valid response; I can print it but cannot return it.

Error:

    TypeError: ObjectId('51948e86c25f4b1d1c0d303c') is not JSON serializable

Print:

    {'result': [{'_id': ObjectId('51948e86c25f4b1d1c0d303c'), 'api_calls_with_key': 4, 'api_calls_per_day': 0.375, 'api_calls_total': 6, 'api_calls_without_key': 2}], 'ok': 1.0}

But when I try to return:

    TypeError: ObjectId('51948e86c25f4b1d1c0d303c') is not JSON serializable

It is a RESTful call:

    @appv1.route('/v1/analytics')
    def get_api_analytics():
        # get handle to collections in MongoDB
        statistics = sldb.statistics

        objectid = ObjectId("51948e86c25f4b1d1c0d303c")

        analytics = statistics.aggregate([
            {'$match': {'owner': objectid}},
            {'$project': {'owner': "$owner",
                          'api_calls_with_key': {'$cond': [{'$eq': ["$apikey", None]}, 0, 1]},
                          'api_calls_without_key': {'$cond': [{'$ne': ["$apikey", None]}, 0, 1]}
                          }},
            {'$group': {'_id': "$owner",
                        'api_calls_with_key': {'$sum': "$api_calls_with_key"},
                        'api_calls_without_key': {'$sum': "$api_calls_without_key"}
                        }},
            {'$project': {'api_calls_with_key': "$api_calls_with_key",
                          'api_calls_without_key': "$api_calls_without_key",
                          'api_calls_total': {'$add': ["$api_calls_with_key", "$api_calls_without_key"]},
                          'api_calls_per_day': {'$divide': [{'$add': ["$api_calls_with_key", "$api_calls_without_key"]}, {'$dayOfMonth': datetime.now()}]},
                          }}
        ])

        print(analytics)
        return analytics

The db is well connected and the collection is there too, and I got back a valid expected result, but when I try to return it, it gives me a JSON error. Any idea how to convert the response back into JSON? Thanks
Bson in the PyMongo distribution provides json_util - you can use that one instead to handle BSON types:

    import json
    from bson import json_util

    def parse_json(data):
        return json.loads(json_util.dumps(data))
MongoDB
16,586,180
179
I'm currently having problems in creating a schema for the document below. The response from the server always returns the "trk" field values as [Object]. Somehow I have no idea how this should work, as I tried at least all approaches which made sense to me ;-)

If this helps, my Mongoose version is 3.6.20 and MongoDB 2.4.7. And before I forget, it would be nice to also set it as an index (2d).

Original data:

    {
        "_id": ObjectId("51ec4ac3eb7f7c701b000000"),
        "gpx": {
            "metadata": {
                "desc": "Nürburgring VLN-Variante",
                "country": "de",
                "isActive": true
            },
            "trk": [
                { "lat": 50.3299594, "lng": 6.9393006 },
                { "lat": 50.3295046, "lng": 6.9390688 },
                { "lat": 50.3293714, "lng": 6.9389939 },
                { "lat": 50.3293284, "lng": 6.9389634 }
            ]
        }
    }

Mongoose Schema:

    var TrackSchema = Schema({
        _id: Schema.ObjectId,
        gpx: {
            metadata: {
                desc: String,
                country: String,
                isActive: Boolean
            },
            trk: [{ lat: Number, lng: Number }]
        }
    }, { collection: "tracks" });

The response from the Network tab in Chrome always looks like this (that's only the trk part which is wrong):

    trk: [ [Object], [Object], [Object], [Object], [Object], [Object],

I already tried different schema definitions for "trk":

    trk: Schema.Types.Mixed
    trk: [Schema.Types.Mixed]
    trk: [{ type: [Number], index: "2d" }]

Hope you can help me ;-)
You can declare trk in the following ways: either

    trk: [{
        lat: String,
        lng: String
    }]

or

    trk: { type: Array, "default": [] }

In the second case, during insertion make the object and push it into the array like this:

    db.update({'Searching criteria goes here'},
    {
        $push: {
            trk: {
                "lat": 50.3293714,
                "lng": 6.9389939
            } // inserted data is the object to be inserted
        }
    });

or you can set the array of objects by

    db.update({'searching criteria goes here'},
    {
        $set: {
            trk: [
                { "lat": 50.3293714, "lng": 6.9389939 },
                { "lat": 50.3293284, "lng": 6.9389634 }
            ] // inserted array containing the list of objects
        }
    });
MongoDB
19,695,058
178
I am using the same connection string on local and production. When the connection string is

    mongodb://localhost/mydb

what is the username and password? Is it secure to keep it this way?
By default mongodb has no access control enabled, so there is no default user or password. To enable access control, use either the command line option --auth or the security.authorization configuration file setting. You can use the following procedure or refer to Enabling Auth in the MongoDB docs.

Procedure:

Start MongoDB without access control:

    mongod --port 27017 --dbpath /data/db1

Connect to the instance:

    mongosh --port 27017

Create the user administrator:

    use admin
    db.createUser(
      {
        user: "myUserAdmin",
        pwd: passwordPrompt(), // or cleartext password
        roles: [
          { role: "userAdminAnyDatabase", db: "admin" },
          { role: "readWriteAnyDatabase", db: "admin" }
        ]
      }
    )

Re-start the MongoDB instance with access control:

    mongod --auth --port 27017 --dbpath /data/db1

Authenticate as the user administrator:

    mongosh --port 27017 --authenticationDatabase "admin" -u "myUserAdmin" -p
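Once auth is on, the credentials can also go in the connection string itself; the general shape (substitute your own values) is:

    mongodb://myUserAdmin:yourPassword@localhost:27017/mydb?authSource=admin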
MongoDB
38,921,414
177
I am trying to create and use an enum type in Mongoose. I checked it out, but I'm not getting the proper result. I'm using enum in my program as follows. My schema is:

    var RequirementSchema = new mongooseSchema({
        status: {
            type: String,
            enum: ['NEW,'STATUS'],
            default: 'NEW'
        },
    })

But I am a little bit confused here: how can I put the value of an enum like in Java, NEW("new")? How can I save an enum into the database according to its enumerable values? I am using it in express node.js.
The enums here are basically String objects. Change the enum line to

    enum: ['NEW', 'STATUS']

instead. You have a typo there with your quotation marks.
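With the quotes fixed, values outside the enum are rejected on save; a small sketch of the validation behavior (model name is assumed):

    var Requirement = mongoose.model('Requirement', RequirementSchema);

    new Requirement({ status: 'STATUS' }).save();  // ok, allowed value
    new Requirement({ status: 'OLD' }).save(function (err) {
        // err is a ValidationError: 'OLD' is not a valid enum value
    });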
MongoDB
29,299,477
177
I know that ObjectIds contain the date they were created on. Is there a way to query this aspect of the ObjectId?
Popping Timestamps into ObjectIds covers queries based on dates embedded in the ObjectId in great detail. Briefly, in JavaScript code:

    /* This function returns an ObjectId embedded with a given datetime */
    /* Accepts both Date object and string input */
    function objectIdWithTimestamp(timestamp) {
        /* Convert string date to Date object (otherwise assume timestamp is a date) */
        if (typeof(timestamp) == 'string') {
            timestamp = new Date(timestamp);
        }

        /* Convert date object to hex seconds since Unix epoch */
        var hexSeconds = Math.floor(timestamp/1000).toString(16);

        /* Create an ObjectId with that hex timestamp */
        var constructedObjectId = ObjectId(hexSeconds + "0000000000000000");

        return constructedObjectId
    }

    /* Find all documents created after midnight on May 25th, 1980 */
    db.mycollection.find({ _id: { $gt: objectIdWithTimestamp('1980/05/25') } });
MongoDB
8,749,971
177
I have a REST service built in node.js with Restify and Mongoose, and a MongoDB with a collection with about 30,000 regular-sized documents. I have my node service running through pmx and pm2.

Yesterday, suddenly, node started throwing errors with the message "MongoError: Topology was destroyed", nothing more. I have no idea what is meant by this and what could have possibly triggered it. There is also not much to be found when google-searching this. So I thought I'd ask here.

After restarting the node service today, the errors stopped coming in. I also have one of these running in production, and it scares me that this could happen at any given time to a pretty crucial part of the setup running there...

I'm using the following versions of the mentioned packages:

    mongoose: 4.0.3
    restify: 3.0.3
    node: 0.10.25
It seems to mean your node server's connection to your MongoDB instance was interrupted while it was trying to write to it. Take a look at the Mongo source code that generates that error:

    Mongos.prototype.insert = function(ns, ops, options, callback) {
        if(typeof options == 'function') callback = options, options = {};
        if(this.s.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));

        // Topology is not connected, save the call in the provided store to be
        // Executed at some point when the handler deems it's reconnected
        if(!this.isConnected() && this.s.disconnectHandler != null) {
            callback = bindToCurrentDomain(callback);
            return this.s.disconnectHandler.add('insert', ns, ops, options, callback);
        }

        executeWriteOperation(this.s, 'insert', ns, ops, options, callback);
    }

This does not appear to be related to the Sails issue cited in the comments, as no upgrades were installed to precipitate the crash or the "fix".
MongoDB
30,909,492
176
What are the advantages of using NoSQL databases? I've read a lot about them lately, but I'm still unsure why I would want to implement one, and under what circumstances I would want to use one.
Relational databases enforce ACID. So, you will have schema-based, transaction-oriented data stores. It's proven and suitable for 99% of real world applications. You can practically do anything with relational databases.

But there are limitations on speed and scaling when it comes to massive high-availability data stores. For example, Google and Amazon have terabytes of data stored in big data centers. Querying and inserting is not performant in these scenarios because of the blocking/schema/transaction nature of RDBMSs. That's the reason they have implemented their own databases (actually, key-value stores) for massive performance gain and scalability.

NoSQL databases have been around for a long time - just the term is new. Some examples are graph, object, column, XML and document databases.

For your 2nd question: Is it okay to use both on the same site? Why not? Both serve different purposes, right?
MongoDB
3,713,313
176
For example, this code results in a collection called "datas" being created:

    var Dataset = mongoose.model('data', dataSchema);

And this code results in a collection called "users" being created:

    var User = mongoose.model('user', dataSchema);

Thanks
Mongoose is trying to be smart by making your collection name plural. You can however force it to be whatever you want:

    var dataSchema = new Schema({..}, { collection: 'data' })
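Mongoose also accepts the collection name as a third argument to model(), which does the same thing:

    var Dataset = mongoose.model('data', dataSchema, 'data'); // third argument forces the collection name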
MongoDB
10,547,118
175
I've been using mongo on my Mac OS X 10.8, and suddenly yesterday this warning appeared in my logs (and it's present when starting the shell too):

    WARNING: soft rlimits too low. Number of files is 256, should be at least 1000

Could someone explain what it means? And should I increase the rlimits somehow?
On a Mac, you are probably using mongodb for development purposes. If yes, then you can ignore this warning.
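If you'd rather silence the warning, the soft open-files limit can be raised for the current shell before starting mongod (this resets per session; the path is an assumption):

    ulimit -n 1024
    mongod --dbpath /data/db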
MongoDB
16,621,763
174
Is there a query for calculating how many distinct values a field contains in the DB? E.g., I have a field for country and there are 8 types of country values (spain, england, france, etc...). If someone adds more documents with a new country, I would like the query to return 9. Is there an easier way than group and count?
MongoDB has a distinct command which returns an array of distinct values for a field; you can check the length of the array for a count. There is a shell db.collection.distinct() helper as well:

    > db.countries.distinct('country');
    [ "Spain", "England", "France", "Australia" ]

    > db.countries.distinct('country').length
    4

As noted in the MongoDB documentation:

    Results must not be larger than the maximum BSON size (16MB). If your results exceed the maximum BSON size, use the aggregation pipeline to retrieve distinct values using the $group operator, as described in Retrieve Distinct Values with the Aggregation Pipeline.
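The aggregation form mentioned above would look roughly like this ($count requires MongoDB 3.4+; on older versions a second $group with $sum: 1 does the same job):

    db.countries.aggregate([
        { $group: { _id: "$country" } },   // one document per distinct country
        { $count: "distinctCountries" }    // count those documents
    ])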
MongoDB
14,924,495
173
I have a large collection of 300 question objects in a database test. I can interact with this collection easily through MongoDB's interactive shell; however, when I try to get the collection through Mongoose in an express.js application I get an empty array. My question is, how can I access this already existing dataset instead of recreating it in express? Here's some code: var mongoose = require('mongoose'); var Schema = mongoose.Schema; mongoose.connect('mongodb://localhost/test'); mongoose.model('question', new Schema({ url: String, text: String, id: Number })); var questions = mongoose.model('question'); questions.find({}, function(err, data) { console.log(err, data, data.length); }); This outputs: null [] 0
Mongoose added the ability to specify the collection name under the schema, or as the third argument when declaring the model. Otherwise it will use the pluralized version given by the name you map to the model. Try something like the following, either schema-mapped: new Schema({ url: String, text: String, id: Number}, { collection : 'question' }); // collection name or model mapped: mongoose.model('Question', new Schema({ url: String, text: String, id: Number}), 'question'); // collection name
MongoDB
5,794,834
173
I'm sure MongoDB doesn't officially support "joins". What does this mean? Does this mean "We cannot connect two collections (tables) together."? I think if we put the value of _id from collection A into the other_id field in collection B, can we simply connect the two collections? If my understanding is correct, MongoDB can connect two tables together, say, when we run a query. This is done by "Reference", as written in http://www.mongodb.org/display/DOCS/Schema+Design. Then what does "joins" really mean? I'd love to know the answer because this is essential to learning MongoDB schema design. http://www.mongodb.org/display/DOCS/Schema+Design
It's not a join, since the relationship is only evaluated when needed. A join (in a SQL database), on the other hand, will resolve relationships and return them as if they were a single table (you "join two tables into one"). You can read more about DBRef here: http://docs.mongodb.org/manual/applications/database-references/ There are two possible solutions for resolving references. One is to do it manually, as you have almost described. Just save a document's _id in another document's other_id, then write your own function to resolve the relationship. The other solution is to use DBRefs as described on the manual page above, which will make MongoDB resolve the relationship client-side on demand. Which solution you choose does not matter so much because both methods will resolve the relationship client-side (note that a SQL database resolves joins on the server-side).
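A minimal shell sketch of the manual approach, assuming a collection B whose other_id field stores _id values from collection A (all names here are illustrative):

// Round trip 1: fetch the parent document
var a = db.A.findOne({ _id: someId });

// Round trip 2: resolve the "relationship" on the client
var related = db.B.find({ other_id: a._id }).toArray();

The two round trips are exactly what distinguishes this from a server-side SQL join.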
MongoDB
4,067,197
170
What does going with a document based NoSQL option buy you over a KV store, and vice-versa?
A key-value store provides the simplest possible data model and is exactly what the name suggests: it's a storage system that stores values indexed by a key. You're limited to querying by key, and the values are opaque; the store doesn't know anything about them. This allows very fast read and write operations (a simple disk access) and I see this model as a kind of non-volatile cache (i.e. well suited if you need fast accesses by key to long-lived data). A document-oriented database extends the previous model, and values are stored in a structured format (a document, hence the name) that the database can understand. For example, a document could be a blog post with the comments and the tags stored in a denormalized way. Since the data are transparent, the store can do more work (like indexing fields of the document) and you're not limited to querying by key. As I hinted, such databases allow you to fetch an entire page's data with a single query and are well suited for content-oriented applications (which is why big sites like Facebook or Amazon like them). Other kinds of NoSQL databases include column-oriented stores, graph databases and even object databases. But this goes beyond the question. See also Comparing Document Databases to Key-Value Stores Analysis of the NoSQL Landscape
MongoDB
3,046,001
169
From MongoDB The Definitive Guide: Documents larger than 4MB (when converted to BSON) cannot be saved to the database. This is a somewhat arbitrary limit (and may be raised in the future); it is mostly to prevent bad schema design and ensure consistent performance. I don't understand this limit; does this mean that a document containing a blog post with a lot of comments which just so happens to be larger than 4MB cannot be stored as a single document? Also, does this count nested documents too? What if I wanted a document which audits the changes to a value? (It may eventually grow, exceeding the 4MB limit.) Hope someone explains this correctly. I have just started reading about MongoDB (the first NoSQL database I'm learning about). Thank you.
First off, this actually is being raised in the next version to 8MB or 16MB ... but I think to put this into perspective, Eliot from 10gen (who developed MongoDB) puts it best: EDIT: The size has been officially 'raised' to 16MB So, on your blog example, 4MB is actually a whole lot.. For example, the full uncompressed text of "War of the Worlds" is only 364k (html): http://www.gutenberg.org/etext/36 If your blog post is that long with that many comments, I for one am not going to read it :) For trackbacks, if you dedicated 1MB to them, you could easily have more than 10k (probably closer to 20k) So except for truly bizarre situations, it'll work great. And in the exception case of spam, I really don't think you'd want a 20mb object anyway. I think capping trackbacks at 15k or so makes a lot of sense no matter what for performance. Or at least special casing if it ever happens. -Eliot I think you'd be pretty hard pressed to reach the limit ... and over time, if you upgrade ... you'll have to worry less and less. The main point of the limit is so you don't use up all the RAM on your server (as you need to load all MBs of the document into RAM when you query it.) So the limit is some % of normal usable RAM on a common system ... which will keep growing year on year. Note on Storing Files in MongoDB If you need to store documents (or files) larger than 16MB you can use the GridFS API which will automatically break up the data into segments and stream them back to you (thus avoiding the issue with size limits/RAM.) Instead of storing a file in a single document, GridFS divides the file into parts, or chunks, and stores each chunk as a separate document. GridFS uses two collections to store files. One collection stores the file chunks, and the other stores file metadata. You can use this method to store images, files, videos, etc. in the database much as you might in a SQL database. I have used this to even store multi-gigabyte video files.
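A minimal Node.js sketch of streaming a large file into GridFS, assuming driver 2.1+ where GridFSBucket is available (database, bucket and file names are illustrative):

var mongodb = require('mongodb');
var fs = require('fs');

mongodb.MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  var bucket = new mongodb.GridFSBucket(db, { bucketName: 'videos' });

  // GridFS splits the stream into chunk documents under the hood
  fs.createReadStream('./movie.mp4')
    .pipe(bucket.openUploadStream('movie.mp4'))
    .on('error', function (e) { throw e; })
    .on('finish', function () { db.close(); });
});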
MongoDB
4,667,597
169
Recently I started using MongoDB with Mongoose on Node.js. When I use the Model.find method with an $or condition and the _id field, Mongoose does not work properly. This does not work: User.find({ $or: [ { '_id': param }, { 'name': param }, { 'nickname': param } ] }, function(err, docs) { if(!err) res.send(docs); }); By the way, if I remove the '_id' part, this DOES work! User.find({ $or: [ { 'name': param }, { 'nickname': param } ] }, function(err, docs) { if(!err) res.send(docs); }); And in the MongoDB shell, both work properly.
I solved it through googling: var ObjectId = require('mongoose').Types.ObjectId; var objId = new ObjectId( (param.length < 12) ? "123456789012" : param ); // Cast the string 'param' to the ObjectId type. To avoid an exception, // 'param' must be at least 12 characters long, hence the placeholder fallback. User.find( { $or:[ {'_id':objId}, {'name':param}, {'nickname':param} ]}, function(err,docs){ if(!err) res.send(docs); });
MongoDB
7,382,207
167
I have been trying W3schools tutorial on nodeJS with MongoDB. When I try to implement this example in a nodeJS environment and invoke the function with an AJAX call, I got the error below: TypeError: db.collection is not a function at c:\Users\user\Desktop\Web Project\WebService.JS:79:14 at args.push (c:\Users\user\node_modules\mongodb\lib\utils.js:431:72) at c:\Users\user\node_modules\mongodb\lib\mongo_client.js:254:5 at connectCallback (c:\Users\user\node_modules\mongodb\lib\mongo_client.js:933:5) at c:\Users\user\node_modules\mongodb\lib\mongo_client.js:794:11 at _combinedTickCallback (internal/process/next_tick.js:73:7) at process._tickCallback (internal/process/next_tick.js:104:9) Please find below my implemented code: var MongoClient = require('mongodb').MongoClient; var url = "mongodb://localhost:27017/mytestingdb"; MongoClient.connect(url, function(err, db) { if (err) throw err; db.collection("customers").findOne({}, function(err, result) { if (err) throw err; console.log(result.name); db.close(); }); }); Note that the error occurs whenever the execution hits: db.collection("customers").findOne({}, function(err, result) {} Also, note (in case it matters) that I have installed the latest MongoDB package for node JS (npm install mongodb), and the MongoDB version is MongoDB Enterprise 3.4.4, with MongoDB Node.js driver v3.0.0-rc0.
For people on version 3.0 of the MongoDB native NodeJS driver: (This is applicable to people with "mongodb": "^3.0.0-rc0", or a later version in package.json, that want to keep using the latest version.) In version 2.x of the MongoDB native NodeJS driver you would get the database object as an argument to the connect callback: MongoClient.connect('mongodb://localhost:27017/mytestingdb', (err, db) => { // Database returned }); According to the changelog for 3.0 you now get a client object containing the database object instead: MongoClient.connect('mongodb://localhost:27017', (err, client) => { // Client returned var db = client.db('mytestingdb'); }); The close() method has also been moved to the client. The code in the question can therefore be translated to: MongoClient.connect('mongodb://localhost', function (err, client) { if (err) throw err; var db = client.db('mytestingdb'); db.collection('customers').findOne({}, function (findErr, result) { if (findErr) throw findErr; console.log(result.name); client.close(); }); });
MongoDB
47,662,220
166
I am using Mongoose with my Node.js app and this is my configuration: mongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true, useFindAndModify: false }).then(()=>{ console.log(`connection to database established`) }).catch(err=>{ console.log(`db error ${err.message}`); process.exit(-1) }) but in the console it still gives me the warning: DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor. What is the problem? I was not using useUnifiedTopology before but now it shows up in the console. I added it to the config but it still gives me this warning, why? I do not even use MongoClient. Edit As Felipe Plets answered, there was a problem in Mongoose and they fixed this bug in later versions. So you can solve the problem by updating the Mongoose version.
Update Mongoose 5.7.1 was released and seems to fix the issue, so setting the useUnifiedTopology option works as expected. mongoose.connect(mongoConnectionString, {useNewUrlParser: true, useUnifiedTopology: true}); Original answer I was facing the same issue and decided to deep-dive into the Mongoose code: https://github.com/Automattic/mongoose/search?q=useUnifiedTopology&unscoped_q=useUnifiedTopology Seems to be an option added in version 5.7 of Mongoose and not well documented yet. I could not even find it mentioned in the library history https://github.com/Automattic/mongoose/blob/master/History.md According to a comment in the code: @param {Boolean} [options.useUnifiedTopology=false] False by default. Set to true to opt in to the MongoDB driver's replica set and sharded cluster monitoring engine. There is also an issue on the project GitHub about this error: https://github.com/Automattic/mongoose/issues/8156 In my case I don't use Mongoose in a replica set or sharded cluster and thought the option should be false. But if false it complains the setting should be true. Once it is true, it still doesn't work, probably because my database does not run on a replica set or sharded cluster. I've downgraded to 5.6.13 and my project is back working fine. So the only option I see for now is to downgrade and wait for the fix in a newer version.
MongoDB
57,895,175
164
I am working with Docker and I have a stack with PHP, MySQL, Apache and Redis. I need to add MongoDB now, so I was checking the Dockerfile for the latest version and also the docker-entrypoint.sh file from the MongoDB Dockerhub, but I couldn't find a way to set up a default DB, admin user/password and possibly the auth method for the container from a docker-compose.yml file. In MySQL you can set up some ENV variables, for example: db: image: mysql:5.7 env_file: .env environment: MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} MYSQL_DATABASE: ${MYSQL_DATABASE} MYSQL_USER: ${MYSQL_USER} MYSQL_PASSWORD: ${MYSQL_PASSWORD} And this will set up the DB and the user/password as the root password. Is there any way to achieve the same with MongoDB? Does anyone have some experience or a workaround?
Here is another, cleaner solution using docker-compose and a JS script. This example assumes that both files (docker-compose.yml and mongo-init.js) lie in the same folder. docker-compose.yml version: '3.7' services: mongodb: image: mongo:latest container_name: mongodb restart: always environment: MONGO_INITDB_ROOT_USERNAME: <admin-user> MONGO_INITDB_ROOT_PASSWORD: <admin-password> MONGO_INITDB_DATABASE: <database to create> ports: - 27017:27017 volumes: - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro mongo-init.js db.createUser( { user: "<user for database which shall be created>", pwd: "<password of user>", roles: [ { role: "readWrite", db: "<database to create>" } ] } ); Then simply start the service by running the following docker-compose command docker-compose up --build -d mongodb Note: The code in the docker-entrypoint-initdb.d folder is only executed if the database has never been initialized before.
MongoDB
42,912,755
164
How would you do a many-to-many association with MongoDB? For example; let's say you have a Users table and a Roles table. Users have many roles, and roles have many users. In SQL land you would create a UserRoles table. Users: Id Name Roles: Id Name UserRoles: UserId RoleId How is same sort of relationship handled in MongoDB?
Depending on your query needs you can put everything in the user document: {name:"Joe" ,roles:["Admin","User","Engineer"] } To get all the Engineers, use: db.things.find( { roles : "Engineer" } ); If you want to maintain the roles in separate documents then you can include the document's _id in the roles array instead of the name: {name:"Joe" ,roles:["4b5783300334000000000aa9","5783300334000000000aa943","6c6793300334001000000006"] } and set up the roles like: {_id:"6c6793300334001000000006" ,rolename:"Engineer" }
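When the roles array holds ids rather than names, a small sketch for resolving them client-side (using the string ids from the example above; the same $in query applies to real ObjectIds):

var user = db.users.findOne({ name: "Joe" });

// Fetch the full role documents referenced by this user
db.roles.find({ _id: { $in: user.roles } });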
MongoDB
2,336,700
164
Is there a way to update values in an object? { _id: 1, name: 'John Smith', items: [{ id: 1, name: 'item 1', value: 'one' },{ id: 2, name: 'item 2', value: 'two' }] } Let's say I want to update the name and value fields for the item where id = 2. I have tried the following with Mongoose: var update = {name: 'updated item2', value: 'two updated'}; Person.update({'items.id': 2}, {'$set': {'items.$': update}}, function(err) { ... The problem with this approach is that it updates/sets the entire object; therefore, in this case I lose the id field. Is there a better way in Mongoose to set certain values in an array but leave other values alone? I have also queried for just the Person: Person.find({...}, function(err, person) { person.items ..... // I might be able to search through all the items here and find item with id 2 then update the values I want and call person.save(). });
You're close; you should use dot notation in your use of the $ update operator to do that: Person.update({'items.id': 2}, {'$set': { 'items.$.name': 'updated item2', 'items.$.value': 'two updated' }}, function(err) { ...
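On MongoDB 3.6+ (with a correspondingly recent Mongoose), the filtered positional operator is an alternative sketch that makes the targeting explicit:

Person.updateOne(
  { _id: personId },                         // match the parent document
  { $set: {
      'items.$[elem].name': 'updated item2',
      'items.$[elem].value': 'two updated'
  }},
  { arrayFilters: [{ 'elem.id': 2 }] },      // which array entries to modify
  function(err) { /* ... */ }
);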
MongoDB
15,691,224
163
Everywhere I look, I see that MongoDB is CP. But when I dig in I see it is eventually consistent. Is it CP when you use safe=true? If so, does that mean that when I write with safe=true, all replicas will be updated before getting the result?
MongoDB is strongly consistent by default - if you do a write and then do a read, assuming the write was successful you will always be able to read the result of the write you just performed. This is because MongoDB is a single-master system and all reads go to the primary by default. If you optionally enable reading from the secondaries then MongoDB becomes eventually consistent where it's possible to read out-of-date results. MongoDB also gets high-availability through automatic failover in replica sets: http://www.mongodb.org/display/DOCS/Replica+Sets
MongoDB
11,292,215
163
I was wondering if there is a way to force a unique collection entry, but only if the entry is not null. Sample schema: var UsersSchema = new Schema({ name : {type: String, trim: true, index: true, required: true}, email : {type: String, trim: true, index: true, unique: true} }); 'email' in this case is not required, but if 'email' is saved I want to make sure that this entry is unique (on a database level). Empty entries seem to get the value 'null', so every entry with no email crashes with the 'unique' option (if there is a different user with no email). Right now I'm solving it on an application level, but would love to save that db query. thx
As of MongoDB v1.8+ you can get the desired behavior of ensuring unique values but allowing multiple docs without the field by setting the sparse option to true when defining the index. As in: email : {type: String, trim: true, index: true, unique: true, sparse: true} Or in the shell: db.users.ensureIndex({email: 1}, {unique: true, sparse: true}); Note that a unique, sparse index still does not allow multiple docs with an email field with a value of null, only multiple docs without an email field. See http://docs.mongodb.org/manual/core/index-sparse/
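On MongoDB 3.2+, a partial index is an alternative sketch that is stricter than sparse: it only indexes documents where email is actually a string, so explicit nulls are excluded as well:

db.users.createIndex(
  { email: 1 },
  { unique: true, partialFilterExpression: { email: { $type: "string" } } }
);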
MongoDB
7,955,040
163
I'm using the Mongoose Library for accessing MongoDB with node.js Is there a way to remove a key from a document? i.e. not just set the value to null, but remove it? User.findOne({}, function(err, user){ //correctly sets the key to null... but it's still present in the document user.key_to_delete = null; // doesn't seem to have any effect delete user.key_to_delete; user.save(); });
In early versions, you would have needed to drop down to the node-mongodb-native driver. Each model has a collection object that contains all the methods that node-mongodb-native offers. So you can do the action in question like this: User.collection.update({_id: user._id}, {$unset: {field: 1 }}); Since version 2.0 you can do: User.update({_id: user._id}, {$unset: {field: 1 }}, callback); And since version 2.4, if you have an instance of a model already you can do: doc.field = undefined; doc.save(callback);
MongoDB
4,486,926
163
I have a collection T, with 2 fields: Grade1 and Grade2, and I want to select those with condition Grade1 > Grade2, how can I get a query like in MySQL? Select * from T Where Grade1 > Grade2
You can use a $where. Just be aware it will be fairly slow (it has to execute JavaScript code on every record), so combine with indexed queries if you can. db.T.find( { $where: function() { return this.Grade1 > this.Grade2 } } ); or more compact: db.T.find( { $where : "this.Grade1 > this.Grade2" } ); Update: for MongoDB v3.6+ you can use $expr, as sketched below.
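A sketch of the $expr form (MongoDB 3.6+), which evaluates an aggregation expression inside find and avoids running JavaScript per document:

db.T.find({ $expr: { $gt: [ "$Grade1", "$Grade2" ] } })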
MongoDB
4,442,453
161
When we run a Mongo find() query without any sort order specified, what does the database internally use to sort the results? According to the documentation on the mongo website: When executing a find() with no parameters, the database returns objects in forward natural order. For standard tables, natural order is not particularly useful because, although the order is often close to insertion order, it is not guaranteed to be. However, for Capped Collections, natural order is guaranteed to be the insertion order. This can be very useful. However for standard collections (non capped collections), what field is used to sort the results? Is it the _id field or something else? Edit: Basically, I guess what I am trying to get at is that if I execute the following search query: db.collection.find({"x":y}).skip(10000).limit(1000); At two different points in time: t1 and t2, will I get different result sets: When there have been no additional writes between t1 & t2? When there have been new writes between t1 & t2? There are new indexes that have been added between t1 & t2? I have run some tests on a temp database and the results I have gotten are the same (Yes) for all the 3 cases - but I wanted to be sure and I am certain that my test cases weren't very thorough.
What is the default sort order when none is specified? The default internal sort order (or natural order) is an undefined implementation detail. Maintaining order is extra overhead for storage engines and MongoDB's API does not mandate predictability outside of an explicit sort() or the special cases of clustered collections and fixed-sized capped collections. For typical workloads it is desirable for the storage engine to try to reuse available preallocated space and make decisions about how to most efficiently store data on disk and in memory. Without any query criteria, results will be returned by the storage engine in natural order (aka in the order they are found). Result order may coincide with insertion order but this behaviour is not guaranteed and cannot be relied on (aside from clustered or capped collections). Some examples that may affect storage (natural) order: WiredTiger uses a different representation of documents on disk versus the in-memory cache, so natural ordering may change based on internal data structures. The original MMAPv1 storage engine (removed in MongoDB 4.2) allocates record space for documents based on padding rules. If a document outgrows the currently allocated record space, the document location (and natural ordering) will be affected. New documents can also be inserted in storage marked available for reuse due to deleted or moved documents. Replication uses an idempotent oplog format to apply write operations consistently across replica set members. Each replica set member maintains local data files that can vary in natural order, but will have the same data outcome when oplog updates are applied. What if an index is used? If an index is used, documents will be returned in the order they are found (which does not necessarily match insertion order or I/O order). If more than one index is used then the order depends internally on which index first identified the document during the de-duplication process. If you want a predictable sort order you must include an explicit sort() with your query and have unique values for your sort key. How do capped collections maintain insertion order? The implementation exception noted for natural order in capped collections is enforced by their special usage restrictions: documents are stored in insertion order but existing document size cannot be increased and documents cannot be explicitly deleted. Ordering is part of the capped collection design that ensures the oldest documents "age out" first. Clustered Collections (MongoDB 5.3+) Starting in MongoDB 5.3, it is possible to create a clustered collection where documents are ordered by _id index key values. The clusteredIndex must be declared when the collection is created. Clustered collections have some usage limitations but can improve performance for queries like range scans and equality comparisons on the clustered index key.
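To make the skip/limit example from the question repeatable between t1 and t2, a sketch adding an explicit sort on a unique key:

db.collection.find({ "x": y }).sort({ _id: 1 }).skip(10000).limit(1000);

With no intervening writes this returns the same page at t1 and t2; new matching writes can of course still shift which documents fall into a given page.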
MongoDB
11,599,069
160
I'm curious as to the pros and cons of using subdocuments vs a deeper layer in my main schema: var subDoc = new Schema({ name: String }); var mainDoc = new Schema({ names: [subDoc] }); or var mainDoc = new Schema({ names: [{ name: String }] }); I'm currently using subdocs everywhere but I am wondering primarily about performance or querying issues I might encounter.
According to the docs, it's exactly the same. However, using a Schema would add an _id field as well (as long as you don't have that disabled), and presumably uses some more resources for tracking subdocs. Alternate declaration syntax New in v3 If you don't need access to the sub-document schema instance, you may also declare sub-docs by simply passing an object literal [...]
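If the extra _id on each sub-document is the main concern, it can be disabled via a schema option on the sub-schema:

var subDoc = new Schema({ name: String }, { _id: false });

var mainDoc = new Schema({
  names: [subDoc]
});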
MongoDB
15,208,711
159
Is there any way to dump a mongo collection into json format? Either on the shell or using the Java driver. I am looking for the one with the best performance.
Mongo includes a mongoexport utility (see docs) which can dump a collection. This utility uses the native libmongoclient and is likely the fastest method. mongoexport -d <database> -c <collection_name> Also helpful: -o: write the output to file, otherwise standard output is used (docs) --jsonArray: generates a valid json document, instead of one json object per line (docs) --pretty: outputs formatted json (docs)
MongoDB
8,991,292
159
I have an Email document which has a sent_at date field: { 'sent_at': Date( 1336776254000 ) } If this Email has not been sent, the sent_at field is either null, or non-existant. I need to get the count of all sent/unsent Emails. I'm stuck at trying to figure out the right way to query for this information. I think this is the right way to get the sent count: db.emails.count({sent_at: {$ne: null}}) But how should I get the count of the ones that aren't sent?
If the sent_at field is absent when it's not set, then: db.emails.count({sent_at: {$exists: false}}) If it's there and null, or not there at all: db.emails.count({sent_at: null}) If it's there and null: db.emails.count({sent_at: { $type: 10 }}) The Query for Null or Missing Fields section of the MongoDB manual describes how to query for null and missing values. Equality Filter The { item : null } query matches documents that either contain the item field whose value is null or that do not contain the item field. db.inventory.find( { item: null } ) Existence Check The following example queries for documents that do not contain a field. The { item : { $exists: false } } query matches documents that do not contain the item field: db.inventory.find( { item : { $exists: false } } ) Type Check The { item : { $type: 10 } } query matches only documents that contain the item field whose value is null; i.e. the value of the item field is of BSON Type Null (type number 10): db.inventory.find( { item : { $type: 10 } } )
MongoDB
10,591,543
156
I'm relatively new to MongoDB and am trying to install MongoDB on my Mac with Homebrew, but I'm getting the following error: Error: No available formula with the name "mongodb" ==> Searching for a previously deleted formula (in the last month)... Warning: homebrew/core is shallow clone. To get complete history run: git -C "$(brew --repo homebrew/core)" fetch --unshallow Error: No previously deleted formula found. ==> Searching for similarly named formulae... Error: No similarly named formulae found. ==> Searching taps... ==> Searching taps on GitHub... Error: No formulae found in taps. I ran brew update Then brew install mongodb
Formula mongodb has been removed from homebrew-core. Check pr-43770 from homebrew-core To our users: if you came here because mongodb stopped working for you, we have removed it from the Homebrew core formulas since it was migrated to a non open-source license. Fortunately, the team of mongodb is maintaining a custom Homebrew tap. You can uninstall the old mongodb and reinstall the new one from the new tap. # If you still have the old mongodb installed from homebrew-core brew services stop mongodb brew uninstall homebrew/core/mongodb # Use the migrated distribution from custom tap brew tap mongodb/brew brew install mongodb-community brew services start mongodb-community Check mongodb/homebrew-brew for more info.
MongoDB
57,856,809
155
I've tried db.users.remove(*) but it returns an error, so how do I go about clearing all records?
The argument to remove() is a filter document, so passing in an empty document means 'remove all': db.user.remove({}) However, if you definitely want to remove everything you might be better off dropping the collection. Though that probably depends on whether you have user defined indexes on the collection i.e. whether the cost of preparing the collection after dropping it outweighs the longer duration of the remove() call vs the drop() call. More details in the docs.
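On MongoDB 3.2+ the CRUD-style equivalent is deleteMany, which makes the intent explicit:

db.user.deleteMany({})   // removes every document, keeps the collection and its indexes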
MongoDB
46,368,368
155
if I have two schemas like: var userSchema = new Schema({ twittername: String, twitterID: Number, displayName: String, profilePic: String, }); var User = mongoose.model('User') var postSchema = new Schema({ name: String, postedBy: User, //User Model Type dateCreated: Date, comments: [{body:"string", by: mongoose.Schema.Types.ObjectId}], }); I tried to connect them together like the example above but I couldn't figure out how to do it. Eventually, if I can do something like this it would make my life very easy var profilePic = Post.postedBy.profilePic
It sounds like the populate method is what you're looking for. First make a small change to your post schema: var postSchema = new Schema({ name: String, postedBy: {type: mongoose.Schema.Types.ObjectId, ref: 'User'}, dateCreated: Date, comments: [{body:"string", by: mongoose.Schema.Types.ObjectId}], }); Then make your model: var Post = mongoose.model('Post', postSchema); Then, when you make your query, you can populate references like this: Post.findOne({_id: 123}) .populate('postedBy') .exec(function(err, post) { // do stuff with post });
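Once populated, the nested access from the question works as hoped (a minimal sketch; the null check guards against a missing reference):

Post.findOne({ _id: 123 })
  .populate('postedBy')
  .exec(function (err, post) {
    if (err) throw err;
    var profilePic = post.postedBy && post.postedBy.profilePic;
    console.log(profilePic);
  });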
MongoDB
18,001,478
155
Hi everybody. In a Mongo group query, the result shows only the key(s) given in the arguments. How can I keep the first document in each group, like a MySQL GROUP BY query does? For example: ------------------------------------------------------------------------- | name | age | sex | province | city | area | address | ------------------------------------------------------------------------- | ddl1st | 22 | male | BeiJing | BeiJing | ChaoYang | QingNianLu | | ddl1st | 24 | male | BeiJing | BeiJing | XuHui | ZhaoJiaBangLu | | 24k | 220 | ... | .... | ... | ... | ... | ------------------------------------------------------------------------- db.users.group({key: { name: 1},reduce: function ( curr, result ) { result.count ++ },initial: {count : 0 } }) result: [ { "name" : "ddl1st", "count" : 1 }, { "name" : "24k", "count" : 1 } ] How can I get the following: [ { "name" : "ddl1st", "age" : 22, "sex" : "male", "province" : "BeiJing", "city" : "BeiJing", "area" : "ChaoYang", "address" : "QingNianLu", "count" : 1 }, { "name" : "24k", "age" : 220, "sex" : "...", "province" : "...", "city" : "...", "area" : "...", "address" : "...", "count" : 1 } ]
If you want to keep the information about the first matching entries for each group, you can try aggregating like: db.test.aggregate([{ $group: { _id : '$name', name : { $first: '$name' }, age : { $first: '$age' }, sex : { $first: '$sex' }, province : { $first: '$province' }, city : { $first: '$city' }, area : { $first: '$area' }, address : { $first: '$address' }, count : { $sum: 1 }, } }]);
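If you'd rather not enumerate every field, $$ROOT keeps the whole first document per group; the $addFields/$replaceRoot stages that fold the count back in assume MongoDB 3.4+:

db.test.aggregate([
  { $group: { _id: "$name", doc: { $first: "$$ROOT" }, count: { $sum: 1 } } },
  { $addFields: { "doc.count": "$count" } },   // attach the count to the kept document
  { $replaceRoot: { newRoot: "$doc" } }
]);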
MongoDB
16,662,405
155
I am trying to perform a regex query using PyMongo against a MongoDB server. The document structure is as follows { "files": [ "File 1", "File 2", "File 3", "File 4" ], "rootFolder": "/Location/Of/Files" } I want to get all the files that match the pattern *File. I tried doing this as such db.collectionName.find({'files':'/^File/'}) Yet I get nothing back. Am I missing something, because according to the MongoDB docs this should be possible? If I perform the query in the Mongo console it works fine, does this mean the API doesn't support it or am I just using it incorrectly?
If you want to include regular expression options (such as ignore case), try this: import re regx = re.compile("^foo", re.IGNORECASE) db.users.find_one({"files": regx})
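Equivalently, the same filter can be written with the $regex operator inside the query document, which works unchanged in the shell and in PyMongo (no re import needed):

{ "files": { "$regex": "^foo", "$options": "i" } }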
MongoDB
3,483,318
155
I am a complete noob when it comes to the NoSQL movement. I have heard lots about MongoDB and CouchDB. I know there are differences between the two. Which do you recommend learning as a first step into the NoSQL world?
See following links CouchDB Vs MongoDB MongoDB or CouchDB - fit for production? DB-Engines - Comparison CouchDB vs. MongoDB Update: I found great comparison of NoSQL databases. MongoDB (3.2) Written in: C++ Main point: JSON document store License: AGPL (Drivers: Apache) Protocol: Custom, binary (BSON) Master/slave replication (auto failover with replica sets) Sharding built-in Queries are javascript expressions Run arbitrary javascript functions server-side Has geospatial indexing and queries Multiple storage engines with different performance characteristics Performance over features Document validation Journaling Powerful aggregation framework On 32bit systems, limited to ~2.5Gb Text search integrated GridFS to store big data + metadata (not actually an FS) Data center aware Best used: If you need dynamic queries. If you prefer to define indexes, not map/reduce functions. If you need good performance on a big DB. If you wanted CouchDB, but your data changes too much, filling up disks. For example: For most things that you would do with MySQL or PostgreSQL, but having predefined columns really holds you back. CouchDB (1.2) Written in: Erlang Main point: DB consistency, ease of use License: Apache Protocol: HTTP/REST Bi-directional (!) replication, continuous or ad-hoc, with conflict detection, thus, master-master replication. (!) MVCC - write operations do not block reads Previous versions of documents are available Crash-only (reliable) design Needs compacting from time to time Views: embedded map/reduce Formatting views: lists & shows Server-side document validation possible Authentication possible Real-time updates via '_changes' (!) Attachment handling Best used: For accumulating, occasionally changing data, on which pre-defined queries are to be run. Places where versioning is important. For example: CRM, CMS systems. Master-master replication is an especially interesting feature, allowing easy multi-site deployments.
MongoDB
3,375,494
155
Is there an explain function for the Aggregation framework in MongoDB? I can't see it in the documentation. If not is there some other way to check, how a query performs within the aggregation framework? I know with find you just do db.collection.find().explain() But with the aggregation framework I get an error db.collection.aggregate( { $project : { "Tags._id" : 1 }}, { $unwind : "$Tags" }, { $match: {$or: [{"Tags._id":"tag1"},{"Tags._id":"tag2"}]}}, { $group: { _id : { id: "$_id"}, "count": { $sum:1 } } }, { $sort: {"count":-1}} ).explain()
Starting with MongoDB version 3.0, simply changing the order from collection.aggregate(...).explain() to collection.explain().aggregate(...) will give you the desired results (documentation here). For older versions >= 2.6, you will need to use the explain option for aggregation pipeline operations explain:true db.collection.aggregate([ { $project : { "Tags._id" : 1 }}, { $unwind : "$Tags" }, { $match: {$or: [{"Tags._id":"tag1"},{"Tags._id":"tag2"}]}}, { $group: { _id : "$_id", count: { $sum:1 } }}, {$sort: {"count":-1}} ], { explain:true } ) An important consideration with the Aggregation Framework is that an index can only be used to fetch the initial data for a pipeline (e.g. usage of $match, $sort, $geonear at the beginning of a pipeline) as well as subsequent $lookup and $graphLookup stages. Once data has been fetched into the aggregation pipeline for processing (e.g. passing through stages like $project, $unwind, and $group) further manipulation will be in-memory (possibly using temporary files if the allowDiskUse option is set). Optimizing pipelines In general, you can optimize aggregation pipelines by: Starting a pipeline with a $match stage to restrict processing to relevant documents. Ensuring the initial $match / $sort stages are supported by an efficient index. Filtering data early using $match, $limit , and $skip . Minimizing unnecessary stages and document manipulation (perhaps reconsidering your schema if complicated aggregation gymnastics are required). Taking advantage of newer aggregation operators if you have upgraded your MongoDB server. For example, MongoDB 3.4 added many new aggregation stages and expressions including support for working with arrays, strings, and facets. There are also a number of Aggregation Pipeline Optimizations that automatically happen depending on your MongoDB server version. For example, adjacent stages may be coalesced and/or reordered to improve execution without affecting the output results. Limitations As at MongoDB 3.4, the Aggregation Framework explain option provides information on how a pipeline is processed but does not support the same level of detail as the executionStats mode for a find() query. If you are focused on optimizing initial query execution you will likely find it beneficial to review the equivalent find().explain() query with executionStats or allPlansExecution verbosity. There are a few relevant feature requests to watch/upvote in the MongoDB issue tracker regarding more detailed execution stats to help optimize/profile aggregation pipelines: SERVER-19758: Add "executionStats" and "allPlansExecution" explain modes to aggregation explain SERVER-21784: Track execution stats for each aggregation pipeline stage and expose via explain SERVER-22622: Improve $lookup explain to indicate query plan on the "from" collection
MongoDB
12,702,080
154
In previous versions of Mongoose (for node.js) there was an option to use it without defining a schema var collection = mongoose.noSchema(db, "User"); But in the current version the "noSchema" function has been removed. My schemas are likely to change often and really don't fit in with a defined schema so is there a new way to use schema-less models in mongoose?
I think the Mongoose strict option is what you are looking for: strict The strict option (enabled by default) ensures that values added to our model instance that were not specified in our schema do not get saved to the db. Note: Do not set to false unless you have good reason. var thingSchema = new Schema({..}, { strict: false }); var Thing = mongoose.model('Thing', thingSchema); var thing = new Thing({ iAmNotInTheSchema: true }); thing.save() // iAmNotInTheSchema is now saved to the db!!
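A related sketch: if only part of the document needs to be free-form, a Mixed path confines the schema-less portion to a single field. Note that Mongoose cannot detect in-place changes to Mixed values, so you must call markModified before saving:

var thingSchema = new Schema({
  name: String,
  meta: Schema.Types.Mixed   // anything can be stored here
});

// after mutating thing.meta in place:
thing.markModified('meta');
thing.save();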
MongoDB
5,370,846
154
How can I get the count of documents that have been saved for a model? There is a Model.count() method, but it doesn't seem to work. var db = mongoose.connect('mongodb://localhost/myApp'); var userSchema = new Schema({name:String,password:String}); userModel = db.model('UserList',userSchema); var userCount = userModel.count('name'); userCount is an Object; which method do I call to get the real count? Thanks
The reason your code doesn't work is because the count function is asynchronous, it doesn't synchronously return a value. Here's an example of usage: userModel.count({}, function( err, count){ console.log( "Number of users:", count ); })
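On Mongoose 5+, where count() is deprecated, the same idea with promises and countDocuments:

const count = await userModel.countDocuments({});
console.log('Number of users:', count);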
MongoDB
10,811,887
153
What's the syntax for doing a $lookup on a field that is an array of ObjectIds rather than just a single ObjectId? Example Order Document: { _id: ObjectId("..."), products: [ ObjectId("..<Car ObjectId>.."), ObjectId("..<Bike ObjectId>..") ] } Not Working Query: db.orders.aggregate([ { $lookup: { from: "products", localField: "products", foreignField: "_id", as: "productObjects" } } ]) Desired Result { _id: ObjectId("..."), products: [ ObjectId("..<Car ObjectId>.."), ObjectId("..<Bike ObjectId>..") ], productObjects: [ {<Car Object>}, {<Bike Object>} ], }
2017 update $lookup can now directly use an array as the local field. $unwind is no longer needed. Old answer The $lookup aggregation pipeline stage will not work directly with an array. The main intent of the design is for a "left join" as a "one to many" type of join ( or really a "lookup" ) on the possible related data. But the value is intended to be singular and not an array. Therefore you must "de-normalise" the content first prior to performing the $lookup operation in order for this to work. And that means using $unwind: db.orders.aggregate([ // Unwind the source { "$unwind": "$products" }, // Do the lookup matching { "$lookup": { "from": "products", "localField": "products", "foreignField": "_id", "as": "productObjects" }}, // Unwind the result arrays ( likely one or none ) { "$unwind": "$productObjects" }, // Group back to arrays { "$group": { "_id": "$_id", "products": { "$push": "$products" }, "productObjects": { "$push": "$productObjects" } }} ]) After $lookup matches each array member the result is an array itself, so you $unwind again and $group to $push new arrays for the final result. Note that any "left join" matches that are not found will create an empty array for the "productObjects" on the given product and thus negate the document for the "product" element when the second $unwind is called. Though a direct application to an array would be nice, it's just how this currently works by matching a singular value to a possible many. As $lookup is basically very new, it currently works as would be familiar to those who are familiar with mongoose as a "poor man's version" of the .populate() method offered there. The difference being that $lookup offers "server side" processing of the "join" as opposed to on the client and that some of the "maturity" in $lookup is currently lacking from what .populate() offers ( such as interpolating the lookup directly on an array ). This is actually an assigned issue for improvement SERVER-22881, so with some luck this would hit the next release or one soon after. As a design principle, your current structure is neither good nor bad, but just subject to overheads when creating any "join". As such, the basic standing principle of MongoDB in inception applies, where if you "can" live with the data "pre-joined" in the one collection, then it is best to do so. The one other thing that can be said of $lookup as a general principle, is that the intent of the "join" here is to work the other way around than shown here. So rather than keeping the "related ids" of the other documents within the "parent" document, the general principle that works best is where the "related documents" contain a reference to the "parent". So $lookup can be said to "work best" with a "relation design" that is the reverse of how something like mongoose .populate() performs its client-side joins. By identifying the "one" within each "many" instead, you just pull in the related items without needing to $unwind the array first.
MongoDB
34,967,482
152
I'm using MongoDB in a reporting system and have to delete a whole bunch of test documents. While I don't have too much trouble using the JSON-based command-line tools, it gets extremely tedious to have to keep searching for documents, copy-and-pasting OIDs, etc., especially from a command prompt window (ever tried to "mark" text that wraps multiple lines?) How can I visually inspect the databases and collections, perform some simple CRUD tasks and manage multiple scripts in a proper window (not a command prompt)?
Here are some popular MongoDB GUI administration tools: Open source dbKoda - cross-platform, tabbed editor with auto-complete, syntax highlighting and code formatting (plus auto-save, something Studio 3T doesn't support), visual tools (explain plan, real-time performance dashboard, query and aggregation pipeline builder), profiling manager, storage analyzer, index advisor, convert MongoDB commands to Node.js syntax etc. Lacks in-place document editing and the ability to switch themes. Nosqlclient - multiple shell output tabs, autocomplete, schema analyzer, index management, user/role management, live monitoring, and other features. Electron/Meteor.js-based, actively developed on GitHub. adminMongo - web-based or Electron app. Supports server monitoring and document editing. Closed source NoSQLBooster – full-featured shell-centric cross-platform GUI tool for MongoDB v2.2-4. Free, Personal, and Commercial editions (feature comparison matrix). MongoDB Compass – provides a graphical user interface that allows you to visualize your schema and perform ad-hoc find queries against the database – all with zero knowledge of MongoDB's query language. Developed by MongoDB, Inc. No update queries or access to the shell. Studio 3T, formerly MongoChef – a multi-platform in-place data browser and editor desktop GUI for MongoDB (Core version is free for personal and non-commercial use). Last commit: 2017-Jul-24 Robo 3T – acquired by Studio 3T. A shell-centric cross-platform open source MongoDB management tool. Shell-related features only, e.g. multiple shells and results, autocomplete. No export/ import or other features are mentioned. Last commit: 2017-Jul-04 HumongouS.io – web-based interface with CRUD features, a chart builder and some collaboration capabilities. 14-day trial. Database Master – a Windows based MongoDB Management Studio, supports also RDBMS. (not free) SlamData - an open source web-based user-interface that allows you to upload and download data, run queries, build charts, explore data. Abandoned projects RockMongo – a MongoDB administration tool, written in PHP5. Allegedly the best in the PHP world. Similar to PHPMyAdmin. Last version: 2015-Sept-19 Fang of Mongo – a web-based UI built with Django and jQuery. Last commit: 2012-Jan-26, in a forked project. Opricot – a browser-based MongoDB shell written in PHP. Latest version: 2010-Sep-21 Futon4Mongo – a clone of the CouchDB Futon web interface for MongoDB. Last commit: 2010-Oct-09 MongoVUE – an elegant GUI desktop application for Windows. Free and non-free versions. Latest version: 2014-Jan-20 UMongo – a full-featured open-source MongoDB server administration tool for Linux, Windows, Mac; written in Java. Last commit 2014-June Mongo3 – a Ruby/Sinatra-based interface for cluster management. Last commit: Apr 16, 2013
MongoDB
3,310,242
152
I am doing MongoDB lookups by converting a string to BSON. Is there a way for me to determine if the string I have is a valid ObjectID for Mongo before doing the conversion? Here is the coffeescript for my current findByID function. It works great, but I'd like to lookup by a different attribute if I determine the string is not an ID. db.collection "pages", (err, collection) -> collection.findOne _id: new BSON.ObjectID(id) , (err, item) -> if item res.send item else res.send 404
I found that the mongoose ObjectId validator works to validate valid objectIds, but I found a few cases where invalid ids were considered valid (e.g., any 12-character string): var ObjectId = require('mongoose').Types.ObjectId; ObjectId.isValid('microsoft123'); //true ObjectId.isValid('timtomtamted'); //true ObjectId.isValid('551137c2f9e1fac808a5f572'); //true What has been working for me is casting a string to an objectId and then checking that the original string matches the string value of the objectId. new ObjectId('timtamtomted'); //74696d74616d746f6d746564 new ObjectId('537eed02ed345b2e039652d2') //537eed02ed345b2e039652d2 This works because valid ids do not change when cast to an ObjectId, but a string that is falsely reported as valid will change when cast to an ObjectId.
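Wrapped up as a reusable helper (a small sketch of the round-trip check above):

var ObjectId = require('mongoose').Types.ObjectId;

function isValidObjectIdString(id) {
  // Only a genuine 24-hex-character id survives the round trip unchanged
  return ObjectId.isValid(id) && String(new ObjectId(id)) === String(id);
}

isValidObjectIdString('microsoft123');             // false
isValidObjectIdString('551137c2f9e1fac808a5f572'); // true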
MongoDB
13,850,819
150
Per the Mongoose documentation for MongooseJS and MongoDB/Node.js : When your application starts up, Mongoose automatically calls ensureIndex for each defined index in your schema. While nice for development, it is recommended this behavior be disabled in production since index creation can cause a significant performance impact. Disable the behavior by setting the autoIndex option of your schema to false. This appears to instruct removal of auto-indexing from mongoose prior to deploying to optimize Mongoose from instructing Mongo to go and churn through all indexes on application startup, which seems to make sense. What is the proper way to handle indexing in production code? Maybe an external script should generate indexes? Or maybe ensureIndex is unnecessary if a single application is the sole reader/writer to a collection because it will continue an index every time a DB write occurs? Edit: To supplement, MongoDB provides good documentation for the how to do indexing, but not why or when explicit indexing directives should be done. It seems to me that indexes should be kept up to date by writer applications automatically on collections with existing indexes and that ensureIndex is really more of a one-time thing (done when a new index is being applied), in which case Mongoose's autoIndex should be a no-op under a normal server restart.
I've never understood why the Mongoose documentation so broadly recommends disabling autoIndex in production. Once the index has been added, subsequent ensureIndex calls will simply see that the index already exists and then return. So it only has an effect on performance when you're first creating the index, and at that time the collections are often empty so creating an index would be quick anyway. My suggestion is to leave autoIndex enabled unless you have a specific situation where it's giving you trouble; like if you want to add a new index to an existing collection that has millions of docs and you want more control over when it's created.
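If you do take manual control, the usual Mongoose pattern is to disable the automatic call and trigger index builds yourself at a time of your choosing (a sketch):

var schema = new Schema({ name: String }, { autoIndex: false });
var Thing = mongoose.model('Thing', schema);

// e.g. run from a deploy script rather than on every app start
Thing.ensureIndexes(function (err) {
  if (err) console.error('Index build failed:', err);
});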
MongoDB
14,342,708
149
I know there are similar questions here but they are either telling me to switch back to regular RDBMS systems if I need transactions or use atomic operations or two-phase commit. The second solution seems the best choice. The third I don't wish to follow because it seems that many things could go wrong and I can't test it in every aspect. I'm having a hard time refactoring my project to perform atomic operations. I don't know whether this comes from my limited viewpoint (I have only worked with SQL databases so far), or whether it actually can't be done. We would like to pilot test MongoDB at our company. We have chosen a relatively simple project - an SMS gateway. It allows our software to send SMS messages to the cellular network and the gateway does the dirty work: actually communicating with the providers via different communication protocols. The gateway also manages the billing of the messages. Every customer who applies for the service has to buy some credits. The system automatically decreases the user's balance when a message is sent and denies the access if the balance is insufficient. Also because we are customers of third party SMS providers, we may also have our own balances with them. We have to keep track of those as well. I started thinking about how I can store the required data with MongoDB if I cut down some complexity (external billing, queued SMS sending). Coming from the SQL world, I would create a separate table for users, another one for SMS messages, and one for storing the transactions regarding the users' balance. Let's say I create separate collections for all of those in MongoDB. Imagine an SMS sending task with the following steps in this simplified system: check if the user has sufficient balance; deny access if there's not enough credit send and store the message in the SMS collection with the details and cost (in the live system the message would have a status attribute and a task would pick up it for delivery and set the price of the SMS according to its current state) decrease the users's balance by the cost of the sent message log the transaction in the transaction collection Now what's the problem with that? MongoDB can do atomic updates only on one document. In the previous flow it could happen that some kind of error creeps in and the message gets stored in the database but the user's balance is not updated and/or the transaction is not logged. I came up with two ideas: Create a single collection for the users, and store the balance as a field, user related transactions and messages as sub documents in the user's document. Because we can update documents atomically, this actually solves the transaction problem. Disadvantages: if the user sends many SMS messages, the size of the document could become large and the 4MB document limit could be reached. Maybe I can create history documents in such scenarios, but I don't think this would be a good idea. Also I don't know how fast the system would be if I push more and more data to the same big document. Create one collection for users, and one for transactions. There can be two kinds of transactions: credit purchase with positive balance change and messages sent with negative balance change. Transaction may have a subdocument; for example in messages sent the details of the SMS can be embedded in the transaction. Disadvantages: I don't store the current user balance so I have to calculate it every time a user tries to send a message to tell if the message could go through or not. 
I'm afraid this calculation could become slow as the number of stored transactions grows. I'm a little bit confused about which method to pick. Are there other solutions? I couldn't find any best practices online about how to work around these kinds of problems. I guess many programmers who are trying to become familiar with the NoSQL world are facing similar problems in the beginning.
As of 4.0, MongoDB will have multi-document ACID transactions. The plan is to enable those in replica set deployments first, followed by the sharded clusters. Transactions in MongoDB will feel just like transactions developers are familiar with from relational databases - they'll be multi-statement, with similar semantics and syntax (like start_transaction and commit_transaction). Importantly, the changes to MongoDB that enable transactions do not impact performance for workloads that do not require them. For more details see here. Having distributed transactions, doesn't mean that you should model your data like in tabular relational databases. Embrace the power of the document model and follow the good and recommended practices of data modeling.
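Applied to the SMS scenario from the question, a sketch using the Node.js driver (3.1+); collection and field names are illustrative:

const session = client.startSession();
session.startTransaction();
try {
  // 1. store the message
  await messages.insertOne({ userId, text, cost }, { session });

  // 2. decrease the balance, only if there is enough credit
  const res = await users.updateOne(
    { _id: userId, balance: { $gte: cost } },
    { $inc: { balance: -cost } },
    { session }
  );
  if (res.modifiedCount === 0) throw new Error('insufficient balance');

  // 3. log the transaction
  await ledger.insertOne({ userId, change: -cost, reason: 'sms' }, { session });

  await session.commitTransaction();   // all three writes succeed or none do
} catch (err) {
  await session.abortTransaction();
  throw err;
} finally {
  session.endSession();
}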
MongoDB
6,635,718
148
To quote the docs: When creating an index, the number associated with a key specifies the direction of the index, so it should always be 1 (ascending) or -1 (descending). Direction doesn't matter for single key indexes or for random access retrieval but is important if you are doing sorts or range queries on compound indexes. However, I see no reason why direction of the index should matter on compound indexes. Can someone please provide a further explanation (or an example)?
MongoDB concatenates the compound key in some way and uses it as the key in a BTree. When finding single items - The order of the nodes in the tree is irrelevant. If you are returning a range of nodes - The elements close to each other will be down the same branches of the tree. The closer the nodes are in the range the quicker they can be retrieved. With a single field index - The order won't matter. If they are close together in ascending order they will also be close together in descending order. When you have a compound key - The order starts to matter. For example, if the key is A ascending B ascending the index might look something like this: Row A B 1 1 1 2 2 6 3 2 7 4 3 4 5 3 5 6 3 6 7 5 1 A query for A ascending B descending will need to jump around the index out of order to return the rows and will be slower. For example it will return Row 1, 3, 2, 6, 5, 4, 7 A ranged query in the same order as the index will simply return the rows sequentially in the correct order. Finding a record in a BTree takes O(log(n)) time. Finding a range of records in order is only O(log(n) + k) where k is the number of records to return. If the records are out of order, the cost could be as high as O(log(n) * k)
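So if your range queries sort by A ascending, B descending, build the index in that mixed order; note that a compound index also serves its exact reverse (A descending, B ascending):

db.collection.createIndex({ A: 1, B: -1 })

// served sequentially by the index above:
db.collection.find({ A: { $gte: 2 } }).sort({ A: 1, B: -1 })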
MongoDB
10,329,104
147
I am using mongodb now. I have a blogpost collection, and it has a tags field which is an array, e.g. blogpost1.tags = ['tag1', 'tag2', 'tag3', 'tag4', 'tag5'] blogpost2.tags = ['tag2', 'tag3'] blogpost3.tags = ['tag2', 'tag3', 'tag4', 'tag5'] blogpost4.tags = ['tag1', 'tag4', 'tag5'] How can I do these searches: contains 'tag1' contains ['tag1','tag2'] contains any of ['tag3', 'tag4']
Try this out: db.blogpost.find({ 'tags' : 'tag1'}); //1 db.blogpost.find({ 'tags' : { $all : [ 'tag1', 'tag2' ] }}); //2 db.blogpost.find({ 'tags' : { $in : [ 'tag3', 'tag4' ] }}); //3
MongoDB
5,366,687
147
This is my first day with MongoDB so please go easy with me :) I can't understand the $unwind operator, maybe because English is not my native language.

db.article.aggregate(
    { $project : { author : 1, title : 1, tags : 1 }},
    { $unwind : "$tags" }
);

The project operator is something I can understand, I suppose (it's like SELECT, isn't it?). But then, $unwind (citing) returns one document for every member of the unwound array within every source document.

Is this like a JOIN? If yes, how can the result of $project (with _id, author, title and tags fields) be compared with the tags array?

NOTE: I've taken the example from the MongoDB website; I don't know the structure of the tags array. I think it's a simple array of tag names.
The thing to remember is that MongoDB employs a "NoSQL" approach to data storage, so perish the thoughts of selects, joins, etc. from your mind. The way that it stores your data is in the form of documents and collections, which allows for a dynamic means of adding and obtaining the data from your storage locations.

That being said, in order to understand the concept behind the $unwind parameter, you first must understand what the use case that you are trying to quote is saying. The example document from mongodb.org is as follows:

{
    title : "this is my title",
    author : "bob",
    posted : new Date(),
    pageViews : 5,
    tags : [ "fun", "good", "fun" ],
    comments : [
        { author : "joe", text : "this is cool" },
        { author : "sam", text : "this is bad" }
    ],
    other : { foo : 5 }
}

Notice how tags is actually an array of 3 items, in this case being "fun", "good" and "fun".

What $unwind does is allow you to peel off a document for each element and return that resulting document. To think of this in a classical approach, it would be the equivalent of "for each item in the tags array, return a document with only that item".

Thus, the result of running the following:

db.article.aggregate(
    { $project : { author : 1, title : 1, tags : 1 }},
    { $unwind : "$tags" }
);

would return the following documents:

{
    "result" : [
        { "_id" : ObjectId("4e6e4ef557b77501a49233f6"), "title" : "this is my title", "author" : "bob", "tags" : "fun" },
        { "_id" : ObjectId("4e6e4ef557b77501a49233f6"), "title" : "this is my title", "author" : "bob", "tags" : "good" },
        { "_id" : ObjectId("4e6e4ef557b77501a49233f6"), "title" : "this is my title", "author" : "bob", "tags" : "fun" }
    ],
    "ok" : 1
}

Notice that the only thing changing in the result array is what is being returned in the tags value. If you need an additional reference on how this works, I've included a link here.
MongoDB
16,448,175
146
NoSQL has been getting a lot of attention in our industry recently. I'm really interested in what people's thoughts are on the best use cases for its use over relational database storage. What should trigger a developer into thinking that particular datasets are more suited to a NoSQL solution? I'm particularly interested in MongoDB and CouchDB as they seem to be getting the most coverage with regard to PHP development, and that is my focus.
Just promise yourself that you will never try to map a relational data model to a NoSQL database like MongoDB or CouchDB... This is the most common mistake developers make when evaluating emerging tech. That approach is analogous to taking a car and trying to use it to pull your cart down the road like a horse. It's a natural reaction due to everyone's experience of course, but the real value in using a document database is being able to simplify your data model and minimize your suffering as a developer. Your codebase will shrink, your bugs will be fewer and easier to find, performance is going to be awesome, and scale will be much simpler. As a Joomla founder I'm biased :-) but coming from the CMS space, something like MongoDB is a silver bullet as content maps very naturally to document systems. Another great case for MongoDB is real-time analytics, as MongoDB has very strong performance and scale, particularly regarding concurrency. There are case studies at the MongoDB.org website that demonstrate those attributes. I agree with the notion that each database has its own aims and use cases; evaluate each database according to its purpose.
MongoDB
2,875,432
146
We are migrating a database from MySQL to MongoDB for performance reasons and considering what to use for IDs of the MongoDB documents. We are debating between using ObjectIDs, which is the MongoDB default, or using UUIDs instead (which is what we have been using up until now in MySQL). So far, the arguments we have to support any of these options are the following: ObjectIDs: ObjectIDs are the MongoDB default and I assume (although I'm not sure) that this is for a reason, meaning that I expect that MongoDB can handle them more efficiently than UUIDs or has another reason for preferring them. I also found this stackoverflow answer that mentions that usage of ObjectIDs makes indexing more efficient, it would be nice however to have some metrics on how much this "more efficient" is. UUIDs: Our basic argument in favour of using UUIDs (and it is a quite important one) is that they are supported, one way or another, by virtually any database. This means that if some way down the road we decide to switch from MongoDB to something else for whatever reason and we already have an API that retrieves documents from the DB based on their IDs nothing changes for the clients of this API since the IDs can continue to be exactly the same. If we were to use ObjectIDs I'm not really sure how we would go about migrating them to another DB. Does anyone have any insights on whether one of these options may be better than the other and why? Have you ever used UUIDs in MongoDB instead of ObjectIDs and if yes what were the advantages / problems you came across?
Using UUIDs in Mongo is certainly possible and reasonably well supported. For example, the Mongo docs list UUIDs as one of the common options for the _id field.

Considerations

Performance – As other answers mention, benchmarks show UUIDs cause a performance drop for inserts. In the worst case measured (going from 10M to 20M docs in a collection) they're about 2-3x slower – the difference between inserting 2,000 (UUID) and 7,500 (ObjectID) docs per second. This is a large difference but its significance depends entirely on your use case. Will you be batch inserting millions of docs at a time? For most apps I've built the common case is inserting individual documents. The same benchmarks show that, for that usage pattern, the difference is much smaller (6,250 -vs- 7,500; ~20%). Not insignificant... but not earth shattering either.

Portability – Many other DB platforms have good UUID support so portability would be improved. Alternatively, since UUIDs are larger (more bits) it is possible to repack an ObjectID into the "shape" of a UUID. This approach isn't as nice as direct portability but it does give you a way to "map" between existing ObjectIDs and UUIDs.

Decentralisation – One of the big selling points of UUIDs is that they're universally unique. This makes it practical to generate them anywhere, in a decentralised fashion (in contrast to, for example, an auto-incrementing value, which requires a centralised source of truth to determine the "next" value). Of course, Mongo Object IDs profess this benefit too. The difference is, UUIDs are based on a 15+ year old standard and supported on (nearly?) all platforms, languages, etc. This makes them very useful if you ever need to create entities (or specifically, sets of related entities) in disjointed systems, without interacting with the database. You can create a dataset with IDs and foreign keys in place, then write the whole graph into the database at some point in the future without conflict. Although this is also possible with Mongo ObjectIDs, finding code to generate them/work with the format will often be harder.

Corrections

Contrary to some of the other answers:

UUIDs do have native Mongo support – You can use the UUID() function in the Mongo Shell exactly the same way you'd use ObjectID(); to convert a UUID string into an equivalent BSON object.

UUIDs are not especially large – When encoded using binary subtype 0x04 they're 128 bits, compared to 96 bits for ObjectIDs. (If encoded as strings they will be pretty wasteful, taking around 288 bits.)

UUIDs can include a timestamp – Specifically, UUIDv1 encodes a timestamp with 60 bits of precision, compared to 32 bits in ObjectIDs. In decimal, this is over 6 orders of magnitude more precision – so nanoseconds instead of seconds. It can actually be a decent way of storing create timestamps with more accuracy than Mongo/JS Date objects support, however...

The built-in UUID() function only generates v4 (random) UUIDs so, to leverage this, you'd need to lean on your app or Mongo driver for ID creation.

Unlike ObjectIDs, because of the way UUIDs are chunked, the timestamp doesn't give you a natural order. This can be good or bad depending on your use case. (New standards may change this; see the 2021 update below.)

Including timestamps in your IDs is sometimes a Bad Idea. You end up leaking the created time of documents anywhere an ID is exposed. (Of course ObjectIDs also encode a timestamp, so this is partly true for them too.)
If you do this with (spec-compliant) v1 UUIDs, you're also encoding part of the server's MAC address, which can potentially be used to identify the machine. Probably not an issue for most systems, but also not ideal. (New standards may change this; see the 2021 update below.)

Conclusion

If you think about your Mongo DB in isolation, ObjectIDs are the obvious choice. They work well out of the box and are a perfectly capable default. Using UUIDs instead does add some friction, both when working with the values (needing to convert to binary types, etc.) and in terms of performance. Whether this slight inconvenience is worth having a standardised ID format really depends on the importance you place on portability and your architectural choices.

Will you be syncing data between different database platforms? Will you migrate your data to a different platform in the future? Do you need to generate IDs outside the database, in other systems or in the browser? If not now, at some point in the future? UUIDs might be worth the hassle.

Aug 2021 Update

The IETF recently published a draft update to the UUID spec that would introduce some new versions of the format. Specifically, UUIDv6 and UUIDv7 are based on UUIDv1 but flip the timestamp chunks so the bits are arranged from most significant to least significant. This gives the resultant values a natural order that (more or less) reflects the order in which they were created. The new versions also exclude data derived from the server's MAC address, addressing a long-standing criticism of v1 UUIDs. It'll take time for these changes to flow through to implementations, but (IMHO) they significantly modernise and improve the format.
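If you want to try UUID keys out, here is a minimal sketch in the mongo shell (the collection name things is an assumption):

var id = UUID()                                  // random (v4) UUID, stored as BSON binary
db.things.insertOne({ _id: id, name: "example" })
db.things.findOne({ _id: id })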
MongoDB
28,895,067
145
When I run mongo, I get the warning: Failed global initialization: BadValue Invalid or no user locale set. Please ensure LANG and/or LC_* environment variables are set correctly.
You can use the following command in the terminal:

export LC_ALL=C
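If you want the fix to persist across sessions, one option (assuming a bash login shell) is to append it to your profile:

echo 'export LC_ALL=C' >> ~/.bashrc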
MongoDB
26,337,557
145
Looking to do the following query:

Entrant
  .find
    enterDate : oneMonthAgo
    confirmed : true
  .where('pincode.length > 0')
  .exec (err, entrants) ->

Am I doing the where clause properly? I want to select documents where pincode is not null.
You should be able to do it like this (as you're using the query api): Entrant.where("pincode").ne(null) ... which will result in a mongo query resembling: entrants.find({ pincode: { $ne: null } }) A few links that might help: The mongoose query api The docs for mongo query operators
MongoDB
16,531,895
145
I have MongoDB installed in the following path: c:\mongodb\bin. I have configured my environment variable PATH in advanced settings. I also have mongod running. When I run the command mongorestore dump from the path c:\hw1-1\dump (which contains the BSON files), I'm getting this error:

Don't know what to do with the dump file

I have referred to this thread to check my path.
In mongodb 3.0 or above, we should specify the database name to restore:

mongorestore -d [your_db_name] [your_dump_dir]
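For the paths in the question that would look something like this (the database name mydb is an assumption — use whatever the per-database directory inside your dump folder is called):

mongorestore -d mydb c:\hw1-1\dump\mydb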
MongoDB
21,233,290
143
I have an issue I've not seen before with the Mongoose findByIdAndUpdate not returning the correct model in the callback. Here's the code: var id = args._id; var updateObj = {updatedDate: Date.now()}; _.extend(updateObj, args); Model.findByIdAndUpdate(id, updateObj, function(err, model) { if (err) { logger.error(modelString +':edit' + modelString +' - ' + err.message); self.emit('item:failure', 'Failed to edit ' + modelString); return; } self.emit('item:success', model); }); The original document in the db looks like this: { _id: 1234 descriptors: Array[2], name: 'Test Name 1' } The updateObj going in looks like this: { _id: 1234 descriptors: Array[2], name: 'Test Name 2' } The model returned from the callback is identical to the original model, not the updatedObj. If I query the db, it has been updated correctly. It's just not being returned from the database. This feels like a 'stupid-user' error, but I can't see it. Any ideas greatly appreciated.
In Mongoose 4.0, the default value for the new option of findByIdAndUpdate (and findOneAndUpdate) has changed to false, which means returning the old doc (see #2262 of the release notes). So you need to explicitly set the option to true to get the new version of the doc, after the update is applied: Model.findByIdAndUpdate(id, updateObj, {new: true}, function(err, model) {...
MongoDB
30,419,575
142
How do I design a schema like this in MongoDB? I think there are no foreign keys!
How to design table like this in mongodb? First, to clarify some naming conventions. MongoDB uses collections instead of tables. I think there are no foreign keys! Take the following model: student { _id: ObjectId(...), name: 'Jane', courses: [ { course: 'bio101', mark: 85 }, { course: 'chem101', mark: 89 } ] } course { _id: 'bio101', name: 'Biology 101', description: 'Introduction to biology' } Clearly Jane's course list points to some specific courses. The database does not apply any constraints to the system (i.e.: foreign key constraints), so there are no "cascading deletes" or "cascading updates". However, the database does contain the correct information. In addition, MongoDB has a DBRef standard that helps standardize the creation of these references. In fact, if you take a look at that link, it has a similar example. How can I solve this task? To be clear, MongoDB is not relational. There is no standard "normal form". You should model your database appropriate to the data you store and the queries you intend to run.
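Because the database does not resolve these references for you, you follow them in application code. A sketch in the mongo shell, using the model above:

var jane = db.student.findOne({ name: "Jane" })
var courseIds = jane.courses.map(function (c) { return c.course })
db.course.find({ _id: { $in: courseIds } })   // the courses Jane is enrolled in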
MongoDB
6,334,048
142
I've installed mongodb and have been able to run it, work with it, do simple DB read / write type stuff. Now I'm trying to set up my Mac to run mongod as a service. I get "Command not found" in response to: init mongod start In response to: ~: service mongod start service: This command still works, but it is deprecated. Please use launchctl(8) instead. service: failed to start the 'mongod' service And if I try: ~: launchctl start mongod launchctl start error: No such process So obviously I'm blundering around a bit. Next step seems to be typing in random characters until something works. The command which does work is: mongod --quiet & I'm not sure, maybe that is the way you're supposed to do it? Maybe I should just take off 'quiet mode' and add > /logs/mongo.log to the end of the command line? I'm building a development environment on a Mac with the intention of doing the same thing on a linux server. I'm just not sure of the Bash commands. All the other searches I do with trying to pull up the answer give me advice for windows machines. Perhaps someone knows the linux version of the commands? Thanks very much
Edit: you should now use brew services start mongodb, as in Gergo's answer... When you install/upgrade mongodb, brew will tell you what to do: To have launchd start mongodb at login: ln -sfv /usr/local/opt/mongodb/*.plist ~/Library/LaunchAgents Then to load mongodb now: launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist Or, if you don't want/need launchctl, you can just run: mongod It works perfectly.
MongoDB
5,596,521
141
Is it possible to do an OR in the $match? I mean something like this: db.articles.aggregate( { $or: [ $match : { author : "dave" }, $match : { author : "john" }] } );
$match: { $or: [{ author: 'dave' }, { author: 'john' }] } Like so, since the $match operator just takes what you would normally put into the find() function
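Put into the full pipeline from the question, that would be:

db.articles.aggregate([
    { $match: { $or: [{ author: "dave" }, { author: "john" }] } }
])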
MongoDB
16,902,930
138
var thename = 'Andrew'; db.collection.find({'name':thename}); How do I query case insensitive? I want to find result even if "andrew";
Chris Fulstow's solution will work (+1), however, it may not be efficient, especially if your collection is very large. Non-rooted regular expressions (those not beginning with ^, which anchors the regular expression to the start of the string), and those using the i flag for case insensitivity will not use indexes, even if they exist. An alternative option you might consider is to denormalize your data to store a lower-case version of the name field, for instance as name_lower. You can then query that efficiently (especially if it is indexed) for case-insensitive exact matches like: db.collection.find({"name_lower": thename.toLowerCase()}) Or with a prefix match (a rooted regular expression) as: db.collection.find( {"name_lower": { $regex: new RegExp("^" + thename.toLowerCase(), "i") } } ); Both of these queries will use an index on name_lower.
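For completeness, a sketch of maintaining that denormalized field and its index in the shell (you would set name_lower in your application whenever you write name; thename comes from the question):

db.collection.ensureIndex({ name_lower: 1 })
db.collection.insert({ name: thename, name_lower: thename.toLowerCase() })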
MongoDB
7,101,703
138
I am trying to test out MongoDB and see if it is anything for me. I downloaded the 32-bit Windows version, but have no idea how to continue from now on. I normally use the WAMP services for developing on my local computer. Can I run MongoDB with WAMP? However, what's the best (easiest!) way to make it work on Windows? Thanks!
Mongo Installation Process in Windows

Are you ready for the installation… and use? Technically, it's not an installation, it's just downloading…

I. Download the zip file http://www.mongodb.org/downloads
II. Extract it and copy the files into your desired location.
III. Start the DB engine.
IV. Test the installation and use it.

That's it! So simple, right? Ok, let's start.

1. Download the zip file

Go to http://www.mongodb.org/downloads. You will see a screen like this: I am using a Windows 7 32-bit machine - that's why I downloaded the package marked in red. Click download (it only takes a few seconds). Wow... I got that downloaded. It was a zipped file called mongodb-win32-i386-2.4.4.zip (the name of the folder will change according to the version you download; here I got version 2.4.4). OK, all set.

2. Extract

Extract the zip and copy the files into a desired location on your machine. I am going to copy the extracted files to my D drive, since I don't have many files there. Alright then, where are you planning to paste the mongo files? In C:, or on your Desktop itself? Ok, no matter where you paste... In the snapshot below, you can see that I have navigated to the bin folder inside the Mongo folder. I count fifteen files inside bin. What about you? Finished! That's all. What do we have to do next?

3. Start the DB engine

Let's go and start using our mongo db... Open up a command prompt, then navigate to bin in the mongo folder. Type mongo.exe (which is the command used to start the MongoDB shell). Then see the response below.

That was an awesome exception, LOL… What is that? Couldn't connect to server. Why did the exception happen? I have no idea... Did I create a server in between? No. Right, then how come it connected to a server in between? Silly machine…

I got it! Like all other DBs, we have to start the DB engine before we use it. So, how can we start it? We have to start mongo by using the command mongod, executed from the bin folder of mongo. Let's see what happened.

Again a wonderfully formatted exception, right? Did you notice what I have highlighted on top? Yeah, it is the mongod command. The second one is the exception asking us to create a folder called data, and inside the data folder, a folder called db. So we have to create these data\db folders. The next question is where to create these folders? We have to create the data\db folders in the C drive of the box in which we are installing mongo. Let's go and create the folder structure in the C drive.

A question arises here: "Is it mandatory to create the data\db directories inside C?" Nooo, not really. Mongo looks in C by default for this folder, but you can create them wherever you want. However, if it's not in C, you have to tell mongo where it is. In other words, if you don't want the mongo databases to be on C:\, you have to set the db path for mongo.exe.

Optional

Ok, I will create those folders in some other location besides C for a better understanding of this option. I will create them in the D drive root, with the help of cmd. Why? Because it's an opportunity for us to remember the old DOS commands...

The next step is to set the db path for mongo.exe. Navigate back to bin, and enter the command mongod.exe --dbpath d:\data. I got the response below:

I hope everything went well... because I didn't see any ERROR *** in the console. Next, we can go and start the db using the command start mongo.exe. I didn't see any error or warning messages.
But, we have to supply a command to make sure mongo is up and running, i.e. mongod will get a response. Hope everything went well.

4. Test the Mongo DB installation

Now we have to see our DB, right? Yes, very much — otherwise how will we know it's running? For testing purposes, MONGO has got a DB called test by default. Let's go query that. But how, without any management studios? Unlike SQL, we have to depend on the command prompt. Yes, exactly the same command prompt… our good old command prompt… Don't be afraid, yes, it's our old command prompt only. Ok, let's go and see how we are going to use it…

Ohhh nooo… don't close the above command prompt, leave it as it is… Open a new cmd window. Navigate to bin as we usually do… I am sure you may remember the old C programming we did in our college days, right? In the command prompt, execute the command mongo or mongo.exe again and see what happens. You will get a screen as shown below:

I mentioned before that Mongo has got a db by default called test; try inserting a record into it. The next question here is "How will we insert?" Does mongo have SQL commands? No, mongo has got only commands to help with. The basic command to insert is

db.test.save( { KodothTestField: 'My name is Kodoth' } )

where test is the DB and .save is the insert command. KodothTestField is the column or field name, and My name is Kodoth is the value.

Before talking more, let's check whether it's stored or not by performing another command: db.test.find()

Our data got successfully inserted… Hurray! I know that you are thinking about the number which is displayed with every record, called ObjectId. It's like a unique id field in SQL that auto-increments and all. Have a closer look: you can see that the ObjectId ends with 92, so it's different for each and every record.

At last we are successful in installing and verifying the MONGO, right? Let's have a party... So do you agree now MONGO is as sweet as MANGO?

Also we have 3rd party tools to explore the MONGO. One is called MongoVUE. Using this tool we can perform operations against the mongo DB like we use Management Studio for SQL Server.

Can you just imagine an SQL Server or Oracle DB with entirely different rows in the same table? Is it possible in our relational DB table? This is how mongo works. I will show you how we can do that…

First I will show you how the data would look in a relational DB. For example, consider an Employee table and a Student table, modelled the relational way. The schemas would be entirely different, right? Yes, exactly… Let us now see how it will look in Mongo DB.

The above two tables are combined into a single collection in Mongo… This is how collections are stored in Mongo. I think now you can really feel the difference, right? Everything came under a single umbrella. This is not the right way, but I just wanted to show you all how this happens; that's why I combined two entirely different tables into one single collection.
If you want to try it out, you can use the test scripts below.

*********************** TEST INSERT SCRIPT

*********EMPLOYEE******

db.test.save( { EmployeId: "1", EmployeFirstName: "Kodoth", EmployeLastName: "KodothLast", EmployeAge: "14" } )
db.test.save( { EmployeId: "2", EmployeFirstName: "Kodoth 2", EmployeLastName: "Kodoth Last2", EmployeAge: "14" } )
db.test.save( { EmployeId: "3", EmployeFirstName: "Kodoth 3", EmployeLastName: "Kodoth Last3", EmployeAge: "14" } )

******STUDENT******

db.test.save( { StudentId: "1", StudentName: "StudentName", StudentMark: "25" } )
db.test.save( { StudentId: "2", StudentName: "StudentName 2", StudentMark: "26" } )
db.test.save( { StudentId: "3", StudentName: "StudentName 3", StudentMark: "27" } )

************************

Thanks
MongoDB
2,404,742
138
I have an array in a subdocument like this

{
    "_id" : ObjectId("512e28984815cbfcb21646a7"),
    "list" : [ { "a" : 1 }, { "a" : 2 }, { "a" : 3 }, { "a" : 4 }, { "a" : 5 } ]
}

Can I filter the subdocuments for a > 3? My expected result is below

{
    "_id" : ObjectId("512e28984815cbfcb21646a7"),
    "list" : [ { "a" : 4 }, { "a" : 5 } ]
}

I tried to use $elemMatch but it returns the first matching element in the array. My query:

db.test.find(
    { "_id" : ObjectId("512e28984815cbfcb21646a7") },
    { list: { $elemMatch: { a: { $gt: 3 } } } }
)

The result returns one element in the array

{ "_id" : ObjectId("512e28984815cbfcb21646a7"), "list" : [ { "a" : 4 } ] }

and I tried to use aggregate with $match but it does not work:

db.test.aggregate({ $match: { _id: ObjectId("512e28984815cbfcb21646a7"), 'list.a': { $gte: 5 } } })

It returns all elements in the array

{
    "_id" : ObjectId("512e28984815cbfcb21646a7"),
    "list" : [ { "a" : 1 }, { "a" : 2 }, { "a" : 3 }, { "a" : 4 }, { "a" : 5 } ]
}

Can I filter the elements in the array to get the expected result?
Using aggregate is the right approach, but you need to $unwind the list array before applying the $match so that you can filter individual elements and then use $group to put it back together:

db.test.aggregate([
    { $match: {_id: ObjectId("512e28984815cbfcb21646a7")}},
    { $unwind: '$list'},
    { $match: {'list.a': {$gt: 3}}},
    { $group: {_id: '$_id', list: {$push: '$list.a'}}}
])

outputs:

{
    "result": [
        {
            "_id": ObjectId("512e28984815cbfcb21646a7"),
            "list": [ 4, 5 ]
        }
    ],
    "ok": 1
}

MongoDB 3.2 Update

Starting with the 3.2 release, you can use the new $filter aggregation operator to do this more efficiently by only including the list elements you want during a $project:

db.test.aggregate([
    { $match: {_id: ObjectId("512e28984815cbfcb21646a7")}},
    { $project: {
        list: {$filter: {
            input: '$list',
            as: 'item',
            cond: {$gt: ['$$item.a', 3]}
        }}
    }}
])

To combine conditions with $and (for example, elements with a between 0 and 5):

cond: { $and: [ { $gt: [ "$$item.a", 0 ] }, { $lt: [ "$$item.a", 5 ] } ]}
MongoDB
15,117,030
137
I am trying to download mongodb and I am following the steps on this link. But when I get to the step: sudo apt-get install -y mongodb-org I get the following error: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package mongodb-org //This is the error Why is this occurring and is there a work around?
I have faced the same issue but then fixed it by changing the package source command. The steps that I followed were:

First try with this command:

sudo apt-get install -y mongodb

This is the unofficial mongodb package provided by Ubuntu; it is not maintained by MongoDB and conflicts with MongoDB's officially supported packages.

If the above command doesn't work, then you can fix the issue by the following procedure:

Step 1: Import the MongoDB public key

In Ubuntu 18.*+, you may get invalid signatures. The --recv value may need to be updated to EA312927. See here for more details on the invalid signature issue: MongoDB GPG - Invalid Signatures

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

Step 2: Generate a file with the MongoDB repository url

echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list

Step 3: Refresh the local database with the packages

sudo apt-get update

Step 4: Install the last stable MongoDB version and all the necessary packages on our system

sudo apt-get install mongodb-org

Prefer the official MongoDB mongodb-org packages, which are kept up-to-date with the most recent major and minor MongoDB releases, over the unofficial mongodb package provided by Ubuntu, which is not maintained by MongoDB and conflicts with the official packages.

Hope this will work for you also. You can follow this MongoDB Update

The above instructions will install the mongodb 2.6 version. If you want to install the latest version for Ubuntu 12.04, just omit the above step 2 and follow the below instruction instead:

Step 2: Generate a file with the MongoDB repository url

echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb.list

If you are using Ubuntu 14.04 then use the below step instead of the above step 2:

Step 2: Generate a file with the MongoDB repository url

echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
MongoDB
28,945,921
136
I need to write an application with which I can do complex queries using spring-data and MongoDB. I started using MongoRepository but struggled with complex queries, both finding examples and understanding the syntax.

I'm talking about queries like this:

@Repository
public interface UserRepositoryInterface extends MongoRepository<User, String> {
    List<User> findByEmailOrLastName(String email, String lastName);
}

or the use of JSON based queries, which I tried by trial and error because I didn't get the syntax right, even after reading the MongoDB documentation (this is a non-working example due to wrong syntax):

@Repository
public interface UserRepositoryInterface extends MongoRepository<User, String> {
    @Query("'$or':[{'firstName':{'$regex':?0,'$options':'i'}},{'lastName':{'$regex':?0,'$options':'i'}}]")
    List<User> findByEmailOrFirstnameOrLastnameLike(String searchText);
}

After reading all the documentation it seems that mongoTemplate is far better documented than MongoRepository. I'm referring to the following documentation: http://static.springsource.org/spring-data/data-mongodb/docs/current/reference/html/ Can you tell me which is more convenient and powerful to use? mongoTemplate or MongoRepository? Are both equally mature, or does one lack more features than the other?
"Convenient" and "powerful to use" are contradicting goals to some degree. Repositories are by far more convenient than templates but the latter of course give you more fine-grained control over what to execute. As the repository programming model is available for multiple Spring Data modules, you'll find more in-depth documentation for it in the general section of the Spring Data MongoDB reference docs. TL;DR We generally recommend the following approach: Start with the repository abstract and just declare simple queries using the query derivation mechanism or manually defined queries. For more complex queries, add manually implemented methods to the repository (as documented here). For the implementation use MongoTemplate. Details For your example this would look something like this: Define an interface for your custom code: interface CustomUserRepository { List<User> yourCustomMethod(); } Add an implementation for this class and follow the naming convention to make sure we can find the class. class UserRepositoryImpl implements CustomUserRepository { private final MongoOperations operations; @Autowired public UserRepositoryImpl(MongoOperations operations) { Assert.notNull(operations, "MongoOperations must not be null!"); this.operations = operations; } public List<User> yourCustomMethod() { // custom implementation here } } Now let your base repository interface extend the custom one and the infrastructure will automatically use your custom implementation: interface UserRepository extends CrudRepository<User, Long>, CustomUserRepository { } This way you essentially get the choice: everything that just easy to declare goes into UserRepository, everything that's better implemented manually goes into CustomUserRepository. The customization options are documented here.
MongoDB
17,008,947
136
I want to copy a collection within the same database and give it a different name - basically take a snapshot. What's the best way to do this? Is there a command, or do I have to copy each record in turn? I'm aware of the cloneCollection command, but it seems to be for copying to another server only. I'm also aware of mongoimport and mongoexport, but as I'm doing this via PHP I'd prefer not to make calls out to the shell.
> use yourDatabaseName > db.myOriginal.aggregate([{$out: "myCopy"}]) It is a lot faster than doing many inserts in a forEach loop.
MongoDB
10,624,964
135
I'm trying to display a query in MongoDB where a text field is not '' (blank) { 'name' : { $not : '' }} However I get the error invalid use of $not I've looked over the documentation but the examples they use are for complicated cases (with regexp and $not negating another operator). How would I do the simple thing I'm trying to do?
Use $ne -- $not should be followed by the standard operator. An example for $ne, which stands for not equal:

> use test
switched to db test
> db.test.insert({author : 'me', post: ""})
> db.test.insert({author : 'you', post: "how to query"})
> db.test.find({'post': {$ne : ""}})
{ "_id" : ObjectId("4f68b1a7768972d396fe2268"), "author" : "you", "post" : "how to query" }

And now $not, which takes in a predicate ($ne) and negates it ($not):

> db.test.find({'post': {$not: {$ne : ""}}})
{ "_id" : ObjectId("4f68b19c768972d396fe2267"), "author" : "me", "post" : "" }
MongoDB
9,790,878
135
My question is a variation of this one. Since my Java Web-app project requires a lot of read filters/queries and interfaces with tools like GridFS, I'm struggling to think of a sensible way to employ MongoDB in the way the above solution suggests. Therefore, I'm considering running an embedded instance of MongoDB alongside my integration tests. I'd like it to start up automatically (either for each test or the whole suite), flush the database for every test, and shut down at the end. These tests might be run on development machines as well as the CI server, so my solution will also need to be portable. Can anyone with more knowledge on MongoDB help me get idea of the feasibility of this approach, and/or perhaps suggest any reading material that might help me get started? I'm also open to other suggestions people might have on how I could approach this problem...
I have found Embedded MongoDB library which looks quite promising and does what you have asked for. Currently supports MongoDB versions: 1.6.5 to 3.1.6, provided the binaries are still available from the configured mirror. Here is short example of use, which I have just tried and it works perfectly: public class EmbeddedMongoTest { private static final String DATABASE_NAME = "embedded"; private MongodExecutable mongodExe; private MongodProcess mongod; private Mongo mongo; @Before public void beforeEach() throws Exception { MongoDBRuntime runtime = MongoDBRuntime.getDefaultInstance(); mongodExe = runtime.prepare(new MongodConfig(Version.V2_3_0, 12345, Network.localhostIsIPv6())); mongod = mongodExe.start(); mongo = new Mongo("localhost", 12345); } @After public void afterEach() throws Exception { if (this.mongod != null) { this.mongod.stop(); this.mongodExe.stop(); } } @Test public void shouldCreateNewObjectInEmbeddedMongoDb() { // given DB db = mongo.getDB(DATABASE_NAME); DBCollection col = db.createCollection("testCollection", new BasicDBObject()); // when col.save(new BasicDBObject("testDoc", new Date())); // then assertThat(col.getCount(), Matchers.is(1L)); } }
MongoDB
6,437,226
135
I'm trying to let MongoDB detect a duplicate value based on its index. I think this is possible in MongoDB, but through the Mongoose wrapper things appear to be broken. So for something like this: User = new Schema ({ email: {type: String, index: {unique: true, dropDups: true}} }) I can save 2 users with the same email. Darn. The same issue has been expressed here: https://github.com/LearnBoost/mongoose/issues/56, but that thread is old and lead to nowhere. For now, I'm manually making a call to the db to find the user. That call is not expensive since "email" is indexed. But it would still be nice to let it be handled natively. Does anyone have a solution to this?
Oops! You just have to restart mongo.
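If restarting alone doesn't help, one common cause is that the collection already contains duplicate values, so the unique index can't be built when Mongoose declares it on startup. A sketch of rebuilding the index by hand from the mongo shell (the collection name users is an assumption):

db.users.dropIndexes()
db.users.ensureIndex({ email: 1 }, { unique: true, dropDups: true })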
MongoDB
5,535,610
135
I'm trying to update a single subelement contained within an array in a mongodb document. I want to reference the field using its array index (elements within the array don't have any fields that I can guarantee will be unique identifiers). Seems like this should be easy to do, but I can't figure out the syntax. Here's what I want to do in pseudo-json. Before: { _id : ..., other_stuff ... , my_array : [ { ... old content A ... }, { ... old content B ... }, { ... old content C ... } ] } After: { _id : ..., other_stuff ... , my_array : [ { ... old content A ... }, { ... NEW content B ... }, { ... old content C ... } ] } Seems like the query should be something like this: //pseudocode db.my_collection.update( {_id: ObjectId(document_id), my_array.1 : 1 }, {my_array.$.content: NEW content B } ) But this doesn't work. I've spent way too long searching the mongodb docs, and trying different variations on this syntax (e.g. using $slice, etc.). I can't find any clear explanation of how to accomplish this kind of update in MongoDB.
As expected, the query is easy once you know how. Here's the syntax, in python (field names follow the question's my_array/content structure):

# assumes: from bson import ObjectId
db["my_collection"].update(
    { "_id": ObjectId(document_id) },
    { "$set": { "my_array." + str(doc_index) + ".content": new_content_B } }
)
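The equivalent in the mongo shell, using the names from the question (updating the element at array index 1, i.e. content B):

db.my_collection.update(
    { _id: ObjectId(document_id) },
    { $set: { "my_array.1": { content: "NEW content B" } } }
)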
MongoDB
11,372,065
133
What is the command to get the number of clients connected to a particular MongoDB server?
connect to the admin database and run db.serverStatus():

> var status = db.serverStatus()
> status.connections
{ "current" : 21, "available" : 15979 }

You can get it directly by querying db.serverStatus().connections

To understand what MongoDB's db.serverStatus().connections response means, read the documentation here.

connections

"connections" : { "current" : <num>, "available" : <num>, "totalCreated" : NumberLong(<num>) },

connections — A document that reports on the status of the connections. Use these values to assess the current load and capacity requirements of the server.

connections.current — The number of incoming connections from clients to the database server. This number includes the current shell session. Consider the value of connections.available to add more context to this datum. The value will include all incoming connections including any shell connections or connections from other servers, such as replica set members or mongos instances.

connections.available — The number of unused incoming connections available. Consider this value in combination with the value of connections.current to understand the connection load on the database, and see the UNIX ulimit Settings document for more information about system thresholds on available connections.

connections.totalCreated — Count of all incoming connections created to the server. This number includes connections that have since closed.
MongoDB
8,975,531
133
How to search for documents in a collection that are missing a certain field in MongoDB?
Yeah, it's possible using $exists:

db.things.find( { a : { $exists : false } } ); // return documents where a is missing

When <boolean> is true, $exists matches the documents that contain the field, including documents where the field value is null. If <boolean> is false, the query returns only the documents that do not contain the field.
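The complement works the same way; note that this is different from matching on null, which also matches documents where the field is missing entirely:

db.things.find( { a : { $exists : true } } );  // a is present, even if its value is null
db.things.find( { a : null } );                // a is null OR a is missing entirely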
MongoDB
5,719,408
133
I've been searching on Google for days and I have tried many things, but I still can not perform a good full text search on my user collection. I tried ElasticSearch but it was pretty much impossible to query and paginate... I tried many plugins for Mongoose, like ElMongo, mongoose-full-text, Mongoosastic, etc... every one of them is really badly documented and I don't know how to perform a good full text search. So, my collection is a normal collection:

user = {
  name: String,
  email: String,
  profile: {
    something: String,
    somethingElse: String
  }
}

I have a search input in a page with a simple POST; if I type hello world, what I need is to search all the collection fields for the matching words of my search query and get the results. It would also be really nice to have options to handle pagination, like 10 items per page or something... What is the best solution to achieve this? I'm using MongoDB 2.6.* with Mongoose, NodeJS and ExpressJS. Thanks.
You can add a text index to your Mongoose schema definition that lets you use the $text operator in your find queries to search all fields included in the text index. To create an index to support text search on, say, name and profile.something: var schema = new Schema({ name: String, email: String, profile: { something: String, somethingElse: String } }); schema.index({name: 'text', 'profile.something': 'text'}); Or if you want to include all string fields in the index, use the '$**' wildcard: schema.index({'$**': 'text'}); This would enable you to perform a paged text search query like: MyModel.find({$text: {$search: searchString}}) .skip(20) .limit(10) .exec(function(err, docs) { ... }); For more details, read the full MongoDB Text Indexes documentation.
MongoDB
28,775,051
132
I'm getting returned a JSON value from MongoDB after I run my query. The problem is I do not want to return all the JSON associated with my return; I tried searching the docs and didn't find a proper way to do this. I was wondering if it is at all possible, and if so, what the proper way of doing it is. Example:

In the DB

{
    user: "RMS",
    OS: "GNU/HURD",
    bearded: "yes",
    philosophy: { software: "FOSS", cryptology: "Necessary" },
    email: { responds: "Yes", address: "rms@gnu.org" },
    facebook: {}
}
{
    user: "zuckerburg",
    os: "OSX",
    bearded: "no",
    philosophy: { software: "OSS", cryptology: "Optional" },
    email: {},
    facebook: { responds: "Sometimes", address: "https://www.facebook.com/zuck?fref=ts" }
}

What would be the proper way of returning a field if it exists for a user, but if it doesn't, return another field? For the example above I would want to return the [email][address] field for RMS and the [facebook][address] field for Zuckerburg. This is what I have tried to find if a field is null, but it doesn't appear to be working.

.populate('user', 'email.address')
.exec(function (err, subscription){
    var key;
    var f;
    for (key in subscription){
        if (subscription[key].facebook != null){
            console.log("user has fb");
        }
    }
});
I'm not completely clear on what you mean by "returning a field", but you can use a lean() query so that you can freely modify the output, then populate both fields and post-process the result to only keep the field you want: .lean().populate('user', 'email.address facebook.address') .exec(function (err, subscription){ if (subscription.user.email.address) { delete subscription.user.facebook; } else { delete subscription.user.email; } });
MongoDB
26,691,543
132
When running a service inside a container, let's say mongodb, the command docker run -d myimage will exit instantly, and return the container id. In my CI script, I run a client to test mongodb connection, right after running the mongo container. The problem is: the client can't connect because the service is not up yet. Apart from adding a big sleep 10in my script, I don't see any option to wait for a container to be up and running. Docker has a command wait which doesn't work in that case, because the container doesn't exist. Is it a limitation of docker?
Found this simple solution; been looking for something better but no luck...

until [ "$(docker inspect -f '{{.State.Running}}' CONTAINERNAME)" = "true" ]; do sleep 0.1; done

or if you want to wait until the container is reporting as healthy (assuming you have a healthcheck)

until [ "$(docker inspect -f '{{.State.Health.Status}}' CONTAINERNAME)" = "healthy" ]; do sleep 0.1; done
MongoDB
21,183,088
132
I encountered a strange behavior of mongo and I would like to clarify it a bit... My request is simple: I would like to get the size of a single document in a collection. I found two possible solutions:

Object.bsonsize - a javascript method that should return the size in bytes
db.collection.stats() - where there is a line 'avgObjSize' that produces an "aggregated" (average) view of document size. It simply represents the average size of a single document.

When I create a test collection with only one document, both functions return different values. How is this possible? Is there some other method to get the size of a mongo document?

Here I provide some code I performed the testing on: I created a new database 'test' and inserted a simple document with only one attribute: type:"auto"

db.test.insert({type:"auto"})

output from the stats() function call:

db.test.stats():
{
    "ns" : "test.test",
    "count" : 1,
    "size" : 40,
    "avgObjSize" : 40,
    "storageSize" : 4096,
    "numExtents" : 1,
    "nindexes" : 1,
    "lastExtentSize" : 4096,
    "paddingFactor" : 1,
    "systemFlags" : 1,
    "userFlags" : 0,
    "totalIndexSize" : 8176,
    "indexSizes" : {
        "_id_" : 8176
    },
    "ok" : 1
}

output from the bsonsize function call:

Object.bsonsize(db.test.find({test:"auto"}))
481
In the previous call of Object.bsonsize(), Mongodb returned the size of the cursor, rather than the document. Correct way is to use this command: Object.bsonsize(db.test.findOne()) With findOne(), you can define your query for a specific document: Object.bsonsize(db.test.findOne({type:"auto"})) This will return the correct size (in bytes) of the particular document.
MongoDB
22,008,822
131
As the title says, I want to perform a find (one) for a document, by _id, and if it doesn't exist, have it created, then whether it was found or was created, have it returned in the callback. I don't want to update it if it exists, as I've read findAndModify does. I have seen many other questions on Stack Overflow regarding this but again, I don't wish to update anything. I am unsure if the creating (if not existing) is actually the update everyone is talking about; it's all so confuzzling :(
Beginning with MongoDB 2.4, it's no longer necessary to rely on a unique index (or any other workaround) for atomic findOrCreate like operations. This is thanks to the $setOnInsert operator new to 2.4, which allows you to specify updates which should only happen when inserting documents. This, combined with the upsert option, means you can use findAndModify to achieve an atomic findOrCreate-like operation.

db.collection.findAndModify({
    query: { _id: "some potentially existing id" },
    update: {
        $setOnInsert: { foo: "bar" }
    },
    new: true,   // return new doc if one is upserted
    upsert: true // insert the document if it does not exist
})

As $setOnInsert only affects documents being inserted, if an existing document is found, no modification will occur. If no document exists, it will upsert one with the specified _id, then apply the insert-only $setOnInsert values. In both cases, the document is returned.
MongoDB
16,358,857
130
I have a large amount of data in a collection in mongodb which I need to analyze. How do I import that data into pandas? I am new to pandas and numpy.

EDIT: The mongodb collection contains sensor values tagged with date and time. The sensor values are of float datatype.

Sample Data:

{
    "_cls" : "SensorReport",
    "_id" : ObjectId("515a963b78f6a035d9fa531b"),
    "_types" : [ "SensorReport" ],
    "Readings" : [
        { "a" : 0.958069536790466, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:26:35.297Z"), "b" : 6.296118156595, "_cls" : "Reading" },
        { "a" : 0.95574014778624, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:09.963Z"), "b" : 6.29651468650064, "_cls" : "Reading" },
        { "a" : 0.953648289182713, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:27:37.545Z"), "b" : 7.29679823731148, "_cls" : "Reading" },
        { "a" : 0.955931884300997, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:28:21.369Z"), "b" : 6.29642922525632, "_cls" : "Reading" },
        { "a" : 0.95821381, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:20.801Z"), "b" : 7.28956613, "_cls" : "Reading" },
        { "a" : 4.95821335, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:41:36.931Z"), "b" : 6.28956574, "_cls" : "Reading" },
        { "a" : 9.95821341, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:42:09.971Z"), "b" : 0.28956488, "_cls" : "Reading" },
        { "a" : 1.95667927, "_types" : [ "Reading" ], "ReadingUpdatedDate" : ISODate("2013-04-02T08:43:55.463Z"), "b" : 0.29115237, "_cls" : "Reading" }
    ],
    "latestReportTime" : ISODate("2013-04-02T08:43:55.463Z"),
    "sensorName" : "56847890-0",
    "reportCount" : 8
}
pymongo might give you a hand; the following is some code I'm using: import pandas as pd from pymongo import MongoClient def _connect_mongo(host, port, username, password, db): """ A util for making a connection to mongo """ if username and password: mongo_uri = 'mongodb://%s:%s@%s:%s/%s' % (username, password, host, port, db) conn = MongoClient(mongo_uri) else: conn = MongoClient(host, port) return conn[db] def read_mongo(db, collection, query={}, host='localhost', port=27017, username=None, password=None, no_id=True): """ Read from Mongo and Store into DataFrame """ # Connect to MongoDB db = _connect_mongo(host=host, port=port, username=username, password=password, db=db) # Make a query to the specific DB and Collection cursor = db[collection].find(query) # Expand the cursor and construct the DataFrame df = pd.DataFrame(list(cursor)) # Delete the _id if no_id: del df['_id'] return df
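Example usage against the data in the question (assuming a default local mongod; the database and collection names are assumptions, the query field comes from the sample data):

# database/collection names are assumptions
df = read_mongo('test', 'sensor_reports', query={'sensorName': '56847890-0'})
print(df.head())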
MongoDB
16,249,736
130
Doc: { _id: 5150a1199fac0e6910000002, name: 'some name', items: [{ id: 23, name: 'item name 23' },{ id: 24, name: 'item name 24' }] } Is there a way to pull a specific object from an array? I.E. how do I pull the entire item object with id 23 from the items array. I have tried: db.mycollection.update({'_id': ObjectId("5150a1199fac0e6910000002")}, {$pull: {id: 23}}); However I am pretty sure that I am not using 'pull' correctly. From what I understand pull will pull a field from an array but not an object. Any ideas how to pull the entire object out of the array. As a bonus I am trying to do this in mongoose/nodejs, as well not sure if this type of thing is in the mongoose API but I could not find it.
try.. db.mycollection.update( { '_id': ObjectId("5150a1199fac0e6910000002") }, { $pull: { items: { id: 23 } } }, false, // Upsert true, // Multi );
MongoDB
15,641,492
130
Being new to Spring Boot I am wondering on how I can configure connection details for MongoDB. I have tried the normal examples but none covers the connection details. I want to specify the database that is going to be used and the url/port of the host that runs MongoDB. Any hints or tips?
Just to quote Boot Docs: You can set spring.data.mongodb.uri property to change the url, or alternatively specify a host/port. For example, you might declare the following in your application.properties: spring.data.mongodb.host=mongoserver spring.data.mongodb.port=27017 All available options for spring.data.mongodb prefix are fields of MongoProperties: private String host; private int port = DBPort.PORT; private String uri = "mongodb://localhost/test"; private String database; private String gridFsDatabase; private String username; private char[] password;
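Alternatively, everything can go into a single connection string via the uri property (all values here are placeholders):

spring.data.mongodb.uri=mongodb://username:password@mongoserver:27017/mydatabase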
MongoDB
23,515,295
129
With PyMongo, when I try to retrieve objects sorted by their 'number' and 'date' fields like this: db.test.find({"number": {"$gt": 1}}).sort({"number": 1, "date": -1}) I get this error: TypeError: if no direction is specified, key_or_list must be an instance of list What's wrong with my sort query?
sort should be a list of key-direction pairs, that is db.test.find({"number": {"$gt": 1}}).sort([("number", 1), ("date", -1)]) The reason why this has to be a list is that the ordering of the arguments matters and dicts are not ordered in Python < 3.6
MongoDB
10,242,149
129
If I have a record like this; { "text": "text goes here", "words": ["text", "goes", "here"] } How can I match multiple words from it in MongoDB? When matching a single word I can do this; db.find({ words: "text" }) But when I try this for multiple words, it doesn't work; db.find({ words: ["text", "here"] }) I'm guessing that by using an array, it tries to match the entire array against the one in the record, rather than matching the individual contents.
Depends on whether you're trying to find documents where words contains both elements (text and here) using $all: db.things.find({ words: { $all: ["text", "here"] }}); or either of them (text or here) using $in: db.things.find({ words: { $in: ["text", "here"] }});
MongoDB
8,145,523
128
I know how to... Remove a single document. Remove the collection itself. Remove all documents from the collection with Mongo. But I don't know how to remove all documents from the collection with Mongoose. I want to do this when the user clicks a button. I assume that I need to send an AJAX request to some endpoint and have the endpoint do the removal, but I don't know how to handle the removal at the endpoint. In my example, I have a Datetime collection, and I want to remove all of the documents when the user clicks a button. api/datetime/index.js 'use strict'; var express = require('express'); var controller = require('./datetime.controller'); var router = express.Router(); router.get('/', controller.index); router.get('/:id', controller.show); router.post('/', controller.create); router.put('/:id', controller.update); router.patch('/:id', controller.update); router.delete('/:id', controller.destroy); module.exports = router; api/datetime/datetime.controller.js 'use strict'; var _ = require('lodash'); var Datetime = require('./datetime.model'); // Get list of datetimes exports.index = function(req, res) { Datetime.find(function (err, datetimes) { if(err) { return handleError(res, err); } return res.json(200, datetimes); }); }; // Get a single datetime exports.show = function(req, res) { Datetime.findById(req.params.id, function (err, datetime) { if(err) { return handleError(res, err); } if(!datetime) { return res.send(404); } return res.json(datetime); }); }; // Creates a new datetime in the DB. exports.create = function(req, res) { Datetime.create(req.body, function(err, datetime) { if(err) { return handleError(res, err); } return res.json(201, datetime); }); }; // Updates an existing datetime in the DB. exports.update = function(req, res) { if(req.body._id) { delete req.body._id; } Datetime.findById(req.params.id, function (err, datetime) { if (err) { return handleError(res, err); } if(!datetime) { return res.send(404); } var updated = _.merge(datetime, req.body); updated.save(function (err) { if (err) { return handleError(res, err); } return res.json(200, datetime); }); }); }; // Deletes a datetime from the DB. exports.destroy = function(req, res) { Datetime.findById(req.params.id, function (err, datetime) { if(err) { return handleError(res, err); } if(!datetime) { return res.send(404); } datetime.remove(function(err) { if(err) { return handleError(res, err); } return res.send(204); }); }); }; function handleError(res, err) { return res.send(500, err); }
Datetime.remove({}, callback) The empty object will match all of them.
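Wired into the endpoint style from the question, a sketch might look like this (the route path and handler name are assumptions):

// in api/datetime/index.js
router.delete('/', controller.destroyAll);

// in api/datetime/datetime.controller.js
exports.destroyAll = function(req, res) {
  Datetime.remove({}, function(err) {
    if (err) { return handleError(res, err); }
    return res.send(204);
  });
};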
MongoDB
28,139,638
126