image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"connecting",
"containers",
"devops"
] | [
{
"code": "/usr/bin/mongo \"$@\"MongoDB shell version v4.2.11\n\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n\n2020-12-09T13:42:32.943+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\n\nconnect@src/mongo/shell/mongo.js:353:17\n\n@(connect):2:6\n\n2020-12-09T13:42:32.944+0000 F - [main] exception: connect failed\n\n2020-12-09T13:42:32.944+0000 E - [main] exiting with code 1\nBootstrap: docker\nFrom: ubuntu:20.04\n\n%files\n\n%post\n\n apt update\n apt-get install -y wget\n apt-get install -y software-properties-common\n\n wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add -\n echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse\" | tee /etc/apt/sources.list.d/mongodb-org-4.2.list\n apt update\n apt-get install -y mongodb-org\n\n%runscript\n /usr/bin/mongo \"$@\"",
"text": "I have a Singularity container in which I build and run MongoDB, calling /usr/bin/mongo \"$@\" directly. When I do this, I get the error messageInterestingly, when MongoDB already runs outside the container, I can also start it within the container, without that error happening.\nBecause of this, one theory is that I need to somehow open port 27017 in advance, which is what I will try out next. However, I wanted to post this question in the meantime in case this should not turn out to solve my issue.If relevant, my recipe is this:",
"username": "Ksortakh_Kraxthar"
},
{
"code": "#\n# NOTE: THIS DOCKERFILE IS GENERATED VIA \"apply-templates.sh\"\n#\n# PLEASE DO NOT EDIT IT DIRECTLY.\n#\n\nFROM ubuntu:focal\n\n# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added\nRUN set -eux; \\\n\tgroupadd --gid 999 --system mongodb; \\\n\tuseradd --uid 999 --system --gid mongodb --home-dir /data/db mongodb; \\\n\tmkdir -p /data/db /data/configdb; \\\n\tchown -R mongodb:mongodb /data/db /data/configdb\n\nRUN set -eux; \\\n\tapt-get update; \\\n\tapt-get install -y --no-install-recommends \\\n\t\tca-certificates \\\n\t\tdirmngr \\\n",
"text": "mongo is the client, mongod is the server.So what you are doing is starting the client.This product is already well containerised, but this is not the first self rolled container thread or the first singularity thread.https://hub.docker.com/_/mongo",
"username": "chris"
},
{
"code": "",
"text": "Ah I’m stupid, thanks.(As for the recipe, I need Singularity because I use other images from there which are later bundled with Singularity Compose.)",
"username": "Ksortakh_Kraxthar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connection error when running MongoDB inside a container: connection refused | 2020-12-09T19:21:46.554Z | Connection error when running MongoDB inside a container: connection refused | 23,198 |
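A quick way to verify chris's point: "Connection refused" means nothing is listening on the port, no matter how the client is invoked. Below is a minimal connectivity probe using the Node.js driver; the URI, timeout, and file name are illustrative and not from the thread. It fails exactly like the shell does until a mongod process is actually started (for example by running /usr/bin/mongod in the container instead of the mongo client).

```javascript
// ping.js - minimal probe to tell "no server running" apart from other errors.
// Assumes the "mongodb" npm package is installed.
const { MongoClient } = require("mongodb");

async function ping() {
  const client = new MongoClient("mongodb://127.0.0.1:27017", {
    serverSelectionTimeoutMS: 2000, // fail fast instead of hanging
  });
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("a mongod is listening on 127.0.0.1:27017");
  } catch (err) {
    // Same root cause as the shell error in the thread: only the client
    // (mongo) was started, so there is no server to accept the connection.
    console.error("no server reachable:", err.message);
  } finally {
    await client.close();
  }
}

ping();
```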
null | [] | [
{
"code": "2020-12-04T06:22:33.615+0200 I COMMAND [conn102] killcursors: found 0 of 1\n2020-12-04T06:22:45.243+0200 I COMMAND [conn102] killcursors: found 0 of 1\n2020-12-04T17:03:09.330+0200 I NETWORK [conn51] end connection 127.0.0.1:54872 (31 connections now open)\n2020-12-04T17:03:09.331+0200 I NETWORK [conn50] end connection 127.0.0.1:54870 (30 connections now open)\n2020-12-04T17:04:01.538+0200 I NETWORK [conn115] end connection 127.0.0.1:37508 (29 connections now open)\n2020-12-04T17:04:01.539+0200 I NETWORK [conn112] end connection 127.0.0.1:37502 (28 connections now open)\n2020-12-04T17:04:01.540+0200 I NETWORK [conn118] end connection 127.0.0.1:37514 (27 connections now open)\n2020-12-04T17:04:01.540+0200 I NETWORK [conn119] end connection 127.0.0.1:37516 (26 connections now open)\n2020-12-04T17:04:01.541+0200 I NETWORK [conn117] end connection 127.0.0.1:37512 (25 connections now open)\n2020-12-04T17:04:01.542+0200 I NETWORK [conn113] end connection 127.0.0.1:37504 (24 connections now open)\n2020-12-04T17:04:01.542+0200 I NETWORK [conn116] end connection 127.0.0.1:37510 (23 connections now open)\n2020-12-04T17:04:01.542+0200 I NETWORK [conn114] end connection 127.0.0.1:37506 (22 connections now open)\n2020-12-04T17:04:01.542+0200 I NETWORK [conn121] end connection 127.0.0.1:37520 (22 connections now open)\n2020-12-04T17:04:01.543+0200 I NETWORK [conn120] end connection 127.0.0.1:37518 (20 connections now open)\n2020-12-04T17:04:29.544+0200 I NETWORK [conn110] end connection 127.0.0.1:37498 (19 connections now open)\n2020-12-04T17:04:29.546+0200 I NETWORK [conn106] end connection 127.0.0.1:37490 (18 connections now open)\n2020-12-04T17:04:29.546+0200 I NETWORK [conn107] end connection 127.0.0.1:37492 (17 connections now open)\n2020-12-04T17:04:29.550+0200 I NETWORK [conn105] end connection 127.0.0.1:37488 (16 connections now open)\n2020-12-04T17:04:29.550+0200 I NETWORK [conn104] end connection 127.0.0.1:37486 (15 connections now open)\n2020-12-04T17:04:29.551+0200 I NETWORK [conn103] end connection 127.0.0.1:37484 (14 connections now open)\n2020-12-04T17:04:29.552+0200 I NETWORK [conn109] end connection 127.0.0.1:37496 (13 connections now open)\n2020-12-04T17:04:29.552+0200 I NETWORK [conn102] end connection 127.0.0.1:37482 (12 connections now open)\n2020-12-04T17:04:29.553+0200 I NETWORK [conn108] end connection 127.0.0.1:37494 (11 connections now open)\n2020-12-04T17:04:29.554+0200 I NETWORK [conn111] end connection 127.0.0.1:37500 (10 connections now open)\n2020-12-06T15:02:01.043+0200 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n2020-12-06T15:02:01.044+0200 I CONTROL [signalProcessingThread] now exiting\n2020-12-06T15:02:01.044+0200 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...\n2020-12-06T15:02:01.044+0200 I NETWORK [signalProcessingThread] closing listening socket: 6\n2020-12-06T15:02:01.044+0200 I NETWORK [signalProcessingThread] closing listening socket: 7\n2020-12-06T15:02:01.044+0200 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock\n2020-12-06T15:02:01.044+0200 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...\n2020-12-06T15:02:01.044+0200 I NETWORK [signalProcessingThread] shutdown: going to close sockets...\n2020-12-06T15:02:01.046+0200 I NETWORK [conn53] end connection 127.0.0.1:54880 (9 connections now open)\n2020-12-06T15:02:01.047+0200 I NETWORK [conn59] end connection 127.0.0.1:54892 (9 connections now 
open)\n2020-12-06T15:02:01.047+0200 I NETWORK [conn55] end connection 127.0.0.1:54884 (9 connections now open)\n2020-12-06T15:02:01.047+0200 I NETWORK [conn56] end connection 127.0.0.1:54886 (9 connections now open)\n2020-12-06T15:02:01.048+0200 I NETWORK [conn60] end connection 127.0.0.1:54894 (9 connections now open)\n2020-12-06T15:02:01.048+0200 I NETWORK [conn52] end connection 127.0.0.1:54878 (9 connections now open)\n2020-12-06T15:02:01.048+0200 I NETWORK [conn57] end connection 127.0.0.1:54888 (9 connections now open)\n2020-12-06T15:02:01.049+0200 I NETWORK [conn61] end connection 127.0.0.1:54896 (9 connections now open)\n2020-12-06T15:02:01.049+0200 I NETWORK [conn54] end connection 127.0.0.1:54882 (9 connections now open)\n2020-12-06T15:02:01.050+0200 I NETWORK [conn58] end connection 127.0.0.1:54890 (9 connections now open)\n2020-12-06T15:02:01.051+0200 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n2020-12-06T15:02:01.184+0200 I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n2020-12-06T15:02:01.184+0200 I CONTROL [signalProcessingThread] dbexit: rc: 0\n2020-12-06T15:08:10.552+0200 I CONTROL ***** SERVER RESTARTED *****\n2020-12-06T15:08:10.692+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=23G,session_max=20000,eviction=(threads_max=4),statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),\n2020-12-06T15:08:10.904+0200 E STORAGE [initandlisten] WiredTiger (-31802) [1607260090:904931][25075:0x7f3b75fadc20], file:WiredTiger.wt, connection: WiredTiger.wt read error: failed to read 4096 bytes at offset 0: WT_ERROR: non-specific WiredTiger error\n2020-12-06T15:08:10.922+0200 I - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error\n2020-12-06T15:08:11.208+0200 I STORAGE [initandlisten] exception in initAndListen: 28595 -31802: WT_ERROR: non-specific WiredTiger error, terminating\n2020-12-06T15:08:11.208+0200 I CONTROL [initandlisten] dbexit: rc: 100\n",
"text": "Please HELP if you could be so kind…After Sunday reboot by Linux SA, I get in /var/log/mongdb/mongod.log …Now when I manually start as root, I get…\n[root@esa360qv63 bin]# /usr/bin/mongod start -f /etc/mongod.conf\nabout to fork child process, waiting until server is ready for connections.\nforked process: 13259\nERROR: child process failed, exited with error number 1\n…my config file is …\n[root@esa360qv63 bin]# cat /etc/mongod.confsystemLog:\ndestination: file\nlogAppend: true\npath: “/var/log/mongodb/mongod.log”storage:\ndbPath: /mongodata\njournal:\nenabled: true\nengine: wiredTigerprocessManagement:\nfork: true\npidFilePath: /var/run/mongodb/mongod3.pidnet:\nport: 27017\nbindIp: 127.0.0.1 # Listen to local interface only, comment to listen on all interfaces.[root@esa360qv63 bin]# cat /etc/redhat-release\nRed Hat Enterprise Linux Server release 6.10 (Santiago)\n[root@esa360qv63 bin]# ulimit -a\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 192623\nmax locked memory (kbytes, -l) 64\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 1024\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 10240\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 192623\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n[root@esa360qv63 bin]#Does above info help point me in right direct??? TIA for your kind inputs<<<<",
"username": "CHANDRASHEKAR_MASTI"
},
{
"code": "",
"text": "Did your SA possibly make some changes to SELinux configuration?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hello Jack, thank you for your kind help and time. Indeed, SA patched the Linux OS on Sunday 6 Dec after that mongodb fails to start back up. For your pointer I checked and find below, that new files/directories have been introduced on the day of the patch, 6 DEC 2020. Do you see any quick clues here, kindly advise?\n[root@esa360qv63 selinux]# ls -altr\ntotal 52\n-rw-r–r-- 1 root root 2271 Mar 16 2015 semanage.conf\n-rw-r–r-- 1 root root 76 Oct 25 2016 restorecond_user.conf\n-rw-r–r-- 1 root root 113 Oct 25 2016 restorecond.conf\n-rw-r–r--. 1 root root 447 Mar 15 2017 config\ndrwxr-xr-x. 5 root root 4096 Dec 6 2017 .\ndrwxr-xr-x. 6 root root 4096 Dec 6 14:16 targeted\ndrwxr-xr-x. 7 root root 4096 Dec 6 14:27 mls\ndrwxr-xr-x. 6 root root 4096 Dec 6 14:28 minimum\ndrwxr-xr-x. 200 root root 20480 Dec 6 16:55 …\n[root@esa360qv63 selinux]# view config\n[root@esa360qv63 selinux]# getenforce\nDisabled\n[root@esa360qv63 selinux]# pwd\n/etc/selinux\n[root@esa360qv63 selinux]#",
"username": "CHANDRASHEKAR_MASTI"
},
{
"code": "",
"text": "SE Linux is disabled. So it is not that.This is the error line:2020-12-06T15:08:10.904+0200 E STORAGE [initandlisten] WiredTiger (-31802) [1607260090:904931][25075:0x7f3b75fadc20], file:WiredTiger.wt, connection: WiredTiger.wt read error: failed to read 4096 bytes at offset 0: WT_ERROR: non-specific WiredTiger errorWhat version of mongodb are you running ?",
"username": "chris"
},
{
"code": "2020-06-01T13:56:27.772+0300 I CONTROL [initandlisten] db version v3.0.6\n2020-06-01T13:56:27.772+0300 I CONTROL [initandlisten] git version: 1ef45a23a4c5e3480ac919b28afcba3c615488f2\n2020-06-01T13:56:27.772+0300 I CONTROL [initandlisten] build info: Linux ip-10-67-194-123 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49\n2020-06-01T13:56:27.772+0300 I CONTROL [initandlisten] allocator: tcmalloc\n2020-06-01T13:56:27.772+0300 I CONTROL [initandlisten] options: { config: \"/etc/mongod.conf\", net: { bindIp: \"127.0.0.1\", port: 27017 }, processManagement: { fork: true, pidFilePath: \"/var/run/mongodb/mongod3.pid\" }, storage: { dbPath: \"/mongodata\", engine: \"wiredTiger\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongod.log\" } }\n2020-06-01T13:56:27.789+0300 I NETWORK [initandlisten] waiting for connections on port 27017\n2020-06-01T13:56:29.247+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35470 #1 (1 connection now open)\n2020-06-01T13:56:29.247+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35472 #2 (2 connections now open)\n2020-06-01T13:56:29.247+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35474 #3 (3 connections now open)\n2020-06-01T13:56:29.248+0300 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35476 #4 (4 connections now\n....<skipping subsequent entries>....\n",
"text": "Thank you Chris for your kind assistance, time and input. Version of mongodb running is below…extracted from older entries in the /var/log/mongodb/mongod.log file…",
"username": "CHANDRASHEKAR_MASTI"
},
{
"code": "[root@esa360qv63 mongodata]# cat /etc/redhat-release\nRed Hat Enterprise Linux Server release 6.5 (Santiago)\nFrom the database datafile directory, permissions look fine....\n[root@esa360qv63 mongodata]# ls -alt Wired*\n-rw-r--r-- 1 mongod mongod 902 Dec 6 15:02 WiredTiger.turtle\n-rw-r--r-- 1 mongod mongod 0 Jun 1 2020 WiredTiger.wt\n-rw-r--r-- 1 mongod mongod 534 Apr 24 2017 WiredTiger.basecfg\n-rw-r--r-- 1 mongod mongod 46 Apr 24 2017 WiredTiger\n-rw-r--r-- 1 mongod mongod 21 Apr 24 2017 WiredTiger.lock\n2020-12-08T23:09:01.963+0200 I CONTROL ***** SERVER RESTARTED *****\n2020-12-08T23:09:01.986+0200 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=23G,session_max=20000,eviction=(threads_max=4),statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),\n2020-12-08T23:09:01.992+0200 E STORAGE [initandlisten] WiredTiger (-31802) [1607461741:992830][626:0x7fb3cfe1ec20], file:WiredTiger.wt, connection: WiredTiger.wt read error: failed to read 4096 bytes at offset 0: WT_ERROR: non-specific WiredTiger error\n2020-12-08T23:09:01.993+0200 I - [initandlisten] Assertion: 28595:-31802: WT_ERROR: non-specific WiredTiger error\n2020-12-08T23:09:01.993+0200 I STORAGE [initandlisten] exception in initAndListen: 28595 -31802: WT_ERROR: non-specific WiredTiger error, terminating\n2020-12-08T23:09:01.993+0200 I CONTROL [initandlisten] dbexit: rc: 100\n",
"text": "I got the OS patch SA applied over the weekend rolled back then tried to start up mongodb still I ran into same error, something seems broke with the storage engine…would anyone be able to help debug further to try to salvage what we have and restart the 3.0.6 mongodb engine?after patch rollback:the mongod.log file still reports the same error again… Can anyone please help??? Much appreciated for your time and kindness…",
"username": "CHANDRASHEKAR_MASTI"
},
{
"code": "",
"text": "Your WiredTiger.wt file being 0 bytes is a bad thing.If you are running in a replica set then you can resync this node with the rest of the cluster.If you are running standalone you will have to recover from backup.",
"username": "chris"
},
{
"code": "",
"text": "Thank you Chris…is there any way to salvage the various collectionxxx.wt files in this volume using some repair command? this is a standalone install inherited as-is.",
"username": "CHANDRASHEKAR_MASTI"
},
{
"code": "",
"text": "No to my knowledge. Might be some one else knows definitively.",
"username": "chris"
}
] | Mongod FAILS to start after server reboot | 2020-12-07T22:46:02.960Z | Mongod FAILS to start after server reboot | 4,923 |
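For readers triaging a similar failure, the decisive step above was isolating the first error-severity line after the restart. A small sketch of that scan (Node.js; the log path is the one from the thread, and the severity-field position assumes the pre-4.4 plain-text log format shown above):

```javascript
// scan-log.js - print error (E) and fatal (F) lines from a pre-4.4 mongod log.
const fs = require("fs");
const readline = require("readline");

const rl = readline.createInterface({
  input: fs.createReadStream("/var/log/mongodb/mongod.log"),
});

rl.on("line", (line) => {
  // Line format: <timestamp> <severity> <component> [<context>] <message>
  const severity = line.split(/\s+/)[1];
  if (severity === "E" || severity === "F") {
    console.log(line);
  }
});
```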
null | [
"python"
] | [
{
"code": "",
"text": "We have 2 collections in Mongodb.\nColl1 : 9gb, 7k documents\nColl2: 12gb, 20k documentsI am able to access Coll1 from python(pymongo) and fetch the document using find_one().\nBut when I try to fetch from Coll2, I get None as output for the same find_one() function.Also, I am able to access Coll2 from Mongodb Compass.",
"username": "Ananth_Nagiah"
},
{
"code": "",
"text": "Posting your schema, an example document, and a code sample would help.\nMy first wild and unsubstantiated guess is that you’re addressing a non-existent db or collection.",
"username": "Jack_Woehr"
},
{
"code": "#! /usr/bin/python3.7\nimport json\nimport pymongo\nmyclient = pymongo. Mongoclient(host='mongodb://hostname:27017', username='', password='', authSource='admin') \ncollectionname = dbname. Collection2\ndoc1=collectionname.find_one() \nprint(doc1) \nStatement: Array\n>0: object\n id:1\n >address: object\n street: string\n zip: string\n >address1: object\n street:string\n zip:string\n>1:object\n>2:object\n...... \n>999:object\n",
"text": "Hi JackPython code:Also, the document in both Collection1 and Collection 2 have the same structure.Both the collection contains document of same structure. Each document has an Array field with 1000 subdocuments.",
"username": "Ananth_Nagiah"
},
{
"code": "",
"text": "How is dbname defined?In one post it is Coll1 and Coll2 while in the other it is Collection2. Which one is the correct one? Often, issues like the one you described, are caused by misspelling database or collection names. It is really hard to find if it is case if you provide made-up names or altered sample code that do not contains the misspelling errors.Please provide a screenshot of compass that shows the 2 collections within their respective database.I do not know python enough but the space between dbname. and Collection2 might be significatif.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Jack… Do you need any additional info?",
"username": "Ananth_Nagiah"
},
{
"code": "",
"text": "@Ananth_Nagiah everthing @steevej said in his post is true: the code you posted does not make sense to either of us. Furthermore, I think Steeve is probably more expert than I am, so you should work with him and provide the information he requests.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I think Steeve is probably more expert than I amThanks for the compliment but I would not consider that I am. SeeI do not know python enoughAny information posted will be useful to all whom have time to help.Have a good day Jack!",
"username": "steevej"
},
{
"code": "",
"text": "We’ll work it out together if we get some good code and data from the OP!",
"username": "Jack_Woehr"
}
] | Unable to access large Mongodb collection | 2020-12-07T18:29:26.827Z | Unable to access large Mongodb collection | 2,118 |
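A sketch of the verification steevej is asking for, runnable in the mongo shell (the database and collection names are the ones mentioned in the thread; substitute the real ones). If any of these names differ from what the Python code uses, even by case, find_one() will quietly return None:

```javascript
// Which databases does this user actually see?
db.adminCommand({ listDatabases: 1 }).databases.map(function (d) { return d.name; });

// Within the suspect database: is it "Coll2" or "Collection2", and which casing?
db.getSiblingDB("dbname").getCollectionNames();

// If the spelling matches, the collection should report its documents:
db.getSiblingDB("dbname").getCollection("Coll2").countDocuments({});
```

On the Python side, bracket access (db["Coll2"]) also sidesteps the attribute-access pitfall hinted at by the space in dbname. Collection2: a mistyped attribute name silently refers to a brand-new, empty collection.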
null | [
"dot-net"
] | [
{
"code": "",
"text": "I am trying to add nuget package for Realm to a project created in .net framework 4.7.2 and I have problem with the reference to realm-wrappers. It was added when I installed Realm package but I think something is wrong with it because its paths contains osx (I am working on Windows machine). The reference below has the same problem.\nimage1920×1048 223 KBI tried to add references to windows dll manually but I am getting this error:\nPlease advise.",
"username": "Marcin_Zarebski"
},
{
"code": "",
"text": "The .dylib being referenced is a visualization bug in Visual Studio - it won’t be bundled/used in your compiled executable. You don’t need to add the references manually as they should be automatically resolved and added by NuGet. Barring the visualization issue showing a warning about the macOS .dylib, are you seeing compilation issues with your project?",
"username": "nirinchev"
},
{
"code": "",
"text": "There are no compilation errors, but exception is thrown at runtime.\nimage1920×1080 293 KB",
"username": "Marcin_Zarebski"
},
{
"code": "",
"text": "Can you try adding a reference to Realm in your WebApplication project if you haven’t already?",
"username": "nirinchev"
},
{
"code": "",
"text": "I tried that and I have the same error.",
"username": "Marcin_Zarebski"
},
{
"code": "",
"text": "Hm… sounds like either a Nuget/MSBuild cache issue - can you try clearing the obj/bin folders and rebuilding. If the problem persists, can you share your project so I can try and run it locally?",
"username": "nirinchev"
},
{
"code": "",
"text": "That didn’t help either. Here is link to github repository with this project.",
"username": "Marcin_Zarebski"
},
{
"code": "var binPath = @\"*path-to-your-solution*\\WebApplication11\\bin\";\nvar wrappersPath = Path.Combine(binPath, \"lib\", \"win32\", IntPtr.Size == 4 ? \"x86\" : \"x64\");\nvar path = wrappersPath + Path.PathSeparator + Environment.GetEnvironmentVariable(\"PATH\", EnvironmentVariableTarget.Process);\nEnvironment.SetEnvironmentVariable(\"PATH\", path, EnvironmentVariableTarget.Process);\n",
"text": "As far as I can tell, the issue is that when using IIS Express, it will copy the assembly files to a temporary location. When it does that, however, it doesn’t preserve the folder structure, so the trick we use to automatically add the realm-wrappers.dll to path doesn’t work. What you could do is to either use IIS and create an actual site that is hosted off of your project dir (i.e. not IIS express) or add the folder where realm-wrappers.dll is to the process PATH. The code would look something like:On a side note, the Realm package added to your WebApplication project is not the same version as the one added to your ClassLibrary - you should remove Realm.Database and Realm.DataBinding and add only Realm.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thank you for your help.",
"username": "Marcin_Zarebski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issue with adding realm to the project | 2020-12-07T18:29:35.525Z | Issue with adding realm to the project | 2,752 |
null | [
"node-js",
"app-services-user-auth",
"realm-web",
"react-js"
] | [
{
"code": "handleAuthRedirect();fetch(url, { method, headers, body: {...data, tokenId} });",
"text": "I’m using the Realm SDK Google Auth RedirectURI. I’m getting the user data, accessToken, and refreshToken. What I’m not getting is the tokenId so that I can validate the session with my backend API.I was using react-google-login with Mongo Anonymous login. I could use that tokenId to validate the session. I can’t seem to accomplish the same thing with Realm SDK redirect. I would have passed the AuthCode to Realm SDK, but that doesn’t work either based on this post.My frontend is ReactJS. The backend is NodeJS. Here’s what I’m trying to accomplish.Login Snippet:\nconst credentials = Realm.Credentials.google(redirectURI);\nconst user = await app.logIn(credentials);Redirect Snippet:\nhandleAuthRedirect();Fetch:\nfetch(url, { method, headers, body: {...data, tokenId} });API:\nconst client = new OAuth2Client(CLIENT_ID);\nconst ticket = await client.verifyIdToken({\nidToken: token,\naudience: CLIENT_ID,\n});",
"username": "Winston_Zhao"
},
{
"code": "idTokenRealm.Credentials.google",
"text": "First off - welcome to the forum \nand thanks for your patience on this.The access and refresh tokens that are available on a user instance after authentication are JWT which are scoped to MongoDB Realm and as such, we don’t provide the public key to validate their authenticity, making it impossible to pass these to your backend to verify them.One way to authenticate with Google OAuth 2.0 and get access to the idToken for your backend component is to use the Google Platform Library (or alternatively the React library react-use-googlelogin which provides a React hooks API around it) to sign in. Once authenticated you can use the getAuthResponse method on the User returned from the Google Playform Library (or simply the idToken property on the user in case you use the react-use-googlelogin package) to get the OpenID Connect ID Token.This token can be used when authenticating towards MongoDB Realm (via Realm.Credentials.google) as of Realm Web v1.1.0 which will be released shortly.I hope that we’ll be able to make a detailed example app or guide available soon, outlining the steps required to get Google OAuth via OpenID Connect setup correctly.Please let me know if I need to dive deeper into some part of my answer above \nHappy coding!",
"username": "kraenhansen"
},
{
"code": "import { Credentials } from \"realm-web\";\nconst credentials = Credentials.google(\"google-id-token-goes-here\");\n",
"text": "FYI: Realm Web v1.1.0 with support for passing in the OpenID Connect id token was just released.",
"username": "kraenhansen"
},
{
"code": "",
"text": "Awesome timing. Thanks!",
"username": "Winston_Zhao"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting the Token ID | 2020-12-04T23:15:43.479Z | Getting the Token ID | 6,973 |
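Putting kraenhansen's pointers together, a sketch of the full flow. Assumptions: the Google Platform Library is loaded and gapi.auth2 has been initialized with your client ID; app is an initialized Realm.App; url and the request shape are placeholders:

```javascript
// 1. Sign in with the Google Platform Library to obtain the OpenID Connect token.
const googleUser = await gapi.auth2.getAuthInstance().signIn();
const idToken = googleUser.getAuthResponse().id_token;

// 2. Authenticate towards MongoDB Realm with that token (Realm Web >= 1.1.0).
const user = await app.logIn(Realm.Credentials.google(idToken));

// 3. Send the same token to your own backend, which can verify it with
//    OAuth2Client.verifyIdToken as in the question's API snippet.
await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ tokenId: idToken }),
});
```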
null | [
"aggregation",
"performance"
] | [
{
"code": "db.ECRITURE.aggregate([\n { \"$match\": {\n \"A_ECR_01\": \"01\",\n\t \"A_ECR_02\": \"02\"\n } },\n { \"$group\": {\n \"_id\": {\"$concat\": [\"$A_ECR_01\", \"_\", \"$A_ECR_02\" , \"_\", \"$A_ECR_03\"]},\n \"A_CHN_ACH_011\":{\n \"$sum\":{\n \"$cond\":[\n {\n \"$in\":[\n \"$codRegroupement\",\n [\n \"DEB_SST\"\n ]\n ]\n },\n \"$A_CHN_ACH_011\",\n null\n ]\n }\n }\n } }\n])\n",
"text": "Hello all,I am working with mongodb 4.4 and i am faced with strange behavior.I run a this very simple aggregate in a collection that contains actually 30 millions of documents :On my collection i have a compound index on A_ECR_01, A_ECR_02, A_ECR_03. But this aggregate takes about 30s.But if i execute this same aggregate on a collection containing only the documents for “A_ECR_01”: “01” and “A_ECR_02”: “02” it takes 1s.Note : there is around 500K documents in the collections for “A_ECR_01”: “01” and “A_ECR_02”: “02”.It’s like the match stage doesn’t work well.Do you have an idea where to investigate ?Best regards.",
"username": "steph_xp"
},
{
"code": "\"_id\": {A: \"$A_ECR_01\", B: \"$A_ECR_02\" ,C: \"$A_ECR_03\"},\n \"A_CHN_ACH_011\":{\n",
"text": "Hi @steph_xp,Welcome to MongoDB community!First of all have you tried using just the index with the 2 exact fields for the match stage on this collection? If that yield better results why not to use it.I suspect this is since $group stage has concatenation in its grouping expression so the engine cannot use the index and needs to access each document to perform the grouping.Why not to group based on the 3 values and add the concatenation in a next project stage?If that won’t help, please run explain(“executionStats”). aggregate and show output.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Aggregate slow in a huge collection | 2020-12-08T16:56:51.097Z | Aggregate slow in a huge collection | 3,580 |
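A sketch of Pavel's restructuring: group on the three raw fields and concatenate afterwards, so the grouping expression no longer contains $concat; the hint option assumes the compound index named in the question and is optional:

```javascript
db.ECRITURE.aggregate([
  { "$match": { "A_ECR_01": "01", "A_ECR_02": "02" } },
  { "$group": {
      "_id": { "a": "$A_ECR_01", "b": "$A_ECR_02", "c": "$A_ECR_03" },
      "A_CHN_ACH_011": {
        "$sum": {
          "$cond": [
            { "$in": ["$codRegroupement", ["DEB_SST"]] },
            "$A_CHN_ACH_011",
            null
          ]
        }
      }
  } },
  // Concatenation happens once per group, after the heavy lifting.
  { "$project": {
      "_id": { "$concat": ["$_id.a", "_", "$_id.b", "_", "$_id.c"] },
      "A_CHN_ACH_011": 1
  } }
], { "hint": { "A_ECR_01": 1, "A_ECR_02": 1, "A_ECR_03": 1 } })
```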
null | [
"change-streams"
] | [
{
"code": "",
"text": "Change stream send a new query to Mongo server for full document when set fullDocument:updateLookup?\nHow does it affect performance?",
"username": "YLY_SW"
},
{
"code": "",
"text": "Hi @YLY_SW,T\nWelcome to MongoDB community!The exact impact is hard to predict and it requires performance testing with ur data .However, there is obviously an overhead to the operation as the full document must be lookedup as not all data is present in oplog for itBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do change streams with fullDocument=updateLookup works? | 2020-12-09T00:34:48.202Z | How do change streams with fullDocument=updateLookup works? | 1,666 |
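For illustration, a sketch of the two modes with the Node.js driver; the collection name is made up and db is assumed to be a connected Db handle:

```javascript
// Default mode: update events carry only the delta (updateDescription),
// which the server can build from the oplog entry alone.
const plain = db.collection("orders").watch();

// With updateLookup, the server performs an additional lookup per update
// event to fetch the current full document - that lookup is the overhead
// Pavel refers to, and its cost depends on your data and workload.
const withFull = db.collection("orders").watch([], {
  fullDocument: "updateLookup",
});

withFull.on("change", (change) => {
  console.log(change.operationType, change.fullDocument);
});
```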
null | [
"queries"
] | [
{
"code": "",
"text": "Basically, I have a collection where documents have nested objects, and I want to query documents which have certain fields, and certain subdocuments, by using an array of fields.",
"username": "Project_PlaDat"
},
{
"code": "",
"text": "Hi @Project_PlaDat,Welcome to MongoDB community!I believe you could use the $all or $any operators with $elemMatch:Please let me know if that helps\nThanks\nPavel",
"username": "Pavel_Duchovny"
}
] | How to query documents with nested subdocuments by filtering them against an array of fields | 2020-12-09T02:30:32.304Z | How to query documents with nested subdocuments by filtering them against an array of fields | 3,508 |
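A sketch of the pattern behind those links; the collection, field, and criteria below are invented for illustration. Each $elemMatch describes one required subdocument, and $all requires every one of them to be matched by some array element:

```javascript
// Documents must contain BOTH a {name: "color", value: "red"} and a
// {name: "size", value: "L"} subdocument in their "attributes" array.
db.products.find({
  attributes: {
    $all: [
      { $elemMatch: { name: "color", value: "red" } },
      { $elemMatch: { name: "size", value: "L" } }
    ]
  }
})

// For a single required subdocument, $elemMatch alone is enough,
// optionally with $in to allow several acceptable values:
db.products.find({
  attributes: { $elemMatch: { name: "color", value: { $in: ["red", "blue"] } } }
})
```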
null | [
"queries",
"node-js",
"performance"
] | [
{
"code": " let query = {\n date: {\n $gte: from,\n },\n game_id: gameId,\n };\n\n let documents = await gameStreams.find(query).toArray();\n**local:**: time_taken 4003.124ms\n**production**: time_taken 71187.316ms\n{\n \"ns\" : \"gaming.gamestreams\",\n \"size\" : 4138284702.0,\n \"count\" : 14011415,\n \"avgObjSize\" : 295,\n \"storageSize\" : 900091904,\n \"freeStorageSize\" : 671744,\n \"capped\" : false,\n \"wiredTiger\" : {\n \"metadata\" : {\n \"formatVersion\" : 1\n },\n \"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",\n \"type\" : \"file\",\n \"uri\" : \"statistics:table:collection-0--7606124955649373726\",\n \"LSM\" : {\n \"bloom filter false positives\" : 0,\n \"bloom filter hits\" : 0,\n \"bloom filter misses\" : 0,\n \"bloom filter pages evicted from cache\" : 0,\n \"bloom filter pages read into cache\" : 0,\n \"bloom filters in the LSM tree\" : 0,\n \"chunks in the LSM tree\" : 0,\n \"highest merge generation in the LSM tree\" : 0,\n \"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n \"sleep for LSM checkpoint throttle\" : 0,\n \"sleep for LSM merge throttle\" : 0,\n \"total size of bloom filters\" : 0\n },\n \"block-manager\" : {\n \"allocations requiring file extension\" : 51772,\n \"blocks allocated\" : 58597,\n \"blocks freed\" : 7000,\n \"checkpoint size\" : 899403776,\n \"file allocation unit size\" : 4096,\n \"file bytes available for reuse\" : 671744,\n \"file magic number\" : 120897,\n \"file major version number\" : 1,\n \"file size in bytes\" : 900091904,\n \"minor version number\" : 0\n },\n \"btree\" : {\n \"btree checkpoint generation\" : 5456,\n \"btree clean tree checkpoint expiration time\" : NumberLong(9223372036854775807),\n \"column-store fixed-size leaf pages\" : 0,\n \"column-store internal pages\" : 0,\n \"column-store variable-size RLE encoded values\" : 0,\n \"column-store variable-size deleted values\" : 0,\n \"column-store variable-size leaf pages\" : 0,\n \"fixed-record size\" : 0,\n \"maximum internal page key size\" : 368,\n \"maximum internal page size\" : 4096,\n \"maximum leaf page key size\" : 2867,\n \"maximum leaf page size\" : 32768,\n \"maximum leaf page value size\" : 67108864,\n \"maximum tree depth\" : 4,\n \"number of key/value pairs\" : 0,\n \"overflow pages\" : 0,\n \"pages rewritten by compaction\" : 0,\n \"row-store empty values\" : 0,\n \"row-store internal pages\" : 0,\n \"row-store leaf pages\" : 0\n },\n \"cache\" : {\n \"bytes currently in the cache\" : 
2587594533.0,\n \"bytes dirty in the cache cumulative\" : 1180911380.0,\n \"bytes read into cache\" : 11221545998.0,\n \"bytes written from cache\" : 4664026027.0,\n \"checkpoint blocked page eviction\" : 0,\n \"data source pages selected for eviction unable to be evicted\" : 339,\n \"eviction walk passes of a file\" : 13253,\n \"eviction walk target pages histogram - 0-9\" : 3513,\n \"eviction walk target pages histogram - 10-31\" : 4855,\n \"eviction walk target pages histogram - 128 and higher\" : 0,\n \"eviction walk target pages histogram - 32-63\" : 2504,\n \"eviction walk target pages histogram - 64-128\" : 2381,\n \"eviction walks abandoned\" : 539,\n \"eviction walks gave up because they restarted their walk twice\" : 145,\n \"eviction walks gave up because they saw too many pages and found no candidates\" : 4738,\n \"eviction walks gave up because they saw too many pages and found too few candidates\" : 2980,\n \"eviction walks reached end of tree\" : 7126,\n \"eviction walks started from root of tree\" : 8404,\n \"eviction walks started from saved location in tree\" : 4849,\n \"hazard pointer blocked page eviction\" : 14,\n \"history store table reads\" : 0,\n \"in-memory page passed criteria to be split\" : 669,\n \"in-memory page splits\" : 338,\n \"internal pages evicted\" : 383,\n \"internal pages split during eviction\" : 5,\n \"leaf pages split during eviction\" : 1012,\n \"modified pages evicted\" : 1398,\n \"overflow pages read into cache\" : 0,\n \"page split during eviction deepened the tree\" : 1,\n \"page written requiring history store records\" : 1180,\n \"pages read into cache\" : 125966,\n \"pages read into cache after truncate\" : 1,\n \"pages read into cache after truncate in prepare state\" : 0,\n \"pages requested from the cache\" : 33299436,\n \"pages seen by eviction walk\" : 14244088,\n \"pages written from cache\" : 57731,\n \"pages written requiring in-memory restoration\" : 45,\n \"tracked dirty bytes in the cache\" : 0,\n \"unmodified pages evicted\" : 147454\n },\n \"cache_walk\" : {\n \"Average difference between current eviction generation when the page was last considered\" : 0,\n \"Average on-disk page image size seen\" : 0,\n \"Average time in cache for pages that have been visited by the eviction server\" : 0,\n \"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n \"Clean pages currently in cache\" : 0,\n \"Current eviction generation\" : 0,\n \"Dirty pages currently in cache\" : 0,\n \"Entries in the root page\" : 0,\n \"Internal pages currently in cache\" : 0,\n \"Leaf pages currently in cache\" : 0,\n \"Maximum difference between current eviction generation when the page was last considered\" : 0,\n \"Maximum page size seen\" : 0,\n \"Minimum on-disk page image size seen\" : 0,\n \"Number of pages never visited by eviction server\" : 0,\n \"On-disk page image sizes smaller than a single allocation unit\" : 0,\n \"Pages created in memory and never written\" : 0,\n \"Pages currently queued for eviction\" : 0,\n \"Pages that could not be queued for eviction\" : 0,\n \"Refs skipped during cache traversal\" : 0,\n \"Size of the root page\" : 0,\n \"Total number of pages currently in cache\" : 0\n },\n \"checkpoint-cleanup\" : {\n \"pages added for eviction\" : 3,\n \"pages removed\" : 0,\n \"pages skipped during tree walk\" : 1992915,\n \"pages visited\" : 4892576\n },\n \"compression\" : {\n \"compressed page maximum internal page size prior to compression\" : 4096,\n \"compressed page maximum leaf page 
size prior to compression \" : 131072,\n \"compressed pages read\" : 125583,\n \"compressed pages written\" : 55619,\n \"page written failed to compress\" : 0,\n \"page written was too small to compress\" : 2112\n },\n \"cursor\" : {\n \"Total number of entries skipped by cursor next calls\" : 0,\n \"Total number of entries skipped by cursor prev calls\" : 0,\n \"Total number of entries skipped to position the history store cursor\" : 0,\n \"bulk loaded cursor insert calls\" : 0,\n \"cache cursors reuse count\" : 72908,\n \"close calls that result in cache\" : 0,\n \"create calls\" : 305,\n \"cursor next calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor next calls that skip less than 100 entries\" : 14756445,\n \"cursor prev calls that skip greater than or equal to 100 entries\" : 0,\n \"cursor prev calls that skip less than 100 entries\" : 1,\n \"insert calls\" : 14072870,\n \"insert key and value bytes\" : 4212628410.0,\n \"modify\" : 0,\n \"modify key and value bytes affected\" : 0,\n \"modify value bytes modified\" : 0,\n \"next calls\" : 14756445,\n \"open cursor count\" : 0,\n \"operation restarted\" : 109,\n \"prev calls\" : 1,\n \"remove calls\" : 0,\n \"remove key bytes removed\" : 0,\n \"reserve calls\" : 0,\n \"reset calls\" : 224078,\n \"search calls\" : 13128942,\n \"search history store calls\" : 0,\n \"search near calls\" : 14772,\n \"truncate calls\" : 0,\n \"update calls\" : 0,\n \"update key and value bytes\" : 0,\n \"update value size change\" : 0\n },\n \"reconciliation\" : {\n \"approximate byte size of timestamps in pages written\" : 84197280,\n \"approximate byte size of transaction IDs in pages written\" : 77480,\n \"dictionary matches\" : 0,\n \"fast-path pages deleted\" : 0,\n \"internal page key bytes discarded using suffix compression\" : 91023,\n \"internal page multi-block writes\" : 747,\n \"internal-page overflow keys\" : 0,\n \"leaf page key bytes discarded using prefix compression\" : 0,\n \"leaf page multi-block writes\" : 1368,\n \"leaf-page overflow keys\" : 0,\n \"maximum blocks required for a page\" : 1,\n \"overflow values written\" : 0,\n \"page checksum matches\" : 23657,\n \"page reconciliation calls\" : 3073,\n \"page reconciliation calls for eviction\" : 999,\n \"pages deleted\" : 3,\n \"pages written including an aggregated newest start durable timestamp \" : 1173,\n \"pages written including an aggregated newest stop durable timestamp \" : 0,\n \"pages written including an aggregated newest stop timestamp \" : 0,\n \"pages written including an aggregated newest stop transaction ID\" : 0,\n \"pages written including an aggregated oldest start timestamp \" : 1065,\n \"pages written including an aggregated oldest start transaction ID \" : 19,\n \"pages written including an aggregated prepare\" : 0,\n \"pages written including at least one prepare\" : 0,\n \"pages written including at least one start durable timestamp\" : 31568,\n \"pages written including at least one start timestamp\" : 31568,\n \"pages written including at least one start transaction ID\" : 63,\n \"pages written including at least one stop durable timestamp\" : 0,\n \"pages written including at least one stop timestamp\" : 0,\n \"pages written including at least one stop transaction ID\" : 0,\n \"records written including a prepare\" : 0,\n \"records written including a start durable timestamp\" : 5262330,\n \"records written including a start timestamp\" : 5262330,\n \"records written including a start transaction ID\" : 9685,\n \"records written including 
a stop durable timestamp\" : 0,\n \"records written including a stop timestamp\" : 0,\n \"records written including a stop transaction ID\" : 0\n },\n \"session\" : {\n \"object compaction\" : 0\n },\n \"transaction\" : {\n \"race to read prepared update retry\" : 0,\n \"update conflicts\" : 0\n }\n },\n \"nindexes\" : 8,\n \"indexBuilds\" : [],\n \"totalIndexSize\" : 2627395584.0,\n \"totalSize\" : 3527487488.0,\n \"indexSizes\" : {\n \"_id_\" : 270036992,\n \"service_id_1\" : 303837184,\n \"service_type_1\" : 249204736,\n \"name_1\" : 245424128,\n \"date_1\" : 226025472,\n \"game_id_1\" : 247881728,\n \"service_id_1_service_type_1_name_1_date_1_game_id_1\" : 996036608,\n \"game_id_1_date_1\" : 88948736\n },\n \"scaleFactor\" : 1,\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1607345809, 1),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"CP8MT7YsL4t+raZn62AZhZdmTFg=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(6855964983600087045)\n }\n },\n \"operationTime\" : Timestamp(1607345809, 1)\n}\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"gaming.gamestreams\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"game_id\" : {\n \"$eq\" : \"24024\"\n }\n }, \n {\n \"date\" : {\n \"$gte\" : ISODate(\"2019-07-03T23:59:59.999Z\")\n }\n }\n ]\n },\n \"queryHash\" : \"FCE2088F\",\n \"planCacheKey\" : \"A564EAFA\",\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"game_id\" : 1.0,\n \"date\" : 1.0\n },\n \"indexName\" : \"game_id_1_date_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"game_id\" : [],\n \"date\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"game_id\" : [ \n \"[\\\"24024\\\", \\\"24024\\\"]\"\n ],\n \"date\" : [ \n \"[new Date(1562198399999), new Date(9223372036854775807)]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"game_id\" : {\n \"$eq\" : \"24024\"\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"date\" : 1\n },\n \"indexName\" : \"date_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"date\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"date\" : [ \n \"[new Date(1562198399999), new Date(9223372036854775807)]\"\n ]\n }\n }\n }, \n {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"date\" : {\n \"$gte\" : ISODate(\"2019-07-03T23:59:59.999Z\")\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"game_id\" : 1\n },\n \"indexName\" : \"game_id_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"game_id\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"game_id\" : [ \n \"[\\\"24024\\\", \\\"24024\\\"]\"\n ]\n }\n }\n }\n ]\n },\n \"serverInfo\" : {\n \"host\" : \"mongo-prod-mongodb-0\",\n \"port\" : 27017,\n \"version\" : \"4.4.1\",\n \"gitVersion\" : \"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1\"\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1607346119, 1),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"XMuLYE5NVhHktX9wZ79LteksXFs=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(6855964983600087045)\n }\n },\n \"operationTime\" : 
Timestamp(1607346119, 1)\n}\n",
"text": "I have a local mongo running on my machine and a mongodb running on an ec2 container using m5.large with ebs storage.I’m aware that there will always be some difference (networking) etc when it comes to making a request locally vs making a request externally in the cloud to a production mongo.However im finding with this particularly trivial query (assuming) its taking much longer than i expected.node.js mongo queryI am using node.js to make a query to mongodb. This code is ran locally against local (my laptop) and production (ec2) version of mongoLocally it takes around 4-9 seconds but when making the same query to production its taking 70-80 seconds. Why would there be such a huge difference here?Note: i am using toArray() and the number of documents returned is 300,000.db.getCollection(‘gamestreams’).stats()db.getCollection(‘gamestreams’).find({ date: { $gte: new Date(‘2019-07-03T23:59:59.999Z’) }, game_id: ‘24024’ }).explain()Stats above are taken from production mongo, if there is any other information you need me to provide please let me know.Note: using the cursor with a batchSize(10000) helps a little, reducing time taken on production to around 40 seconds But it still does not seem quite right compared to performance i get locally.",
"username": "Kay_Khan"
},
{
"code": "",
"text": "Hi @Kay_Khan,Looking at the explain plan the production uses a wrong index: game_id_1 while I assume local deployment uses probably index game_id_1_date_1.It might as the query cache was choosen earlier and is now in cache. You can clear the cache planIf this won’t help you can hint the query with the compound index.As a last resort you can drop the index of game_id as the compound should cover it purpose as well.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "winningPlan\n\"indexName\" : \"game_id_1_date_1\",\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"gaming.gamestreams\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"game_id\" : {\n \"$eq\" : \"24024\"\n }\n }, \n {\n \"date\" : {\n \"$gte\" : ISODate(\"2019-07-03T23:59:59.999Z\")\n }\n }\n ]\n },\n \"queryHash\" : \"FCE2088F\",\n \"planCacheKey\" : \"FB589916\",\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"game_id\" : {\n \"$eq\" : \"24024\"\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"date\" : 1.0,\n \"game\" : 1.0\n },\n \"indexName\" : \"date_1_game_id_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"date\" : [],\n \"game\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"date\" : [ \n \"[new Date(1562198399999), new Date(9223372036854775807)]\"\n ],\n \"game\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : []\n },\n \"serverInfo\" : {\n \"host\" : \"mongo-prod-mongodb-0\",\n \"port\" : 27017,\n \"version\" : \"4.4.1\",\n \"gitVersion\" : \"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1\"\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1607426019, 1),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"Xs58Gf3OJwjuknNIKorR4xV+dlM=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(6855964983600087045)\n }\n },\n \"operationTime\" : Timestamp(1607426019, 1)\n}\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"gaming.gamestreams\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"game_id\" : {\n \"$eq\" : \"24024\"\n }\n }, \n {\n \"date\" : {\n \"$gte\" : ISODate(\"2019-07-03T23:59:59.999Z\")\n }\n }\n ]\n },\n \"queryHash\" : \"FCE2088F\",\n \"planCacheKey\" : \"C23B7BA1\",\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"date\" : 1.0,\n \"game_id\" : 1.0\n },\n \"indexName\" : \"date_1_game_id_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"date\" : [],\n \"game_id\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"date\" : [ \n \"[new Date(1562198399999), new Date(9223372036854775807)]\"\n ],\n \"game_id\" : [ \n \"[\\\"24024\\\", \\\"24024\\\"]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : []\n },\n \"serverInfo\" : {\n \"host\" : \"cd303a4213d7\",\n \"port\" : 27017,\n \"version\" : \"4.4.2\",\n \"gitVersion\" : \"15e73dc5738d2278b688f8929aee605fe4279b0e\"\n },\n \"ok\" : 1.0,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1607426021, 1),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"XsjWMpxunfKZsh06lJtn9/XFr0A=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(6896761722598588420)\n }\n },\n \"operationTime\" : Timestamp(1607426021, 1)\n}\n\n",
"text": "Where are you seeing that it used the wrong index? In the explain i gave above which is indeed production. It showsPlanCache.clear() did not help here, even droppin some of the other indecies didint help.db.getCollection(‘gamestreams’).find({ date: { $gte: new Date(‘2019-07-03T23:59:59.999Z’) }, game_id: ‘24024’ }).explain()prodlocal",
"username": "Kay_Khan"
},
{
"code": "{date : 1, game : 1}",
"text": "Hi @Kay_KhanI might overlooked it.But now I see that the prod shows a different index {date : 1, game : 1} which is not optimal.Where did that index came from?In general to rule out network try ssh to the ec2 host and run the query there and see the time difference.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{date: 1, game_id: 1}",
"text": "Sorry that was a typo on my part, both local and prod are {date: 1, game_id: 1}The query is being made from am api not on the ec2 host, the api lives on another ec2 node",
"username": "Kay_Khan"
},
{
"code": "",
"text": "Hi @Kay_Khan,The optimal order of index fields Equility Sort and finally Range is a good thumb rule.In your case optimal index is { game_id : 1 , date : 1}However, if we want to compare the 2 runs we need to view at explain (“executionStats”) of both query. Additionally to eliminate network factors run the query locally in each host.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | How is the mongodb query performance so much slower on production instance vs local instance | 2020-12-07T14:00:21.269Z | How is the mongodb query performance so much slower on production instance vs local instance | 22,328 |
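A sketch combining Pavel's final suggestions: force the Equality-then-Range compound index and capture executionStats, running the same command on each deployment (ideally from a shell on the host itself) so the network is out of the picture:

```javascript
db.getCollection("gamestreams").find({
  game_id: "24024",
  date: { $gte: new Date("2019-07-03T23:59:59.999Z") }
})
  .hint({ game_id: 1, date: 1 })   // equality field first, then the range field
  .explain("executionStats")       // compare totalKeysExamined, totalDocsExamined
                                   // and executionTimeMillis between the two hosts
```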
null | [
"golang"
] | [
{
"code": "type MyStruct struct {\n\tID primitive.ObjectID `bson:\"_id\"`\n\tEnds time.Time `bson:\"ends\"`\n}\nfunc (st *MyStruct) SetBSON(raw bson.Raw) error {\n\tdecoded := new(struct {\n\t\tID primitive.ObjectID `bson:\"_id\"`\n\t\tEnds string `bson:\"ends\"`\n\t})\n\n\tbsonErr := raw.Unmarshal(decoded)\n\tif bsonErr != nil {\n\t\treturn bsonErr\n\t}\n\n\tconst dateFormat = \"2006-01-02\"\n\t\n\tend, endErr := time.ParseInLocation(dateFormat, decoded.Ends, constants.NYCLoc)\n\tif endErr != nil {\n\t\treturn endErr\n\t}\n\n\tst.ID = decoded.ID\n\tst.Ends = time.Date(ey, em, ed, 23, 59, 0, 0, constants.NYCLoc)\n\treturn nil\n}\n",
"text": "I’m migration our app from mgo to the official mongo go driver, but I’m stuck with the mgo SetBSON migration, this is my old code and I don’t know how to do this using the new driver:The core is, we have a struct that in mongo stores the “end” field as a string that looks like “20200402”. When the driver reads the bson from mongo, I need to decode the struct and convert the string value from the field “end” into a time.Time Go type.I hope I can avoid having to create a new custom “field” type for the end mongo field, because the app assumes a time.Time all over.Thanks",
"username": "Diego_Medina"
},
{
"code": "UnmarshalBSONUnmarshalJSONencoding/jsonGetBSONSetBSON",
"text": "Hi @Diego_Medina,You can do this using the UnmarshalBSON hook for the driver. This is similar to the UnmarshalJSON hook in the encoding/json library. I wrote up an example of this in Go Playground - The Go Programming Language. It’s worth noting that version 1.3.0 of the driver introduced a custom BSON registry to mimic parts of mgo BSON behavior to mitigate BSON issues for users when migrating to the official driver. This registry supports GetBSON/SetBSON. You can see documentation for that at mgocompat package - go.mongodb.org/mongo-driver/bson/mgocompat - Go Packages.",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "Thanks you! your example was just what I needed.",
"username": "Diego_Medina"
},
{
"code": "",
"text": "I was trying to understand why not use the bsoncodec package - go.mongodb.org/mongo-driver/bson/bsoncodec - Go Packages.Is this what you mean byIt’s worth noting that version 1.3.0 of the driver introduced a custom BSON registry",
"username": "Yehuda_Makarov"
},
{
"code": "bson.Unmarshalermgocompatmgocompat.RegistryGetBSONSetBSONSetterSetBSONClientopts := options.Client().SetRegistry(mgocompat.Registry)\nclient, err := mongo.Connect(ctx, opts)\n// handle err, defer Disconnect(), etc\n",
"text": "@Yehuda_Makarov You can use the bsoncodec package and write a custom codec for this use case, but implementing the bson.Unmarshaler interface works just as well.Version 1.3.0 of the driver introduced a new mgocompat package that provides a custom BSON registry (mgocompat.Registry) to mimic some of mgo’s BSON behavior. This includes adding support for GetBSON and SetBSON methods. You can see more details about it at mgocompat package - go.mongodb.org/mongo-driver/bson/mgocompat - Go Packages. Search for Setter on that page to see how the SetBSON method should be implemented. If you choose to use this registry, you can enable it for a Client as follows:Feel free to add another comment here or open a new question on these forums if you have any follow-up questions about the new registry.– Divjot",
"username": "Divjot_Arora"
}
] | Mgo SetBSON to mongo golang driver | 2020-04-03T20:02:57.923Z | Mgo SetBSON to mongo golang driver | 5,242 |
null | [
"replication",
"configuration"
] | [
{
"code": "rs0:PRIMARY> rs.reconfig(cfg)\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"This node, db1:27017, with _id 9 is not electable under the new configuration version 76695 for replica set rs0\",\n\t\"code\" : 103\n\t\t {\n \t\t\t\"_id\" : 3,\n \t\t\t\"host\" : \"db3:27017\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t},\n \t\t{\n \t\t\t\"_id\" : 4,\n \t\t\t\"host\" : \"db4:27017\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t},\n \t\t{\n \t\t\t\"_id\" : 5,\n \t\t\t\"host\" : \"db-arb1:27017\",\n \t\t\t\"arbiterOnly\" : true,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t},\n \t\t{\n \t\t\t\"_id\" : 8,\n \t\t\t\"host\" : \"db2:27017\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 2,\n \t\t\t\"tags\" : {\n\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t},\n \t\t{\n \t\t\t\"_id\" : 9,\n \t\t\t\"host\" : \"db1:27017\",\n \t\t\t\"arbiterOnly\" : false,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 2,\n \t\t\t\"tags\" : {\n\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t}\n",
"text": "I have the following replSet config as below but when I attempted to reconfigure the replSet to set the secondary site nodes to priority 0, I received the following errors:My rs.conf()",
"username": "oldcomputer"
},
{
"code": "",
"text": "Hello Carlos Mennens, welcome to the MongoDB Community forum.“This node, db1:27017, with _id 9 is not electable under the new configuration version 76695 for replica set rs0”Note that:I think an election was triggered when you changed the configuration and the message is confirming that the node db1:27017, with _id 9 is not eligible for election under the new configuration.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I’m still very confused.I have x5 nodes. I simply am trying to make x2 of the nodes in the config p0 which are currently p1 shown from my rs.conf() above.The two nodes I am intending to make p0 are currently not primary.\nThe node I’m connected to attempting to reconfigure replication is _id 9 PRIMARY which is currently p2.\nIf I reconfigure two p1 secondary nodes to just be p0 secondary nodes and those are the only changes made to the rs.conf, why when I reconfigure rs.reconfig from _id 9 which is PRIMARY p2 and has not changed at all and has remained unchanged in the reconfigure tasks…does it produce an error about _id 9 when nothing has warranted this node to change. I don’t care if the other p2 node in the current config is elected as I’m only changing the priority on two secondary’s not currently elected and with a p1.What am I not understanding? The error references a node I’ve not made any changes to nor do I expect it to need to change as the elected primary.If _id 9 is no longer electable with the new config, why??? I only changed the priority on _id 3 & _id 4 ONLY!!! So if a reconfigure is issued with only those two changes, why is a current primary with a p1 no longer eligible to be primary under the new config which made no changes to _id 9 at all.Very confused…",
"username": "oldcomputer"
},
{
"code": "rs.conf() \t\t{\n \t\t\t\"_id\" : 5,\n \t\t\t\"host\" : \"db-arb1:27017\",\n \t\t\t\"arbiterOnly\" : true,\n \t\t\t\"buildIndexes\" : true,\n \t\t\t\"hidden\" : false,\n \t\t\t\"priority\" : 1,\n \t\t\t\"tags\" : {\n\n \t\t\t},\n \t\t\t\"slaveDelay\" : NumberLong(0),\n \t\t\t\"votes\" : 1\n \t\t},\ncfgdb.version()rs.status()",
"text": "Hi @oldcomputerSkimming the output of your rs.conf(), I found this:So apparently you have an electable arbiter, which should not happen if your MongoDB is version 3.6 or newer. Since MongoDB 3.6 is the oldest supported version, I would encourage you to consider upgrading as well.With regard to your issue in reconfig, could you please post more information:Best regards,\nKevin",
"username": "kevinadi"
}
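A short sketch of how the requested diagnostics can be gathered in the mongo shell (all standard helpers; the fields printed come straight from rs.status()):

```javascript
db.version()
cfg = rs.conf() // the exact document passed to rs.reconfig()
rs.status().members.forEach(function (m) {
  printjson({ _id: m._id, host: m.name, state: m.stateStr })
})
```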
] | Setting replSet to P0 Failed w/ Errmsg | 2020-12-04T20:00:57.592Z | Setting replSet to P0 Failed w/ Errmsg | 2,944 |
null | [
"cxx"
] | [
{
"code": " c++ --std=c++11 tcpserverV2.cpp -o tcpserverV2 -I/usr/local/include/mongocxx/v_noabi \\\n-I/usr/local/include/libmongoc-1.0 -I/usr/local/include/bsoncxx/v_noabi \\\n-I/usr/local/include/libbson-1.0 -L/usr/local/lib -lmongocxx -lbsoncxx\n\n\n./tcpserverV2: error while loading shared libraries: libmongocxx.so._noabi: cannot open shared object file: No such file or directory\n",
"text": "I have compiled my code using the command and everything seems to be fine:, but when i try to run ./tcpserverV2 I am greeted with this message:It says no such file or directory but It shows that it was downloaded and installed properly when I installed the c++ driver. I’m not sure if there is something I am leaving out when im compiling or if there is some sort of simple workaround to this. Any help or guidance is greately appreciated!",
"username": "Luke_Colias"
},
{
"code": "export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH\n./tcpserverV2\n",
"text": "I’m assuming here you’re on Linux.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Yeah I’m running this on centOS 8, and I’m not really good with it yet. I tried running that command but its still giving me the same error, and I dont know really know if thats what I was supposed to do or not.",
"username": "Luke_Colias"
},
{
"code": "LD_LIBRARY_PATHldd ./tcpserverV2",
"text": "Using LD_LIBRARY_PATH is not usually the correct answer. You will know if it is and if you aren’t sure, then it isn’t. That said, I will need to see the output of ldd ./tcpserverV2 and the output of the installation command you executed when you built the C++ driver.",
"username": "Roberto_Sanchez"
},
{
"code": "[root@instance-1 ~]# ldd ./tcpserverV2\n linux-vdso.so.1 (0x00007ffd1fbdc000)\n libmongocxx.so._noabi => not found\n libbsoncxx.so._noabi => not found\n libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f5da0a31000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f5da06af000)\n libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f5da0497000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f5da00d5000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f5da0dc6000)\n[root@instance-1 build]# sudo cmake --build . --target install\n[ 2%] Built target EP_mnmlstc_core\n[ 7%] Built target bsoncxx_testing\n[ 12%] Built target bsoncxx_shared\n[ 15%] Built target test_bson\n[ 41%] Built target mongocxx_mocked\n[ 65%] Built target mongocxx_shared\n[ 67%] Built target test_change_stream_specs\n[ 69%] Built target test_client_side_encryption_specs\n[ 70%] Built target test_crud_specs\n[ 87%] Built target test_driver\n[ 87%] Built target test_instance\n[ 90%] Built target test_gridfs_specs\n[ 92%] Built target test_command_monitoring_specs\n[ 93%] Built target test_logging\n[ 96%] Built target test_transactions_specs\n[ 97%] Built target test_retryable_reads_specs\n[100%] Built target test_read_write_concern_specs\nInstall the project...\n-- Install configuration: \"Release\"\n-- Up-to-date: /usr/local/share/mongo-cxx-driver/LICENSE\n-- Up-to-date: /usr/local/share/mongo-cxx-driver/README.md\n-- Up-to-date: /usr/local/share/mongo-cxx-driver/THIRD-PARTY-NOTICES\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx\n-- Installing: /usr/local/include/bsoncxx/v_noabi/bsoncxx/private\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/oid.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/view_or_value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/value_context.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/document.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/closed_context.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/key_context.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/helpers.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/array.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/array_context.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/stream/single_context.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic/sub_document.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic/document.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic/sub_array.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic/kvp.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic/helpers.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic/array.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/basic/impl.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/core.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/builder/concatenate.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/config\n-- Installing: /usr/local/include/bsoncxx/v_noabi/bsoncxx/config/private\n-- Up-to-date: 
/usr/local/include/bsoncxx/v_noabi/bsoncxx/config/compiler.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/config/postlude.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/config/prelude.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/validate.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/decimal128.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/array\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/array/view_or_value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/array/view.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/array/value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/array/element.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/string\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/string/view_or_value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/string/to_string.hpp\n-- Installing: /usr/local/include/bsoncxx/v_noabi/bsoncxx/test_util\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/third_party\n-- Installing: /usr/local/include/bsoncxx/v_noabi/bsoncxx/test\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/util\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/util/functor.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/document\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/document/view_or_value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/document/view.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/document/value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/document/element.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/enums\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/enums/binary_sub_type.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/enums/type.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/exception\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/exception/error_code.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/exception/exception.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types\n-- Installing: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types/private\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types/bson_value\n-- Installing: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types/bson_value/private\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types/bson_value/make_value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types/bson_value/view_or_value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types/bson_value/view.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/types/bson_value/value.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/json.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/stdx\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/stdx/make_unique.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/stdx/string_view.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/stdx/optional.hpp\n-- Installing: /usr/local/include/bsoncxx/v_noabi/bsoncxx/cmake\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/config/export.hpp\n-- Up-to-date: /usr/local/lib64/cmake/libbsoncxx-3.6.2/libbsoncxx-config.cmake\n-- Up-to-date: 
/usr/local/lib64/cmake/libbsoncxx-3.6.2/libbsoncxx-config-version.cmake\n-- Up-to-date: /usr/local/lib64/libbsoncxx.so.3.6.2\n-- Up-to-date: /usr/local/lib64/libbsoncxx.so._noabi\n-- Up-to-date: /usr/local/lib64/libbsoncxx.so\n-- Up-to-date: /usr/local/lib64/cmake/bsoncxx-3.6.2/bsoncxx_targets.cmake\n-- Up-to-date: /usr/local/lib64/cmake/bsoncxx-3.6.2/bsoncxx_targets-release.cmake\n-- Up-to-date: /usr/local/lib64/cmake/bsoncxx-3.6.2/bsoncxx-config-version.cmake\n-- Up-to-date: /usr/local/lib64/cmake/bsoncxx-3.6.2/bsoncxx-config.cmake\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/config/config.hpp\n-- Up-to-date: /usr/local/include/bsoncxx/v_noabi/bsoncxx/config/version.hpp\n-- Up-to-date: /usr/local/lib64/pkgconfig/libbsoncxx.pc\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/index_view.hpp\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/private\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/instance.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/cursor.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/stdx.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/client_encryption.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/bulk_write.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/logger.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/config\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/config/private\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/config/compiler.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/config/postlude.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/config/prelude.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/read_preference.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/client.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/insert_one.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/bulk_write.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/insert_many.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/gridfs\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/gridfs/upload.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/update.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/delete.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/result/replace_one.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model/write.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model/delete_many.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model/insert_one.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model/delete_one.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model/update_one.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model/update_many.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/model/replace_one.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/gridfs\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/gridfs/private\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/gridfs/uploader.hpp\n-- Up-to-date: 
/usr/local/include/mongocxx/v_noabi/mongocxx/gridfs/bucket.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/gridfs/downloader.hpp\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test_util\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/pool.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/hint.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/change_stream.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/collection.hpp\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/private\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/result\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/result/gridfs\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/model\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/gridfs\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/spec\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/options\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/test/options/gridfs\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/uri.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/client_session.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/read_concern.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/write_type.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/pipeline.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/topology_closed_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/command_started_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/server_changed_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/server_opening_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/server_description.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/heartbeat_started_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/heartbeat_failed_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/topology_opening_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/command_succeeded_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/heartbeat_succeeded_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/server_closed_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/command_failed_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/topology_changed_event.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/events/topology_description.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/find.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/index_view.hpp\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/options/private\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/find_one_and_replace.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/find_one_and_update.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/create_collection.hpp\n-- Up-to-date: 
/usr/local/include/mongocxx/v_noabi/mongocxx/options/client_encryption.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/bulk_write.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/aggregate.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/count.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/find_one_and_delete.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/insert.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/client.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/estimated_document_count.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/gridfs\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/gridfs/bucket.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/gridfs/upload.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/distinct.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/find_one_common_options.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/encrypt.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/data_key.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/tls.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/pool.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/change_stream.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/transaction.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/apm.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/update.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/delete.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/ssl.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/client_session.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/auto_encryption.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/replace.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/options/index.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/operation_exception.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/write_exception.hpp\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/private\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/query_exception.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/error_code.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/logic_error.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/authentication_exception.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/exception.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/server_error_code.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/gridfs_exception.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/exception/bulk_write_exception.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/database.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/index_model.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/write_concern.hpp\n-- Up-to-date: 
/usr/local/include/mongocxx/v_noabi/mongocxx/validation_criteria.hpp\n-- Installing: /usr/local/include/mongocxx/v_noabi/mongocxx/cmake\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/config/export.hpp\n-- Up-to-date: /usr/local/lib64/cmake/libmongocxx-3.6.2/libmongocxx-config.cmake\n-- Up-to-date: /usr/local/lib64/cmake/libmongocxx-3.6.2/libmongocxx-config-version.cmake\n-- Up-to-date: /usr/local/lib64/libmongocxx.so.3.6.2\n-- Up-to-date: /usr/local/lib64/libmongocxx.so._noabi\n-- Up-to-date: /usr/local/lib64/libmongocxx.so\n-- Up-to-date: /usr/local/lib64/cmake/mongocxx-3.6.2/mongocxx_targets.cmake\n-- Up-to-date: /usr/local/lib64/cmake/mongocxx-3.6.2/mongocxx_targets-release.cmake\n-- Up-to-date: /usr/local/lib64/cmake/mongocxx-3.6.2/mongocxx-config-version.cmake\n-- Up-to-date: /usr/local/lib64/cmake/mongocxx-3.6.2/mongocxx-config.cmake\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/config/config.hpp\n-- Up-to-date: /usr/local/include/mongocxx/v_noabi/mongocxx/config/version.hpp\n-- Up-to-date: /usr/local/lib64/pkgconfig/libmongocxx.pc\n-- Installing: /usr/local/share/mongo-cxx-driver/uninstall.sh\n",
"text": "This is the output for that command:This Is the output of the driver instalation:",
"username": "Luke_Colias"
},
{
"code": "export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH\n./tcpserverV2\n/lib64/usr/lib64/usr/local/lib64/usr/local/lib64ldd/lib64/usr/local/lib64/etc/ld.so.conf",
"text": "@Luke_Colias So, a slight tweak of the suggestion by @Jack_Woehr seems like it would work:That said, this might indicate a misconfiguration of your system. RHEL and CentOS have /lib64, /usr/lib64, and /usr/local/lib64, which other distros have migrated away from. The dynamic loader should find libraries in /usr/local/lib64 and indeed the output of ldd indicates that it is finding libraries in /lib64. Given that it is not finding them automatically, getting the loader to find libraries in /usr/local/lib64 may require modifying your system’s /etc/ld.so.conf. You should request assistance in a CentOS-specific forum to make sure that your system is properly configured and that any modifications you make won’t cause other problems.",
"username": "Roberto_Sanchez"
}
] | Compilation error for the C++ driver: error while loading shared libraries: libmongocxx.so._noabi: cannot open shared object file: No such file or directory | 2020-12-07T23:30:33.457Z | Compilation error for the C++ driver: error while loading shared libraries: libmongocxx.so._noabi: cannot open shared object file: No such file or directory | 6,291 |
null | [
"connecting",
"spring-data-odm"
] | [
{
"code": "",
"text": "Hi,i have a springboot application. This application is pluged to MongoDB Atlas with spring-boot-starter-data-mongodb.The application worked well since 1 year, but, since this morning, without any update, the connexion to the MongoDB has stoped to work.An error is thrown :Client view of cluster state is {type=REPLICA_SET, servers=[{address:27017=cluster-xxx-prep-shard-00-02.eocxs.mongodb.net, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: extension (5) should not be presented in certificate_request}}In the MongoDB documentation i saw this information :MongoDB Java Driver documentationMy application uses JDK 13.0.2.I don’t know why this issue came this morning\nI don’t successful to correct this issueAny idea ?Thank a lot",
"username": "mickael_camelot"
},
{
"code": "",
"text": "We have the exact same issue… Didn’t find any solution yet. Will keep you updated.",
"username": "Nicolas_Oeschger"
},
{
"code": "",
"text": "Adding the Java Option -Djdk.tls.client.protocols=TLSv1.2 solved the issue for us.",
"username": "Nicolas_Oeschger"
},
{
"code": "",
"text": "Hi,thanks for advice.\nif found another solution yesterday.\nJava made a fix in different version : [JDK-8236039] JSSE Client does not accept status_request extension in CertificateRequest messages for TLS 1.3 - Java Bug SystemI have update our java version from 13 to 13.0.5 and there is no more bug.Download Java Builds of OpenJDK 8, 11, 13, 15, 17, 19. Azul Zulu Builds of OpenJDK runs on Linux, Windows, macOS & Solaris on X86, Arm, SPARC & PPCthanks",
"username": "mickael_camelot"
},
{
"code": "",
"text": "I had the same issue but was finally able to resolve it after taking a few simple steps. First I installed the latest JDK(15), then deleted the .m2 folder and rebuild the project using maven command. Though I think the first step was unnecessary, it still worked. Hope this helps.",
"username": "Mohammed_Arefin"
},
{
"code": "",
"text": "Hello. I am having the same issue. But don’t necessarily want to switch to JDK15.\nI can’t find a package for JDK13.03 or 13.05 on the official Oracle archives.\nI’m unsure about adding the Java Option.Any link to a patched JDK version available ?",
"username": "Yann_Schremmer"
},
{
"code": "",
"text": "Hello, you can find the jdk 13.0.5 at zuluDownload Java Builds of OpenJDK 8, 11, 13, 15, 17, 19. Azul Zulu Builds of OpenJDK runs on Linux, Windows, macOS & Solaris on X86, Arm, SPARC & PPCcordially",
"username": "mickael_camelot"
},
{
"code": "",
"text": "Thank you so much !!!\nyou save our day can you plz explain what is the problem?",
"username": "mansi_joshi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | SSLHandshakeException : should not be presented in certificate_request | 2020-12-03T13:01:57.173Z | SSLHandshakeException : should not be presented in certificate_request | 21,007 |
null | [
"python",
"production"
] | [
{
"code": "",
"text": "We are pleased to announce the 3.11.2 release of PyMongo - MongoDB’s Python Driver. This release fixes a number of bugs.See the changelog for a high-level summary of what’s new and improved or see the PyMongo 3.11.2 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "",
"username": "system"
}
] | PyMongo 3.11.2 Released | 2020-12-02T23:36:48.231Z | PyMongo 3.11.2 Released | 2,751 |
null | [] | [
{
"code": "",
"text": "Hello I’m an university student and I have to do a project where I need to use prolog to calculate the best routes, and need those calculations to be read by a typescript SPA, all the data I need for my calculations come from mongoDB, what do you think is the best way to do it.\nThanks .",
"username": "Mabel_Mabel"
},
{
"code": "",
"text": "It depends on which Prolog system you’re using. If it’s one that offers a C/C++ bridge you can probably hook Prolog directly to MongoDB by writing some C/C++ code.",
"username": "Jack_Woehr"
}
] | Connect prolog to mongoDB | 2020-12-08T10:04:21.334Z | Connect prolog to mongoDB | 1,627 |
null | [
"sharding",
"devops"
] | [
{
"code": "\"2020-12-06T16:38:16.059+0000 I NETWORK [mongosMain] Starting new replica set monitor for cfg/ mongos2.local:27019, mongos3.local:27019,mongos1.local:27019\n2020-12-06T16:38:16.060+0000 I SHARDING [thread1] creating distributed lock ping thread for process mongos1:27017:1607272696:-8549465232112236536 (sleeping for 30000ms)\n2020-12-06T16:38:16.132+0000 F NETWORK [mongosMain] This mongos server must be upgraded. It is attempting to communicate with an upgraded cluster with which it is incompatible. Error: 'IncompatibleWithUpgradedServer: Server min and max wire version (8,8) is incompatible with client min wire version (7,7).You (client) are attempting to connect to a node (server) that no longer accepts connections with your (client’s) binary version. Please upgrade the client’s binary version.' Crashing in order to bring attention to the incompatibility, rather than erroring endlessly.\n2020-12-06T16:38:16.132+0000 F - [mongosMain] Fatal Assertion 50709 at src/mongo/client/dbclient_connection.cpp 278\n2020-12-06T16:38:16.132+0000 F - [mongosMain]\"\n[vagrant@mongos1 ~]$ mongos --version\nmongos version v4.0.21\ngit version: 3f68a848c68e993769589dc18e657728921d8367\nOpenSSL version: OpenSSL 1.1.1 FIPS 11 Sep 2018\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: rhel80\n distarch: x86_64\n target_arch: x86_64..\n[vagrant@mongos1 ~]$ mongod --version\ndb version v4.2.11\ngit version: ea38428f0c6742c7c2c7f677e73d79e17a2aab96\nOpenSSL version: OpenSSL 1.1.1 FIPS 11 Sep 2018\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: rhel80\n distarch: x86_64\n target_arch: x86_64\n",
"text": "Hello All,I’m currently working on an Anible Playbook to dowgrade a shadred cluster from 4.2 to 4.0. I’m following the steps documented at… https://docs.mongodb.com/manual/release-notes/4.2-downgrade-sharded-cluster/After disabling the balancer I then move onto a Play that upgrades the mongos instances. This play I run serially so it should work through the mongos instance one by one. This fails on the first instance, when starting the downgraded version of the mongos service, with the following message…I have confirmed the mongos instance has been downgraded to 4.0…The config server is still running the 4.2 version as expected…My config servers are running on the same hosts as the mongos processes. The documentation clearly states to upgrade the mongos instances before the shards or config servers. Is something wrong or am I missing something here?Cheers,Rhys",
"username": "Rhys_Campbell"
},
{
"code": "",
"text": "Hi @Rhys_CampbellMake sure you have completed all of the Prerequisites. Your error is likely due to the FeatureCompatibilityVersion still being set to 4.2",
"username": "chris"
}
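A quick mongo shell sketch of that prerequisite; run it against the primary (or a mongos for a sharded cluster) before downgrading any binaries:

```javascript
// Check the current value, then lower it for a 4.2 -> 4.0 downgrade:
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })
```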
] | MongoDB Sharded Cluster Downgrade 4.2 -> 4.0 | 2020-12-06T19:45:41.070Z | MongoDB Sharded Cluster Downgrade 4.2 -> 4.0 | 2,431 |
null | [
"connecting",
"scala"
] | [
{
"code": "",
"text": "When connecting to atlas using mongodb-scala-driver 2.9 it states that driver will negotiate compression to use between snappy, zlib. It also explains that you can set compression by setting compressor in url (which I have set to snappy).I am just looking to confirm if there is anything else I need to do on client side or Atlas side - to me it doesn’t appear to require any additional configuration.",
"username": "Colin_Bester"
},
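For illustration, the URI option being discussed, written out as a generic connection string (host and credentials are placeholders); the driver only negotiates a compressor that is both listed here and enabled on the server:

```javascript
const uri =
  "mongodb+srv://user:pass@cluster0.example.mongodb.net/test" +
  "?compressors=snappy,zlib";
```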
{
"code": "",
"text": "Hi @Colin_Bester,You will also need the snappy dependency on your classpath. See: CompressionRoss",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Gotcha on including snappy - I was just looking to confirm that no other settings were required as except for monitoring with something like Wireshark I am unable to confirm compression being enabled to/from Atlas.",
"username": "Colin_Bester"
},
{
"code": "",
"text": "Correct, that’s all you need. The Connection string setting and supporting library to do the compression.",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Enabling compression | 2020-12-07T22:08:07.446Z | Enabling compression | 5,295 |
null | [
"connecting",
"scala"
] | [
{
"code": "",
"text": "In documentation describing connecting to MongoDB Atlas and in MongoDB Compass App it says to set system property org.mongodb.async.type=netty.I have a test M10 cluster setup and I am able to connect to Atlas database with or without setting org.mongodb.async.type system property which doesn’t make sense to me and I expected connection to either fail or give errors if not setup.I am using mongodb-scala-driver version 2.9 and Atlas version is current 3.6 - I am getting ready to test for TLS changes in new year as well as updating database.If anyone can share some light on this I’d appreciate the input.PS - support assures me that one can not connect to Atlas without SSL/TLS being enabled.",
"username": "Colin_Bester"
},
{
"code": "",
"text": "Hi @Colin_Bester,Native asynchronous TLS/SSL support was added to the Java driver version 3.10. The Scala driver 2.9.0 depends on the 3.12 version. So you don’t need to use netty for async tls/ssl support.Just to note: the latest version of the Scala driver is 4.1.1 - The scala driver source merged into the Java driver source in 4.0 to help improve maintainability of the code.All the best,Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Thanks @Ross_Lawley, always nice when there is a good explanation for what one is seeing.Also appreciate heads up on 4.1.1 but sadly when our application started it’s beta test 4.1.1 was still not marked as released so we (a small startup pushing hard) are a little behind.Is there any preference in using netty vs native async ssl/tls - I assume use native?Thanks,\nColin",
"username": "Colin_Bester"
},
{
"code": "",
"text": "I would recommend native, especially if you don’t already have netty in your stack.",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Setup to use netty | 2020-12-07T22:08:03.953Z | Setup to use netty | 4,044 |
[
"queries",
"swift"
] | [
{
"code": "",
"text": "Hoping this is an easy ask … or a point to a doc i’ve missed ?Using similar syntax to that provided in the docs …How do i add a ‘projection’ into the client call ??collection.find(filter: [“_partition”: AnyBSON(identity)], { (result) in …Thanks",
"username": "Damian_Raffell"
},
{
"code": "",
"text": "I sorted this …\ncollection.find(filter: [\"_partition\": AnyBSON(partition)], options: options, { (result) in\noption being a FindOptions",
"username": "Damian_Raffell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Swift MongoClient & adding a Projection? | 2020-12-08T08:02:16.210Z | Using Swift MongoClient & adding a Projection? | 2,152 |
|
null | [
"installation",
"cxx"
] | [
{
"code": "build/opt/mongodbDriverCpp",
"text": "This issue happens to Mac OS X as well, and maybe to other Un*xes.curl -OL https://github.com/mongodb/mongo-cxx-driver/archive/r3.0.2.tar.gzcd mongo-cxx-driver-r3.0.2cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/opt/mongodbDriverCpp -DLIBBSON_DIR=/opt/mongodbDriverCpp -DLIBMONGOC_DIR=/opt/mongodbDriverCpp -DCMAKE_CXX_STANDARD=14make make installc++ --std=c++11 test.cpp -o run.test -I/opt/mongodbDriverCpp/include/bsoncxx/v_noabi -I /opt/mongodbDriverCpp/include/mongocxx/v_noabi/ -L /opt/mongodbDriverCpp/lib -l mongocxx -l bsoncxxexport LD_LIBRARY_PATH=/opt/mongodbDriverCpp/lib ./run.test",
"username": "Domenick_Smith"
},
{
"code": "cmakemakemake install",
"text": "@Domenick_Smith the subject of your post indicates that you encountered an error, but the text of your post does not describe any error nor does it report an actual error message. Please provide the output of the cmake, make, and make install commands, along with the output of your test program compilation command and the attempt to execute it. This will assist in diagnosing the issue you are encountering.",
"username": "Roberto_Sanchez"
}
] | Error when building mongocxx driver on Windows | 2020-12-08T05:49:43.757Z | Error when building mongocxx driver on Windows | 1,937 |
null | [
"installation",
"cxx"
] | [
{
"code": "CMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):\n By not providing \"Findlibbson-1.0.cmake\" in CMAKE_MODULE_PATH this project \n has asked CMake to find a package configuration file provided by \n \"libbson-1.0\", but CMake did not find one.\n\n Could not find a package configuration file provided by \"libbson-1.0\"\n (requested version 1.13.0) with any of the following names:\n\n libbson-1.0Config.cmake\n libbson-1.0-config.cmake\n\n Add the installation prefix of \"libbson-1.0\" to CMAKE_PREFIX_PATH or set\n \"libbson-1.0_DIR\" to a directory containing one of the above files. If\n \"libbson-1.0\" provides a separate development package or SDK, be sure it\n has been installed.\n\n\n-- Configuring incomplete, errors occurred!\nSee also \"/root/mongo-cxx-driver-r3.6.2/build/CMakeFiles/CMakeOutput.log\".\n",
"text": "I am going through the installation steps exactly as they on the website so the commands I’m using are the same. When I am trying to configure the driver in step 4 using cmake, I am getting this error.bsoncxx version: 3.6.2I have tried reinstalling just about everything but I haven’t gotten anywhere. I’m new to linux too so any gerneral guidance is appreciated!!",
"username": "Luke_Colias"
},
{
"code": "",
"text": "@Luke_Colias can you please provide the sequence of commands you used to configure, build, and install the C driver, along with the full output of each. Also, please provide the complete CMake command and output for the attempted C++ driver build.",
"username": "Roberto_Sanchez"
},
{
"code": "export WORKDIR=/home/alex/Temp\nexport CDRIVER_VERSION=1.15.1\nexport CPPDRIVER_VERSION=3.4.0\nexport LD_LIBRARY_PATH=/usr/local/lib\nsudo apt-get update && sudo apt-get install -y build-essential wget cmake git pkg-config libssl-dev libsasl2-dev \ncd ${WORKDIR}\nwget https://github.com/mongodb/mongo-c-driver/releases/download/${CDRIVER_VERSION}/mongo-c-driver-${CDRIVER_VERSION}.tar.gz && \\\n tar xzf mongo-c-driver-${CDRIVER_VERSION}.tar.gz\ncd ${WORKDIR}/mongo-c-driver-${CDRIVER_VERSION} && \\\n mkdir cmake-build && \\\n cd cmake-build && \\\n cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF .. && \\\n make && sudo make install\ncd ${WORKDIR}\nwget https://github.com/mongodb/mongo-cxx-driver/archive/r${CPPDRIVER_VERSION}.tar.gz && \\\n tar -xzf r${CPPDRIVER_VERSION}.tar.gz\ncd ${WORKDIR}/mongo-cxx-driver-r${CPPDRIVER_VERSION}/build && \\\n cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DCMAKE_PREFIX_PATH=/usr/local .. && \\\n make EP_mnmlstc_core && \\\n sudo make && sudo make install\n",
"text": "Hi @Luke_Colias,I’ve successfully built the C/CXX drivers under Ubuntu 18.04 using the following:As @Roberto_Sanchez pointed out, to troubleshoot we’d need the full set of steps you followed to build both drivers.",
"username": "alexbevi"
},
{
"code": "yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm\nyum install mongo-c-driver\nyum install libbson\n\nsudo yum install cmake openssl-devel cyrus-sasl-devel\n\nwget https://github.com/mongodb/mongo-c-driver/releases/download/1.17.3/mongo-c-driver-1.17.3.tar.gz\ntar xzf mongo-c-driver-1.17.3.tar.gz\ncd mongo-c-driver-1.17.3\nmkdir cmake-build\ncd cmake-build\ncmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..\n\ncmake --build .\ncurl -OL https://github.com/mongodb/mongo-cxx-driver/releases/download/r3.6.2/mongo-cxx-driver-r3.6.2.tar.gz\ntar -xzf mongo-cxx-driver-r3.6.2.tar.gz\ncd mongo-cxx-driver-r3.6.2/build\n\ncmake .. \\\n -DCMAKE_BUILD_TYPE=Release \\\n -DCMAKE_INSTALL_PREFIX=/usr/local\n",
"text": "I’m running these commands on centOS 8. Also I’m building these under the root directory, I know that’s probably not how I’m supposed to do it, but my advisor told me it would make things easier.\nThese are the commands I run for the c driver.Now for the C++ driver I used:At this last cmake command is where I’m getting the error from above. I understand that I’m getting the error because something isn’t installed in the right location I just don’t know where I went wrong because I’m still relatively new Linux in general.",
"username": "Luke_Colias"
},
{
"code": "",
"text": "Thank you for your help, I’ll try to match these to the centOS commands and work at it from there. I added the commands I was using in my last reply.",
"username": "Luke_Colias"
},
{
"code": "mongo-c-driverlibbsonyumyum-develyumcmake --build . --target installsudo",
"text": "@Luke_Colias it does not make sense that you install the mongo-c-driver and libbson packages via yum and then try to build from source as well. If the packages available via yum are of a sufficiently recent version for you, then simply install the -devel packages of each and be done with it. If you need to build from source, then it is best to not also have the packages installed via yum at the same time. Apart from that, the commands you executed are helpful, but if you note my previous comment we need the full output to be able to effectively diagnose the issue. That said, based on the command sequence for the C driver, it looks like you built it but did not actually install it. You might need something like cmake --build . --target install (possibly with sudo if your installation prefix is in a system directory).",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "Okay I Think I understand more now, how would I install the -devel packages of the c driver and libbson? I get now that I installed them via yum so but I dont fully get the distinction between that and what the -devel package is.",
"username": "Luke_Colias"
},
{
"code": "yum install mongo-c-driver-devel libbson-devel",
"text": "@Luke_Colias you should be able to install them with yum install mongo-c-driver-devel libbson-devel.",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "Awesome that worked, so now I should just have to follow the instructions for installing the c++ driver right?",
"username": "Luke_Colias"
},
{
"code": "",
"text": "That’s right. Just make sure that you select a C++ driver version that supports the C driver version you installed. (I don’t recall what version is available by default from CentOS 8).",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "Okay It says the package that was installed is “(4/5): mongo-c-driver-devel-1.17.2-1.el8.x86_64” so I assume that means i can use the latest version of the c++ driver.",
"username": "Luke_Colias"
},
{
"code": "",
"text": "That sounds correct to me. The latest C++ driver requires C driver 1.17.0 or later.",
"username": "Roberto_Sanchez"
},
{
"code": "cmake .. \\\n -DCMAKE_BUILD_TYPE=Release \\\n -DCMAKE_INSTALL_PREFIX=/usr/local\n Build files have been written to: /root/mongo-cxx-driver-r3.6.2/build\n",
"text": "Okay sweet, I ran the basic Cmake command:And everything looks good becasue its giving me this message at the endNow when I run the build command do I have to add the mnmlstc core flagg becasue im really not sure about what that is.(P.S. Also I appreciate all of this so much I was stuck on this forever and you just knocked out like 10 hours of frustration in like 20 min, thank you!)",
"username": "Luke_Colias"
},
{
"code": "",
"text": "Correct. It is the same as I explained in the other thread. You are quite welcome.",
"username": "Roberto_Sanchez"
},
{
"code": "cmake --build .\nsudo cmake --build . --target install\n",
"text": "Also I’m not really sure what the poly fill is. I know in the other post you said I have two option and those would be to use the compiler that C++17 support for my polyfill(which seems easiert). And the other option would be to download MNMLSTC code from guithub which sounds like more effort and i dont know if i even need it. I was thinking I could just run the command:And this would be sufficent for me to start creating a project.",
"username": "Luke_Colias"
},
{
"code": "",
"text": "If you don’t want to use the MNMLSTC that the build tries to download and build for you, you will need to specify the option that you prefer. Have a look at Step 2 in the installation guide.",
"username": "Roberto_Sanchez"
},
{
"code": "sudo cmake --build . --target EP_mnmlstc_core\n",
"text": "Okay from how I understand it, the MNMLSTC core is installed automatically and is the default for non-windows systems.\nif I runIt will install this for me?",
"username": "Luke_Colias"
},
{
"code": "",
"text": "The build defaults to downloading MNMLSTC_core from GitHub and using that for the polyfill. However, you do not install it as a specific target. That is handled by the normal building and installation of the overall driver.",
"username": "Roberto_Sanchez"
},
{
"code": "[root@instance-1 build]# sudo cmake --build . --target EP_mnmlstc_core\nScanning dependencies of target EP_mnmlstc_core\n[ 0%] Creating directories for 'EP_mnmlstc_core'\n[ 0%] Performing download step (git clone) for 'EP_mnmlstc_core'\n-- EP_mnmlstc_core download command succeeded. See also /root/mongo-cxx-driver- r3.6.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core- stamp/EP_mnmlstc_core-download-*.log\n[ 0%] No patch step for 'EP_mnmlstc_core'\n[ 50%] No update step for 'EP_mnmlstc_core'\n[ 50%] Performing configure step for 'EP_mnmlstc_core'\n-- EP_mnmlstc_core configure command succeeded. See also /root/mongo-cxx-driver -r3.6.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core -stamp/EP_mnmlstc_core-configure-*.log\n[ 50%] Performing build step for 'EP_mnmlstc_core'\n-- EP_mnmlstc_core build command succeeded. See also /root/mongo-cxx-driver-r3. 6.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-sta mp/EP_mnmlstc_core-build-*.log\n[ 50%] Performing install step for 'EP_mnmlstc_core'\n-- EP_mnmlstc_core install command succeeded. See also /root/mongo-cxx-driver-r 3.6.2/build/src/bsoncxx/third_party/EP_mnmlstc_core-prefix/src/EP_mnmlstc_core-s tamp/EP_mnmlstc_core-install-*.log\n[100%] Performing fix-includes step for 'EP_mnmlstc_core'\n[100%] Completed 'EP_mnmlstc_core'\n[100%] Built target EP_mnmlstc_core\n[root@instance-1 build]# cmake --build .\n[ 2%] Built target EP_mnmlstc_core\nScanning dependencies of target bsoncxx_testing\n",
"text": "Okay I accidentially ran that command and it Said that it downloaded it.I think that means im okay to build and install right.",
"username": "Luke_Colias"
},
{
"code": "EP_mnmlstc_core",
"text": "Yes, you should be able to continue as normal. The build instructions do not say to run the EP_mnmlstc_core target because not every build will have it. The build figures out if that target is needed based on the CMake options provided and then handles any build and install steps required for that.",
"username": "Roberto_Sanchez"
}
] | C++ Driver libbson-1.0 build error on CentOS 8 | 2020-12-03T10:49:16.240Z | C++ Driver libbson-1.0 build error on CentOS 8 | 7,729 |
null | [
"graphql",
"flutter"
] | [
{
"code": "",
"text": "Hello, for a project I would like to use flutter, since there are no sdk for mongodb realm I opted to use graphql api to connect to mongodb without use any sdk. Is this a good idea?\nIf this is not the right way is it possible to call android sdk or ios sdk natives from flutter?Thanks.",
"username": "Antonio"
},
{
"code": "",
"text": "@Antonio you can definitely call the native side of code (ios and android) from the flutter. Basically, you use realm for android and ios natively and get back access tokens which you can use in the flutter. use this (graphql_flutter | Flutter Package) library do graphlql call.platform integration doc link: Writing custom platform-specific code | FlutterHope this helps",
"username": "Safik_Momin"
}
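A hypothetical sketch in plain JavaScript of the flow described above (graphql_flutter wraps the equivalent HTTP call): POST a query to the app's Realm GraphQL endpoint, sending the access token obtained from the native SDK. The <app-id>, the query, and the accessToken variable are placeholders; confirm the exact endpoint URL in the Realm docs for your app.

```javascript
fetch("https://realm.mongodb.com/api/client/v2.0/app/<app-id>/graphql", {
  method: "POST",
  headers: {
    "Authorization": "Bearer " + accessToken, // token from the native Realm SDK
    "Content-Type": "application/json"
  },
  body: JSON.stringify({ query: "{ movies { title } }" })
})
  .then(res => res.json())
  .then(console.log);
```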
] | Mongodb realm and flutter | 2020-12-08T10:03:54.754Z | Mongodb realm and flutter | 3,760 |
null | [
"replication",
"monitoring"
] | [
{
"code": "",
"text": "One of our mongodb instance shows incorrect oplog info as below:rs.printReplicationInfo()\nconfigured oplog size: 200000MB\nlog length start to end: 113778secs (31.61hrs)\noplog first event time: Sun Jan 07 2018 16:05:39 GMT+0800 (CST)\noplog last event time: Mon Jan 08 2018 23:41:57 GMT+0800 (CST)\nnow: Tue Dec 08 2020 10:11:51 GMT+0800 (CST)as we can see now is Tue Dec 08 2020 10:11:51 GMT+0800 (CST) ,but oplog last event time is of year 2018,to confirm that new events is late than 2018, I insert new data, and the re-check, but last event time is still 2018use hunter_test\nswitched to db hunter_testdb.myNewCollection1.insertOne( { x: 1 } )\n{\n“acknowledged” : true,\n“insertedId” : ObjectId(“5fceeb33c58ff57252427d4a”)\n}\ndb.myNewCollection1.find()\n{ “_id” : ObjectId(“5fceeb33c58ff57252427d4a”), “x” : 1 }\nrs.printReplicationInfo()\nconfigured oplog size: 200000MB\nlog length start to end: 113778secs (31.61hrs)\noplog first event time: Sun Jan 07 2018 16:05:39 GMT+0800 (CST)\noplog last event time: Mon Jan 08 2018 23:41:57 GMT+0800 (CST)\nnow: Tue Dec 08 2020 10:56:05 GMT+0800 (CST)what’s wrong with this instance and how to fix?",
"username": "hunter_huang"
},
{
"code": "",
"text": "This node at some point has become disconnected from the replica set. The node was down or disconnected longer than the oplog headroom, and now it cannot recover.You have to perform an initial sync to get this node back in sync with the cluster…",
"username": "chris"
}
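One way to see this from the mongo shell (standard replica set helpers; the fields printed come from rs.status()):

```javascript
// A member that has fallen off the oplog typically sits in RECOVERING
// and its optime never catches up to the primary's:
rs.status().members.forEach(function (m) {
  print(m.name + "  " + m.stateStr + "  optime: " + m.optimeDate)
})
rs.printSecondaryReplicationInfo() // rs.printSlaveReplicationInfo() on older shells
```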
] | Oplog with incorrect timestamp | 2020-12-08T03:27:24.858Z | Oplog with incorrect timestamp | 2,714 |
null | [
"dot-net",
"data-modeling"
] | [
{
"code": "",
"text": "Hello everybody,\nI’m not a programmer and I presume the frontend web application takes care of the formatting of the retrieved output.In my situation, almost each record has different content and my question is:How is the formatting of a document (record) done in the frontend if the contents are almost never the same from document-to-document?I need to deal with stuff like:Different point sizes\nSuperscripts / subscripts\nBold, italic, underline, strike through\nTabs, indents.In a website you can tag the content ahead of time and work with CSS, but I don’t know how it works when\nyou’re retrieving records from a MongoDB database.Can somebody let their light shine on this and explain whether it is at all possible, and if so, how?The content in question here, are dictionary entries and glossary entries.Thanks in advance for the feedback.Regards,Carel.",
"username": "Carel_Schilp"
},
{
"code": "",
"text": "Hi @Carel_Schilp,Can you share a bit more about your data and what you want your expected output to be?I’ll make some guesses here. Perhaps you want the word to be bold and it’s definition to not be bold. So you could store the word and definition in separate fields in a document in the database. You could write a loop that pulls out each document. Inside of that loop, you could create the html/css for each word and definition.",
"username": "Lauren_Schaefer"
},
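To make that concrete, a hypothetical Node.js sketch; the collection and field names are assumptions, not the actual schema:

```javascript
// Loop over the documents and build the HTML, keeping data and styling separate.
const entries = await db.collection("entries").find().toArray();
const html = entries
  .map(e => `<p class="entry"><strong>${e.word}</strong>: ${e.definition}</p>`)
  .join("\n");
// The "entry" class (and the tags themselves) can then be styled with CSS.
```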
{
"code": "",
"text": "If I may add.MongoDB university has 4 courses that might be of interest. It is the M220 series of courses. You have different versions of the course for different languages; Java, JavaScript, Python and C#/.Net. They feature web front end. See https://university.mongodb.com/ for more details.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Lauren,After rereading your reply, the sentence below might be the key to my problem. I’ll need to investigate that more. Could you send me an example for me to get an idea of what that would look like. Thx. Carel.You could write a loop that pulls out each document. Inside of that loop, you could create the html/css for each word and definition.",
"username": "Carel_Schilp"
},
{
"code": "",
"text": "Thanks Steeve,I’ll look into them.Regards,Carel.",
"username": "Carel_Schilp"
},
{
"code": "",
"text": "Hi Lauren,Hope you’re feeling better. My data consists of terms and their meanings and of translations from one language into the other (dictionary). The problem with that, is there a high variability from entry to entry, so that it’s not possible to say this field needs to be bold and that field needs to be italic.Ideally, I would like to be able to put tags in the text and then control the formatting and layout using CSS.So I guess the first thing I would like to know whether it’s possible at all. I presume it’s not possible to store any formatting or layout info together with the text in MongoDB. As a kludge solution I could format each entry in a DTP program, make a screenshot of the entry and store it in the database as a picture. In that case, user would not be able to copy and paste text from the screen.So at the moment I’m a bit at a loss because I’m not a programmer myself and therefore don’t know what is possible in C#, Blazor and MongoDB.On the other hand, MongoDB is a document database and I can’t be the only person in the universe who needs to output database records with formatting and layout. Hope this helps. If not, let me know.\nThanks in advance for your feedback.",
"username": "Carel_Schilp"
},
{
"code": "",
"text": "Great call @steevej on pointing to the MongoDB University courses! The courses are totally free and super high quality.I don’t recommend storing screenshots as that presents accessibility challenges for people using screen readers. You want your text to be text.If you wanted to store tags in your fields in MongoDB as strings, you could. However, my hunch is that this would create problems in the long term. You’ll have a lot more flexibility if you store just your data in the database and then add the formatting later.Can you give an example of some data that needs to be displayed differently on a case by case basis?",
"username": "Lauren_Schaefer"
},
{
"code": "> db.test.insertOne( { \"title\" : \"<h1>This is a level 1 html header </h1>\" } )\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"5fce748a37294c5306fd5cda\")\n}\n> db.test.find()\n{ \"_id\" : ObjectId(\"5fce748a37294c5306fd5cda\"), \"title\" : \"<h1>This is a level 1 html header </h1>\" }\n> \n\n",
"text": "If you wanted to store tags in your fields in MongoDB as strings, you could. However, my hunch is that this would create problems in the long term. You’ll have a lot more flexibility if you store just your data in the database and then add the formatting later.I confirm the above with emphasis on this would create problems in the long term.For html tags directly in MongoDB here it is:",
"username": "steevej"
},
{
"code": "",
"text": "Yes, I’ll put a few examples together so you have a better idea. I’ll send them tomorrow.",
"username": "Carel_Schilp"
},
{
"code": "",
"text": "Thanks for the reply Steeve,I didn’t know it is possible to store HTML tags together with text in MongoDB fields.Regards,Carel.",
"username": "Carel_Schilp"
},
{
"code": "",
"text": "Hi Lauren,I’ve been rethinking the whole situation and it’s more complicated than I can explain here online.\nSo as soon as I can afford it, I’ll sit down with a MongoDB expert and a C# expert and discuss a solution.\nI’m leaning towards putting the output of each record into a preformatted document. That way the search terms can be stored as text in MongoDB and I don’t have to let the retrieval program do any on-the-fly formatting. The data to be retrieved is static, like in a normal dictionary and I can let MongoDB do the searching and sorting. At a later stage, it may be possible to implement full text search using Atlas, but first I need to have an installed base of users before I add other functionality. So I won’t be sending you the examples and will put the project on hold until I have spoken face-to-face with a developer and Database expert. Thanks for your input and I’ll be in touch in the future again.Regards,Carel.",
"username": "Carel_Schilp"
},
{
"code": "",
"text": "Sounds good. Have fun working on your project!",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Formatting and layout of retrieved records | 2020-12-03T20:16:20.635Z | Formatting and layout of retrieved records | 5,834 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi. I have a nodeJS web application. There are different “webpages” such as About Us - Domain.com, About Us - Domain.com, domain.com/events, how would I save all the text from all the web pages to a database and search it up when a user inputs a keyword? Basically a search function like Search Hacking with Swift – tutorials and examples for SwiftUI and UIKit ?",
"username": "sound_cloud"
},
{
"code": "",
"text": "Hi @sound_cloud,Welcome to MongoDB community!MongoDB allows verious ways to search documents and we recommend segregating entities that would need to be fetched together in embedded documents.For searches you can index “tags” arrays and search in any of the keywordsMongoDB Manual - How to query an array: query on the array field as a whole, check if element in array, query for array element, query if field in array, query by array size.The server have different indexes like geo and text search.However, the best full text search experience is provided via our Atlas Search product when you host your database in Atlas.Use MongoDB Atlas Search to customize and embed a full-text search engine in your app.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
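For illustration, here is a minimal sketch of the text-index approach with the Node.js driver; the database name, collection name, and indexed fields are assumptions for the example, not taken from the thread:

```javascript
// Illustrative sketch only: "mysite", "pages", and the indexed fields are assumed names.
const { MongoClient } = require("mongodb");

async function searchPages(keyword) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const pages = client.db("mysite").collection("pages");

  // One-time setup: a text index over the fields that should be searchable.
  await pages.createIndex({ title: "text", body: "text", tags: "text" });

  // $text uses that index; projecting the textScore allows sorting by relevance.
  const results = await pages
    .find(
      { $text: { $search: keyword } },
      { projection: { score: { $meta: "textScore" } } }
    )
    .sort({ score: { $meta: "textScore" } })
    .toArray();

  await client.close();
  return results;
}
```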
] | How to implement search on nodeJS website using mongoDB database? | 2020-12-07T19:04:49.952Z | How to implement search on nodeJS website using mongoDB database? | 3,892 |
null | [
"devops"
] | [
{
"code": "",
"text": "When read/write comes from the client, does load balancing occur automatically if you configure it as a connection pool?\nOr should I attach L4 to the secondary and distribute the read separately?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,We do not recommend placing a load balancer in front of the replica set and it certainly might yield unexpected results.You should specify all seed hosts in a connection string. The connection pool with the default readPreference (Primary) does not load balance connections across members and direct all to the Primary. However, if you specify readPreference of secondaryPreferred or secondary the connection for reads will be round robin across replica set.What we suggest for specific workload isolation is to tag specific members using replica set tags and then point specific read workloads like analytics to specific nodes.Best\nPavel",
"username": "Pavel_Duchovny"
},
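To make the connection-string side of this concrete, here is a sketch; the hostnames, the replica set name rs0, and the nodeType:ANALYTICS tag are all illustrative values, not from the original cluster:

```javascript
// All seed hosts listed explicitly; no load balancer in between.
// readPreference=secondary sends reads to secondaries, and readPreferenceTags
// restricts them to members tagged { nodeType: "ANALYTICS" } in the replica
// set configuration (every name here is an assumption for the example).
const { MongoClient } = require("mongodb");

const uri =
  "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0" +
  "&readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS";

const client = new MongoClient(uri);
```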
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Load Balancing with connection pool | 2020-12-08T01:04:01.087Z | Load Balancing with connection pool | 3,173 |
[
"dot-net"
] | [
{
"code": "",
"text": "I’ve been working over the last few weeks on an F# DSL that lets you build MongoDB Commands right now it only covers indexes and queries, but It should be extensible for the rest of the existing commands and can be used for F# scriptshere’s re repository if someone is interestedAn alternative way to interact with MongoDB databases from F# that allows you to use mongo-idiomatic constructs - GitHub - AngelMunoz/Mondocks: An alternative way to interact with MongoDB databases...and A couple of blog posts around itI'm a person that tries to ease other people's lives, if not at least my own. One of the things I fou...F# 5.0 has strong scripting capabilities thanks to the updated #r \"nuget: \" directive, this allows yo...",
"username": "Angel_Munoz"
},
{
"code": "",
"text": "Welcome to the MongoDB community forums @Angel_Munozm, and thanks for sharing your project!Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "thank you, I appreciate it ",
"username": "Angel_Munoz"
}
] | Mondocks a MongoDB command builder | 2020-12-08T03:24:17.525Z | Mondocks a MongoDB command builder | 3,509 |
|
null | [
"atlas-device-sync"
] | [
{
"code": "Error:Failed to integrate download with non-retryable error: error applying downloaded changesets to mongodb: (AtlasError) error performing bulk write to MongoDB {table: \"Order\", err: (AtlasError) Pipeline length greater than 50 not supported}",
"text": "Hello Realm Enthusiast,Just wanted to share an error that I’m getting very often within the past 2 to 3 days. I’m getting the below error for realm sync for both ios and android.Error:\nFailed to integrate download with non-retryable error: error applying downloaded changesets to mongodb: (AtlasError) error performing bulk write to MongoDB {table: \"Order\", err: (AtlasError) Pipeline length greater than 50 not supported}Some Details:\nI have an order collection whose scheme has nested documents called items and if there are more than 15 to 20 items in a document and it gives me the above error. I’m not doing any bulk inset rather it’s just a single inset of an object with realm DB. Since we are in the development phase (launching our product on 1st Feb), we have tested this realm object (order object) with more than 20 items before and there was no problem it’s been happening for the past 2 or 3 days. We have not changed anything in terms of the scheme.Looks like a bug from the Realm side but if I’m wrong I would definitely like to know some solution for it ",
"username": "Safik_Momin"
},
{
"code": "",
"text": "@Safik_Momin Can you share the server-side Realm app url please?",
"username": "Ian_Ward"
},
{
"code": "https://realm.mongodb.com/groups/5edfa722b73ba7359e2bad16/apps/5fb83be7e72ffcfde8464ec5/dashboard",
"text": "Here @Ian_Ward\nhttps://realm.mongodb.com/groups/5edfa722b73ba7359e2bad16/apps/5fb83be7e72ffcfde8464ec5/dashboard\nHope this is what you asking for it",
"username": "Safik_Momin"
},
{
"code": "",
"text": "+1 seeing this issue in my app as well",
"username": "Roger_Cheng"
},
{
"code": "",
"text": "@Safik_Momin Thank you - we identified the issue and have a PR up. Will get a fix out in the next release.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This has been fixed and released - thank you for reporting.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Thanks for the quick fix ",
"username": "Safik_Momin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | (Sync AtlasError) Pipeline length greater than 50 not supported | 2020-12-06T23:57:35.783Z | (Sync AtlasError) Pipeline length greater than 50 not supported | 3,104 |
null | [
"cxx"
] | [
{
"code": "libbson-1.0Config.cmake\nlibbson-1.0-config.cmake\n",
"text": "Hi, I’m having some issues compiling mongo-cxx-driver-r3.6.1Could not find a package configuration file provided by “libbson-1.0”\n(requested version 1.13.0) with any of the following names:I downloaded and compiled the mongo-c-driver",
"username": "chris_d"
},
{
"code": "",
"text": "Is there any documentation on compiling the cxx driver? it’s missing from the README",
"username": "chris_d"
},
{
"code": "",
"text": "@chris_d, C++ driver installation instructions can be found here: Installing the mongocxx driverHowever, your post does not include any sort of information that might be useful in diagnosing the issue. If you are still experiencing issues after following the installation instructions at the link, please post the complete sequence of commands and all associated output for both the C driver build/installation and C++ driver build/installation.",
"username": "Roberto_Sanchez"
},
{
"code": "libbson-1.0Config.cmake\nlibbson-1.0-config.cmake\n",
"text": "chris@nuc8i7:~/mongo$ cd mongo-c-driver-1.17.2\nchris@nuc8i7:~/mongo/mongo-c-driver-1.17.2$ mkdir cmake-build\nchris@nuc8i7:~/mongo/mongo-c-driver-1.17.2$ cd cmake-build\nchris@nuc8i7:~/mongo/mongo-c-driver-1.17.2/cmake-build$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF …\n– The C compiler identification is ;GNU 8.3.0\n– Check for working C compiler: /usr/bin/cc\n– Check for working C compiler: /usr/bin/cc – works\n– Detecting C compiler ABI info\n– Detecting C compiler ABI info - done\n– Detecting C compile features\n– Detecting C compile features - done\n– No CMAKE_BUILD_TYPE selected, defaulting to RelWithDebInfo\nfile VERSION_CURRENT contained BUILD_VERSION 1.17.2\n– Build and install static libraries\n– Using bundled libbson\nlibbson version (from VERSION_CURRENT file): 1.17.2\n– Check if the system is big endian\n– Searching 16 bit integer\n– Looking for sys/types.h\n– Looking for sys/types.h - found\n– Looking for stdint.h\n– Looking for stdint.h - found\n– Looking for stddef.h\n– Looking for stddef.h - found\n– Check size of unsigned short\n– Check size of unsigned short - done\n– Using unsigned short\n– Check if the system is big endian - little endian\n– Looking for snprintf\n– Looking for snprintf - found\n– Looking for reallocf\n– Looking for reallocf - not found\n– Performing Test BSON_HAVE_TIMESPEC\n– Performing Test BSON_HAVE_TIMESPEC - Success\n– struct timespec found\n– Looking for gmtime_r\n– Looking for gmtime_r - found\n– Looking for rand_r\n– Looking for rand_r - found\n– Looking for strings.h\nCMake Warning (dev) at CMakeLists.txt:10 (_message):\nPolicy CMP0075 is not set: Include file check macros honor\nCMAKE_REQUIRED_LIBRARIES. ;Run “cmake --help-policy CMP0075” for policy\ndetails. ;Use the cmake_policy command to set the policy and suppress this\nwarning.;;CMAKE_REQUIRED_LIBRARIES is set to:; /usr/lib/x86_64-linux-gnu/librt.so;For compatibility with CMake 3.11 and below this check is ignoring it.\nCall Stack (most recent call first):\n/usr/share/cmake-3.13/Modules/CheckIncludeFile.cmake:70 (message)\nsrc/libbson/CMakeLists.txt:91 (CHECK_INCLUDE_FILE)\nThis warning is for project developers. 
Use -Wno-dev to suppress it.– Looking for strings.h - found\n– Looking for strlcpy\n– Looking for strlcpy - not found\n– Looking for clock_gettime\n– Looking for clock_gettime - found\n– Looking for strnlen\n– Looking for strnlen - found\n– Looking for stdbool.h\n– Looking for stdbool.h - found\n– Looking for SYS_gettid\n– Looking for SYS_gettid - found\n– Looking for syscall\n– Looking for syscall - found\n– Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH\n– Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH - Success\n– Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH\n– Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH - Success\n– Looking for pthread.h\n– Looking for pthread.h - found\n– Looking for pthread_create\n– Looking for pthread_create - not found\n– Check if compiler accepts -pthread\n– Check if compiler accepts -pthread - yes\n– Found Threads: TRUE\nAdding -fPIC to compilation of bson_static components\nlibmongoc version (from VERSION_CURRENT file): 1.17.2\n– Searching for zlib CMake packages\n– Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version “1.2.11”)\n– zlib found version “1.2.11”\n– zlib include path “/usr/include”\n– zlib libraries “/usr/lib/x86_64-linux-gnu/libz.so”\n– Looking for include file unistd.h\n– Looking for include file unistd.h - found\n– Looking for include file stdarg.h\n– Looking for include file stdarg.h - found\n– Searching for compression library zstd\n– Found PkgConfig: /usr/bin/pkg-config (found version “0.29.1”)\n– Checking for module ‘libzstd’\n– No package ‘libzstd’ found\n– Not found\n– Found OpenSSL: /usr/lib/x86_64-linux-gnu/libcrypto.so (found version “1.1.1b”)\n– Looking for ASN1_STRING_get0_data in /usr/lib/x86_64-linux-gnu/libcrypto.so\n– Looking for ASN1_STRING_get0_data in /usr/lib/x86_64-linux-gnu/libcrypto.so - found\n– Searching for sasl/sasl.h\n– Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/sasl/include for SASL support)\n– Searching for libsasl2\n– Not found (specify -DCMAKE_LIBRARY_PATH=/path/to/sasl/lib for SASL support)\n– Check size of socklen_t\n– Check size of socklen_t - done\n– Looking for res_nsearch\n– Looking for res_nsearch - found\n– Looking for res_ndestroy\n– Looking for res_ndestroy - not found\n– Looking for res_nclose\n– Looking for res_nclose - found\n– Looking for sched_getcpu\n– Looking for sched_getcpu - not found\n– Detected parameters: accept (int, struct sockaddr *, socklen_t *)\n– Searching for compression library header snappy-c.h\n– Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\nSearching for libmongocrypt\n– libmongocrypt not found. 
Configuring without Client-Side Field Level Encryption support.\n– Performing Test MONGOC_HAVE_SS_FAMILY\n– Performing Test MONGOC_HAVE_SS_FAMILY - Success\n– Compiling against OpenSSL\n– SASL disabled\nAdding -fPIC to compilation of mongoc_static components\n– Building with MONGODB-AWS auth support\n– Build files generated for:\n– \tbuild system: Unix Makefiles\n– Configuring done\n– Generating done\n– Build files have been written to: /home/chris/mongo/mongo-c-driver-1.17.2/cmake-buildcd …/…/mongo-cxx-driver-r3.6.1/build\nchris@nuc8i7:~/mongo/mongo-cxx-driver-r3.6.1/build$ cmake … -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/mongo/mongo-c-driver-1.17.2/cmake-build -DCMAKE_INSTALL_PREFIX=~/mongo/mongo-c-driver-1.17.2/cmake-build\n– The CXX compiler identification is GNU 8.3.0\n– Check for working CXX compiler: /usr/bin/c++\n– Check for working CXX compiler: /usr/bin/c++ – works\n– Detecting CXX compiler ABI info\n– Detecting CXX compiler ABI info - done\n– Detecting CXX compile features\n– Detecting CXX compile features - done\n– The C compiler identification is GNU 8.3.0\n– Check for working C compiler: /usr/bin/cc\n– Check for working C compiler: /usr/bin/cc – works\n– Detecting C compiler ABI info\n– Detecting C compiler ABI info - done\n– Detecting C compile features\n– Detecting C compile features - done\n– Auto-configuring bsoncxx to use MNMLSTC for polyfills since C++17 is inactive\nbsoncxx version: 3.6.1\nCMake Error at src/bsoncxx/CMakeLists.txt:113 (find_package):\nBy not providing “Findlibbson-1.0.cmake” in CMAKE_MODULE_PATH this project\nhas asked CMake to find a package configuration file provided by\n“libbson-1.0”, but CMake did not find one.Could not find a package configuration file provided by “libbson-1.0”\n(requested version 1.13.0) with any of the following names:Add the installation prefix of “libbson-1.0” to CMAKE_PREFIX_PATH or set\n“libbson-1.0_DIR” to a directory containing one of the above files. If\n“libbson-1.0” provides a separate development package or SDK, be sure it\nhas been installed.– Configuring incomplete, errors occurred!\nSee also “/home/chris/mongo/mongo-cxx-driver-r3.6.1/build/CMakeFiles/CMakeOutput.log”.",
"username": "chris_d"
},
{
"code": "/usr/localInstall the project...\n-- Install configuration: \"RelWithDebInfo\"\n-- Installing: /usr/local/share/mongo-c-driver/COPYING\n-- Installing: /usr/local/share/mongo-c-driver/NEWS\n-DCMAKE_PREFIX_PATH=-DCMAKE_INSTALL_PREFIX=~/mongo/mongo-c-driver-1.17.2/cmake-build/usr/local/-DCMAKE_PREFIX_PATH=/usr/local//usr/local/lib",
"text": "You only provided the output for the CMake configure commands. Without the output of the C driver installation it is still not possible to determine the precise issue. However, on my own system, the installation goes into /usr/local by default:Can you confirm that to the case? If it is, then the -DCMAKE_PREFIX_PATH= option is unneeded. Additionally, specifying -DCMAKE_INSTALL_PREFIX=~/mongo/mongo-c-driver-1.17.2/cmake-build tells CMake you want the C++ driver to be installed to the same directory where you built the C driver. You probably do not want this.In any event, if your CMake defaults to installing somewhere other than /usr/local/ you will need to adjust the -DCMAKE_PREFIX_PATH= option to the C++ driver build to match. If it is installing to /usr/local/, then you may have some other environmental issue that is causing CMake to no longer search /usr/local/lib as it should by default.Feel free to provide the output of the C driver installation step if you still need assistance to determine where the components are being installed.",
"username": "Roberto_Sanchez"
},
{
"code": "cd mongo-c-driver-1.17.2\nmkdir cmake-build\ncd cmake-build\ncmake .. -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF -DCMAKE_INSTALL_PREFIX=~/mongo/mongoc\n-- The C compiler identification is ;GNU 4.8.5\n-- Check for working C compiler: /bin/cc\n-- Check for working C compiler: /bin/cc -- works\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- No CMAKE_BUILD_TYPE selected, defaulting to RelWithDebInfo\n-- Build and install static libraries\n-- Check if the system is big endian\n-- Searching 16 bit integer\n-- Looking for sys/types.h\n-- Looking for sys/types.h - found\n-- Looking for stdint.h\n-- Looking for stdint.h - found\n-- Looking for stddef.h\n-- Looking for stddef.h - found\n-- Check size of unsigned short\n-- Check size of unsigned short - done\n-- Using unsigned short\n-- Check if the system is big endian - little endian\n-- Looking for snprintf\n-- Looking for snprintf - found\n-- Looking for reallocf\n-- Looking for reallocf - not found\n-- Performing Test BSON_HAVE_TIMESPEC\n-- Performing Test BSON_HAVE_TIMESPEC - Success\n-- struct timespec found\n-- Looking for gmtime_r\n-- Looking for gmtime_r - found\n-- Looking for rand_r\n-- Looking for rand_r - found\n-- Looking for strings.h\n-- Looking for strings.h - found\n-- Looking for strlcpy\n-- Looking for strlcpy - not found\n-- Looking for clock_gettime\n-- Looking for clock_gettime - found\n-- Looking for strnlen\n-- Looking for strnlen - found\n-- Looking for stdbool.h\n-- Looking for stdbool.h - found\n-- Looking for SYS_gettid\n-- Looking for SYS_gettid - found\n-- Looking for syscall\n-- Looking for syscall - found\n-- Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH\n-- Performing Test HAVE_ATOMIC_32_ADD_AND_FETCH - Success\n-- Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH\n-- Performing Test HAVE_ATOMIC_64_ADD_AND_FETCH - Success\n-- Looking for pthread.h\n-- Looking for pthread.h - found\n-- Looking for pthread_create\n-- Looking for pthread_create - not found\n-- Check if compiler accepts -pthread\n-- Check if compiler accepts -pthread - yes\n-- Found Threads: TRUE\n-- Searching for zlib CMake packages\n-- Found ZLIB: /usr/lib64/libz.so (found version \"1.2.7\")\n-- Looking for include file unistd.h\n-- Looking for include file unistd.h - found\n-- Looking for include file stdarg.h\n-- Looking for include file stdarg.h - found\n-- Searching for compression library zstd\n-- Found PkgConfig: /bin/pkg-config (found version \"0.27.1\")\n-- Checking for module 'libzstd'\n-- No package 'libzstd' found\n-- Not found\n-- Found OpenSSL: /usr/lib64/libcrypto.so (found version \"1.0.2k\")\n-- Looking for ASN1_STRING_get0_data in /usr/lib64/libcrypto.so\n-- Looking for ASN1_STRING_get0_data in /usr/lib64/libcrypto.so - not found\n-- Searching for sasl/sasl.h\n-- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/sasl/include for SASL support)\n-- Searching for libsasl2\n-- Not found (specify -DCMAKE_LIBRARY_PATH=/path/to/sasl/lib for SASL support)\n-- Check size of socklen_t\n-- Check size of socklen_t - done\n-- Looking for res_nsearch\n-- Looking for res_nsearch - found\n-- Looking for res_ndestroy\n-- Looking for res_ndestroy - not found\n-- Looking for res_nclose\n-- Looking for res_nclose - found\n-- Looking for sched_getcpu\n-- Looking for sched_getcpu - not found\n-- Detected parameters: accept (int, struct sockaddr *, socklen_t *)\n-- Searching for compression library header snappy-c.h\n-- Not found (specify 
-DCMAKE_INCLUDE_PATH=/path/to/snappy/include for Snappy compression)\n-- No ICU library found, SASLPrep disabled for SCRAM-SHA-256 authentication.\n-- If ICU is installed in a non-standard directory, define ICU_ROOT as the ICU installation path.\n-- libmongocrypt not found. Configuring without Client-Side Field Level Encryption support.\n-- Performing Test MONGOC_HAVE_SS_FAMILY\n-- Performing Test MONGOC_HAVE_SS_FAMILY - Success\n-- Compiling against OpenSSL\n-- SASL disabled\n-- Building with MONGODB-AWS auth support\n-- Build files generated for:\n-- build system: Unix Makefiles\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/ux1/net/mongo/mongo-c-driver-1.17.2/cmake-build\nmake install\n[ 6%] Built target bson_shared\n[ 6%] Built target bcon-speed\n[ 11%] Built target bson_static\n[ 12%] Built target json-to-bson\n[ 12%] Built target bson-streaming-reader\n[ 13%] Built target bson-to-json\n[ 13%] Built target bson-metrics\n[ 13%] Built target bson-validate\n[ 14%] Built target bcon-col-view\n[ 15%] Built target bson-check-depth\n[ 39%] Built target mongoc_shared\n[ 40%] Built target common-operations\n[ 40%] Built target bulk6\n[ 40%] Built target bulk4\n[ 40%] Built target bulk3\n[ 40%] Built target bulk1\n[ 41%] Built target bulk-collation\n[ 41%] Built target basic-aggregation\n[ 42%] Built target bulk2\n[ 42%] Built target mongoc-dump\n[ 42%] Built target hello_mongoc\n[ 43%] Built target find-and-modify\n[ 43%] Built target example-resume\n[ 66%] Built target mongoc_static\n[ 66%] Built target test-mongoc-cache\n[ 66%] Built target aggregation1\n[ 67%] Built target test-mongoc-gssapi\n[ 68%] Built target example-scram\n[ 69%] Built target mongoc-stat\n[ 70%] Built target example-command-monitoring\n[ 70%] Built target example-collection-watch\n[ 70%] Built target fam\n[ 95%] Built target test-libmongoc\n[ 95%] Built target mongoc-ping\n[ 95%] Built target example-sdam-monitoring\n[ 95%] Built target example-command-with-opts\n[ 96%] Built target bulk5\n[ 97%] Built target example-create-indexes\n[ 97%] Built target mongoc-tail\n[ 97%] Built target example-pool\n[ 97%] Built target example-start-at-optime\n[ 97%] Built target example-gridfs\n[ 98%] Built target example-session\n[ 98%] Built target example-client\n[ 99%] Built target example-gridfs-bucket\n[100%] Built target example-transaction\n[100%] Built target example-update\nInstall the project...\n-- Install configuration: \"RelWithDebInfo\"\n-- Installing: /home/ux1/net/mongo/mongoc/share/mongo-c-driver/COPYING\n-- Installing: /home/ux1/net/mongo/mongoc/share/mongo-c-driver/NEWS\n-- Installing: /home/ux1/net/mongo/mongoc/share/mongo-c-driver/README.rst\n-- Installing: /home/ux1/net/mongo/mongoc/share/mongo-c-driver/THIRD_PARTY_NOTICES\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libbson-1.0.so.0.0.0\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libbson-1.0.so.0\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libbson-1.0.so\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libbson-static-1.0.a\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-config.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-version.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bcon.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-atomic.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-clock.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-compat.h\n-- 
Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-context.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-decimal128.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-endian.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-error.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-iter.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-json.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-keys.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-macros.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-md5.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-memory.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-oid.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-prelude.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-reader.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-string.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-types.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-utf8.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-value.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-version-functions.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson/bson-writer.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libbson-1.0/bson.h\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/pkgconfig/libbson-1.0.pc\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/pkgconfig/libbson-static-1.0.pc\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/bson-1.0/bson-targets.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/bson-1.0/bson-targets-relwithdebinfo.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/bson-1.0/bson-1.0-config.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/bson-1.0/bson-1.0-config-version.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libbson-1.0/libbson-1.0-config.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libbson-1.0/libbson-1.0-config-version.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libbson-static-1.0/libbson-static-1.0-config.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libbson-static-1.0/libbson-static-1.0-config-version.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/bin/mongoc-stat\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libmongoc-1.0.so.0.0.0\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libmongoc-1.0.so.0\n-- Set runtime path of \"/home/ux1/net/mongo/mongoc/lib64/libmongoc-1.0.so.0.0.0\" to \"\"\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libmongoc-1.0.so\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/libmongoc-static-1.0.a\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-config.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-version.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-apm.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-bulk-operation.h\n-- Installing: 
/home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-change-stream.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-client.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-client-pool.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-client-side-encryption.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-collection.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-cursor.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-database.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-error.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-flags.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-find-and-modify.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-gridfs.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-gridfs-bucket.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-gridfs-file.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-page.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-gridfs-file-list.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-handshake.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-host-list.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-init.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-index.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-iovec.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-log.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-macros.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-matcher.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-opcode.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-prelude.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-read-concern.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-read-prefs.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-server-description.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-client-session.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-socket.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream-tls-libressl.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream-tls-openssl.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream-buffered.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream-file.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream-gridfs.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream-socket.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-topology-description.h\n-- Installing: 
/home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-uri.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-version-functions.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-write-concern.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-rand.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-stream-tls.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc/mongoc-ssl.h\n-- Installing: /home/ux1/net/mongo/mongoc/include/libmongoc-1.0/mongoc.h\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/pkgconfig/libmongoc-1.0.pc\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/pkgconfig/libmongoc-static-1.0.pc\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/pkgconfig/libmongoc-ssl-1.0.pc\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/mongoc-1.0/mongoc-targets.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/mongoc-1.0/mongoc-targets-relwithdebinfo.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/mongoc-1.0/mongoc-1.0-config.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/mongoc-1.0/mongoc-1.0-config-version.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libmongoc-1.0/libmongoc-1.0-config.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libmongoc-1.0/libmongoc-1.0-config-version.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/lib64/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config-version.cmake\n-- Installing: /home/ux1/net/mongo/mongoc/share/mongo-c-driver/uninstall.sh\n",
"text": "",
"username": "chris_d"
},
{
"code": "",
"text": "the C compile and install is above",
"username": "chris_d"
},
{
"code": "-DCMAKE_PREFIX_PATH=~/mongo/mongoc",
"text": "Based on the additional output, you need to pass -DCMAKE_PREFIX_PATH=~/mongo/mongoc to the C++ Driver CMake command.",
"username": "Roberto_Sanchez"
},
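Putting that together, the reconfigure step would look roughly like the following; the C++ driver install prefix shown is an illustrative choice, not a required path:

```sh
# Point the C++ driver build at the C driver's install prefix from the log above.
# Use $HOME rather than ~ so the shell expands it inside the -D options.
cd ~/mongo/mongo-cxx-driver-r3.6.1/build
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_PREFIX_PATH="$HOME/mongo/mongoc" \
         -DCMAKE_INSTALL_PREFIX="$HOME/mongo/mongocxx"   # illustrative install dir
make && make install
```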
{
"code": "",
"text": "it’s attempting to download EP_mnmlstc_core, but Ithe machine I am building it on doesn’t have internet access. Why does it need EP_mnmlstc_core, and how can I get it so I can copy it across manually?",
"username": "chris_d"
},
{
"code": "-DBSONCXX_POLY_USE_BOOST=1",
"text": "I am not sure why your posts are delayed in becoming visible. As far as the MNMLSTC project, it provides the c++17 polyfill. You can choose from other options (“Step 2” in the installation guide. For reference, the Debian and Ubuntu packages use the -DBSONCXX_POLY_USE_BOOST=1 option to avoid triggering the MNMLSTC download during the build.",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "If you keep deleting your posts it will make resolving the problem more difficult. That said, based on the GCC 4.8.5 version, it looks like you are running on RHEL 7.0 (or a clone). Where did you obtain the Boost 1.58 packages? Can you provide the source so that I can try to replicate the failure you encountered?",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "Hi, I deleted the post because I resolved the issue by upgrading to the latest boost, however I can confirm that boost 1.58 doesn’t work, I got it from Boost Version History",
"username": "chris_d"
},
{
"code": "",
"text": "The mongo-cxx-driver-r3.6.1 examples fail to compile:/mongo-cxx-driver-r3.6.1/examples/projects/mongocxx/cmake/static$ build.shCMake Error at CMakeLists.txt:74 (message):\nExpected BSONCXX_STATIC to be defined– Configuring incomplete, errors occurred!",
"username": "chris_d"
},
{
"code": "",
"text": "OK. Thanks for the information. I will investigate and try to replicate the error.",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "what should BSONCXX_STATIC be set to?",
"username": "chris_d"
},
{
"code": "",
"text": "The examples are massively confusing - they have deep nested directory structure with multiple cmake files, and there is no README on how to build them. Is it possible to provide a hello world with a makefile?",
"username": "chris_d"
},
{
"code": "",
"text": "Any chance you got it working? Im running into the same issue and i dont really know where to go from here.",
"username": "Luke_Colias"
},
{
"code": "get-started-cxx",
"text": "Hi @Luke_Colias,I noticed you started a topic with details specific to your install challenges: C++ Driver libbson-1.0 build error on CentOS 8. If you can provide the additional details requested on that topic (and also include your specific O/S version), someone may be able to provide suggestions relevant to your environment.There’s a related discussion topic with some suggestions which might be helpful for you: Simple c++ example - #2 by wan.@wan created a Docker image which sets up a working development environment including a code example:Please take a look at get-started-cxx repository for a standalone example. It contains a simple example of connecting to MongoDB. This repository is part of the Get-Started project, see also get-started-readme for more information.Even if you don’t intend to use Docker, it can be useful to have a working environment to compare against one you are trying to configure. The get-started-cxx Docker image uses Ubuntu 20.04.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I am not able to use BOOST since mongo only works with the latest version of boost which clashes with our existing version.\nTherefore I need cmake to work with a local version of EP_mnmlstc_core. Please can you advise the changes I need to make to the cmake files to pick up a local version rather than download during cmake.\nAlso, where can I download EP_mnmlstc_core?",
"username": "chris_d"
},
{
"code": "",
"text": "Thank you for the reply, I’m using centOS 8 as my operating system which is a red-hat variant I believe. I’ll take a look at this post though and try to compare his configuration to mine still though!",
"username": "Luke_Colias"
}
] | Compiling mongo-cxx-driver-r3.6.1 | 2020-11-27T18:35:05.192Z | Compiling mongo-cxx-driver-r3.6.1 | 9,184 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "I would like to query the Realm server for users, similar to the Realm Studio after you connect to server and when you press button “Users” at the top.How can I get the same data?\nProviderID\nUser ID\nRole\nRealms\nStatusRegards",
"username": "Per_Eriksson"
},
{
"code": "",
"text": "Which version of the SDK are you using? And which server are you talking about - is this the legacy cloud (cloud.realm.io) or MongoDB Realm (realm.mongodb.com)?",
"username": "nirinchev"
},
{
"code": "",
"text": "This is the legacy, cloud.realm.io.\nI’m using .net nuget-package:\nRealm 5.0.1",
"username": "Per_Eriksson"
},
{
"code": "/__admin",
"text": "Studio reads the /__admin Realm to extract that information. It is a system Realm, so you’ll need an admin user to inspect it. It’s also highly recommended that you don’t make any modifications to it, or you can prevent the server from starting. To inspect the schema and/or export model classes, you can open it in Studio by unchecking the “Hide system Realms” option.",
"username": "nirinchev"
},
{
"code": "",
"text": "How would I query the list of Users using the new MongoDB Realm solution?",
"username": "Daniel_Smith"
},
{
"code": "",
"text": "This is not possible with MongoDB Realm. If you do need to expose a list of users, what you can do is register an authentication trigger and create a document representing the user.",
"username": "nirinchev"
},
{
"code": "",
"text": "@Daniel_Smith There is an admin API you can leverage to list all the users -\nhttps://docs.mongodb.com/realm/admin/api/v3#users-apisBut depending on what you want to do with it, ie. if you want to display users to another user then you should create a collection in MongoDB for your client SDKs to query as Nikola described",
"username": "Ian_Ward"
},
{
"code": "",
"text": "We let our Realm-users register with phone number and password.\nSo phone number would be stored in “providerid” in Account in __admin.\nAnd id of realm would be “userid” in User in __admin?\nIf above is the correct way to do it, I’m not sure how the relationships are in realm __adminWe would simply like to query Realm somehow to find out which realm a specific user has as his/her personal realm.So input would be: 004627292023\noutput would be:\n239h282-2d232323-d23d2… (uuid for the user)\nWith the uuid we can find the correct realm to open and look for more data related to the user.",
"username": "Per_Eriksson"
},
{
"code": "",
"text": "I have a couple of use cases:When a user attempts to login - the SDK response does not seem to distinguish between User not found and Invalid password so ideally if my app knows the user does not exist, I can navigate to the Sign Up flow or Incorrect password flowI want to use SMS code for Forgot Password functionality - so the user supplies email address during registration so they also supply email to initiate the forgot password flow…I need to lookup the user and get phone number to send SMS code for resetI have used stored phone# and redundantly stored email address as User Custom Data to solve #2 and it is working. I solve #1 by calling a function if login fails and looking up user by email address against User Custom Data if I find match then i assume it was invalid password.So I have workarounds but does not “feel” very clean.",
"username": "Daniel_Smith"
},
{
"code": "",
"text": "The reason 1. behaves as it does is to avoid leaking user registration data to potential attackers. Differentiating between an account does not exist and account exists, but password is incorrect is valuable information for a hacker who can try to enumerate registered accounts using data dumps, then try to brute force well known passwords against these accounts. It’s not a major roadblock by any means but it’s making their job a little bit harder.The workarounds are reasonable, although you have to weigh the risks vs the benefits of the UX you choose for sign up.",
"username": "nirinchev"
}
] | List Realm users | 2020-11-11T20:58:57.624Z | List Realm users | 3,425 |
[
"app-services-cli",
"app-services-hosting"
] | [
{
"code": "failed to create draft for import: error: upstream request timeout\nrealm-cli import --strategy=replace-by-name --include-hosting --yes\nCreating draft for app...\nDiscarding existing draft...\nfailed to create draft for import: error: upstream request timeout\n",
"text": "The initial import of my hosting files usually works. when I deploy it a second time I usually receive this message.I can usually delete all the files through the web UI and then run the deploy and it usually works.\nThe terminal command and response is:I am on the free tier to see if it works before buying and I am not exceeding the limits of hosting\nmongodb realm hosting file sizes778×517 61.6 KB\nAny suggestions appreciated.",
"username": "Bentley_Davis"
},
{
"code": "",
"text": "I may have figured it out but I would like confirmation if anyone has the time to comment. Is the static storage stored on the cluster? Being on the free tier the Atlas M0 cluster is just 512mb which may cause issues with files smaller than the limits on the Realm static hosting. I would have though it would be stored on another storage system like the underlying S3 if on Amazon.",
"username": "Bentley_Davis"
}
] | Failed to create draft for import: error: upstream request timeout - Realm Hosting | 2020-12-06T21:51:19.894Z | Failed to create draft for import: error: upstream request timeout - Realm Hosting | 4,304 |
|
null | [
"installation"
] | [
{
"code": "",
"text": "Today I am trying to sudo yum install mongodb-org-shell-4.4.2-1.el6.x86_64.rpm\nand getting the error:Error: Package: mongodb-org-shell-4.4.2-1.el6.x86_64 (/mongodb-org-shell-4.4.2-1.el6.x86_64)\nRequires: liblzma.so.0()(64bit)But when I try sudo yum install xz-libs\nI get the message:Package xz-libs-5.2.2-1.el7.x86_64 already installed and latest version\nNothing to doSo it seems like I am stuck?",
"username": "Dale_Wyttenbach"
},
{
"code": "yum install xz-devel",
"text": "Try yum install xz-devel",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "From the package names it looks like you are installing RedHat 6 version of mongodb-org-shell on RedHat 7",
"username": "chris"
},
{
"code": "",
"text": "Thank you, that was it!",
"username": "Dale_Wyttenbach"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Shared library issue on RedHat: liblzma.so.0 | 2020-12-03T17:25:56.597Z | Shared library issue on RedHat: liblzma.so.0 | 4,223 |
[] | [
{
"code": "",
"text": "Hi team,We’ve added some new forum badges for you to add to your collection. One of these (Open Source Contributor 2020) is only available during this month (October 2020), so be sure to get your PRs in! Open Source Contributor 2020 O-FISH ContributorTo earn these badges, share your URL to your PRs in the thread below. We’d love it if you can tell us about the project you worked on too!Check out the available badges, plus their requirements and descriptions, on the Badges page.",
"username": "Jamie"
},
{
"code": "",
"text": "Hi Jamie!I worked on the O-FISH application!My PR is at Fixes #248 by Sheeri · Pull Request #249 · WildAid/o-fish-web · GitHubI added helper text to the sandbox page so that developers creating a new user to use on the sandbox, knew what was required and suggested values to use.I also wanted to work on another project that wasn’t one I was familiar with, and I did a test PR here to test their autobot responder - Test the change to the autoresponder by Sheeri · Pull Request #65 · DoobDev/Doob · GitHub (it wasn’t meaningful from a code standpoint but they needed someone who had never made a PR, to make a PR).",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Here is a public repository for a super simple chat program for MongoDB RealmSuper Simple Chat app for MongoDB Realm. Contribute to Cosync/SuperSimpleChat development by creating an account on GitHub.Here is the medium article that explains how to set it upI have been a Realm Cloud developer since early 2018, shortly after the Realm company introduced its Realm Cloud upgrade to its native…\nReading time: 12 min read\nRichard Krueger",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "Thanks @Richard_Krueger! Congrats on your badge & thanks for sharing both the repo & article. ",
"username": "Jamie"
},
{
"code": "",
"text": "Thanks for starting us off, @Sheeri_Cabral!!",
"username": "Jamie"
},
{
"code": "WildAid:mainevayde:fix_issue_137",
"text": "Hello there,I have been working on the O-Fish web application and added the ability to filter for violations.## Related Issue\nFixes #137 \n\n## Checklist:\n- [x] I have read the [contribut…or's guide](https://wildaid.github.io/contribute/index.html).\n- [x] I linked an issue in the previous section\n- [x] I have commented on the linked issue\n- [x] I was assigned the linked issue (not required)\n- [x] I have tested the change to the best of my ability against the [sandbox](https://wildaid.github.io/contribute/sandbox.html) or a [local build](https://wildaid.github.io/build).\n\nOptional items:\n- [ ] My change adds new text and requires a change to translations.\n- [ ] My change requires a change to the documentation.\n- [ ] I have submitted a PR to the [documentation repo](https://github.com/WildAid/wildaid.github.io).\n- [ ] I was not able to test... (explain below, e.g. you did not have permissions to test a specific feature)\n- [ ] This change depends O-FISH Realm repository changes (explain below)\n\n* **Optional: Add any explanations here** \n1. This change introduces a new Filter for violations\n1. The filter will be removed from the UI when refreshing the page, while maintaining the filtered results (see comments in #137 )I have learned something about MongoDB and Realm, which I am very grateful for. Also, I should mention that Adrienne Tacke brought my attention to this project with her weekly posts.You have a great team. I like them a lot.Greetings from Germany.",
"username": "evayde"
},
{
"code": "",
"text": "Thanks @evayde (and @yo_adrienne!).",
"username": "Jamie"
},
{
"code": "",
"text": "Hello!I have really been enjoying working on this project. This has been my first experience contributing to open source and everyone has been so kind and helpful, especially Sheeri in the issue comments and at office hours. So, thank you all for such an awesome intro! I’m excited to dig into more issues.I enabled date filtering on the agencies dashboard for the global admin so that the number of boardings and violations listed for each agency reflects the dates selected in the date picker.",
"username": "Susan_Holland"
},
{
"code": "mongoshJamesKovacs/zsh_completions_mongodbmlaunchrueckstiess/mtoolsmlauncho-fish-web",
"text": "G’day all!I try to contribute to open source projects that I use and am also a maintainer for a few (for those interested, see 🌱 G'day, I'm Stennie from MongoDB 🇦🇺).My contributions so far this month include:Add support for mongosh and new command line options in MongoDB 4.4 (JamesKovacs/zsh_completions_mongodb). You can read more about this at Zsh completions for MongoDB server, database tools, and new MongoDB shell. These completion shortcuts are very handy for my personal productivity. mlaunch: add TLS aliases for SSL options (rueckstiess/mtools). I use mlaunch to quickly stand up local test clusters. I’ve been the release manager for the mtools project since 2016, but I always ask other collaborators to review any significant changes.I started looking into the O-FISH project and am loving the info sessions, weekly weigh-ins, and excellent support for Hacktoberfest from @Sheeri_Cabral and @Andrew_Morgan.I found a few minor things to improve while getting my development environment set up to contribute to o-fish-web, so reported these issues and proactively submitted PRs:I’ve volunteered for an open O-FISH issue that I can test with the handy O-FISH development sandbox and aim to have a few more open source contributions before the end of the month.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello!I worked on the O-FISH application. This has been my first experience contributing to open source and really grateful to work on this project.I updated some UI elements to match the mocks drawn up for the siteMy PR is at Boardings filter by crowtech7 · Pull Request #281 · WildAid/o-fish-web · GitHubThanks!",
"username": "Talin_Crow"
},
{
"code": "",
"text": "Hello!\nI worked on o-fish-web application.\nI have added global admin and agency admin default homepage route\npull requests :\nhttps://github.com/WildAid/o-fish-web/pull/251\nhttps://github.com/WildAid/o-fish-web/pull/253Regards,\nRitik",
"username": "Ritik_Pandey"
},
{
"code": "",
"text": "Greetings!I worked on a the O-FISH (web ver.) application.The PR I did is located here https://github.com/WildAid/o-fish-web/pull/258I removed the Boarding Information header and filter options from the boarding component.Thank you to @Sheeri_Cabral for her quick response and reminding me to post here!Cheers,\nKen",
"username": "Kenneth_Charette"
},
{
"code": "WildAid:mainbladebunny:347-darkmode-colorsWildAid:mainbladebunny:339-loc-script",
"text": "Hi all,I worked on the o-fish iOS app. I worked on adding the initial dark mode support.## Related Issue\n\n\n\nFixes #347 \nThis is a partial fix to add color themes. … Naming is a bit cumbersome (lightTheme/DarkTheme) but is necessary to disambiguate system light and dark named colors.\n\n## Checklist:\n\n- [x] I have read the [contributor's guide](https://wildaid.github.io/contribute/index.html).\n- [x] I linked an issue in the previous section\n- [x] I have commented on the linked issue\n- [x] I was assigned the linked issue (not required)\n- [x] I have tested the change to the best of my ability against the [sandbox](https://wildaid.github.io/contribute/sandbox.html) or a [local build](https://wildaid.github.io/build).\n\nOptional items:\n\n- [ ] My change adds new text and requires a change to translations.\n- [ ] My change requires a change to the documentation.\n- [ ] I have submitted a PR to the [documentation repo](https://github.com/WildAid/wildaid.github.io).\n- [ ] I was not able to test... (explain below, e.g. you did not have permissions to test a specific feature)\n- [ ] This change depends O-FISH Realm repository changes (explain below)\n- [ ] This change depends O-FISH Web repository changes (explain below)\n\n* **Optional: Add any explanations here** \nThis is the first step toward adding light/dark mode. Breaking up changes into smaller PRs.\n\n* **Optional: Add any relevant screenshots here**I’m also working on adding a script on the iOS project to help discover unlocalized strings in the app.## Related Issue\n\n\n\nFixes #339\n\n## Checklist:\n\n- [x] I have read the [co…ntributor's guide](https://wildaid.github.io/contribute/index.html).\n- [x] I linked an issue in the previous section\n- [x] I have commented on the linked issue\n- [ ] I was assigned the linked issue (not required)\n- [x] I have tested the change to the best of my ability against the [sandbox](https://wildaid.github.io/contribute/sandbox.html) or a [local build](https://wildaid.github.io/build).\n\nOptional items:\n\n- [ ] My change adds new text and requires a change to translations.\n- [x] My change requires a change to the documentation.\n- [ ] I have submitted a PR to the [documentation repo](https://github.com/WildAid/wildaid.github.io).\n- [ ] I was not able to test... (explain below, e.g. you did not have permissions to test a specific feature)\n- [ ] This change depends O-FISH Realm repository changes (explain below)\n- [ ] This change depends O-FISH Web repository changes (explain below)\n\n* **Optional: Add any explanations here** \nAdded REAME to Scripts directory with instructions and description.\n\n\n* **Optional: Add any relevant screenshots here**Cheers,\nTim",
"username": "Tim_Brooks"
},
{
"code": "",
"text": "Hi. I contributed to the O-FISH project. Here is a link to my PR https://github.com/WildAid/o-fish-android/pull/402#issuecomment-712333761",
"username": "Napoleon_Salazar"
},
{
"code": "WildAid:mainsunny52525:notes-page-dark-modeWildAid:mainsunny52525:basic-info-darkmode",
"text": "Hello Jamie!\nI worked on O-Fish Android app.\nHere is the link to my PR:## Related Issue\n\n\n\nFixes #386 \n\n## Checklist:\n\n- [x ] I have read the […contributor's guide](https://wildaid.github.io/contribute/index.html).\n- [x] I linked an issue in the previous section\n- [x] I have commented on the linked issue\n- [x] I was assigned the linked issue (not required)\n- [x] I have tested the change to the best of my ability against the [sandbox](https://wildaid.github.io/contribute/sandbox.html) or a [local build](https://wildaid.github.io/build).\n\nOptional items:\n\n- [ ] My change adds new text and requires a change to translations.\n- [ ] My change requires a change to the documentation.\n- [ ] I have submitted a PR to the [documentation repo](https://github.com/WildAid/wildaid.github.io).\n- [ ] I was not able to test... (explain below, e.g. you did not have permissions to test a specific feature)\n- [ ] This change depends O-FISH Realm repository changes (explain below)\n- [ ] This change depends O-FISH Web repository changes (explain below)\n\n* **Optional: Add any explanations here** \nI added new colors.xml which is used when system theme is changed to dark. Which means not only notes page is changed but other pages too. It will be easier to theme other section.\n\n\n* **Optional: Add any relevant screenshots here** \n## Related Issue\n#378 \nFixes #378 \n\n## Checklist:\n\n- [x] I have read the […contributor's guide](https://wildaid.github.io/contribute/index.html).\n- [x] I linked an issue in the previous section\n- [ ] I have commented on the linked issue\n- [ ] I was assigned the linked issue (not required)\n- [x] I have tested the change to the best of my ability against the [sandbox](https://wildaid.github.io/contribute/sandbox.html) or a [local build](https://wildaid.github.io/build).\n\nOptional items:\n\n- [ ] My change adds new text and requires a change to translations.\n- [ ] My change requires a change to the documentation.\n- [ ] I have submitted a PR to the [documentation repo](https://github.com/WildAid/wildaid.github.io).\n- [ ] I was not able to test... (explain below, e.g. you did not have permissions to test a specific feature)\n- [ ] This change depends O-FISH Realm repository changes (explain below)\n- [ ] This change depends O-FISH Web repository changes (explain below)\n\n\n* **Optional: Add any relevant screenshots here** \n\n\nScreenshot in original resolution\n[light](https://drive.google.com/file/d/1DYbRgOQqMuwBwjsQxXBpkEXpFPqyO38H/view?usp=drivesdk)\n[dark](https://drive.google.com/file/d/1DaB_g97CmURLzA-H3LW1bAuDPlGvAwbp/view?usp=drivesdk)",
"username": "sunny52525"
},
{
"code": "",
"text": "Hi @Susan_Holland, welcome to the open source world and thanks so much for your contributions to the O-FISH project!",
"username": "Jamie"
},
{
"code": "",
"text": "Great post, @Stennie_X! Love seeing all your contributions ",
"username": "Jamie"
},
{
"code": "",
"text": "Welcome @Talin_Crow!! Thanks for your contributions to the project!",
"username": "Jamie"
},
{
"code": "",
"text": "Welcome @Ritik_Pandey! Thanks for your work on O-FISH!",
"username": "Jamie"
},
{
"code": "",
"text": "Welcome @Kenneth_Charette! Thanks for your contributions to the O-FISH project!",
"username": "Jamie"
}
] | Open Source Contributors, Pick Up Your Badges Here! | 2020-10-09T07:28:03.513Z | Open Source Contributors, Pick Up Your Badges Here! | 12,090 |
|
null | [] | [
{
"code": "",
"text": "I’m on chapter 1 section: Mongod options.\nI tried to run the command I following command:\nmongod --help\nbut I got an error: 2020-12-01T18:35:28.056-0600 E QUERY [js] SyntaxError: missing ; before statement @(shell):1:9Am I missing a package or I need to configure something here?\nThanks",
"username": "Alpha_Ly"
},
{
"code": "",
"text": "Most likely you are at mongo prompt\nPlease exit and run the command from your os promptC:\\Users\\xyz> mongod --help—>on Windows systems",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "What is the xyz standing for?",
"username": "Alpha_Ly"
},
{
"code": "",
"text": "Your windows username. If your Windows username is Alpha then it will show C:\\Users\\Alpha>Go to windows search then enter “cmd” without quotes you will find it.",
"username": "Shashank_Jadon"
},
{
"code": "",
"text": "Hi @Alpha_Ly,I hope your doubts are resolved now.Just an additional reading if in case you are interested : mongod. We cover more about this topic in our M103 course.",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Chapter 1: mongod options | 2020-12-02T00:48:44.872Z | Chapter 1: mongod options | 2,468 |
null | [
"cxx"
] | [
{
"code": "",
"text": "Hello everyone, I have just started working on mongocxx driver and I need some help using the gridfs to upload files into and download files from the database. It will be great if someone can provide some sample codes for reference.",
"username": "Red_reindeer"
},
{
"code": "",
"text": "https://developer.aliyun.com/article/397577",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "http://mongocxx.org/api/current/classmongocxx_1_1gridfs_1_1bucket.html",
"username": "Jack_Woehr"
},
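Drawing on the bucket API documentation linked above, here is a minimal upload/download sketch; the connection URI, database name, and file paths are assumptions for the example, and error handling is omitted:

```cpp
#include <fstream>

#include <mongocxx/client.hpp>
#include <mongocxx/gridfs/bucket.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

int main() {
    mongocxx::instance inst{};  // one driver instance per process
    mongocxx::client client{mongocxx::uri{"mongodb://localhost:27017"}};
    auto bucket = client["mydb"].gridfs_bucket();

    // Upload: stream a local file into GridFS; the result carries the file id.
    std::ifstream in{"input.bin", std::ios::binary};
    auto result = bucket.upload_from_stream("input.bin", &in);

    // Download: write the stored file back out using that id.
    std::ofstream out{"output.bin", std::ios::binary};
    bucket.download_to_stream(result.id(), &out);
}
```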
{
"code": "",
"text": "Thank you @Jack_Woehr this is helpful.",
"username": "Red_reindeer"
}
] | Code for uploading and downloading using gridfs of mongocxx | 2020-12-04T19:58:41.054Z | Code for uploading and downloading using gridfs of mongocxx | 2,365 |
[
"java"
] | [
{
"code": "public static void main(String[] args) throws Throwable {\n\n\n Micronaut.run(Application.class, args);\n\n MongoClient mongoClient = MongoClients.create();\n MongoDatabase database = mongoClient.getDatabase(\"mybank\");\n MongoCollection<Document> collection = database.getCollection(\"account_collection\");\n FindPublisher<Document> findPublisher = collection.find(eq(\"key\", \"1\"));\n\n System.out.print(findPublisher.first());\n Document doc = new Document(\"name\", \"MongoDB\")\n .append(\"type\", \"database\")\n .append(\"count\", 1)\n .append(\"info\", new Document(\"x\", 203).append(\"y\", 102));\n\n collection.insertOne(doc).subscribe(new SubscriberHelpers.OperationSubscriber<>());\n\n // get it (since it's the only one in there since we dropped the rest earlier on)\n collection.find().first().subscribe(new SubscriberHelpers.PrintDocumentSubscriber());\n\n System.out.print(collection.countDocuments());\n\n // Clean up\n SubscriberHelpers.ObservableSubscriber subscriber = new SubscriberHelpers.ObservableSubscriber<>();\n subscriber = new SubscriberHelpers.PrintSubscriber(\"Collection Dropped\");\n collection.drop().subscribe(subscriber);\n subscriber.await();\n\n // release resources\n mongoClient.close();\n",
"text": "I have used MongoDb for few years successfully by using Spring Data (typical CRUD repositories). Now it is my first timme using MongDb Reactive Stream and, on top of this, my first time not using Spring. I am trying a very simple query and I am completely stuck after two days studing.Here is the full code:package com.example;import com.mongodb.reactivestreams.client.*;\nimport com.sun.net.httpserver.Authenticator;\nimport io.micronaut.runtime.Micronaut;\nimport org.bson.Document;\nimport static com.mongodb.client.model.Filters.eq;public class Application {}and all dependencies in build.gradleplugins {\nid “com.github.johnrengelman.shadow” version “6.1.0”\nid “io.micronaut.application” version ‘1.0.5’\n}version “0.1”\ngroup “com.example”repositories {\nmavenCentral()\njcenter()\n}micronaut {\nruntime “netty”\ntestRuntime “junit5”\nprocessing {\nincremental true\nannotations “com.example.*”\n}\n}dependencies {\nimplementation(“io.micronaut:micronaut-validation”)\nimplementation(“io.micronaut:micronaut-runtime”)\nimplementation(“javax.annotation:javax.annotation-api”)\nimplementation(“io.micronaut:micronaut-http-client”)\nimplementation(“io.micronaut.mongodb:micronaut-mongo-reactive”)\nruntimeOnly(“ch.qos.logback:logback-classic”)\ntestImplementation(“de.flapdoodle.embed:de.flapdoodle.embed.mongo:2.0.1”)\n}mainClassName = “com.example.Application”\njava {\nsourceCompatibility = JavaVersion.toVersion(‘11’)\ntargetCompatibility = JavaVersion.toVersion(‘11’)\n}Well I have tried several approaches after studied carefully an example named Tour provided in official repository. I believe the above code was the closest tentative from myself.When I debug the code above in IntelliJ, it get blocked in “System.out.print(findPublisher.first());” forever. Well, I was expecting to block or wait only until it resolves.When I read document about FindPublisher<>.find it says “Helper to return a publisher limited to the first result”. If I understood correctly, it brings the fist document which matches the criteria from “collection.find(eq(“key”, “1”))”So my main question is: what I am doing wrong or missing in order to retrieve/select/query the document?An additional information, I tried another approach based on Tour example. I copied SubscriberHelpers file to my project and I triedI noticed that the Enum Success isn’t available so I just removed assuming the example isn’t up-to-date with newer version.And here it seems the document was added since I see an automatic _id created but I check in database and it isn’t there at all. Also, subscriber.await() lock the application foreever\ntour1478×988 108 KB\n",
"username": "Jim_C"
},
{
"code": "",
"text": "A bit said, no one replies it",
"username": "Jim_C"
},
{
"code": "ObservableSubscriber<Document> documentSubscriber = new PrintDocumentSubscriber();\ncollection.find().first().subscribe(documentSubscriber);\ndocumentSubscriber.await(); \nObservableSubscriber<InsertOneResult> insertOneSubscriber = new OperationSubscriber<>();\ncollection.insertOne(doc).subscribe(insertOneSubscriber);\ninsertOneSubscriber.await(); \n",
"text": "Hi @Jim_C,So my main question is: what I am doing wrong or missing in order to retrieve/select/query the document?I think you’re on the right track already. As you have mentioned, you can use the example SubscriberHelpers class as a subscriber. Feel free to modify/extend/copy the class to suit your use case.For example, to just find a document and print out, you could do:And here it seems the document was added since I see an automatic _id created but I check in database and it isn’t there at all.Make sure that you’re waiting for the insert operation before the program has exited or the client has been closed. An example:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | How to query documents with com.mongodb.reactivestreams.client | 2020-11-16T01:18:52.613Z | How to query documents with com.mongodb.reactivestreams.client | 5,840 |
|
null | [
"atlas-functions"
] | [
{
"code": "exports = async (username, tokenId, token) => {\n\n console.log(\"sendgrid_api_key\", context.values.get(\"sendgrid_api_key\"));\n\n const body = {\n personalizations: [{ to: [{ email: username }] }],\n from: { email: \"[email protected]\" },\n subject: `Hi ${username} reset email`,\n content: [\n {\n type: \"text/plain\",\n value: \"test\",\n },\n ],\n };\n\n return context.http.post({\n url: \"https://api.sendgrid.com/v3/mail/send\",\n headers: {\n Authorization: [`Bearer ${context.values.get(\"sendgrid_api_key\")}`],\n },\n body: JSON.stringify(body),\n encodeBodyAsJSON: true,\n });\n};\nsendgrid_api_key undefined",
"text": "Hello,\nI have and secret “sendgrid_api_key” defined in Realm UI!\nScreenshot 2020-12-07 at 00.17.35|690x282\nWhen I try to get this value inside the function it returns undefined.Executing this function logs sendgrid_api_key undefined\nWhat am I doing wrong?",
"username": "Stanislaw_Baranski"
},
{
"code": "",
"text": "Ok, nevermind I just found solution in docsYou cannot directly read the value of a Secret after defining it. Instead, you link to the Secret by name in authentication provider and service configurations. If you need to access the Secret from a Function or Rule, you can link the Secret to a Value.",
"username": "Stanislaw_Baranski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Context.values.get for secret return undefined | 2020-12-06T23:22:17.060Z | Context.values.get for secret return undefined | 3,716 |
null | [
"app-services-data-access"
] | [
{
"code": "",
"text": "Hi All,So RealmDB is very type specific Is there a way in the schema to define an either or type\n(a) I have a field that could be either double or int, however it appears I can only have bsonType: double OR bsonType int Thereis reference in the docs to a bsonType: [\"double, “int”] but then it complains the schema is invalid with Sync.?\n(b) similarly having a bsontype object, optional true, does not allow for null so can one have bsontype [object, null], again doing so Sync says it’s invalid ?[Run Valiation] only tests against the first 1,000 records. How can I run validation against the entire collection 178,000 records. I have tried upping the value to 4000 and then get a timeout error?I have a field where the key:value, key is a unique variable string , value always an integer\nso something:\nuserLogins: [\nname: 1,\nnameB, 8,\nnameC: 12\n]\nHow does one write the schema for a variable key?Thanks in advance",
"username": "Barry_Fawthrop"
},
{
"code": "",
"text": "did you got a way to solve?\ni’m in the same trouble",
"username": "Royal_Advice"
}
] | Validation Schema Rules | 2020-07-25T20:43:13.336Z | Validation Schema Rules | 3,515 |
null | [
"atlas-search"
] | [
{
"code": "",
"text": "We have 600,000 documents in our atlas collection, soon to be over 2M. Once we created the proper indexes, an atlas search with\" text\" or “autocomplete” returns in less than a second. We need pagination so we tried mongoose library mongoose-paginate-v2, https://www.npmjs.com/package/mongoose-paginate-v2 it takes 15 seconds!Any suggestions?",
"username": "Fred_Kufner"
},
{
"code": "$skip",
"text": "Out of curiosity, have you tried using the $skip aggregation stage?",
"username": "Marcus"
},
{
"code": "def idlimit(page_size, last_id=None, query=None, collection=None):\n\"\"\"Function returns `page_size` number of documents after last_id\nand the new last_id.\n\"\"\"\n\nif query is None:\n query = {}\n\nif last_id is None:\n # When it is first page\n cursor = collection.find(query).limit(page_size)\nelse:\n query['_id'] = {'$gt': last_id}\n cursor = collection.find(query).limit(page_size)\n\n# Get the data\ndata = [x for x in cursor]\n\nif not data:\n # No documents left\n del cursor\n return None, None\n\n# Since documents are naturally ordered with _id, last document will\n# have max id.\nlast_id = data[-1]['_id']\n\n# Return data and last_id\ndel cursor\nreturn data, last_id",
"text": "This can help you for pagination logic fast with $skip and $limit, just convert it to JS or NodeJS ",
"username": "Jonathan_Gautier"
},
{
"code": "",
"text": "Hey jonathan, I wound up just using atlas search with limit and skip. I implemented virtual scrolling on my vue client and it works really well. Thanks for the response",
"username": "Fred_Kufner"
},
{
"code": "",
"text": "@Fred_Kufner That’s great to hear. Let us know if you have any more questions.",
"username": "Marcus"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas search with pagination slow | 2020-10-30T21:03:07.284Z | Atlas search with pagination slow | 5,653 |
null | [
"replication",
"connecting"
] | [
{
"code": "",
"text": "Hi,I currently have a replica set with 4 nodes (4 different Linux servers), but only one of them has a static public IP address. In the private network everything works just fine.\nNow I have a new requirement, I need to connect to this Mongo DB from another network, read and write needed. Of course I will configure firewall and access rules accordingly (I can whitelist IPs).\nIf it was a read-only request it was enough to connect to the static IP address node.How can I approach to this problem without having to force all of the involved IPs to be static?Thanks in advance for your suggestions!",
"username": "lorenzo_sfarra"
},
{
"code": "",
"text": "All nodes of the replica set must be accessible from all clients.The solution is to set up the replica set using host names and have the appropriate DNS services.",
"username": "steevej"
}
] | Connect to a replica set with only one static public IP | 2020-12-05T19:33:09.510Z | Connect to a replica set with only one static public IP | 3,234 |
null | [] | [
{
"code": "",
"text": "Hi Team,I have a find query and it returns almost 50000 records.Now how to export to excel the find query output ?I am using nosqlbooster tool. so let me know if it is possible from tool or i need command how to do it in server level?Regards\nMamatha",
"username": "Mamatha_M"
},
{
"code": "",
"text": "Hi,The mongoexport tool can export to a CSV format, which can be read by Excel. There is some example on that documentation page.If you require further help, please post additional details, such as some example documents and what do you expect to be imported into Excel.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Whith nodejs i use\nhttps://www.papaparse.com/\nCool and simple ",
"username": "Upsylon_Developpemen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to export find query result to Excel format? | 2020-05-22T09:10:05.703Z | How to export find query result to Excel format? | 3,874 |
null | [] | [
{
"code": "",
"text": "I tried to connect altas using IDE. It does not connect. please help me",
"username": "pothireddy_venkatasai_dholendar"
},
{
"code": "",
"text": "May be temporary network issues preventing connection\nIs your cluster healthy in Atlas\nDo you see any errors?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "No I don’t see any errors. I am able to connect in command prompt.",
"username": "pothireddy_venkatasai_dholendar"
},
{
"code": "",
"text": "Share the connect string\nSo from your PC command prompt it works but not in IDE.Is that correct?\nHave you whitelisted your IP",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I have not whitelisted IP . thanks for your help",
"username": "pothireddy_venkatasai_dholendar"
},
{
"code": "",
"text": "The IDE does not hvae the same IP as your PC. It is a VM somewhere on the cloud. You shoul white list 0.0.0.O, that is allow connection from anywhere.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for your response.",
"username": "pothireddy_venkatasai_dholendar"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Unable to connect to ide | 2020-12-05T05:11:33.769Z | Unable to connect to ide | 2,003 |
null | [
"mongoose-odm"
] | [
{
"code": "",
"text": "The new version 5.11.0 has removed some of the methods of mongoose. We are getting an error as we have put “mongoose”: “^5.9.21” in package.json. We had to modify it to “mongoose”: “~5.9.21” to make it run.Please look into this issue. As many developers will face the same.",
"username": "Sumit_De"
},
{
"code": "",
"text": "Welcome to the MongoDB community @Sumit_De!For Mongoose bug reports you are best going directly to the Mongoose GitHub repo.If a Mongoose upgrade introduces breaking changes, I suggest:Reviewing the upgrade notes to see if this change is expectedChecking the GitHub issue queue to see if someone else has reported the same problem.If you’ve encountered what appears to be an unexpected regression, you can create a new GitHub issue for the maintainer to investigate.If you do create a GitHub issue (or find a resolution for your challenges after upgrading to Mongoose 5.11.0), it would be helpful to comment here in case others encounter the same problem.Regards,\nStennie",
"username": "Stennie_X"
}
] | New version 5.11.0 error for mongoose | 2020-12-01T19:32:50.860Z | New version 5.11.0 error for mongoose | 2,513 |
null | [
"connecting",
"php"
] | [
{
"code": "serverSelectionTryOnce$this->conn = new MongoDB\\Driver\\Manager($this->connString);[conn:Mongodb:private] => MongoDB\\Driver\\Manager Object\n (\n [uri] => \nmongodb://USERNAME:[email protected]/DB_NAME?retryWrites=true&w=majority\n [cluster] => Array\n (\n )\n\n )",
"text": "hi,\nThis is my first time here… i started using MongoDB, I made a Cluster on Atlas cloud and integrated everything on my CodeIgniter Application.I am using Official Driver of PHP for MongoDB to connect with cluster and do some operations.Unfortunatly, When i try any Operation i always get errorNo suitable servers found (serverSelectionTryOnce set): [Failed to resolve ‘MY_DOMAIN.SOME_ID.mongodb.net’] | Mongodb::get | Query could not be executedAll my configuration is fine and i am using “STRING” type connection.if i print $this->conn = new MongoDB\\Driver\\Manager($this->connString); it give me output like",
"username": "Hashir_Anwaar"
},
{
"code": "",
"text": "Composer install the MongoDB PHP Library, don’t use the PHP Extension Driver for MongoDB directly.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I removed everything and reconfigured again same as you mentioned in link… and it worked …may be some extension /library issue… Thanks a lot sir…",
"username": "Hashir_Anwaar"
}
] | No suitable servers found on PHP Driver for MongoDB | 2020-11-29T18:23:35.542Z | No suitable servers found on PHP Driver for MongoDB | 6,172 |
[] | [
{
"code": "{\"_id\":{\"$oid\":\"5fc8d0dfa8bf583ff047b691\"},\n\"a\":\"humble, texas, united states\",\n\"t\":[\"1-619-549-5428\",\"1-316-871-3930\",\"1-801-953-3210\",\"1-409-755-0267\",\"1-314-989-0140\",\"1-316-683-8794\"],\n\"e\":[\"[email protected]\",\"[email protected]\"],\n\"liid\":\"billy-mckay-64217246\",\n\"linkedin\":\"https://www.linkedin.com/in/billy-mckay-64217246\",\n\"n\":\"billy mckay\"}\n{\"_id\":{\"$oid\":\"5fc8d0dfa8bf583ff047b691\"},\n\"a\":\"humble, texas, united states\",\n\"t\":{\"phone1\":\"1-619-549-5428\",\"phone2\":\"1-316-871-3930\",\"phone3\":\"1-801-953-3210\",\"phone4\":\"1-409-755-0267\",\"1-314-989-0140\",\"1-316-683-8794\"},\n\"e\":{\"email1\":\"[email protected]\",\"email2\":\"[email protected]\"},\n\"liid\":\"billy-mckay-64217246\",\n\"linkedin\":\"https://www.linkedin.com/in/billy-mckay-64217246\",\n\"n\":\"billy mckay\"}\n",
"text": "Hi Everyone in this Great Community,I am working on making a central DB from 30 files of database files most of them Json Files\nI have made a schema and faced this 40M JSON Document with this Structure:Screenshot:\nScreenShot1567×375 13.9 KBI want to modify all 4 Million Documents in the JSON Database to look like this:Thank you so much",
"username": "Osama_Al-Tahish"
},
{
"code": "{\n \"aggregate\": \"testcoll\",\n \"pipeline\": [\n {\n \"$addFields\": {\n \"t\": {\n \"$arrayToObject\": {\n \"$zip\": {\n \"inputs\": [\n {\n \"$map\": {\n \"input\": {\n \"$range\": [\n 1,\n {\n \"$add\": [\n {\n \"$size\": \"$t\"\n },\n 1\n ]\n }\n ]\n },\n \"as\": \"n\",\n \"in\": {\n \"$concat\": [\n \"phone\",\n {\n \"$toString\": \"$$n\"\n }\n ]\n }\n }\n },\n {\n \"$map\": {\n \"input\": \"$t\",\n \"as\": \"m\",\n \"in\": {\n \"$trim\": {\n \"input\": \"$$m\"\n }\n }\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"e\": {\n \"$arrayToObject\": {\n \"$zip\": {\n \"inputs\": [\n {\n \"$map\": {\n \"input\": {\n \"$range\": [\n 1,\n {\n \"$add\": [\n {\n \"$size\": \"$e\"\n },\n 1\n ]\n }\n ]\n },\n \"as\": \"n\",\n \"in\": {\n \"$concat\": [\n \"email\",\n {\n \"$toString\": \"$$n\"\n }\n ]\n }\n }\n },\n {\n \"$map\": {\n \"input\": \"$e\",\n \"as\": \"m\",\n \"in\": {\n \"$trim\": {\n \"input\": \"$$m\"\n }\n }\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n \"$out\": {\n \"db\": \"testdb\",\n \"coll\": \"testcoll1\"\n }\n }\n ],\n \"cursor\": {},\n \"maxTimeMS\": 1200000\n}\n",
"text": "Hello : )Doc wasScreenshot from 2020-12-05 05-19-341031×393 36.1 KBAnd becameScreenshot from 2020-12-05 05-20-471024×399 40.3 KBIts replaces the t,e fields converting the arrays to objects.\nIt is $addFields all other fields remains the same.\nYou can run that pipeline below,in any driver.\nIts the same code 2x it can wrapped in a function also,to reduce query size.",
"username": "Takis"
}
] | Convert Array to Object in JSON DB | 2020-12-04T08:24:13.593Z | Convert Array to Object in JSON DB | 4,524 |
|
null | [
"dot-net",
"atlas-device-sync",
"xamarin"
] | [
{
"code": "\t<ContentPage.BindingContext>\n\t\t<viewModels:TestPageViewModel />\n\t</ContentPage.BindingContext>\n\n\t<StackLayou>\n\n\t\t<Entry Text=\"{Binding TestObject.Name}\"/>\n\n\t\t<Label Text=\"{Binding TestObject.Name}\"/>\n\n\t</StackLayout>\n\tpublic class TestPageViewModel:AppBaseViewModel\n\t{\n\t\tprivate readonly Realm realm;\n\t\tprivate Transaction transaction;\n\n\t\tprivate TestObject testObject;\n\t\tpublic TestObject TestObject\n\t\t{\n\t\t\tget => testObject;\n\t\t\tset => SetProperty(ref testObject, value);\n\t\t}\n\n\t\tpublic TestPageViewModel()\n\t\t{\n\t\t\trealm = Realm.GetInstance();\n\n\t\t\tvar transaction = realm.BeginWrite();\n\n\t\t\ttestObject = new TestObject();\n\t\t\trealm.Add(testObject);\n\t\t}\n\t}\n\tpublic class TestObject:RealmObject\n\t{\n\t\tpublic string Name { get; set; }\n\t}\n",
"text": "Hello,I am looking for help, have searched around everywhere but not seen a similar issue and I’m completely stuck on this one!I am converting a Xamarin/.NET app of mine that previously used Sqlite and a web sync service to use MongoDB Realm. I am completely unable to get RealmObjects to fire PropertyChanged notifications. I have tried everything I can possibly think of including:Forking the GitHub .NET samples and running them, works fineCreating a new sample app with RealmObjects and data binding, works fineCopying the exact pattern from this sample app into my current app, for some reason doesn’t work!!Replace the RealmObject with a normal object implementing INotifyPropertyChanged, then the changes fire so no issue with XAML codingUse the exact sync configuration in my sample app, works fineThis one has really got me stumped. I am at the point where I open straight to a new TestPage.xaml that looks like this:With a viewmodel like this:and TestObject.cs like this:So as you can see a very basic implementation and one that works if I use it exactly the same in my test sample app. Within my current app, if I subscribe to property changed events on the TestObject at the end of the viewmodel constructor, they are never fired. When I type in the entry the label is never updated.So I am stuck!!! I am using the v10.0.0.2-beta.2 from Nuget and latest version of xamarin forms. Has been tested on iOS and Android devices with the same results. If anybody has any similar experience that could help, comments, suggestions, pointing out something that I’ve done completely wrong, I would be grateful for any help!! I could file a bug report but I have no idea what is even going wrong with this one.Many thanks!!!\nWill",
"username": "varyamereon"
},
{
"code": "",
"text": "Based on the code you posted, my guess is that this is caused by the fact you’re opening a write transaction in the ViewModel constructor. PropertyChanged events for managed objects are fired when the transaction is committed.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks nirinchev. That means if I have for example a page with a List of Items, and then go to another view to add/edit an Item, how do I need to implement that to get PropertyChanged events to fire on the Item page? They are important for things like error checking on Entry boxes, and checking if a Command can fire. The two methods I have seen are like this (old) example: nirinchev/RealmObservableDemo: A very simple app that demonstrates data binding to Realm objects (github.com) which I have forked and upgraded to the latest Realm beta, and which then fires PropertyChanged events on the edit page, and realm-dotnet/examples/QuickJournal at master · realm/realm-dotnet (github.com) from your own samples, which is the same method I have been using and I guess will not fire PropertyChanged as the edit is commited upon Save().The first example seems to work with data binding, property changed etc. but I have not been able to replicate it in my app as I always get the exception “Cannot modify managed objects outside of a write transaction.” when I pass the item from the list page to the edit page. I am using a custom navigation service as I am using Xamarin.Forms Shell and to pass anything other than a string on navigation I need my own service.Thanks, I hope you can help.Will",
"username": "varyamereon"
},
{
"code": "private Transaction transaction;\n\n// This is what the UI binds to\npublic MyModel Model { get; private set; }\n\n// This is executed when the VM is first created\npublic void OnCreate()\n{\n this.transaction = realm.BeginWrite();\n this.Model = realm.Add(new MyModel());\n}\n\n// This is executed when the user navigates to the page backed by this VM\n// Either the first time when the page is loaded or when a child page is dissmissed\npublic void OnNavigatedTo()\n{\n this.RaisePropertyChanged(nameof(Model));\n}\nrealm.Write(() => { ... })",
"text": "Let me try to address these one by one:Regarding PropertyChanged not firing while in transaction - that is indeed a bit unfortunate when you have multiple screens spanning the same transaction. Unfortunately, I don’t see an easy way out of this since it’s tied to the notifications mechanisms in the core database. As a workaround, you can probably manually raise propertychanged for the root model when you return to the parent page. Something likeThat way, every time your page comes into view, the VM will force a redraw, which will cause it to pick up any changes that happened in the child page.Alternatively, if you don’t need a Save/Cancel button, you can let the UI drive the updates - that way every change is going to get persisted as soon as the user makes it and PropertyChanged notifications will be raised immediately.Regarding the “Cannot modify managed objects outside of a write transaction.” error - this is thrown when you try to set a property on an object while the Realm is not in a write transaction. This is not allowed when doing it in code - you’ll need to wrap the property setting logic in realm.Write(() => { ... }). When doing it from the UI via a two-way databinding in Xamarin.Forms, Realm will automatically do that for you.Hope that clarifies things, but if you have follow up questions, don’t hesitate to ask!",
"username": "nirinchev"
},
{
"code": "",
"text": "Hi Nick, many many thanks for taking the time to respond and to help me out.This is all pretty much as I thought but it just isn’t working for me!! By methods 2 & 3 I think you are referring to something like in this example RealmObservableDemo/ContactsViewModel.cs at master · nirinchev/RealmObservableDemo (github.com) (which I’ve just noticed is written by yourself ). That would be the ideal way for me as like you say it doesn’t need to be done in a transaction and should get all the nice property changed stuff. But this method is causing the exception for me mentioned above.Seems like I need to find out what is causing it…Thanks again!Will",
"username": "varyamereon"
},
{
"code": "",
"text": "If you have a simple project that exhibits the behavior, I’d be happy to take a look and try to track down what’s causing the issue. It is possible that there was a change in newer versions of Xamarin.Forms that broke the automatic behavior.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thank you so much @nirinchev. At the moment I am unable to get sync working and am trying to resolve that first, once that is sorted I will try to resolve the issue myself otherwise I can look at putting something together.Many thankWill",
"username": "varyamereon"
},
{
"code": "x:DataType",
"text": "Hi @nirinchev. I managed to get sync up and running. I will set up my app structure to use the method you demonstrated above in Item 2, I have an example of this working nicely for me now, I just need to find a way of cancelling the changes if the user navigates back (otherwise for example an empty item could be created every time if the navigate forward and back).I seem to have found the reason I was getting the ‘write transaction’ error every time I navigated to the detail page. I have been using compiled bindings (as described here Xamarin.Forms Compiled Bindings - Xamarin | Microsoft Docs). When I remove the x:DataType from the page then I no longer get the error. Is this something that would be considered a bug as it is recommended practice for Xamarin Forms.Thanks!!Will",
"username": "varyamereon"
},
{
"code": "",
"text": "Oh… yes, I can see the issue with compiled bindings. The way the .NET SDK handles two-way data bindings is by implementing a specific interface that the data binding engine looks for when resolving the bindings. We take advantage of the fact that this is likely to only ever be used by the binding engine to supply it with a fake setter for the persisted properties. When the binding engine invokes the setter (e.g. in response to changes from the UI), we check existing transaction, and if none exists, we wrap the set value call in an implicit transaction.With compiled bindings, the binding engine no longer looks up properties using reflection, so it invokes the setter directly, thus skipping all of our custom logic. This is quite unfortunate and we’ll need to figure out a workaround - right now I don’t have any suggestions besides disabling compiled bindings, sorry about that ",
"username": "nirinchev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | RealmObject PropertyChanged events not firing | 2020-11-30T11:38:14.589Z | RealmObject PropertyChanged events not firing | 4,248 |
null | [
"replication",
"containers"
] | [
{
"code": "Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade\nFatal assertion\",\"attr\":{\"msgid\":28559,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp\",\"line\":64\n***aborting after fassert() failure\nWriting fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\n",
"text": "We have a 3 node replicaset mongo v4.4.1 running in kubernetes… mongodb-0 is not starting up with below errorwe upgraded from 3.6.4 to 4.4.1 a while ago. everything was working fine… then after few deployments we are getting “Failed to start up WiredTiger under any compatibility version” now. not sure what happened.gist of mongo logsany help would be greatly appreciated.",
"username": "Vineet_Bhatia"
},
{
"code": "",
"text": "Hi @Vineet_BhatiaYou have likely neglected to upgrade the feature compatibility level during your upgrades from 3.6 thru 4.4.1Mongod should have written out the last version that successfully ran your database. Check this post for some information on that. How to check mongo version from volumes that I've set in the docker-compose.yml file? - #2 by chrisOnce you get it running change the FCV as per the upgrade guide for the version running. This is found in the release notes section of the manual.",
"username": "chris"
}
] | Unable to start mongodb 3 node replicaset | 2020-12-04T19:59:42.105Z | Unable to start mongodb 3 node replicaset | 5,252 |
null | [
"monitoring",
"capacity-planning"
] | [
{
"code": "",
"text": "Hello\nestimating how much RAM a MongoDB system will need is not an easy task, As a rule of thumb we have: size of all indexes plus number of all active documents multiplied with the average document size. So far so good.But how can I verify any numbers when my system is up and running? How do I know when I reach limits?\nLet’s do this by an example:Here comes the part I want to verify.UPDATEHow to measure the minimal size needed for recent documents in cache?So this is not like in the design phase where we estimate document sizes, connections, index sizes and try to get a feeling, this is about to measuring in a running system.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi…as per my knowledge after the Mongod process boots up, in addition to being the same as an ordinary process, it loads each binary and reliant library the memory, which serve as the DBMS. It must also be responsible for a large number of tasks such as client connection management, request processing, database metadata and storage engine. All of these tasks involve allocation and freeing up memory, and under default conditions the MongoDB uses Google tcmalloc as a memory allocator. The largest shares of the memory are occupied by “storage engine” and “customer terminal connection and request processing”.",
"username": "Fin_Perlak"
},
{
"code": "",
"text": "Hello @Fin_Perlak wellcome to the community!Thank you for your input. These are valid points, I thing I should have worded it different.How can I determine the size of active documents in the WiredTiger cache?The WiredTiger cache will fill up over time but not all documents in the cache are “active” some might be old and just remain in cache. The critical part is when the WiredTiger cache is filled by the indexes and only active documents since then the system needs to start paging and this will be slow. I want to know before when it get’s critical.If you or any reader has an idea how to get this number I’d be happy if you could share your knowledge with us.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Keyhole is probably what you want. Check out this wiki about monitoring WireTiger cache.",
"username": "ken.chen"
},
{
"code": "db.serverStatus().wiredTiger.capacitydb.serverStatus().wiredTiger.cache",
"text": "So if I understand you correctly:Can you please elaborate on what info would lead you to conclude more RAM is needed?\nThe sections db.serverStatus().wiredTiger.capacity and db.serverStatus().wiredTiger.cache seem to contain metrics that will measure activity levels (recent / current) but they don’t necessarily mean “buy more ram” since this is related to working sets and expected future load.So are you looking to get some reading of “current/recent working set sizes”?",
"username": "Nuri_Halperin"
},
{
"code": "db.serverStatus().wiredTiger.capacitydb.serverStatus().wiredTiger.cache",
"text": "Hello @Nuri_HalperinYou want to know how much of used memory is for “current” working sets.Yes that is true. With “current” working sets = size of indexes + size of all actively used documents in memoryYou want to be able to tell if your “current” or “near future” working sets will fit in ramIf my “current” working sets does not yet fit in to RAM I’d expect to see page faults. So it would be interesting to estimate something like the percentage of actively used documents of (cache capacity - index size)sections db.serverStatus().wiredTiger.capacityMy playground is on version 3.4, .capacity was introduced in an later versionI also thought about looking on page eviction via db.serverStatus().wiredTiger.cache since: when an application approaches the maximum cache size, WiredTiger begins eviction to stop memory use from growing too large. But this will not help since I still can have not currently used data in the cache which fills it slowly up. So at some point we will reach the eviction_target and the worker will start evicting pages. But this still will not tell me the size of my “current” working set.I am not a DBA so I may not get the meaning of all variables, may be I am looking into a false direction, I hope the above lines show what I mean.Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hello @ken.chenkeyhole is a great tool to have a deeper view on the analytics data. In your blog post you mention a section called “metrics” (sacn_*) which can help to find e.g. missing or not fitting indexes. Unfortunately I do not see these metrics in Grafana. Is there anything special to do to retrieve and show these diagnostic data?Via the mongod log we can find slow queries anyway, I am just curious how to get this to the Grafana view.There is one more question: also in your blog you write briefly on some metrics what they can imply / tell. I tried to find further information like this specially for the wiredTiger metrics but did not found something useful. Do you know a source which you can share?In practice: here is a system which starts after a while to have a constant amount of dirty cache. I would like to understand why.grafik2470×1126 392 KBThanks a lot !\nMichael",
"username": "michael_hoeller"
},
{
"code": "mongod",
"text": "@michael_hoeller, there is much information and knowledge I can’t answer in a simple reply. Read the Part 1 of my blogs for the logs analytics. This is where I usually begin with when I work with a customer. Read the Part 3 which explains the reasons metrics are flagged if any. Check out MongoDB University for additional info.From the info you sent me via DM, you had a typical case of bad schema design. A performance degrade experience can be from many factors. Memory is one of them, but mongod will always try to use as much memory as it can get. After all memory is used, the system begins the process of paging in/out, and thus, in this case, disk quality also counts. Check out MongoDB Consulting if you need professional helps.",
"username": "ken.chen"
},
{
"code": "",
"text": "Hello @ken.chenthanks for you reply. I am all aware of this. The initial question was how to find out which sizes does my working set for recent processed documents needs. If I give a mongod 64 GB it will use it also 128 GB and so on. The initial question could be rephrased as: how to measure the minimal size needed for recent documents in cache, So this is not like in the design phase where we estimate document sizes, connections and try to get a feeling, this is about to measure in a running system. Unfortunately this answer is not mentioned in any of the links. Did I missed this part or do we just have not such a metric?Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to analyze MongoDB RAM usage | 2020-11-24T13:38:31.098Z | How to analyze MongoDB RAM usage | 40,101 |
null | [
"realm-web"
] | [
{
"code": "",
"text": "When I login using the Realm Web SDK and inspect the user object, I can see a list of all the registered users. This project is only a development project and only ahs 19 registered test users so far, but I can view their access tokens and everything through that.Is this something that only happens because it’s in development mode?",
"username": "Max_Karlsson"
},
{
"code": "",
"text": "Hey Max,Realm Web caches logged in users (and their access tokens) until they are logged out or actively removed - is it possible that you used these users while developing the application and by closing the browser window every time, you pushed them all of them into the local storage cache and never logged them out?Do you mind sharing the code snippet where you see this happen?",
"username": "Sumedha_Mehta1"
}
] | I get a list of all users including their access tokens after logging in | 2020-12-02T18:45:30.304Z | I get a list of all users including their access tokens after logging in | 1,992 |
null | [
"swift"
] | [
{
"code": " func subscribe(primaryKey key: String, completion: @escaping ((Result<LocationModel, LocationError>) -> Void)) {\n \n guard let location: DBLocationModel = self.realmService.fetch(id: key) else {\n return completion(.failure(.noLocation))\n }\n \n self.subscribeBy = RealmSwift.changesetPublisher(location)\n .receive(on: DispatchQueue.main)\n .sink(receiveValue: { response in\n switch response {\n \n case .change(let location, _): completion(.success(self.locationWrapper.map(location)))\n \n case .deleted: print(\"go back to rootview, but nothing is happening :(\")\n \n case .error: completion(.failure(.noLocation))\n }\n })\n \n }\n",
"text": "Hi,i am working on a little app with Realm and trying to figure out, why there is no notification for deletion, when observing a single object with a Publisher.if i am changing the object, it is working well and updating. but if i am deleting this object out of the database, there is no action.anyone got the same thing and figured out why?",
"username": "Alexander_Puchta"
},
{
"code": "Results<DBLocationModel> var results = realm.objects(DBLocationModel.self) // Results remains valid even if you delete all objects\n var subscribeBy = results.changesetPublisher\n .receive(on: DispatchQueue.main)\n .sink(receiveValue: { response in\n print(response)\n })\n",
"text": "Hi @Alexander_Puchta ,\nAs soon as you delete this object all subs will be invalidated.\nInstead you could subscribe to Results<DBLocationModel>",
"username": "Pavel_Yakimenko"
},
{
"code": "",
"text": "Yes i subscribe to complete db at the main screen, but this is my detail screen and my app will be able to get edited with iphone and watch at the same time. so i am trying to get this notifiation on detail screen, so i can pop back to main view, without notifications. there has to be a solution within combine, and i am trying to figure this out but thanks for your advice.",
"username": "Alexander_Puchta"
},
{
"code": "receiveCompletion:sink().deleted.append(.deleted).sink()",
"text": "Object publishers (both changeset and non) report that the object has been deleted by completing the pipeline rather than sending a .deleted message. This can be handled with the receiveCompletion: callback on sink(), or if you’d prefer to get a .deleted message you can add .append(.deleted) before .sink().It turns out we forgot to actually document this anywhere.",
"username": "Thomas_Goyne"
},
{
"code": " self. subscribeBy = valuePublisher(location)\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: { result in\n print(result)\n }, receiveValue: { value in\n print(value)\n })\n",
"text": "With Combine you can use it in little bit another way.",
"username": "Pavel_Yakimenko"
},
{
"code": "",
"text": "Thank you guys, i will test this tomorrow. stay safe!",
"username": "Alexander_Puchta"
},
{
"code": "",
"text": "Well, great. This is my solution. Thanks a lot!",
"username": "Alexander_Puchta"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | RealmSwift.changesetPublisher(object) | 2020-12-01T11:39:51.433Z | RealmSwift.changesetPublisher(object) | 3,210 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "Hello I just want to know about the free certification after completing any learning path\nI have already started with my developer learning path and wanted to know if this free certfication for github student is valid now ie dec 20\nand if I finish the path I don’t have to pay anything to get the certification ?",
"username": "Shivam_Pandey"
},
{
"code": "",
"text": "Hi @Shivam_Pandey,Welcome to the forum \nThe certification is valid through Jan 2022, so you have time to finish the learning path \nIf you finished the learning path, you can send me an email and I will give you the voucher details. You indeed don’t have to pay for your first certification exam.Good luck!!",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Need certification details | 2020-12-02T18:45:38.254Z | Need certification details | 5,131 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "How much memory is allocated by default per connection?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,Each connection consumes up to 1MB of RAM. I believe this mentioned in a few places in the documentation, but didn’t find the reference I was looking for on a quick search.This blurb from the MongoDB Atlas Best Practices is relevant:MongoDB drivers implement connection pooling to facilitate efficient use of resources. Each connection consumes 1MB of RAM, so be careful to monitor the total number of connections so they do not overwhelm RAM and reduce the available memory for the working set. This typically happens when client applications do not properly close their connections, or with Java in particular, that relies on garbage collection to close the connections.Additional RAM may be allocated for in-memory processing such as aggregation or in-memory sorting. Typically there is a memory limit to prevent one connection from consuming all server resources. The Limits: Operations section of the MongoDB manual has more details.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I thought I could check it somewhere in the manual, but I couldn’t find it, so I asked.Thank you very much.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,It’s a great question – thanks for asking! I didn’t find the reference in a quick search, either, but I do know the default stack size per thread has been 1MB since MongoDB 2.0.The documentation section I was thinking of still exists in the 4.0 and earlier manual versions: How do I calculate how much RAM I need for my application?.There’s an open issue in the docs backlog which you can upvote & comment on: DOCS-12986: Add information about Connection Memory Consumption.It looks like this section was removed when the documentation was being revised for the MongoDB 4.2 release. The docs concern appears to be that the existing description didn’t provide enough guidance for the FAQ topic of “How do I calculate how much RAM I need for my application?” (which is potentially a full standalone tutorial/course!).I’ll bring this to the attention of the docs team, as we should have some mention of the connection-per-thread memory usage. Thanks again for asking the question!Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I am more grateful for your kind and quick reply.Thank you ",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Memory allocated per connection | 2020-12-04T02:56:49.099Z | Memory allocated per connection | 6,714 |
[
"data-modeling"
] | [
{
"code": "",
"text": "Hello,\nNew MongoDB user here with SQL knowledge.See image below, where I’m showing 2 different simplified schemas for the same database for an app I am working on.The database is for an e-commerce store. The orders collection holds the order data. The products collection holds the product catalog data, which gets copied to the products_ordered collection representing products ordered by a customer.Users of the app will regularly be viewing orders and therefore for this use case it makes sense to go with scenario A, and have everything related to a given order be in a single document.However users of the app will also regularly need to view all products that are on any unfulfilled order, and for this use case I feel like scenario B is better.In the end, I believe scenario B would be better because I think that in cases where a single order is viewed, it would be pretty quick to query the products for that order from the products_ordered collection, whereas in the case when all products are viewed at once, scenario B would be significantly faster than scenario A.What are your thoughts?Thank you !schema1099×610 9.44 KB",
"username": "Samuel_Leith"
},
{
"code": "subtotaltotalcustomercustomerorders",
"text": "Some thoughts:Some will disagree with the following, but fwiw, ask yourself if you are order centric or customer centric. If everything radiates out from entity customer then it’s easy to design for MongoDB. If customer is on the other side of an interdepartmental firewall and your nose is pressed to the grindstone of orders then you may be better off with a classic RDBMS.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi @Samuel_Leith,In my opinion scenario A will mostly suite MongoDB better.I am not certain how exactly the query flow in your application works but usually customers will login into a store and they will see all available products to browse with a summary of thier profile like amount of orders and notifications.Therefore I think you might benefit from using an extended pattern from orders to products where customer collection will hold orderid and orders will hold userid and the product list with main details to show on order page.If the application want to find full product details run a query to products collection:Using a reference to improve performanceThere is definitely no reason why MongoDB can’t be as good for E-commerce as RDBMS and usually much better , so its just a matter of understanding query patterns…Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello,\nThis is not for an E-commerce platform per se, but rather for an internal App we will be using. I ended up going with scenario A and it is working perfectly well, thank you.Site done: As for the calculated fields, there are indeed a few that are saved in the database because these are actually fetched via the Magento API, and it is important for our internal App to have the exact same info as shown on Magento.For any data originating from without our app and not from Magento, the a calculated field would not be saved in the database.Cheers!",
"username": "Samuel_Leith"
}
] | How would you model this database? | 2020-11-25T20:29:00.555Z | How would you model this database? | 4,339 |
|
null | [] | [
{
"code": "",
"text": "Our monogdb 3.0.7 database appears to have crashed. When starting the database, I get an Error 14. Looking online, most people suggested looking at date/time and permissions. Neither of those are the issue. I do not have a backup of the database to restore. What can I do get the database operational?I have set the database aside and was able to initialize a new one. So the mongo binaries are ok.\nI have tried combining old index and collection files with a new database structure which did not work (still trying to learn about the database structure).\nFiles “WiredTiger.turtle” and “WiredTiger.wt” are not zero bytes.\nI have removed “mongod.lock” and still get the Error 14.\nI tried pointing mongoDB 3.2.22 at the database (hoping an upgrade would fix it) but still got the same Error 14 message.I’ve seen a few recovery articles, but I am still hoping there is an alternative.Brian",
"username": "Brian_Paquin"
},
{
"code": "",
"text": "Error Code 14 is somewhat generic.Returned by MongoDB applications which encounter an unrecoverable error, an uncaught exception or uncaught signal. The system exits without performing a clean shutdown.Post your full error message here this may assist in diagnosis.",
"username": "chris"
},
{
"code": "2020-12-01T11:24:06.218-0500 I CONTROL [main] ***** SERVER RESTARTED *****\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] MongoDB starting : pid=10875 port=27017 dbpath=/Volumes/Pathsrv1Data2/MongoDb 64-bit host=pathsrv1.yalepath.org\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] db version v3.2.22\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] git version: 105acca0d443f9a47c1a5bd608fd7133840a58dd\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] allocator: system\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] modules: none\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] build environment:\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] distarch: x86_64\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] target_arch: x86_64\n2020-12-01T11:24:06.226-0500 I CONTROL [initandlisten] options: { config: \"/usr/local/etc/mongod.conf\", net: { bindIp: \"pathsrv1.yalepath.org,10.48.106.44\" }, processManagement: { fork: true }, security: { authorization: \"enabled\" }, storage: { dbPath: \"/Volumes/Pathsrv1Data2/MongoDb\", engine: \"wiredTiger\" }, systemLog: { destination: \"file\", logAppend: true, path: \"/usr/local/var/log/mongodb/mongo.log\" } }\n2020-12-01T11:24:06.228-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),\n2020-12-01T11:24:06.231-0500 E STORAGE [initandlisten] WiredTiger (22) [1606839846:231197][10875:0x10d8e6dc0], file:WiredTiger.wt, connection: live.avail: merge range 4096-16384 overlaps with existing range 12288-16384: Invalid argument\n2020-12-01T11:24:06.231-0500 E STORAGE [initandlisten] WiredTiger (-31804) [1606839846:231304][10875:0x10d8e6dc0], file:WiredTiger.wt, connection: the process must exit and restart: WT_PANIC: WiredTiger library panic\n2020-12-01T11:24:06.231-0500 I - [initandlisten] Fatal Assertion 28558\n2020-12-01T11:24:06.231-0500 I - [initandlisten] \n\n***aborting after fassert() failure\n",
"text": "",
"username": "Brian_Paquin"
},
{
"code": "",
"text": "I’d recommend you stay with the version that last successfully ran this database, also try the last version of the 3.0 series.I think you are into a recover from backup scenario, but you could try a --repair startup.You should upgrade to a current supported version as soon as you can. This version is well out of support and updates.",
"username": "chris"
},
{
"code": "",
"text": "I tried using --repair with v3.0.7 and got the same Error 14.\nYes, I would like to upgrade after I get out of this mess.Any other things I can try to get it running?\nIf not, any documentation you can suggest to recover the existing database files?Thanks, Brian",
"username": "Brian_Paquin"
},
{
"code": "",
"text": "I’m not aware of any recovery method of the existing file. You’re recovering from backup at this point(assuming standalone mongodb).",
"username": "chris"
}
] | Mongodb 3.0.7 and error 14 | 2020-12-02T18:44:06.208Z | Mongodb 3.0.7 and error 14 | 3,078 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "Hello!I created used mongodump --archive on our Ubuntu machine (mongo 4.2) and tried to restore it on a Windows Server machine using mongorestore (mongo 4.2) and it is saying:Failed: stream or file does not appear to be a mongodump archiveIs this possible? To make sure that the archive was valid, I restored it on the Ubuntu machine to a different database and it worked.Thanks for the help\nAJ",
"username": "AJ_Keresztes"
},
{
"code": "",
"text": "It should work on Windows\nWhat command you used on Windows\nMay be some syntax issues/differences between platforms giving that error?",
"username": "Ramachandra_Tummala"
},
{
"code": "mongorestoredump/mongorestore dump/",
"text": "Hello @AJ_Keresztescould you please add the commands you used for mongodump and mongorestore?As mongorestore restores from the dump/ directory, it creates the database and collections as needed and logs its progress.mongorestore dump/\nWhere \"dump/ is the folder in which you saved the output from mongodump.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks for replies.Here’s what I used to create the backup (Ubuntu):\nmongodump --archive=‘mongodump-dev’ --db=dev --username dev --password pwHere’s what I’m doing to restore(Windows):\nmongorestore --archive=“mongodump-dev” --nsFrom=‘dev.’ --nsTo='apac.’",
"username": "AJ_Keresztes"
},
{
"code": "mydumpmongorestore --dryRun mydump/",
"text": "Hello @AJ_Keresztesso you want to rename while restoring. I doubt that the namespace pattern for --nsFrom and --nsTo will work, just the error message should be a different one than the one you mentioned in your post, You will find details when you follow the link I posted before.grafik890×333 32.9 KBTo work around all side effects you can test the following: make a new folder e.g. mydump put the archive in there. Now run\nmongorestore --dryRun mydump/\nYou need to be in the folder which contains the folder mydump/. If you still get the an error please past the full error message here.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Here’s what I’m getting:image988×182 50.3 KB",
"username": "AJ_Keresztes"
},
{
"code": "",
"text": "If I change the archive to have a bson extension then it just says 0 document(s) restored successfully. 0 document(s) failed to restore.",
"username": "AJ_Keresztes"
},
{
"code": "",
"text": "This worked for meCreated a dump remotely from a DB running on mongodb university clustermongodump --uri mongodb+srv://user:[email protected]/test --archive=test.archive2020-12-01T10:34:42.574+0530 done dumping test.employee_names (0 documents)\n2020-12-01T10:34:42.716+0530 done dumping test.testcoll (1 document)\n2020-12-01T10:34:42.814+0530 done dumping test.employee (1 document)\n12/01/2020 10:34 AM 1,251 test.archive\nImported into my local instance on my Windows laptopmongorestore --archive=test.archive --nsFrom “test.\" --nsTo \"testrst.”\n2020-12-01T10:54:49.366+0530 preparing collections to restore from2020-12-01T10:54:49.522+0530 finished restoring testrst.employee (1 document)\n2020-12-01T10:54:49.523+0530 doneuse testrst\nswitched to db testrst\nshow collections\nemployee\nemployee_names\ntestcoll",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks - good to know!",
"username": "michael_hoeller"
},
{
"code": "mongorestore --dryRun mydump/mongorestore",
"text": "Hello @AJ_Keresztesmay I ask you to show log files or errors as quoted text instead of screenshots? This will help others to better read your issue. You can read up details about this in the Getting Started section\nBeside this I like to encourage you to follow my suggestion of the previous post.mongorestore --dryRun mydump/\nAs mentioned, this requires that you provide the FOLDER not the FILE in the commandAlso your error message says it already: please add .bson to the file name and it should work, or add --archive before the file name, then you can omit the extension.You can find working examples in documentation of mongorestore which I copied in my previous post.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "C:\\Program Files\\MongoDB\\Server\\4.2\\bin>",
"text": "if I do it with the folder, I get:C:\\Program Files\\MongoDB\\Server\\4.2\\bin>mongorestore --dryRun --archive=“D:\\backup\\mongodump-dev”\n2020-12-02T09:55:08.120-0500 Failed: CreateFile D:\\backup\\mongodump-dev\": The filename, directory name, or volume label syntax is incorrect.\n2020-12-02T09:55:08.120-0500 0 document(s) restored successfully. 0 document(s) failed to restore.C:\\Program Files\\MongoDB\\Server\\4.2\\bin>still not working",
"username": "AJ_Keresztes"
},
{
"code": "",
"text": "Is your command on a single line?Quotes could be the issue\nUse straight double quotes (\")\nIf i paste your command in a notepad the quotes look different\nWhen i tried the same command on my testdump it works for meC:\\Users\\xyz>mongorestore --dryRun --archive=“C:\\Users\\xyz\\test.archive”\n2020-12-02T21:57:05.488+0530 preparing collections to restore from\n2020-12-02T21:57:05.490+0530 dry run completed",
"username": "Ramachandra_Tummala"
},
{
"code": "mongorestore --archive=\"D:\\backup\\mongodump-dev\\mongodump-dev\" --nsFrom='dev.*' --nsTo='apac.*'",
"text": "It looks there was some sort of issue transferring the backup to the windows machine. When I created the backup, I used tar/gzip, and I used tar/gzip on the windows machine to extract it.So, I just manually copied the entire backup (not zipped) to the windows machine and it’s working. I’m not sure why zipping the original didn’t work.I’m using the following command to restore the databse:mongorestore --archive=\"D:\\backup\\mongodump-dev\\mongodump-dev\" --nsFrom='dev.*' --nsTo='apac.*'My goal is to restore from the dev database to apac. Is this the correct command to do this?",
"username": "AJ_Keresztes"
},
{
"code": "mongorestorecd D:\\backup\\mongodump-dev\\mongorestore --archive=\"mongodump-dev\" --nsFrom='dev.*' --nsTo='apac.*'",
"text": "@AJ_Keresztes please have a look at the linked documentation mongorestore \ngrafik899×491 44.4 KB\ncd D:\\backup\\mongodump-dev\\\nmongorestore --archive=\"mongodump-dev\" --nsFrom='dev.*' --nsTo='apac.*'",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks for your response. I looked the documentation (and stack overflow) and I’ve done the exact thing and it’s restoring to dev, not apac.I’ve tested this a few times, and it’s ignoring --nsTo (aside from number of asterisks). Not sure why.Could this be a bug (in mongo 4.2 on Windows)?",
"username": "AJ_Keresztes"
},
{
"code": "",
"text": "It works for me\nCan you show few lines from your log?\nIt would say restoring apac.coll_name from archive…\nWhen you list your Dbs with show dbs are you not seeing apac?",
"username": "Ramachandra_Tummala"
},
{
"code": "D:\\backup\\mongodump-dev>mongorestore --archive=\"mongodump-dev\" --nsFrom='dev.*' --nsTo='apac.*'",
"text": "Hi, thanks for the reply.So I already tested a restore and it restored to dev. When I list dbs, there’s a dev one but not apac. Here’s a sample of the output now:D:\\backup\\mongodump-dev>mongorestore --archive=\"mongodump-dev\" --nsFrom='dev.*' --nsTo='apac.*'\n2020-12-03T09:34:27.268-0500 preparing collections to restore from\n2020-12-03T09:34:27.280-0500 restoring to existing collection dev.legacyCoCs without dropping\n2020-12-03T09:34:27.280-0500 reading metadata for dev.legacyCoCs from archive ‘mongodump-dev’\n2020-12-03T09:34:27.281-0500 restoring dev.legacyCoCs from archive ‘mongodump-dev’\n2020-12-03T09:34:27.350-0500 continuing through error: E11000 duplicate key error collection: dev.legacyCoCs index: id dup key: { _id: ObjectId(‘5a4defd9d06f58148cd4087d’) }\n2020-12-03T09:34:27.350-0500 continuing through error: E11000 duplicate key error collection: dev.legacyCoCs index: id dup key: { _id: ObjectId(‘5a4defd9d06f58148cd4087e’) }so it’s still trying to restore to dev for some reason.",
"username": "AJ_Keresztes"
},
{
"code": "",
"text": "It is the quotes which is causing the issue\nPlease use double quotes for nsFrom & nsTo values\nIt is behaving differently when you use single quotes",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "That was it! Thanks!",
"username": "AJ_Keresztes"
},
{
"code": "",
"text": "Mongo DB might want to change the documentation on the web site for mongorestore on windows. It shows restoring using single quotes which doesn’t work.",
"username": "AJ_Keresztes"
}
] | Can I use an Ubuntu dump archive on Windows for restoring? | 2020-11-30T22:24:05.064Z | Can I use an Ubuntu dump archive on Windows for restoring? | 11,925 |
null | [
"dot-net"
] | [
{
"code": "var database = client.GetDatabase(DatabaseName); \n foreach (BsonDocument namecollection in database.ListCollectionsAsync().Result.ToListAsync<BsonDocument>().Result)\n {\n string name = namecollection[\"name\"].AsString;\n allcollectionNames.Add(name);\n }\n",
"text": "Hi all,\nI am learning to develop with Mongo driver in C#. I have restricted a user the access to some collections into the database. How can I get the collection names where he can access-> in C# and in the Shell?the problem is :Here the code crash if it encounters a collection where the user doesn’t have right to access",
"username": "Markus_Erhardt"
},
{
"code": "mongouse someDbName\ndb.getCollectionInfos( { authorizedCollections : true } )\n",
"text": "Hello @Markus_Erhardt, welcome to the MongoDB community forum.You can try this mongo shell method or its equivalent command listCollections.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "thank you a lot Prasad_Saya. I am looking for something with C# too. I found that listCollections could do it but where to enter the parameter authorizedcollection ? I founded on AWN some infos16.79 MBlook at page 630",
"username": "Markus_Erhardt"
},
{
"code": "",
"text": "I am looking for something with C# tooI think you can specify the filter using the ListCollectionsOptions.Another way is to run the listCollections command using the C# API as shown at Admin Quick Tour → Run a Command.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "could you perhaps help me to find the solution. I look everywhere but find nothing.",
"username": "Markus_Erhardt"
}
] | How to get exactly the collections? | 2020-12-01T19:32:17.855Z | How to get exactly the collections? | 2,075 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "Hi there, This might be long to explain. We have a single mongodb instance that we use for production. but last night during off hours around 2 dec 11:09 pm the process stopped itself. I’m able to confirm this with my Grafana dashboard as around 2.3 GB of RAM was released. Now this continued till 3 dec 5:30 am next morning and it came back up again. Now this is confusing because exactly at that time we run a CRON job to create the backup of this DB. and as I can see that the process ran successfully and i have the backup file in my storage.Now I’m unable to track what could be the reason behind it. as I see no OOM messages in my logs. So i was wondering if you have tracked any such thing before? or any way I can track this issue down?So far I have checked the messages log in EC2 instance and the mongodb logs itself. The instance has 8GB of RAM and normally its overall consumption is 3 GB of RAM on average daily.Let me know if you need anymore details. I want to go down to the bottom of this issue. I dont want this issue to come back in our working hours.",
"username": "Vibhanshu_Biswas"
},
{
"code": "",
"text": "Hi @Vibhanshu_BiswasPlease provide mongodb version, os version and installation method(package manager vs package files vs tarball)If you are using systemd you should also check journalctl around that time for leads.",
"username": "chris"
}
] | Mongodb process stopped itself and restarted itself | 2020-12-03T10:50:20.411Z | Mongodb process stopped itself and restarted itself | 1,939 |
null | [
"queries",
"java"
] | [
{
"code": "someId = object.some_idfun saveObjectInMongo(collection: MongoCollection<Document>, someId: String, object: String) {\n collection.insertOne(Document.parse(object))\n if (collection.find(eq(\"some_id\", someId)).first().isNullOrEmpty()) {\n throw RuntimeException(\"Can't get just saved object\")\n }\n}\n",
"text": "I use java driver (sync) and want to be able to search objects by id or field just right after successful insert.\nDoes MongoDB ensures that after a successful save response, the data will be instantly searchable?In other words will an exception ever be thrown? For example under heavy load. We assume that someId = object.some_id .",
"username": "MorkovkA"
},
{
"code": "",
"text": "This looks like some JavaScript.If you do not do anything to set some_id in object (or after Document.parse(object)) to someId before the insert, then the server will generate an ObjectId and your find will never succeed because most likely there will never be an object with { some_id : someId }. Unless off course some_id is set to someId outside saveObjectInMongo. But that is error prone as someone might pass a someId that is not some_id. So if some_id is already set to someId outside, you better read it inside saveObjectInMongo and remove the parameter someId.I am not too sure about the JS driver but you might need async in front of your insertOne call.",
"username": "steevej"
},
{
"code": "collection.find(eq(\"field\", value))",
"text": "Thank you for your answer! This is kotlin and I used java sync driver.\nThe question was not about id, it was just an example. The main idea was: we insert object and then search it by some fiield like collection.find(eq(\"field\", value)).\nThe problem is, that I am not sure the object will be searchable immediately after successful insert.",
"username": "MorkovkA"
},
{
"code": "",
"text": "I know the question was not about id. But if I answered yes it is and then you try your sample code, it will most likely failed. Two concepts are fundamental to have a precise answer.\nandSo the answer is not simple. But in most cases it will be found specially if in the same process. To be able to give a definitive answer a specific scenario must be presented.",
"username": "steevej"
},
{
"code": "w: \"majority\"\"majority\"",
"text": "@steevej Thank you. Am I right, that if we use write and read concerns as “majority” the data will be instantly searchable from another thread after write ack? We use only one mongod instance.Quote from write-concern majorityAfter the write operation returns with a w: \"majority\" acknowledgment to the client, the client can read the result of that write with a \"majority\" readConcern.",
"username": "MorkovkA"
},
{
"code": "",
"text": "Am I right, that if we use write and read concerns as “majority” the data will be instantly searchable from another thread after write ack?Yes this is correct.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | When inserted object is available for search | 2020-12-02T18:45:45.743Z | When inserted object is available for search | 2,438 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "Dear.\nis there anyway to get the mongo atlas error log, please.",
"username": "Chuong_LA"
},
{
"code": "",
"text": "Hello @Chuong_LAthis might depend on which cluster you are running. You can find here if this feature is available and, if so, how to retrieve the logs.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "I’m using M40 cluster, and can get Mongo logs, but i’d like to get just error log. So can i do that?",
"username": "Chuong_LA"
},
{
"code": "",
"text": "Hello.\nI am not aware that there is a single error-log - I like to pass this question to all readers.\nI’d suggest to check out mlogfilter from the mtools package. This is a great filter tool for mongodb logs.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Oh, thank you, I will try it",
"username": "Chuong_LA"
},
{
"code": "mlogfilter 'mongod.log' --word exceptionmlogfilter ‘mongod.log’ --word exception",
"text": "Hello @Chuong_LAhere are some lines to support by example:mlogfilter 'mongod.log' --word exception\n filter all log lines with the string “exception”mlogfilter ‘mongod.log’ --word exception --namespace <db.colloection> | mplotqueries`\n filter all log lines with the string “exception” and visualizegrafik1079×256 21.6 KB\nThe dots can be clicked and will show the actual log line.There are plenty more options with mlogfilter and the mtools. So I like to encourage you to do further reading to get the maximum out of it as a start you basically only need to install the mtools and go with the fist example.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks a lot, i’ll try",
"username": "Chuong_LA"
}
] | How to Get Error Log | 2020-12-01T07:47:53.490Z | How to Get Error Log | 3,482 |
null | [
"swift",
"graphql"
] | [
{
"code": " class Dog {\n owner: Person \n }\n\n class Person {\n var dogs: LinkingObjects(Dog.self, \"owner\")\n }\nquery {\n persons {\n dogs\n }\n}\n[{\n name : \"Person 1\"\n dogs: []\n},\n{\n name : \"Person 2\"\n dogs: []\n }]\n",
"text": "I have my data in RealmSwift structured like:If I perform the following GraphQL query on my data I get unexpected results:The result I am getting looks liek this;Even though several dogs point at Person1/Person2. Is there anything I am missing in my query to populate fields?\nAlso another question about the graphql endpoint: can I access it from my realm functions or is there another way to have a “stored query” taking an argument and delivering results to clients?",
"username": "Christian_Huck"
},
{
"code": "",
"text": "Hi Christian Is there anything I am missing in my query to populate fields?GraphQL currently doesn’t support LinkingObjects so unfortunately you aren’t missing anything. LinkingObjects are a Realm Database concept that aren’t represented in your server-side JSON Schema or relationships, so you can’t query them directly like this in GraphQL.is there another way to have a “stored query” taking an argument and delivering results to clientsYou can use a custom resolver to define your own queries/mutations or a computed property on a generated type.You could define custom resolvers to recreate the LinkingObjects fields in GraphQL. This should work fine, just know that you’ll need to manually look up the relationship in the resolver function + define a custom resolver for every LinkingObjects field.",
"username": "nlarew"
},
{
"code": "",
"text": "Thanks @nlarew, even though if this is not was I was hoping for. One more question closely related to that: can I also do graphQL queries from within a realm function, using a graphql service? I was not able to do it but it would greatly help with creating these stored queries.",
"username": "Christian_Huck"
},
{
"code": "",
"text": "Realm functions don’t currently include a built-in GraphQL client but that’s definitely a good idea! I just searched on the official MongoDB Realm feedback site and didn’t see any posts that match your ideas. If you want to see support for LinkingObjects in GraphQL and/or a native GraphQL client in functions, I highly encourage you to create a couple of posts on the feedback site so that other devs can upvote them (I will too!). I can tell you that the product/engineering teams actively check those posts and use them to help prioritize work, so that’s your best bet in the longer term.Short term you have a couple of options to work with data in functions. Either:Hopefully one of these approaches works for you!",
"username": "nlarew"
},
{
"code": "",
"text": "Thanks @nlarew. At this point I think GraphQL does not add value to my purpose (retrieving a multi-level data hierachy quickly) so I decided to implement my stored query purely with mongodb queries in realm functions to avoid the pain of integrating a GraphQL client into functions (I have negative experience with manually setting up GraphQL connections).Feel free to checkout (and upvote if you like ) the feedback I created here and there.",
"username": "Christian_Huck"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | LinkingObjects and GraphQL | 2020-11-27T23:48:14.016Z | LinkingObjects and GraphQL | 2,981 |
null | [
"queries"
] | [
{
"code": "",
"text": "How to use $elemMatch in an if statement? I tried declaring my find value to a variable and made an if-else statement using None (my app is in python, None basically means Null). It works but it doesn’t work if I give it a query as it returns the _id of the query instead so its not a None.So how can I write an if statement in MongoDB using a queried $elemMatch? (you can answer this question in whichever programming language you want to use, even shell would be great. I just want to get the logic of it. Thanks.",
"username": "SomeoneOnTheInternet"
},
{
"code": "",
"text": "Hi @SomeoneOnTheInternet,Welcome to MongoDB community!Can you elaborate more on the requirements, maybe even provide a pseudo code or the sample doc before and after the query.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "isfriendsorno = user_data_collection.find_one(\n {\n \"friends\": {\n \"$elemMatch\": {\n \"friend_id\": ObjectId(str(spuseruserid))\n }\n }\n })\nisfriendsorno = user_data_collection.find_one({\"_id\": ObjectId(str(curuseruserid))},\n {\n \"friends\": {\n \"$elemMatch\": {\n \"friend_id\": ObjectId(str(spuseruserid))\n }\n }\n })\n",
"text": "for an example I’m going to give the question with pymongo as thats what I’m using on my app\nsoreturns a query if the user is found and returns a None if it isn’t so its super easy to use it in an if-else statement, but the problem is I’m building a social media app and as this doesn’t have a user to search for (the search query or whatever its called) it returns a general data looking if any other user is friends with the user the friend request is sent to even if the current user isn’t friends with the other user and so it causes problems in the app like showing people who aren’t friends with some users as friends with those users.But when I try it with a search query like:if the other user isn’t friends with the current logged in user this assigns the value of the current logged in users _id to the variable so I cant use it in an if-else statement using “None”.\nexample of what it assigns to the variable if the value is isn’t found in the array when I print it out to see — {’_id’: ObjectId(“whatever the id is”)}}\nI tried tuning it into a string with str(), than I tried to literally turn it into a string with quotes and put the value in a variable inside the quotes, I tried to access it with square brackets, I tried to convert it into another complete user data if its found and just access the part I need later but none of them didn’t work, any help would be appreciated.PS: I’m kind of new to MongoDB and English isn’t my first language so pardon me if I explained it bad.",
"username": "SomeoneOnTheInternet"
},
{
"code": "isfriendsorno = user_data_collection.find_one({\"_id\": ObjectId(str(curuseruserid)),\n \"friends\": {\n \"$elemMatch\": {\n \"friend_id\": ObjectId(str(spuseruserid))\n }\n }} )\n",
"text": "Hi @SomeoneOnTheInternet,I think your actual problem is that the second condition is not inside the condition object but in projection part.Try the following statement:Notice that the command now have a single condition object with an “and” between _id and friends.Let me know if that works for you.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "isfriendsorno = user_data_collection.find_one({\"_id\": ObjectId(str(curuseruserid)),\n \"friends.friend_id\": ObjectId(str(spuseruserid))\n } )\n",
"text": "Hi @SomeoneOnTheInternet,Additionally if the friends array only have ids you can use:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to get a result from $elemMatch (example: True/False) | 2020-12-01T21:47:17.213Z | How to get a result from $elemMatch (example: True/False) | 3,228 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "I want to upgrade mongodb from 2.4 to 4.4.Is there mongodb upgrade any step?2.4 →2.6→3.0→3.2→3.4→3.6→4.0→4.2→4.4\nsequential step2.4→3.0→4.0→4.4\nlarge sequential step2.4→4.4\ndirect stepHow should I do it? or is there another step?",
"username": "Kim_Hakseon"
},
{
"code": "mongodumpmongorestoresetFeatureCompatibilityVersion",
"text": "Hi @Kim_Hakseon,Your general options are currently #1 (sequential in-place major version upgrades with minimal downtime) or #3 (use mongodump & mongorestore to try to upgrade directly). You also have to coordinate upgrading your driver(s), which should happen prior to the server upgrade.There are some variations on the upgrade options if you can get to a version of MongoDB supported by automation tooling. For further discussion, see my reply on Replace mongodb binaries all at once? - #3 by Stennie_X.Since you are catching up on seven years of upgrades, a lot has changed. If you are following the in-place upgrade approach, make sure you read the upgrade steps and compatibility changes for each major release. Some major releases may have slightly different upgrade steps. In particular, MongoDB 3.4+ major version upgrades include an explicit setFeatureCompatibilityVersion command to enable persisting data changes that previous releases may not not handle gracefully.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Oh, Thank you very much for your reply. Are you a mongodb vendor employee?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Are you a mongodb vendor employee?Hi @Kim_Hakseon,Yes, I work for MongoDB. You can find out more in my intro on the forum: 🌱 G'day, I'm Stennie from MongoDB 🇦🇺.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank youYour answer will be very, very helpful to me.Thank you. ",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "I have one more question.I am now upgrading mongodb step by step.But there’s something I don’t understand.This is what I record as I upgrade, and when I go from 3.x version to 4.0 version, the engine changes and the db size changes.\nBut why does the db size change when upgrading from 2.x version to 3.x version even though it is the same engine?I used dump/restore to go up from 2.6 version and 3.6 version to 3.0 and 4.0 version.",
"username": "Kim_Hakseon"
},
{
"code": "dbPath",
"text": "But why does the db size change when upgrading from 2.x version to 3.x version even though it is the same engine?Hi @Kim_Hakseon,Some storage size differences are expected depending on the provenance of your data and the server versions/configuration involved. For a general comparison after upgrading or restoring your data, I would look at the db/collection/document counts and the data size which should be consistent.Assuming “db size” is measuring the storage size of your dbPath, the change you observed is likely due to MMAPv1 Record Allocation Behaviour Changes in MongoDB 3.0 which removed support for the legacy dynamic record allocation strategy.Earlier versions of MongoDB supported a dynamic record allocation strategy (variable sizes with padding factor for document growth) which was the historical MMAP default for new collections created prior to MongoDB 2.6. The dynamic allocation strategy could result in fragmentation and excessive storage use over time, and it was replaced with Power of 2 Sized Allocations. The new Power of 2 allocation strategy enabled better reuse of disk space with MMAP, but may preallocate more initial space depending on the size of your documents.Current server releases aren’t using semantic versioning so the first two digits of each release series are significant (2.4, 2.6, 3.0, …) rather than comparing 2.x versus 3.x (see MongoDB Versioning). The MMAP storage engine version is associated with a server release, with the general expectation that patch releases (x.y.z) do not introduce and backward compatibility changes but major releases (x.y) may. Any changes are documented in the release notes.That may be helpful when considering why the in-place upgrade path currently doesn’t support skipping one or more successive major releases.Note: the server versioning scheme will change starting with MongoDB 5.0 in 2021: Accelerating Delivery with a New Quarterly Release Cycle, Starting with MongoDB 5.0.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "That’s the difference in allocation, that’s the difference in size.Thank you very much.And I’m really excited and excited to hear that 5.0 will be released next year.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Upgrading from MongoDB 2.4 to 4.4 | 2020-12-01T09:45:44.125Z | Upgrading from MongoDB 2.4 to 4.4 | 7,035 |
null | [] | [
{
"code": "initialSyncTransientErrorRetryPeriodSeconds10",
"text": "I want to know how “Resumable Initial Sync” works.Fault Tolerance\nStarting in MongoDB 4.4, a secondary performing initial sync can attempt to resume the sync process if interrupted by a transient (i.e. temporary) network error, collection drop, or collection rename. The sync source must also run MongoDB 4.4 to support resumable initial sync. If the sync source runs MongoDB 4.2 or earlier, the secondary must restart the initial sync process as if it encountered a non-transient network error.\nBy default, the secondary tries to resume initial sync for 24 hours. MongoDB 4.4 adds the initialSyncTransientErrorRetryPeriodSeconds server parameter for controlling the amount of time the secondary attempts to resume initial sync. If the secondary cannot successfully resume the initial sync process during the configured time period, it selects a new healthy source from the replica set and restarts the initial synchronization process from the beginning.\nThe secondary attempts to restart the initial sync up to 10 times before returning a fatal error.Is there a special log or data that stores the status?How does this work successfully?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon\nYou can find some more technical details in the Data Cloning section of the Replication Internals documentation and should answer your question.Dan",
"username": "Daniel_Pasette"
},
{
"code": "",
"text": "Thank you ",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,One of our Technical Services engineers (@Nuno_Costa) wrote an article with more details on the new Resumable Initial Sync in MongoDB 4.4.There’s a new forum topic for any questions or discussions on this: Would you like to know more about the new MongoDB v4.4 \"Resumable Initial Sync\" feature?.Regards,\nStennie",
"username": "Stennie_X"
}
] | How does Resumable Initial Sync work? | 2020-11-03T05:07:19.661Z | How does Resumable Initial Sync work? | 4,389 |
null | [
"cxx"
] | [
{
"code": "",
"text": "I am looking for a simple c++ example of conencting to a database. The examples provided with the cxx driver are all in the same cmake files as the driver cmake, meaning it is very complex to work out how to build them. Is there a standalone example?",
"username": "chris_d"
},
{
"code": "c++ --std=c++11 getstarted.cpp -o getstarted $(pkg-config --cflags --libs libmongocxx)\n",
"text": "Hi @chris_d,Is there a standalone example?Please take a look at get-started-cxx repository for a standalone example. It contains a simple example of connecting to MongoDB. This repository is part of the Get-Started project, see also get-started-readme for more information.For example, to compile a C++ file:Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks, that’s a pretty complex example - I’m looking for something much simpler that just connects, executes a query, and that’s all.\nAlso this example has docker and pkgconfig requirements that means it’s unsuitableIs there a simple standalone example?",
"username": "chris_d"
},
{
"code": "pkg-configc++ --std=c++11 getstarted.cpp -o getstarted $(pkg-config --cflags --libs libmongocxx)\nc++ --std=c++11 getstarted.cpp -o getstarted -I/usr/local/include/mongocxx/v_noabi -I/usr/local/include/bsoncxx/v_noabi -L/usr/local/lib -lmongocxx -lbsoncxx\nc++",
"text": "Hi @chris_d,Also this example has docker and pkgconfig requirements that means it’s unsuitableThe Docker is included if you want to have the environment example as well. The environment includes cmake, pkg-config, MongoDB C driver, etc. In order to compile C++ you need to provide path to the installed library, which could be different depending on your environment setup.The pkg-config is a helper tool, to insert the correct compiler options. In the case above:This would be expanded to:Is there a simple standalone example?You could just take the code example on the repository (i.e. cxx/getstarted.cpp) and compile by constructing your own c++ build command.Regards,\nWan.",
"username": "wan"
}
] | Simple c++ example | 2020-12-02T00:55:04.020Z | Simple c++ example | 5,509 |
null | [
"dot-net"
] | [
{
"code": " IMongoQuery query = Query.EQ(MongoChildData.ItemIdProperty, objectId);\n List<long> dbChildDataIdsAboveThreshold = childDataCollection.Find(query).SetSortOrder(SortBy.Ascending(MongoChildData.ItemIdProperty, MongoChildData.ChildIdProperty)).SetFields(Fields.Include(MongoChildData.ChildIdProperty)).Select(a => a.ChildId).ToList();\n",
"text": "This query is taking minutes to run:I have an index on the collection:m_childDataCollection.CreateIndex(IndexKeys.Ascending(MongoChildData.ItemIdProperty).Ascending(MongoChildData.ChildIdProperty));The collection contains a few million entries but I would expect this query to take seconds. Any ideas what I am doing wrong?",
"username": "Ian_Hannah"
},
{
"code": "mongoexplain(\"allPlansExecution\")",
"text": "@Ian_Hannah Welcome to the forum!Can you provide some more details to help investigate this:Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "We are using Mongo 3.6.13 .I am trying to use this query in mongo:db.childdata.find({ itemId : “5c097bfe8c0e481ab8a82b6d” }).explain(“allPlansExecution”)but it just seems to hang.",
"username": "Ian_Hannah"
},
{
"code": "",
"text": "I should have said in the previous comment only a 101 ids in the ids list.",
"username": "Ian_Hannah"
},
{
"code": " List<long> ids = new List<long>();\n ObjectId objectId = new ObjectId(\"5c097bfe8c0e481ab8a82b6d\");\n IMongoQuery query = Query.EQ(MongoChildData.ItemIdProperty, objectId);\n MongoCursor<MongoChildData> cursor = childDataCollection.Find(query).SetSortOrder(SortBy.Ascending(MongoChildData.ItemIdProperty, MongoChildData.ChildIdProperty)).SetFields(Fields.Include(MongoChildData.ChildIdProperty));\n cursor.SetFlags(QueryFlags.NoCursorTimeout);\n foreach (MongoChildData cd in cursor)\n {\n ids.Add(cd.ChildId);\n }\n",
"text": "FYI I have tried this as well (using a cursor):And it never completes. If I break the execution there are only 101 ids?",
"username": "Ian_Hannah"
},
{
"code": " List<long> ids = new List<long>();\n ObjectId objectId = new ObjectId(\"5c097bfe8c0e481ab8a82b6d\");\n IMongoQuery query = Query.EQ(MongoChildData.ItemIdProperty, objectId);\n var v = childDataCollection.Find(query).SetSortOrder(SortBy.Ascending(MongoChildData.ItemIdProperty, MongoChildData.ChildIdProperty)).SetFields(Fields.Include(MongoChildData.ChildIdProperty)).Select(a => a.ChildId);\n foreach (var b in v)\n {\n ids.Add(b);\n }\n",
"text": "Hi Stennie,It was the ToList() that as making the query slow.However, if I do this:Then the query hangs after the first 100 ids. Doing a ToList() never seems to return. I estimate that there are around 700,000 child ids to return.How can it take so long?",
"username": "Ian_Hannah"
},
{
"code": "",
"text": "Hi,I am having the same issue with the toList! Did you get any response about it?",
"username": "Sergio_Fiorillo"
}
] | Mongo query is taking far too long even with an appropriate index | 2020-03-05T09:40:44.073Z | Mongo query is taking far too long even with an appropriate index | 6,797 |
[
"dot-net",
"production"
] | [
{
"code": "",
"text": "This is a patch release that addresses some issues reported since 2.11.4 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.11.5%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.",
"username": "James_Kovacs"
},
{
"code": "",
"text": "",
"username": "system"
}
] | .NET Driver 2.11.5 Released | 2020-12-02T20:11:31.058Z | .NET Driver 2.11.5 Released | 1,867 |
|
null | [] | [
{
"code": "",
"text": "Hi.\nI have been encountering an interesting problem for 2 months.I have an object type which keeps a nested object inside. There are also several DBRefs in the main object. For the unknown circumstances (I do not exactly know how the problem is triggered), nested object begins to be not written under the main object for that collection during save operation. But DBRefs are fine.So when I drop all the indexes for that collection and recreate them programmatically, this nested object problem is solved and begins to be written accordingly; until another time.Do you have a suggestion/opinion about this?\nThanks.",
"username": "kutay_YILDIRICI"
},
{
"code": "",
"text": "Hi @kutay_YILDIRICI welcome to the community!DBRefs are specific to a driver implementation, so it’s hard to tell what went wrong without knowing the driver you’re using. Could you post additional details:Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "\t_id\n\ttext\n\tsubPost --> nested document\n\tcontact --> DBRef\n\tconversation --> DBRef\n",
"text": "Hi @kevinadi\nMy problem is not about DBRefs, it is about nested documents. Once it is triggered, it is happening all the time, until I drop and recreate the indexes. After recreating the indexes and restarting the apps, documents begin to be written successfully.\nLet me give you an example:In my “POST” collection:\nSuccessful record:Fail record:_id\ntext\ncontact --> DBRef\nconversation --> DBRefWe use Mongo Atlas, v3.6.20 with 3 replica sets\nWe were using Mongo Java driver: 3.8.2 , but upgraded to Mongo Java driver: 3.11.2 recently. For both drivers, we are experiencing that problem.",
"username": "kutay_YILDIRICI"
},
{
"code": "",
"text": "Hi @kutay_YILDIRICIOnce it is triggered, it is happening all the time, until I drop and recreate the indexes. After recreating the indexes and restarting the apps, documents begin to be written successfully.So the documents start to get written correctly once you dropped the indexes and restarted the app. Have you tried doing one of these but not both? E.g. try to restart the app and not dropping the indexes, or vice versa, to eliminate one or the other?Also, how are the nested document created? Are they created in a separate app, or a separate function within the code? Could you try to e.g. insert some print statements in the function that create the nested document to ensure that they’re called?On another note, could you remember in what situation does the nested document stop being created? Is it just random?Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi @kevinadi\nDropping the indexes is a must. But we did not try “drop the indexes and not restart the app” option, because by restarting the app, we can recreate the indexes dynamically. Maybe we also can try creating these indexes manually.There is also another application, a “consumer” application for inserting these documents. It reads from a queue, maps that value to the nested document, creates the main document and adds the nested document to main document, etc. We run and debugged that consumer application on our local. After the saving operation, returned/saved object contains the nested object, but actually the nested document is not in the main document in the Mongo collection.We don’t know what situation triggers that problem. I don’t think it is a random thing. Maybe it is not just directly indexing related problem; also Spring data and persistence can be involved.",
"username": "kutay_YILDIRICI"
}
] | Nested object write problem. Possibly indexing related | 2020-11-17T13:52:21.538Z | Nested object write problem. Possibly indexing related | 3,113 |
[
"aggregation"
] | [
{
"code": "",
"text": "Hi! I hope I was able to catch your eye with my click baity title.\nI am honestly stuck in a problem that I cannot for the life of me solve.I’ve attached the information in this Google Link: Cry for Help - Google DocsIn a nutshell, I have 2 collections that I want to reference then rereference again.I have employee and manages: \n11159×267 33 KB\nWhat I have currently done is to db.employee.aggregate then look up if the employee id matches with the manages._id.manageeID as manageeDataWhat I’d like to do next is from manageeData, get _id.managerID and get the employee data by comparing it to the employee tableBut I just can’t seem to find the right pipeline for it & I just learned MongoDB 2 days ago, and I’m am really stuck in this problem. I hope you can help me.Do I need to use embedding instead of referencing? Is there some sort of pipeline I should use?",
"username": "Gelibellybeans"
},
{
"code": "manageeDataname{ \"_id\" : 1, \"name\" : \"abc\", \"output\" : [ ] }\n{ \"_id\" : 2, \"name\" : \"efg\", \"output\" : [ ] }\n{\n \"_id\" : 3,\n \"name\" : \"ijk\",\n \"output\" : [\n {\n \"_id\" : {\n \"managerID\" : 1,\n \"manageeID\" : 3\n }\n }\n ]\n}\n{\n \"_id\" : 4,\n \"name\" : \"mno\",\n \"output\" : [\n {\n \"_id\" : {\n \"managerID\" : 1,\n \"manageeID\" : 4\n }\n }\n ]\n}\n{ \"_id\" : 5, \"name\" : \"xyz\", \"output\" : [ ] }",
"text": "Hello @Gelibellybeans, welcome to the MongoDB community forum.What I’d like to do next is from manageeData, get _id.managerID and get the employee data by comparing it to the employee tablePlease share a sample of the expected output (as you described above)?That said, the lookup operation output (I think, the manageeData) looks like this (for brevity I have not included all the fields, just a employee name):",
"username": "Prasad_Saya"
},
{
"code": "$graphLookup$lookup",
"text": "If you expect to get the full hierarchy then it’s possible you need $graphLookup rather than regular $lookup…",
"username": "Asya_Kamsky"
}
] | Lookup! When your looking down | 2020-11-30T10:57:22.517Z | Lookup! When your looking down | 1,265 |
|
null | [] | [
{
"code": "$addToSet",
"text": "My question is based on this StackOverflow questionCurrently, having a document with embedded array of documents, the $addToSet operation compare each of the fields of the nested document (or the hash of the nested document. I am not aware how was implemented).It will be great if we have the possibility to specify key / field that will be used in making the decision whether the document we try to add already existsThe proposed answer in the SO question is fine for adding only one element. Adding more elements the same way (with query for existence of the nested document) may lead to loosing the benefits оф ACID features for working with single document because of the multiple separate update statements that should be executed. Even bulk updates don’t fix the problem.Looking at the question, It seems to me that It is a popular one. I am wandering whether adding such feature for $addToSet operation have been ever discussed. It will be nice if it is available in the future and fix the problems described above.P.S. I know that I can use multi-document transactions but I don’t think using them for such case is appropriate.",
"username": "karaimin"
},
{
"code": "{ _id: 1, \n array: [\n { a: \"Foo\", other: 26 },\n { a: \"Bar\", other: 99 }\n]}\na:\"Bar\"a:\"Baz\"newSubDocs = [ { a: \"Bar\" }, { a:\"Baz\" } ];\ndb.coll.update( { _id:1 },\n [ \n {$set: { array: {$concatArrays: [ \n \"$array\", \n {$filter: {\n input:newSubDocs, \n cond: {$not: {$in: [ \"$$this.a\", \"$array.a\" ]}} \n }}\n ]}}}\n ]\n)\n{ \"_id\" : 1, \n \"array\" : [ \n { \"a\" : \"Foo\", \"other\" : 26 }, \n { \"a\" : \"Bar\", \"other\" : 99 }, \n { \"a\" : \"Baz\" } \n] }\n",
"text": "As of 4.2 there is a way to do this using aggregation expressions in update. There are some examples too.For your case, you would have to do something like this:Starting document:You want to add two subdocuments, one with a:\"Bar\" which already exists and should be ignored, and a:\"Baz\" which doesn’t exist and should be added… I guess specifics would depend on whether other fields exist and whether they should be ignored or “merged” in, but let’s say they should be ignored entirely, update would be something like:Result:If you want to do some sort of merge if the key field already exists, then the example is a bit more complex but I have something analogous to it in my MongoDB World 2019 talk starting around minute 21.",
"username": "Asya_Kamsky"
}
] | Has $addToSet improvement for specifying key already been discussed? | 2020-08-14T08:15:47.832Z | Has $addToSet improvement for specifying key already been discussed? | 2,852 |
null | [
"kotlin"
] | [
{
"code": "",
"text": "When you copy a datamodel from the MondoDb collections automatically get a @required annotation. When you import this to your Android application it doesn’t compile and you get the error message @Required not allowed on RealmList that contain other Realm model classes.I think it has been like this for a while. Is this a bug?",
"username": "Simon_Persson"
},
{
"code": "",
"text": "Would you mind sharing the explicit steps for copying the data model for the collections, so that I can try to reproduce?",
"username": "Claus_Rorbech"
},
{
"code": "",
"text": "It’s probably the same issue as this one: @Required not allowed on RealmList's that contain other Realm model classes on models saved from Realm Studio · Issue #1025 · realm/realm-studio · GitHubNot sure if this has been fixed yet in MongoDb Realm. But all generated models that had lists of them generated a kotlin class with @required which fails to compile if you import this to Android studio. I did send a support issue about this including samples and they were able to reproduce the issue.",
"username": "Simon_Persson"
}
] | Copied data models from MongoDb Realm doesn't compile | 2020-10-22T12:45:53.225Z | Copied data models from MongoDb Realm doesn’t compile | 2,397 |
null | [
"kotlin"
] | [
{
"code": "open class MyObj(\n var myField: ByteArray = ByteArray(0)\n): RealmObject()\nfun match(stuff: Array<ByteArray>): List<RealmReceivedCen> =\n realm.where<MyObj>()\n .oneOf(\"myField\", stuff)\n .findAll()\nString",
"text": "One of my Realm entities has a field that’s a byte array:I’m trying to do a query based on a byte array:This doesn’t compile. Can’t I do this match? Do I have to convert my field to a String then?Thanks!\nIvan",
"username": "Ivan_Schuetz"
},
{
"code": "",
"text": "I think it is just an oversight when implementing convenience methods for the other types. I have created In and oneOf for binary fields · Issue #7228 · realm/realm-java · GitHub to add it, but as the methods are not optimized and are basically just convenient wrappers around the other query API, you could do your own extension method inspired by the ones for the other types (realm-java/RealmQueryExtensions.kt at master · realm/realm-java · GitHub) in the meantime.",
"username": "Claus_Rorbech"
}
] | oneOf with array of array of bytes | 2020-04-01T20:05:13.725Z | oneOf with array of array of bytes | 2,587 |
null | [
"installation",
"devops"
] | [
{
"code": "systemctl start mongodRunning in chroot, ignoring request: start\n",
"text": "Asked in parallel in Running MongoDB within a Singularity container - Stack OverflowI created a Singularity container from which I want to run MongoDB. Installing worked, but when I try to start the server with systemctl start mongod, this results in the outputAre there some additional configurations I need to do in my container setup?\nOr in configuring MongoDB?\nI’m not familiar in dealing with chroot.",
"username": "Ksortakh_Kraxthar"
},
{
"code": "systemctl/usr/bin/mongo \"$@\"",
"text": "The issue was solved on StackOverflow. My mistake was to use systemctl, I replaced the call by directly calling /usr/bin/mongo \"$@\".",
"username": "Ksortakh_Kraxthar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Running MongoDB from a container | 2020-12-01T12:30:18.505Z | Running MongoDB from a container | 2,617 |
null | [] | [
{
"code": "",
"text": "After creating a new cluster in mongoDB Atlast (M30 on GCP infrastructure), the URL generated to connect to the new cluster does not contain the suffix “GCP” it is like [cluster_name.exaby.mongodb.net].The url generated by atlas for the cluster we built there did contain “GCP” in the name ; it was cluster_old.exaby.gcp.mongodb.net.Can anyone explain why we are getting such difference on the URL generation and how to fix it?Thanks",
"username": "Jean_Duclair_TAGU"
},
{
"code": "",
"text": "Hi @Jean_Duclair_TAGU,Welcome to MongoDB community!Over time Atlas evolve its dns assignments to clusters. Now clusters can be spun over multiple cloud providers therefore adding a cloud symbol suffix doesn’t make sense anymore.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Atlas cluster name generation for GCP | 2020-11-30T23:34:28.957Z | Mongo Atlas cluster name generation for GCP | 1,800 |
null | [
"atlas-device-sync",
"android"
] | [
{
"code": "",
"text": "Hi everyone, im new here, i develop a android aplication with Realm, but i stuck in something, unknow if is possible, i need that Realm sincronize when the aplication is closed, the reason is for keep always the data updated even if the app is not open, i tried with jobs but the listeners( i need do some stuff when listener changes) only works when the app is in background or first plane and with service realm doesnt work, i need know if is posible some configuration when realm is instanced or other solution for realize it. Thanks !",
"username": "David_Osorio"
},
{
"code": "",
"text": "I think its possible with background processing, it not depends on realm, depends on android platform the way android handle the app when its killed or in background. its have some limitations, but i think its possible",
"username": "Royal_Advice"
}
] | Realm in background on Android (Closed App) | 2020-12-01T19:35:52.416Z | Realm in background on Android (Closed App) | 2,143 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hey there!So my team has a meteor app that is using mongo. Right now, we make an API request to one of our other repos, do some stuff, and then hit that meteor app api to read/write the mongo data. We would like to read/write that data from the other repo instead of making an additional request. I would like to set this connection up as a singleton (functional programming rather than javascript classes). I’ve tried setting up the connection and exporting it to be used in other files to no avail. I’ve been able to connect, read/write within one file but not export that connection to be used by other files in Node. If anyone has any insight I would be super appreciative! Thanks so much!",
"username": "Austin_Burgard"
},
{
"code": "var database = new MongoInternals.RemoteCollectionDriver(\"<mongo url>\");\nMyCollection = new Mongo.Collection(\"collection_name\", { _driver: database });\n<mongo_url>mongodb://127.0.0.1:27017/meteor",
"text": "Hey @Austin_Burgard!Thanks for making the leap to the MongoDB Community! We’re lucky to have you! Looks like you are trying to read/write data from two MongoDB databases within the same Meteor application so you don’t need to make multiple round trips to get data. It would helpful to see your code to better help you, but the short answer is yes, you can do this.It is now possible to connect to remote/multiple databases with Meteor:Where <mongo_url> is a MongoDB URL such as mongodb://127.0.0.1:27017/meteor (with the database name)If you don’t think that this solves your problem, would you be willing to share more of your code and error logs to help me further troubleshoot your connection issues? Thank you! ",
"username": "JoeKarlsson"
},
{
"code": "const { MongoClient } = require(\"mongodb\")\nconst uri = \"the local instance of my mongodb from meteo\"\n\nlet client = new MongoClient(uri, { useUnifiedTopology: true })\n\nmodule.exports = client\nrequire(\"dotenv\").config({ path: \"../.env\" })\nconst client = require(\"./getMongo\")\n\nconst test = async () => {\n await client.connect()\n const users = client.db(\"meteor\").collection(\"users\")\n const one = await users.findOne()\n console.log(one)\n}\n\ntest()\n",
"text": "Thanks @JoeKarlsson!Im actually in two separate repos. One is a Meteor app. That has the MongoDB that I need to access in my other repo, which is a simple create-react-app with Node/Express. I did come up with this in order to read/write that Mongo instance from Meteor in the other node app.And then in another file where I’m wanting to use those operations:This works and I think will allow me to do read/write operations from other files. What do you think about this approach @JoeKarlsson?",
"username": "Austin_Burgard"
},
{
"code": "",
"text": "Yeah - that should work, but it might be a little complicated, but I think you understand the nuance and structure of your application more than I do If it works, and you don’t have any major performance issues, I say it’s good to go. Nice job!!! ",
"username": "JoeKarlsson"
}
] | Exporting A Single Instance of Existing Mongodb in Node | 2020-11-30T22:23:40.946Z | Exporting A Single Instance of Existing Mongodb in Node | 7,365 |
null | [
"production",
"c-driver"
] | [
{
"code": "",
"text": "I’m pleased to announce version 1.17.3 of libbson and libmongoc,\nthe libraries constituting the MongoDB C Driver.Bug fixes:No changes since 1.17.2; release to keep pace with libmongoc’s version.Thanks to everyone who contributed to this release.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C driver 1.17.3 released | 2020-12-01T22:45:37.470Z | MongoDB C driver 1.17.3 released | 1,920 |
[
"golang"
] | [
{
"code": "",
"text": "When doing authentication by db.RunCommand() , it always says: (BadValue) Auth mechanism not specified. The official documentation says mechanism is optional. Even I set it as {“mechanism”, “SCRAM-SHA-256”}, still not work.\nIt’s confusing, any suggestion is appreciated.\nMongoDB server version: 4.4.1\ngo: 1.15\ngo driver: v1.4.3",
"username": "Kevin_Meng"
},
{
"code": "",
"text": "May be shell version mismatch\nMake sure you are connected to admin DB",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Ramachandra,Thanks for reply.\nI noticed this thread, I didn’t use mongo shell, but go driver.\nI already used admin DB for authentication: db := client.Database(“admin”) in the sample code above.BR, Kevin",
"username": "Kevin_Meng"
},
{
"code": "clientOptions := options.Client.ApplyURI(\"mongodb://USERNAME:[email protected]:32768/\"\nclient, err := mongo.Connect(ctx, clientOptions)\nauthenticate",
"text": "Hi @Kevin_Meng,Can you try adding authentication information into the URI itself:Doing it this way will allow the driver to automatically authenticate each connection that it creates to the MongoDB deployment. You will not need to manually run the authenticate command.If I’m misunderstanding your use case and this solution doesn’t address your question, can you provide some information about why you need to manually execute this command?– Divjot",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "Hi Divjot_Arora,Thanks for the reply.\nI want to build a SaaS application with “Database per Tenant” approach and I need each tenant to authenticate with different credentials to MongoDB.\nI learned MongoDB completely separated the actions of \"connect” and “authenticate”, means we could leverage connection pool to create a pool of “blank” connections and then borrow a connection from the pool to do authentication for current tenant.\nThis is the reason why I separate the connection and authentication.\nCould you please advise the best practice for this use case? Thank you very much!BR, Kevin",
"username": "Kevin_Meng"
},
{
"code": "mongo.Client",
"text": "@Kevin_Meng Will all tenants share the same mongo.Client instance in your application? If so, can you outline the control flow for two tenants connecting to your app? How does the application know which connections belong to which tenants?– Divjot",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "Hi Divjot_Arora,Yes, the service requires user to access with JWT, there is a property:TenantID in the claims part of JWT which will be extracted to identify which tenant database to connect, authenticate, and do the subsequent DB queries, once complete, drop it; For a new request to the service, do the above steps again.\nHope this is helpful to you.BR, Kevin",
"username": "Kevin_Meng"
},
{
"code": "saslStartsaslContinueauthenticate",
"text": "Hi @Kevin_Meng,I don’t have much experience with this sort of use case. We generally recommend that a mongo.Client be created with a set of credentials which will then be automatically applied to all of the connections it creates. I’m not sure what the server behavior is if an application attempts to manually authenticate connections.Also, based on the authentication spec, the auth conversation for the SCRAM-SHA-256 mechanism uses saslStart and saslContinue commands, not the authenticate command, so it’s possible that the commands you’re sending are not in the format the server expects.– Divjot",
"username": "Divjot_Arora"
}
] | (BadValue) Auth mechanism not specified | 2020-11-20T23:13:48.294Z | (BadValue) Auth mechanism not specified | 7,062 |
|
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.4.4 of the MongoDB Go Driver .This release contains several bugfixes. For more information please see the release notes .You can obtain the driver source from GitHub under the 1.4.4 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site . BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Isabella_Siu"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.4.4 Released | 2020-12-01T20:40:20.419Z | MongoDB Go Driver 1.4.4 Released | 1,857 |
null | [] | [
{
"code": "",
"text": "The mongo-cxx-driver-r3.6.1 examples fail to compile:/mongo-cxx-driver-r3.6.1/examples/projects/mongocxx/cmake/static$ build.shCMake Error at CMakeLists.txt:74 (message):\nExpected BSONCXX_STATIC to be defined– Configuring incomplete, errors occurred!",
"username": "chris_d"
},
{
"code": "",
"text": "A post was merged into an existing topic: Compiling mongo-cxx-driver-r3.6.1",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | C++ driver examples failing to compile | 2020-12-01T19:52:25.916Z | C++ driver examples failing to compile | 2,555 |
null | [
"server",
"installation"
] | [
{
"code": "",
"text": "Trying to start mongodb-linux-x86_64-rhel62-4.4.2/\nfor the first time:./mongo: error while loading shared libraries: liblzma.so.0: cannot open shared object file: No such file or directory",
"username": "Dale_Wyttenbach"
},
{
"code": "sudo yum install libcurl openssl xz-libs",
"text": "Hi @Dale_WyttenbachFrom the folder name it looks like you are installing from tarball? You chosen installation method is good information to know.The prerequisites for a tarball installation on RedHat/CentOS/Oracle Linux are:Use the following command to install the dependencies required for the MongoDB Community .tgz tarball:sudo yum install libcurl openssl xz-libsliblzma.so.0 is provided by the xz-libs dependency.",
"username": "chris"
},
{
"code": "",
"text": "Thank you that is very helpful!",
"username": "Dale_Wyttenbach"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Shared libraries required for RedHat server | 2020-11-30T18:44:02.520Z | Shared libraries required for RedHat server | 3,083 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi Team,\nIs there is any limitation of storing special characters for string values in mongodb ??Thanks,\nGheri.",
"username": "Gheri_Rupchandani"
},
{
"code": "",
"text": "Hello @Gheri_Rupchandanican you please elaborate more about the case you have in mind? Do you want to use “special characters” in a string field? If so which do you have in mind that raise doubts, do you have a real world example?Regards,\nMichael",
"username": "michael_hoeller"
}
] | Is there any limitation with string value? | 2020-12-01T15:42:58.039Z | Is there any limitation with string value? | 1,194 |
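Worth adding for future readers: BSON strings are UTF-8, so any valid UTF-8 text (accents, symbols, emoji, CJK) round-trips without special handling. A quick mongo shell sketch, with an illustrative collection name:

// mongo shell; the "strings" collection is illustrative
db.strings.insertOne({ note: "quotes \"'’, symbols ©®™, emoji 👍, CJK 漢字" })
db.strings.findOne({ note: /©®™/ })   // the stored value comes back unchanged

The practical limits are elsewhere: a whole document must stay under 16 MB, and on older server versions field names could not contain '.' or start with '$'.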
null | [
"atlas-device-sync",
"performance"
] | [
{
"code": "",
"text": "Our app used realm cloud before for a live synchronization feature for realtime collaboration such as depicted in this early realm image video.. Unfortunately we realized that the speeds we can achieve with a M10 cluster in our region over the networks are far worse than the experience we had before with Realm. e.g. synchronizing the typing of a text took 1 second to be synced from one client to another via realm cloud. Now with MongoDB Realm, we experience delays of about 10 seconds as well as inconsistencies in the time, even on a cluster with no load.Regarding this issue I have the following questions:Are there plans to optimize this or is it in the nature of the replicated mongodb clusters, that syncs will take their time to arrive?What steps could an app developer do to optimize, e.g. only sync/write to the database after n keystrokes/points drawn.How will frequent writes, e.g. updating a string on every keystroke (e.g. “H, He, Hel, Hell, Hello” in a short timespan. affect the size of the database ( that includes history)",
"username": "Christian_Huck"
},
{
"code": "",
"text": "@Christian_Huck Thank you for your post. We realize that the sync performance right now leaves much to be desired, which is why this quarter’s product plan is almost exclusively dedicated to improving sync performance, its also why we are still flagging Realm Sync as Beta. Our initial implementation focused on correctness and bug fixing, and now that the sync stability has improved we are changing gears to performance. The underlying architecture of the new Realm Sync is such that we intended to not only meet, but far exceed performance in the legacy Realm Cloud - the legacy Realm Sync architecture had backed ourself into a corner with some of our decisions, and would have necessitated a large refactor anyway.That being said, even with the planned performance improvements, there is always going to be best practices that the user should follow in order to increase throughput. One is to always batch writes as much as possible on the client, if your client is writing a lot of data, then we recommend a rule of thumb of 10k objects per transaction. Another is to try to not store a lot of blob or binary data in realm and replicate via sync - this is because sync works using an operations log, essentially it is a copy of the state, one for actual state and one for the operation + payload. You could imagine that an implementation that simply inserted and deleted a 1MB photo over and over again would blow up your realm sync history log, reducing performance. The last thing to keep in mind, is that if your app has a shared writable data between groups of users you should design your partitioning strategy to maximize throughput. This is because OT, our conflict resolution algorithm, can generally be the largest factor when it comes to throughput, and this algorithm is applied per partition. The more users you have writing to a partition, the longer these users have been offline, and the amount of changes they have accred locally before syncing to the server-side are all variables which will decrease performance. If you are running into this then you should look to create more partitions and reduce the amount of users per-partition.I’m not sure on your specific use case but feel free to open up a support ticket and we can look at your particular use case and implementation and either make suggestions on how to improve performance now, or make sure that your specific use case will be helped by future work we are doing on the sync server.As an example, we recently had a support case where a user was collecting sensor data every 100ms and wanted this data to replicate to another sync user in realtime. However, the developer was performing a write operation every 100ms (per-sensor reading) which caused the user receiving the data, their sync time would continuously lag behind, and the lag time would increase the longer the sync write user continued to write. What we did was instead batch those sensor readings to perform a sync write operation on the client around every 2-3 seconds. This enabled the receiving sync user to receive the data in constant time - there was no longer any steadily increasing lag time. I hope this helps",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward thanks for the uodate. Meanwhile, moving to a production environment, our speed has improved. One thing we did is migrating from mutable collection sets to inverse relationships (Dog->Owner instead of Owner -> dogs).I will open a support ticket still to see if we can improve our speed further and optimize our writes.",
"username": "Christian_Huck"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to optimize Realm Sync for performance | 2020-11-08T23:27:44.375Z | How to optimize Realm Sync for performance | 4,329 |
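To make the batching advice above concrete, a React Native (Realm JS) sketch that buffers frequent readings and commits them in one write transaction every few seconds; the "Reading" schema name and its fields are assumptions:

// Called every ~100ms by the sensor; we only buffer here, no Realm write yet.
const buffer = [];
function onSensorReading(value) {
  buffer.push({ value, at: new Date() });
}

// Flush the buffer in a single transaction every ~2.5 seconds.
setInterval(() => {
  if (buffer.length === 0) return;
  realm.write(() => {
    for (const r of buffer) {
      realm.create("Reading", r); // "Reading" is an illustrative schema name
    }
  });
  buffer.length = 0;
}, 2500);

Grouping writes this way keeps the sync history compact, since each transaction becomes one changeset instead of dozens.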
[] | [
{
"code": "",
"text": "Hello all! I am a very new user to mongodb and am managing my DB using compass. I am using pipedream to port data over from an API. This works fine and my data shows up in mongodb.However, the API will always send the latest amount of XP that a user has. How do I make it so each user (determined by the input “user”) will have a single DB entry and when new data is brought in (new xp amounts and level) that that db entry will be updated with the latest. Thank you,",
"username": "Park_Dalton"
},
{
"code": "",
"text": "Hello @Park_Dalton, welcome to the MongoDB community forum.Here is a way you can think about.The data coming thru the pipedream can be inserted into a staging collection - this is an in-between or an intermediate collection. Then, define a Change Stream on this staging collection.The change stream can subscribe to a data change event (i.e., document insert) and do this further: The collection where your original data is there is checked for the “user”, and update the document with the latest data from the staging collection.",
"username": "Prasad_Saya"
}
] | New user trying to use pipedream to update existing data | 2020-12-01T01:18:06.563Z | New user trying to use pipedream to update existing data | 1,959 |
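A simpler alternative to the staging-collection approach, if the incoming payload is self-contained: an upsert keeps exactly one document per user and overwrites it with the latest values. Collection and field names below mirror the question but are assumptions:

// Node.js driver sketch: one document per "user", updated in place.
await db.collection("players").updateOne(
  { user: incoming.user },                 // match key from the API payload
  { $set: { xp: incoming.xp, level: incoming.level, updatedAt: new Date() } },
  { upsert: true }                         // insert the document if the user is new
);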
|
null | [] | [
{
"code": "",
"text": "How do I connect MongoDB Atlas to cPanel?\nIs it possible to use MongoDB Atlas and deploy backend NodeJS and frontend ReactJS in cPanel?\nIf so, do you have any documentation?",
"username": "Md_Islam"
},
{
"code": "mongocli",
"text": "Welcome to the MongoDB community @Md_Islam!MongoDB Atlas is a managed database service which you can administer using the Atlas web UI, API, or mongocli Command Line Interface. Atlas does not include support for hosting application backends like Node.js, however there is an integrated serverless platform called MongoDB Realm which you can use to run backend data services including functions and triggers. MongoDB Realm has various SDKs (including a Web SDK you can call from JavaScript and TypeScript applications) that enable you to write client applications interacting with backend data services which are written in JavaScript. MongoDB Realm’s serverless functions support using a subset of Node.js packages as External Dependencies.cPanel is a web hosting panel for managing hosted applications, and is commonly available on shared or managed application hosting services. You can use cPanel to manage your application environment and connect to a MongoDB Atlas database cluster. However, I’m not aware of any specific integration (or need for integration) with Atlas. You should just have to configure your Atlas IP security and use the standard approach to Connect via a Driver. If you are running an application server, you will generally want to choose an Atlas cloud provider & region which is close (by network proximity) in order to minimise network latency for application requests.If you’re having trouble connecting to Atlas, there are some great Connection Tutorials in the documentation and a guide to Troubleshooting Connection Issues.You can also ask for feedback and suggestions in the community forums.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I connect MongoDB Atlas to cPanel | 2020-11-30T18:22:15.521Z | How do I connect MongoDB Atlas to cPanel | 7,740 |
null | [
"atlas-device-sync"
] | [
{
"code": "currentUser.idnewUser.id.createEnding session with error: failed to validate upload changesets: SET instruction had incorrect partition value for key \"ownerId\" (ProtocolErrorCode=212)MongoDB error: E11000 duplicate key error_idintegrating changesets failed: error creating new integration attempt: error doing preliminary merge for integration attempt: error finding merge window: error finding reciprocal history version range: connection(cluster0-shard-00-01-ltqcv.mongodb.net:27017[-11247596]) failed to write: context canceled (ProtocolErrorCode=101)table.hppREALM_ASSERT(!key.is_unresolved());",
"text": "I’m looking for a way to move objects from one realm into another. I’ve been trying to do this by modifying their partition values accordingly (for me, that’s ownerId, which I’m changing from currentUser.id to newUser.id). At this point I’ve tried every possible variation of this that I could think of: doing a .create with the object and deleting the old one, migrating them all to a local realm and then back to the other synced realm, just changing the ownerId and praying that would do something.I’ve gotten so many different errors with everything I’ve tried, the most common being\nEnding session with error: failed to validate upload changesets: SET instruction had incorrect partition value for key \"ownerId\" (ProtocolErrorCode=212)\nas well as\nMongoDB error: E11000 duplicate key error (even after generating new _ids)\nand\nintegrating changesets failed: error creating new integration attempt: error doing preliminary merge for integration attempt: error finding merge window: error finding reciprocal history version range: connection(cluster0-shard-00-01-ltqcv.mongodb.net:27017[-11247596]) failed to write: context canceled (ProtocolErrorCode=101)I’ve also gotten a ton of native iOS errors (I’m using React Native) intermittently when logging out users, logging in users on different devices, and just occasionally when logging in. (the most recent was in table.hpp line 249: REALM_ASSERT(!key.is_unresolved());.Would love some direction on how to move forward on this.",
"username": "Peter_Stakoun"
},
{
"code": "",
"text": "@Peter_Stakoun Can we see how you’ve set up your schema and partitionKey? Also how are you copying the data over? You’ll need to copy the values over - you cannot just copy object references over since they are from different realms",
"username": "Ian_Ward"
},
{
"code": "static schema: ObjectSchema = {\n name: 'User',\n primaryKey: '_id',\n properties: {\n _id: 'string',\n ownerId: 'string',\n .....\n },\n}\ndestinationRealm.create(\n User.schema.name,\n {\n ...User.serialize(user),\n ownerId: app.currentUser!.id,\n },\n Realm.UpdateMode.All,\n)\nuserserializeownerIdEnding session with error: failed to validate upload changesets: SET instruction had incorrect partition value for key \"ownerId\" (ProtocolErrorCode=212)",
"text": "I ended up figuring out a way to get around this in a way that didn’t require me to change any partition keys, so I would be ok with closing out this issue without resolution.My schema is set up as such:User is a custom object I use to wrap user data and ownerId is the partition key.\nThe way I was copying the object was:where user is an object within the source realm and serialize is a function I wrote to turn the object into a vanilla js object with all of the object’s properties. I assumed that destructuring this object and changing the ownerId would work, but I got the error Ending session with error: failed to validate upload changesets: SET instruction had incorrect partition value for key \"ownerId\" (ProtocolErrorCode=212) when using this method.",
"username": "Peter_Stakoun"
},
{
"code": "...obj.keys()obj.entries()",
"text": "@Peter_Stakoun So the spread syntax ... does not work with Realm. We do have obj.keys() and obj.entries() that return the realm properties which will work in a for…in loop. Hope that helps.",
"username": "Ian_Ward"
}
] | Migrating data from one sync realm to another | 2020-11-29T02:08:11.274Z | Migrating data from one sync realm to another | 3,120 |
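A sketch of the copy suggested in the last reply, using the object's own keys() instead of the spread operator. User, ownerId, and app.currentUser.id come from the question's code; the newId() helper is hypothetical:

// Turn a Realm object into a plain JS object (values, not references).
function serialize(realmObj) {
  const plain = {};
  for (const key of realmObj.keys()) {
    plain[key] = realmObj[key];
  }
  return plain;
}

destinationRealm.write(() => {
  const copy = serialize(user);
  copy._id = newId();                  // hypothetical helper: a fresh primary key
  copy.ownerId = app.currentUser.id;   // partition value of the destination realm
  destinationRealm.create("User", copy, Realm.UpdateMode.All);
});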
null | [
"node-js",
"connecting",
"typescript"
] | [
{
"code": "cannot read property 'replace' of undefinedawait client.connect()import { MongoClient } from 'mongodb';\n\nconst username = encodeURIComponent(process.env.REACT_APP_MONGO_READ_USERID as string);\n\nconst userpass = encodeURIComponent(process.env.REACT_APP_MONGO_READ_USERPASS as string);\n\nconst uri = `mongodb+srv://${username}:${userpass}@cluster0.xup6s.mongodb.net/ggtavern?w=majority`;\n\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n\nexport const getItemsfromMongo = async <T>(collection: string): Promise<T[]> => {\n\n try {\n\n console.log(client);\n\n try {\n\n await client.connect();\n\n console.log(client);\n\n } catch (error) {\n\n console.error('Failed to connect to MongoDB server');\n\n throw error;\n\n }\n\n const mongoCollection = client.db(\"ggtavern\").collection<T>(collection);\n\n const cursor = mongoCollection.find()\n\n const items = await cursor.toArray();\n\n console.log(items);\n\n await client.close();\n\n return items;\n\n } catch (err) {\n\n console.error(err);\n\n return [];\n\n }\n\n}\n",
"text": "Hi, I’m trying to connect to my mongoDB Atlas via nodejs in my React app, but I’m getting an error cannot read property 'replace' of undefined on await client.connect()I’m working in Typescript.Here’s my code to connect. I’ve validated that the connection string works via Compass.",
"username": "Zachary_Bryant"
},
{
"code": "cannot read property 'replace' of undefinedawait client.connect()",
"text": "cannot read property 'replace' of undefined on await client.connect()First of all, Zachary, I just want to warmly welcome you to the MongoDB Community! We are soooo lucky to have you here, and I am excited to help you troubleshoot your code. Second, can you send me a link to your code on GitHub, I want to try and run your code locally to see if I can reproduce your error.",
"username": "JoeKarlsson"
},
{
"code": "const client = new MongoClient()awaitconst client = new MongoClient()",
"text": "My first guess is that your Mongo client hasn’t initialized to the client yet since const client = new MongoClient() is an asynchronous function. If you promisify or add an await on your const client = new MongoClient(), I think that this will solve your problem.Let me know if that works for you ",
"username": "JoeKarlsson"
},
{
"code": "awaitconst client = new MongoClient()",
"text": "@JoeKarlsson\nHi There! I am extremely new to this community and have actually come across this same issue. Adding an await on my const client = new MongoClient() did not work. My code is essentially the same as the gentlemen above. Any other ideas?",
"username": "Tatiana_Wiener"
},
{
"code": "",
"text": "Hey @Tatiana_Wiener! Welcome to the MongoDB Community! We’re lucky to have you here!Can you post your code and error message in here, so I can better troubleshoot it? Thank you ",
"username": "JoeKarlsson"
},
{
"code": "async function main(){\n const uri = \"mongodb+srv://{username}:{password}@cluster0.5r4og.mongodb.net/{dbname}?retryWrites=true&w=majority\";\n \n\n const client = await new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true});\n \n try {\n await client.connect();\n \n } catch (e) {\n console.error(e);\n } finally {\n await client.close();\n }\n}\n\nmain().catch(console.error);\nconst MongoClient = require('mongodb').MongoClient; \nexport default function AboutUsPage(){\n React.useEffect(() => {\n window.scrollTo(0, 0);\n document.body.scrollTop = 0;\n });\n const classes = useStyles();\n main().catch(console.error)\nexport default function AboutUsPage(){\n React.useEffect(() => {\n window.scrollTo(0, 0);\n document.body.scrollTop = 0;\n });\n const classes = useStyles();\n const uri = \"mongodb+srv://{Username}:{Password}@cluster0.5r4og.mongodb.net/{DB}?retryWrites=true&w=majority\";\n const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true});\n connect().catch(console.error);\n \n async function connect() {\n await client.connect();\n\n }\n",
"text": "Hi! Thanks so much for the reply! I have been trying to work through this error for days now! I have working code below (keep in mind I have omitted the username, password, and DB name just for posting purposes)\nconst {MongoClient} = require(‘mongodb’);The issue is if I try to call this main function within another function, it does not work. For example: if I had this function plus another React function:This does not work. I get the error:\nTypeError: Cannot read property ‘replace’ of undefined\nat matchesParentDomain (uri_parser.js:24)\nat uri_parser.js:67I have even gone so far as trying:This still does not work! I have no idea what I could possibly be doing wrong.",
"username": "Tatiana_Wiener"
},
{
"code": "replacereplace",
"text": "@Tatiana_Wiener - Thank you so much for the additional information and context - it’s so helpful! Looks to me to be an async issue. You are calling a MongoDB methods before you have connected to the MongoDB cluster in your main method. It’s not shown in your code where you are actually invoking the replace method, but this function is being called too soon. You need to make sure this gets invoked after connecting to your MongoDB cluster. Do you have a link to the GitHub repo or could you show me where/how you are invoking the replace method? My gut is telling me that’s where the issue most likely lives.Again, thank you so much for posting! I hope we can get this problem sorted out soon! ",
"username": "JoeKarlsson"
}
] | Getting error upon await client.connect() in node.JS | 2020-08-29T23:19:19.206Z | Getting error upon await client.connect() in node.JS | 22,748 |
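For anyone hitting the same error, two likely causes suggested by these threads: the URI is built from values that are undefined at runtime (environment variables or un-replaced {username}/{password} placeholders), or the Node driver is being run in browser code, where webpack stubs out the dns module that mongodb+srv lookups need, and the SRV parser then fails inside matchesParentDomain. A small guard makes the first case explicit; variable names follow the first post:

const username = process.env.REACT_APP_MONGO_READ_USERID;
const userpass = process.env.REACT_APP_MONGO_READ_USERPASS;

if (!username || !userpass) {
  // Fail fast with a clear message instead of "cannot read property 'replace' of undefined"
  throw new Error("Mongo credentials are missing from the environment");
}

const uri = `mongodb+srv://${encodeURIComponent(username)}:${encodeURIComponent(userpass)}@cluster0.xup6s.mongodb.net/ggtavern?w=majority`;

Note that connecting to MongoDB directly from React component code runs in the browser, which the Node driver does not support; the driver belongs in a server-side process.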
null | [
"dot-net"
] | [
{
"code": "{\n\t\tfind: 'InformationCollection',\n\t\tfilter: {\n\t\t 'INFORMATION.LONG_ATTRIBUTES.fieldOne': 12345\n\t\t}, \n\t\tsort: {\n\t\t TIMESTAMP: -1\n\t\t}, \n\t\tlimit: 1,\n\t\tprojection: {\n\t\t _id: 0\n\t\t},\n\t\tallowDiskUse: true\n\t }\n",
"text": "I have an application that is trying to query a MongoDB collection using a BSON Document and the ‘limit’ option does not work. The application is using the MongoDatabase.runCommand(Document) call. I got the BSON parser to accept the document text (thanks to https://docs.mongodb.com/manual/reference/command/find/#dbcmd.find) but the limit does not seem to be used when performing the query. I get all matching results back no matter what I set the limit option to. Has anyone else run into this issue? The application I am working with does not want to use straight Java calls to the .limit(int) method if it can be avoided.The BSON Document:",
"username": "Brian_Lipa"
},
{
"code": "",
"text": "Turns out something later in the code was overwriting the limit value immediately prior to the actual call to the database. D’oh!",
"username": "Brian_Lipa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo BSON Query Limit Not Working? | 2020-11-30T17:06:00.497Z | Mongo BSON Query Limit Not Working? | 2,681 |
null | [
"golang"
] | [
{
"code": "Marshal\\Unmarshaltype URL struct {\n Uri string\n}\n\ntype MyStruct struct {\n Image URL `bson:\"image\"`\n}\n\nfunc (m *URL) UnmarshalBSON(data []byte) error {\n var r bson.Raw\n if err := bson.Unmarshal(data, &m); err != nil {\n return err\n }\n log.Printf(\"%+v\", r.String()) // An empty string\n return nil\n}\nplaygroundmgomongo-driver",
"text": "I’m trying to use a custom struct in my mongo object definition but stuck with Marshal\\Unmarshal BSON.Unmarshal alway returns an empty field.Here is an example playground P.S. The database is legacy and I’m unable to change a scheme, I’m trying to migrate from old mgo driver to official mongo-driver .",
"username": "Philidor_Green"
},
{
"code": "UnmarshalBSONbson.Unmarshal(data, &m)bson.Unmarshal(data, &r)r",
"text": "Hi @Philidor_Green,One thing I notice in your example is that UnmarshalBSON internally calls bson.Unmarshal(data, &m) rather than bson.Unmarshal(data, &r). The next line tries to print out the contents of r, but it’s only been set to the zero-value in the variable declaration, so it will be an empty string.Can you provide some example input/output to show your desired data format? We can help you write Marshal/Unmarshal implementations to do that.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "type URL struct {\n\tUri string\n\tPrefix string\n}\n\ntype MyStruct struct {\n\tImage URL `bson:\"image\"`\n}\n{\n image: \"somestring here\"\n}\n{\n image: {\n uri: \"somestring here\",\n prefix: \"\"\n }\n}\nmgoSetBSON\\GetBSON",
"text": "Hey @Divjot_Arora,Yeah an example is little broken, what I need is to use a struct as fieldHere is another example. Go Playground - The Go Programming LanguageAnd desired output saved to db:But instead I’ve:With old mgo driver I’ve SetBSON\\GetBSON which have desired behaivour.Thanks",
"username": "Philidor_Green"
},
{
"code": "ValueMarshalerValueUnmarshalermongo.Client// You can also use mgocompat.RegistryRespectNilValues for a registry\n// that's compatible with mgo's RespectNilValues behavior\nclientOpts := options.Client().SetRegistry(mgocompat.Registry)\nclient, err := mongo.Connect(ctx, clientOpts)\n",
"text": "Thanks for the example output. This is a little more complex in the driver, but you can use the ValueMarshaler and ValueUnmarshaler interfaces to do this. I’ve written up some example code at Go Playground - The Go Programming Language.If you’re doing an mgo to driver migration, you might be interested in looking at the mgo-compatible BSON registry we’ve written, which offers support for interfaces similar to mgo’s Getter and Setter. See mgocompat package - go.mongodb.org/mongo-driver/bson/mgocompat - Go Packages for some more information about this BSON registry. If you want to try using it, you can add this line to your mongo.Client construction to enable it everywhere:– Divjot",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "@Divjot_Arora, I was searching for a way to marshal/unmarshal UUID manually and your playground helped me.TY",
"username": "Wanderson_Rosa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Custom struct type unmarshal | 2020-08-18T07:56:47.429Z | Custom struct type unmarshal | 11,016 |
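For the archives, a sketch of what such a ValueMarshaler/ValueUnmarshaler implementation can look like for the URL type from this thread, storing it as a plain BSON string (this mirrors the idea behind the linked playground, not its exact code):

import (
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/bson/bsontype"
)

// MarshalBSONValue writes URL as a plain string value.
func (u URL) MarshalBSONValue() (bsontype.Type, []byte, error) {
	return bson.MarshalValue(u.Uri)
}

// UnmarshalBSONValue reads a plain string value back into URL.
func (u *URL) UnmarshalBSONValue(t bsontype.Type, data []byte) error {
	raw := bson.RawValue{Type: t, Value: data}
	var s string
	if err := raw.Unmarshal(&s); err != nil {
		return err
	}
	u.Uri = s
	return nil
}

With this in place, MyStruct marshals to { "image": "somestring here" } and unmarshals back, which matches the legacy document shape from the question.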
null | [
"queries",
"node-js"
] | [
{
"code": "// node route\n\nrouter.get('/sel', (req, res)=>{\nasync function selecDatas() {\n\n const client = new MongoClient(uri, { useUnifiedTopology: true } )\n\n try {\n\n await client.connect( );\n\n const database = client.db('training');\n\n const collection = database.collection('emoncms');\n\n \n\n collection.find({ _id : ObjectID('5fc38b2f2626be03a023df57')}, \n{ 'exams.notes' : { $lt : 3 }} ).toArray(function(err, docs) {\n\n console.log(\"Found the following records\");\n\n console.log(docs[0].exams);\n });\n } catch(e) {\n console.log(e)\n }\n};\n\n selectDatas().catch(console.dir);\n res.end()\n});\n// return all exams ... not only $lt3 ???\n name: 'Json',\n surname: 'Bourne',\n exams: [\n { _id: [ObjectID], notes: 1 },\n { _id: [ObjectID], notes: 2 },\n { _id: [ObjectID], notes: 4 },\n { _id: [ObjectID], notes: 3 }\n ]\n}\n",
"text": "Hello it’s my first message here ! thanks for your help.\ni am a newbie on mongo and i am unable to output a range of values on a find.\nI am using the latest nodejs driver.",
"username": "Upsylon_Developpemen"
},
{
"code": "",
"text": "Hello @Upsylon_Developpemen, welcome to the MongoDB community forum.You can use the $filter array operator to select a range of data from an array.",
"username": "Prasad_Saya"
},
{
"code": "collection.aggregate([\n { $project: {\n exams: {\n $filter: {\n input: \"$exams\", // le tableau à limiter \n as: \"index\", // un alias\n cond: { $lt: [ \"$$index.notes\", 2 ] }\n }\n }\n }}\n])\n.project({'exams.notes' : 1})\n.toArray(function (err, doc) { console.log(doc[0].exams)});\n",
"text": "A big thank-you ! I’ve been looking for a solution for days.\nI was in hell! : - / I needed this for a job. I’m going to read more about the documentation that I thought I had already read in depth. Thanks again, you’re saving my day The solution:",
"username": "Upsylon_Developpemen"
},
{
"code": " { $project: {\n\n exams: {\n",
"text": "One last thing: how do you target a specific document?\nI tried different things but mongo always makes either the first doc or undefined: - /\n// ! dont work !\ncollection.aggregate( { _id : ‘5fc38b2f2626be03a023df57’ },[…",
"username": "Upsylon_Developpemen"
},
{
"code": "",
"text": "find sorry! \ncollection.aggregate([{ $match : query },…",
"username": "Upsylon_Developpemen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Select a data range What did I miss? | 2020-11-29T18:24:36.835Z | Select a data range What did I miss? | 3,002 |
null | [
"change-streams"
] | [
{
"code": "db.collection('Test' + ).watch().\n on('change', data => console.log(new Date(), data));\n\ndb.collection('Test' + ).watch().\n on('change', data => console.log(new Date(), data));\n",
"text": "Hi,i am trying to c dynamically create change streams. And i am wondering if there is way to avoid create change streams watching the same target?If there is now some change in Test collection i will receive notification twice. Since I am sending notifications next to SQS i will have duplicated messages.Is there a way check if change stream with target ‘xy’ already exist, to avoid have same watchers ?Thank you",
"username": "Jan_Ambroz"
},
{
"code": "",
"text": "Hello @Jan_Ambroz, welcome to the MongoDB community forum.I think you can track the collection by its name in your application. You can either store the collection name in a data structure (e.g., an array) or even in a collection and check the store if the collection is being watched. This means, your application includes logic to store, and then remove the name of the collection depending upon its being watched or not.",
"username": "Prasad_Saya"
}
] | Change stream watching the same target | 2020-11-27T00:04:43.479Z | Change stream watching the same target | 1,859 |
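A sketch of that name-tracking idea in Node.js: keep a registry of open streams keyed by collection name, so a second watcher for the same target is never created. The cleanup-on-close behavior is an assumption about what the application wants:

const watchers = new Map();

function watchOnce(db, name) {
  if (watchers.has(name)) return watchers.get(name); // already watching this target
  const stream = db.collection(name).watch();
  stream.on('change', data => console.log(new Date(), data));
  stream.on('close', () => watchers.delete(name));   // allow re-watching after close
  watchers.set(name, stream);
  return stream;
}

// watchOnce(db, 'Test1'); watchOnce(db, 'Test1'); // the second call is a no-op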
[
"ruby"
] | [
{
"code": "",
"text": "I’m wondering that bson_ext gem is still used by mongo gem?In an old version document, I found a mention.\nhttps://api.mongodb.com/ruby/1.1.2/$ gem install bson\nAnd for a significant performance boost, you’ll want to install the C extensions:However, in a newer version does not mention at all.\nhttps://api.mongodb.com/ruby/2.5.3/",
"username": "Hiroshi_Saito"
},
{
"code": "",
"text": "But the current documentation instead points to GitHub - mongodb/bson-ruby: Ruby Implementation of the BSON Specification (2.0.0+)",
"username": "Jack_Woehr"
},
{
"code": "bson-ruby",
"text": "The C extensions were merged into bson-ruby as far as I can tell.",
"username": "alexbevi"
},
{
"code": "",
"text": "@Jack_Woehr @alexbevi Thanks for your comments.It seems that bson-ruby is safe to remove from Gemfile if newer version of mongo ruby driver.",
"username": "Hiroshi_Saito"
},
{
"code": "",
"text": "Also I found this, bson-ruby/ext/bson at master · mongodb/bson-ruby · GitHub.\nC extension is indeed a part of bson-ruby gem.",
"username": "Hiroshi_Saito"
}
] | Does MongoDB Ruby Driver still use bson_ext if available? | 2020-11-27T03:32:12.018Z | Does MongoDB Ruby Driver still use bson_ext if available? | 4,307 |
|
null | [
"app-services-cli"
] | [
{
"code": "",
"text": "Al intentar inicaren el CLI en power shell , introduzco realm-cli login --api-key = “” --private-api-key = “” y me aparece API invalida",
"username": "LUIZA_FERNANDA_NARVA"
},
{
"code": "",
"text": "Hi @LUIZA_FERNANDA_NARVA,Assuming you are asking for how to authenticate with the Realm CLI, please ensure that the api key you are using is from your Project Access API KeysDirections can be found here - https://docs.mongodb.com/realm/reference/cli-auth-with-api-token/",
"username": "Sumedha_Mehta1"
}
] | Problem authenticating the api key | 2020-11-18T17:22:44.796Z | Problem authenticating the api key | 1,924 |
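One detail worth checking in the original command: the spaces around = are not accepted by all shell/CLI flag parsers and can result in empty values being passed. The documented form (with a project-level API key pair) looks like this; the key values are placeholders:

realm-cli login --api-key="my-public-key" --private-api-key="my-private-key"

If the keys were created under personal account settings rather than the Project Access Manager, the CLI also rejects them as an invalid API key, as the reply above notes.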
null | [
"queries",
"mongoose-odm",
"cxx"
] | [
{
"code": "await Sessions.find({ ip: ip}).sort({ receive_time: -1 }).lean().limit(1).exec();{\n \"_id\": {\n \"$oid\": \"5f14cc1e283ea9705f1b31c2\"\n },\n \"userID\": 1433571522,\n \"ip\": \"135.114.236.220\",\n \"isNew\": false,\n \"receive_time\": {\n \"$numberLong\": \"1595198494512\"\n },\n \"event\": \"Click\",\n \"x\": 93.39,\n \"y\": -492.00\n}\n\n\n{\n \"_id\": {\n \"$oid\": \"5f14cc22283ea9705f1b31c3\"\n },\n \"userID\": 1433571522,\n \"ip\": \"135.114.236.220\",\n \"isNew\": false,\n \"receive_time\": {\n \"$numberLong\": \"1595198498608\"\n },\n \"event\": \"Press\",\n \"x\": 91.39,\n \"y\": -20.00\n}\n\n\n\n{\n \"_id\": {\n \"$oid\": \"5f14cc22283ea9705f1b31c4\"\n },\n \"userID\": 1433571522,\n \"ip\": \"135.114.236.220\",\n \"isNew\": false,\n \"receive_time\": {\n \"$numberLong\": \"1595198498652\"\n },\n \"event\": \"Type\",\n \"x\": 11.24,\n \"y\": -29.00\n}\n",
"text": "I have two micro services where one write to DB and another read from DB.The first micro service is written in C++ while the other is written in NodeJS.When querying for data say for a IP the MongoDB return older data or no data at all (In case if the client is new) but If I query like say after 5 min it returns the recent data.await Sessions.find({ ip: ip}).sort({ receive_time: -1 }).lean().limit(1).exec();Every record on DB has a field called receive_time which I use to store a 64 bit timestamp .I don’t think this has to do with concurrency at all because C++ is performant enough to write to DB fasterWorth noting, The database is in a separate server.Ubuntu 18 LTS\nC++ Mongo Driver is mongocxx 3.6.x\nNodeJS Mongo Driver is Mongoose and version is 5.0Example of Data",
"username": "Ara_Threlfall"
},
{
"code": "hh:mm:sshh:mm:ss",
"text": "Welcome to the MongoDB community @Ara_Threlfall!This sounds like a timing issue in how your two different microservices interact with your MongoDB deployment.To help understand this issue, please provide more information on your environment:What specific version of MongoDB server are you using?Is this MongoDB deployment a standalone, replica set, or sharded cluster?What specific version of Mongoose and Node.js are you using? (Mongoose v5.0 covers a range of versions … I’m looking for the actual release version like 5.0.11).How are your C++ and Node microservices interacting with the database? For example: are you writing documents in one service and reading in the other? Does the C++ microservice also query for documents but return results with the expecting timing?It might help if you can map out a timeline of events (data written at hh:mm:ss by C++ app, Node app starts reading at hh:mm:ss).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "C++ microservice also query for documents but return results with the expecting timing?C++ app writes to two different collections and the Node APP read from two different collectionsWorth noting that I didn’t use any custom index other than that default index",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "I would suggest that you take a look at https://docs.mongodb.com/manual/changeStreams/ for the reading side.",
"username": "steevej"
},
{
"code": "",
"text": "Change Streams doesn’t fit our use case. We’re mostly on Event Driven Arch rather Real Time.",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "Here are the timing data you requestedInsert to Collection1 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection1 End Time: Wed Oct 28 19:30:41 2020Insert to Collection1 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection1 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:41 2020Insert to Collection2 End Time: Wed Oct 28 19:30:41 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:42 2020Insert to Collection2 End Time: Wed Oct 28 19:30:42 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:42 2020Insert to Collection2 End Time: Wed Oct 28 19:30:42 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:42 2020Insert to Collection2 End Time: Wed Oct 28 19:30:42 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:42 2020Insert to Collection2 End Time: Wed Oct 28 19:30:42 2020Insert to Collection2 Start Time: Wed Oct 28 19:30:42 2020Insert to Collection2 End Time: Wed Oct 28 19:30:42 2020Insert to Collection2 Start Time: Wed Oct 28 19:31:27 2020Insert to Collection2 End Time: Wed Oct 28 19:31:27 2020Insert to Collection2 Start Time: Wed Oct 28 19:31:27 2020Insert to Collection2 End Time: Wed Oct 28 19:31:27 2020Insert to Collection2 Start Time: Wed Oct 28 19:31:35 2020Insert to Collection2 End Time: Wed Oct 28 19:31:35 2020Insert to Collection2 Start Time: Wed Oct 28 19:31:35 2020Insert to Collection2 End Time: Wed Oct 28 19:31:35 2020Insert to Collection2 Start Time: Wed Oct 28 19:31:35 2020Insert to Collection2 End Time: Wed Oct 28 19:31:35 2020Insert to Collection2 Start Time: Wed Oct 28 19:31:35 2020Insert to Collection2 End Time: Wed Oct 28 19:31:35 2020Start Collection2 lookup 1603913441431\nFinish Collection2 lookup 1603913441453 Result is 1\nStart Collection1 lookup 1603913441453\nFinish Collection1 lookup 1603913441455 Result is 0",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "Does anyone can help me to resolve this?",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "In case anyone haven’t looked into timing data.You can clearly see that C++ app finish write to DB at 19:31:35 and the NodeJS reads at 23:30:53",
"username": "Ara_Threlfall"
},
{
"code": "mongo",
"text": "When querying for data say for a IP the MongoDB return older data or no data at all (In case if the client is new) but If I query like say after 5 min it returns the recent data.Are you able to query the recent data from the mongo shell or any other tools like Compass - after new inserts?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Yes, it’ shows data in Compass!",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "Hi @Ara_Threlfall,To help clarify the problem that you’re having, could you provide the following:It would also be useful to simplify your applications to debug the issue that you’re seeing. For example you could use just MongoDB Node.JS driver directly first, and remove Mongoose from the equation, etc.You can clearly see that C++ app finish write to DB at 19:31:35 and the NodeJS reads at 23:30:53Are you just checking this from the log file ? Are the two application log using the same timezone ? the NodeJS one is showing Unix timestamp, so that’s in UTC, but how about the C++?Things that you should also checked are:The issue that you’re having involved many factors and layers, as with any general debugging process, I’d encourage you to start peeling the layers one by one for deduction.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Yes, it’ shows data in Compass!I meant, immediately after the insert - I am assuming it is so.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Wan,I’ve already wrote above that I installed MongoDB Community as it’s from your website without any further configuration.The times are in UTC. No, I can’t share the C++ and NodeJS codebase due to my company policies. But I assure you they are correct.",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "But I assure you they are correct.Sometimes, correct code might still be wrong in the deployment environment. One scenario, would be that your NodeJS code reads from a delayed secondary. Since you see the data in Compass, then the data is there and the issue lies on the reading side. But not as wrong read code but as wrong deployment architecture.",
"username": "steevej"
},
{
"code": "",
"text": "I used this and didn’t done anything else not even created custom indexesWhy no human in this community forum can check this site?I don’t know what you are saying by deployment method. I just used this site thats it.",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "Hi @Ara_Threlfall,Why no human in this community forum can check this site?I understand your frustration, just remember that the people who has responded in this thread is trying to help you. I believe the misunderstanding here is that you have answered MongoDB Community, which is the version of MongoDB but not the deployment topology.\nNow that you have clarified that you have just followed the tutorial, we can assumed that it’s a standalone deployment.The times are in UTC. No, I can’t share the C++ and NodeJS codebase due to my company policies. But I assure you they are correct.I’d recommend to create simple applications (C++ and Node.JS) to reproduce the issue that you’re seeing. You could then share the code of these simple applications so that others could help you better.Regards,\nWan",
"username": "wan"
},
{
"code": "const mongoose = require('mongoose');\n\nconst Schema = mongoose.Schema;\n\nmongoose.connect('mongodb://WHATEVERIP:27017/DBNAME?authSource=DBNAME', {\n\n user: \"admin\",\n\n pass: \"passs\",\n\n useNewUrlParser: true,\n\n useUnifiedTopology: true\n\n});\n\nvar MarketingSchema = new Schema({}, { strict: false });\n\nvar MarketingModel = mongoose.model('Marketing', MarketingSchema, 'Marketing');\n\n// r_time is 64 bit timestamp field\n\nvar DataRecord = await MarketingModel.find({ ip: \"PUT IP OF CLIENT HERE\" }).sort({ r_time: -1 }).lean().limit(1).exec();\n\nif (!DataRecord.length) {\n\n console.log(\"NOT FOUND\");\n\n}\n#include <iostream>\n\n#include <string>\n\n#include <nlohmann/json.hpp>\n\n#include <bsoncxx/builder/stream/document.hpp>\n\n#include <bsoncxx/json.hpp>\n\n#include <mongocxx/client.hpp>\n\n#include <mongocxx/stdx.hpp>\n\n#include <mongocxx/uri.hpp>\n\n#include <mongocxx/instance.hpp>\n\n// Connect to Database\n\nmongocxx::instance instance{};\n\nmongocxx::uri uri(\"mongodb://user:pass@YOURIPGUESHERE:27017/?authSource=DBNAMEGOESHERE\");\n\nmongocxx::client client(uri);\n\nmongocxx::database database = client[\"DBNAMEGOESHERE\"];\n\nlong long getUnixTimeInMS()\n\n{\n\n long long milliseconds_since_epoch = std::chrono::system_clock::now().time_since_epoch() / std::chrono::milliseconds(1);\n\n return milliseconds_since_epoch;\n\n}\n\nint main() {\n\n long long rtime = getUnixTimeInMS();\n\n // Get Marketing Table from Database\n\n mongocxx::collection table = ::database[\"Marketing\"];\n\n nlohmann::json body;\n\n body[\"r_time\"] = rtime;\n\n body[\"ip\"] = \"127.0.0.1\";\n\n std::string jsonString = body.dump();\n\n auto document = bsoncxx::from_json(jsonString);\n\n table.insert_one(document.view());\n\n return 0;\n\n}",
"text": "Steve,Here is the code of NodeJSHere is the C++ code,",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "There is no field named r_time in your sample data yet you use that field as a sort field. With the limit 1, you will end up with a random document for the given IP. Most likely in the natural order of _id, so most likely the first document inserted for the given IP which is most likely not the one you want. So it looks like a code issue.",
"username": "steevej"
},
{
"code": "",
"text": "That sample data was synthetic. I actually have a field called r_time",
"username": "Ara_Threlfall"
},
{
"code": "",
"text": "I don’t see this issue with MySQL but porting to MySQL is complicated for us now",
"username": "Ara_Threlfall"
}
] | Unable to query for recent data | 2020-10-27T21:11:15.161Z | Unable to query for recent data | 5,883 |
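One verification step that would narrow this thread down, building on the suggestions above: run the exact same query from the mongo shell right after the C++ insert, against the same server the Node app uses. Database and collection names follow the posted code:

use DBNAMEGOESHERE
db.Marketing.find({ ip: "127.0.0.1" }).sort({ r_time: -1 }).limit(1).pretty()

If the shell returns the fresh document but the Mongoose query does not, the problem is in the Node/Mongoose layer (connection target, model/collection mapping, or caching), not in the server.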
[
"python",
"connecting"
] | [
{
"code": "",
"text": "Hi everyone,I’m new to MongoDB and programming overall. I’m trying to return information about the MongoDB server by following the guide from this webpage PyMongo Tutorial: MongoDB And Python | MongoDB . After running my py. file (below a screenshot attached), I get errors -\n\nScreen Shot 2020-11-28 at 17.59.012870×728 315 KB\n\nScreen Shot 2020-11-28 at 18.04.441970×484 57.1 KB\nmongodb+srv://user:[email protected]/dbname?retryWrites=true&w=majority - that is my connection string and I wanted to clarify what what password I have to put in. I assume the one that I generated from database user, right?:\n\nScreen Shot 2020-11-28 at 13.33.001114×1002 78.3 KB\nFor ‘‘dbname’’ name i use ‘dbCars’ a newly created database like shown on the screeshot below\n\nScreen Shot 2020-11-28 at 18.07.241664×478 41 KB\nCorrect me if I put the wrong data in my connection string and please advise why the code im running doesn’t provide the outcome as from the webpage link…Appreciate your help much.",
"username": "sergey_F"
},
{
"code": "",
"text": "I assume the one that I generated from database user, right?You assumed correctly.It is …/dbCars?retryWrites… rather than …/<dbCars>?retryWrites… .Since I see <MyPass> in your URI as parameter to MongoClient(…), may be you have the same issue with the < and > as with dbCars.",
"username": "steevej"
},
{
"code": "",
"text": "thanks a lot. this was part of the solution!Apart from this mistake I used password from my account instead of using a password associated to a user database. I also used for a user database password the same password that I use for my account, which apparently is a bad practise, so I generated a new user database password and finally everything worked…\nthanks",
"username": "sergey_F"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting Started with Python and MongoDB | 2020-11-28T20:21:44.564Z | Getting Started with Python and MongoDB | 1,264 |
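To tie this thread together, a minimal PyMongo sketch with the fixes discussed: the database-user credentials (not the Atlas account login), URL-escaped, and no angle brackets around the database name. The cluster host and names are placeholders:

from urllib.parse import quote_plus
from pymongo import MongoClient

user = quote_plus("myDatabaseUser")      # database user, created under Database Access
password = quote_plus("dbUserPassword")  # that user's password, not the Atlas login

uri = f"mongodb+srv://{user}:{password}@cluster0.example.mongodb.net/dbCars?retryWrites=true&w=majority"
client = MongoClient(uri)
print(client.server_info()["version"])   # prints the server version on success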
|
null | [
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "My team are trying to access one of our realm instances (pre-migrate to MongoDB Realm) but it seems to be down. The error message we get when trying to access a realm is: “The requested service is temporarily unavailable.” This is the same error that comes up in Realm Studio when we try to view this realm instance.Looking at the cloud website, the instance is stuck on “Instance is starting” despite us not touching instance at all. Going into the instance the dashboard says: “Instance is not ready”. We are really at a loss for what is causing this issue and would appreciate any help in this matter.",
"username": "Matthew_Hughes"
},
{
"code": "",
"text": "@Matthew_Hughes Whats your instance name?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Matthew_Hughes Hello also i couldn’t connect to my instance. It says “The requested service is temporarily unavailable”. The instance name is “malchinapp.de1a.cloud.realm.io”.",
"username": "Jamiyandorj_Purevdor"
},
{
"code": "",
"text": "@Jamiyandorj_Purevdor I just tested this and seems to be working fine - can you retest?",
"username": "Ian_Ward"
}
] | Failed to connect to Realm Cloud Instance | 2020-11-26T09:56:26.766Z | Failed to connect to Realm Cloud Instance | 3,380 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hello,My Realm RN clients write a considerable amount of of data over the year as “scan_records”, I am required to sync this data back onto all clients in the partion so I can query some data for the scan_records for that day.I dont need to sync all historical scan_records, only todays records.Is there any way I can set a query in my sync configuration to only read/sync scan_records from atlas that are less than 24hours old?",
"username": "Patrick_Lambert"
},
{
"code": "",
"text": "@Patrick_Lambert You could use a scheduled Realm Trigger to update the partitionKey value of certain documents which are now not within the 24hour range. This would remove them from the sync partition - https://docs.mongodb.com/realm/triggers/scheduled-triggers/",
"username": "Ian_Ward"
}
] | Realm Sync Optimization Problem | 2020-11-27T09:33:11.763Z | Realm Sync Optimization Problem | 1,386 |
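A sketch of the scheduled-trigger function that reply suggests, run e.g. hourly. The collection, timestamp field, and partition values are assumptions about the app's schema:

exports = async function() {
  const coll = context.services
    .get("mongodb-atlas")          // default Atlas service name; may differ per app
    .db("mydb")                    // placeholder database
    .collection("scan_records");   // placeholder collection

  const cutoff = new Date(Date.now() - 24 * 60 * 60 * 1000);

  // Move records older than 24 hours out of the synced partition.
  await coll.updateMany(
    { scannedAt: { $lt: cutoff }, partition: { $ne: "archive" } },
    { $set: { partition: "archive" } }
  );
};

Changing the partition value removes those documents from the clients' synced realm, so only the last day's scans remain on-device.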
null | [
"queries",
"data-modeling"
] | [
{
"code": " [{\n \"_id\": {\n \"$oid\": \"5fbc2e9b8ca9b50a6c13fc56\"\n },\n \"016581598027\": [\n {\n \"ASIN\": \"B00008OM5J\",\n \"Language\": \"en-US\",\n \"AudienceRating\": \"NR (Not Rated)\",\n \"Binding\": \"Audio CD\",\n \"Creator\": \"Nothingface\",\n \"Edition\": \"Parental Advisory ed.\",\n \"Label\": \"TVT\",\n }\n ]\n }]\n016581598027ASINmydb.Products.find({\"ASIN\":\"B00008OM5J\"})\ndocument = (mydb.Products.find({\"ASIN\":\"B00008OM5J\"}) OR mydb.Products.find(\"????\":\"016581598027\"))\n",
"text": "I am very new to MongoDB. However for the last 48 hours I managed to create a nice collection of Amazon products, mostly audio CDs and DVDs. The Products collection has approximately 50k documents in the following format:When I query Amazon MWS my program automatically saves query results to MongoDB.The 016581598027 is a UPC (Universal Product Code) which may be referred to as a product identifier. Needless to say that each document in my collection is supposed to have a different UPC at this nest level. However I want to be able to query and find a document on a specific ASIN. As a pseudo-code what I want is this:So after the above query fires I must have a reference to a sole document. How do I do that? I’ve tried everything – RTFM, talked to Julia bot, etc. but nothing helps.To make it even more interesting for you, I’d like to extend the previous task with an OR query:How do I implement this?Thanks\nMark",
"username": "IUnknown"
},
{
"code": "",
"text": "I do not have an answer to your question however I will risk a comment about your schema.Your problem would be easier if you had a field named UPC with values like 016581598027.Any reason why the field keyed with the UPC is an array? Is the indent to have multiple UPC object within each top level document?",
"username": "steevej"
},
{
"code": "",
"text": "Dear Steeve,Thank you for your comment. I understand, that the proposed nesting (the UPC attribute is on the same level with the ASIN attribute) would be much easier. However I use a third-party PHP library which returns JSON in exactly this format.Well, I can for sure do some coding to change the returned JSON to comply with what you mentioned, but: I am very curious how MongoDB can solve exactly this problem whenI am very curious how MongoDB can solve this.",
"username": "IUnknown"
}
] | How to find a specific value nested in an unknown object? | 2020-11-27T18:34:11.392Z | How to find a specific value nested in an unknown object? | 3,182 |
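Since the thread ends without a query answer, here is one hedged approach for the document format as-is: $objectToArray turns the unknown UPC key into data, after which both the ASIN match and the UPC-or-ASIN match become expressible. Field names follow the posted document:

db.Products.aggregate([
  { $addFields: { kv: { $objectToArray: "$$ROOT" } } },
  { $match: { $or: [
      { "kv.v.ASIN": "B00008OM5J" },   // value nested under the unknown key
      { "kv.k": "016581598027" }       // or the key itself is the UPC
  ] } },
  { $project: { kv: 0 } }              // drop the helper field from the output
])

That said, the earlier schema comment stands: promoting the UPC to a normal field (e.g. { upc: "016581598027", products: [...] }) would turn this into a plain, indexable find.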
null | [
"aggregation",
"python"
] | [
{
"code": "",
"text": "Hi,I’m facing an issue with PyMongo - I’ve built an aggregation pipeline in MongoDB Compass and used the code in a Python program to aggregate the same documents, but in Python the response of the same hard coded aggregation is empty although it fulfills my needs in MongoDB Compass.Thank you for your help.",
"username": "Benjamin_Weber"
},
{
"code": "",
"text": "You probably have a syntax error on the Python side. The syntax is not exactly the same between Compass JSON and Python.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "It will help us to help you if you could post some sample documents and the relevant Python code.Since this is you first interaction here please consult Formatting code and log snippets in posts before posting code.",
"username": "steevej"
},
{
"code": "",
"text": "If you use the export function button in compass you can export as python3, did you do this ?image715×162 14.5 KB\nimage880×470 19.5 KB",
"username": "chris"
}
] | [PyMongo] Empty Response from Aggregation | 2020-11-26T10:26:30.487Z | [PyMongo] Empty Response from Aggregation | 5,017 |
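A common cause when a Compass pipeline returns nothing in PyMongo is a silent syntax mismatch: in Python every key must be a quoted string, and true/false/null become True/False/None. A side-by-side sketch with illustrative field names:

# Compass / shell form:  [{$match: {active: true, qty: {$gt: 5}}}, {$sort: {qty: -1}}]
pipeline = [
    {"$match": {"active": True, "qty": {"$gt": 5}}},
    {"$sort": {"qty": -1}},
]
result = list(collection.aggregate(pipeline))
print(result)

Using Compass's built-in Export to Language feature, as suggested above, produces exactly this translation automatically.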
null | [] | [
{
"code": "AssertionException handling request, closing client connection: 34348 cannot translate opcode 2013",
"text": "Hello,I am facing this error:AssertionException handling request, closing client connection: 34348 cannot translate opcode 2013Does anyone know a source where I can find further details what this can indicate? Or has anyone seen this error before? The server is on version 3.4.24Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @michael_hoellerI have not specifically seen this before, but it looks like the client is using an opcode not supported by that server version:OP_MSG \t2013 \tSend a message using the format introduced in MongoDB 3.6.If the driver/client supports 3.4 then I would say that is a bug. Otherwise just a plain old driver mismatch.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error "cannot translate opcode 2013" | 2020-11-27T16:33:27.620Z | Error “cannot translate opcode 2013” | 3,192 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "I’m inserting a document in MongoDB using C# MongoDB driver. A document consist of just 78 collections/lists. I found the following exception when document is inserting in mongo:\"Command findAndModify failed: BSONObj size: 27137194 (0x19E14AA) is invalid. Size must be between 0 and 16793600(16MB)\"I don’t know why size exception is throwing.",
"username": "Salman_Elahi"
},
{
"code": "",
"text": "I do not fully understand the following:A document consist of just 78 collections/lists.If you mean that you try to insert one document composed of 78 collections/lists. What is the content of those 78 collections? If you really try to insert 78 collections as a single document, then I think the error is quite normal.",
"username": "steevej"
}
] | BSONObj size: 27137194 (0x19E14AA) is invalid. Size must be between 0 and 16793600(16MB) | 2020-11-27T11:52:02.084Z | BSONObj size: 27137194 (0x19E14AA) is invalid. Size must be between 0 and 16793600(16MB) | 4,962 |
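For anyone debugging the same error, a C# sketch that measures a document's serialized size before writing, so the offending ~27 MB document can be found; the builder, collection, and variable names are illustrative:

using System;
using MongoDB.Bson;

// A BsonDocument (or a mapped class) can be serialized to raw BSON to check its size.
BsonDocument doc = BuildMyDocument();      // hypothetical builder for the 78 lists
byte[] bytes = doc.ToBson();               // MongoDB.Bson extension method

if (bytes.Length > 16 * 1024 * 1024)
{
    // Over the 16 MB document limit: split the lists into separate documents,
    // or move the large arrays into their own collection (or GridFS).
    Console.WriteLine($"Document too large: {bytes.Length} bytes");
}

The error itself means the single document produced by the findAndModify call was about 27 MB, well over BSON's 16 MB per-document cap.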