Dataset columns: image_url (string, 113-131 chars), tags (sequence), discussion (list), title (string, 8-254 chars), created_at (string, 24 chars), fancy_title (string, 8-396 chars), views (int64, 73-422k).
null
[ "monitoring" ]
[ { "code": "", "text": "Version : Mongo:v3.6.17\nHi All,I see thousands of “sock” in lsof and it keeps increasing with no reason in logs.|mongod|8955|root|19u|sock|0,9|0t0|61158263|protocol:|TCP|\n|mongod|8955|root|28u|sock|0,9|0t0|61146906|protocol:|TCP|lsof -p mongod | grep “0,9” | wc -l\n33419Any configuration suggestion on mongod to reduce?", "username": "Raj_Kumar" }, { "code": "ss -tn \"sport = :27017\"netstat -tn | grep :27017", "text": "Hi @Raj_KumarVery likely this is a poorly written client application. Opening connections unnecessarily instead of using connection pools.ss -tn \"sport = :27017\" or netstat -tn | grep :27017 will give you a list of mongo connections to this host, assuming default port, allowing you to target where the problematic connections are coming from.", "username": "chris" }, { "code": "", "text": "poorly written client applicationpoorly written client applicationHow would I able to trace this out on my client. It is ultimately the mongo carrying the blame. Is there any way to create a dump on mongo to prove that client is misbehaving?", "username": "Raj_Kumar" }, { "code": "", "text": "You will want to verify this by identifying where the connections are from to make sure.3.6.17 Has had a few minor release since, though a quick look did not seem to be related.\n3.6.22 released in February is the latest.What is the client driver and version you are using ?A poorly written application may, for instance create a new mongo client for every request vs creating one client and reusing it.", "username": "chris" }, { "code": "", "text": "this error in mongo oplog when the closewait started increasing2021-03-18T08:49:20.547+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending idle connection to host M04:27717 because the pool meets constraints; 2 connections to that host remain open\n2021-03-18T08:49:36.601+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending idle connection to host M03:27717 because the pool meets constraints; 2 connections to that host remain open", "username": "Raj_Kumar" }, { "code": "", "text": "What is the client driver and version you are using ?org.mongodb.mongo-java-driver_3.7.2.jar\nEven 3.6.22 didn’t help!!Any suggestion is very much appreciated!", "username": "Raj_Kumar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CLOSE_WAIT & lsof mongod count increasing
2021-03-11T19:11:45.911Z
CLOSE_WAIT & lsof mongod count increasing
3,690
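The advice in this thread boils down to creating one client for the whole application and letting its connection pool do the work, rather than constructing a new client per request. The original poster is on the Java driver, but the pattern is the same in every driver; below is a minimal, hypothetical Node.js sketch of it (the host, database, collection, and handler names are placeholders, not taken from the thread).

```javascript
const { MongoClient } = require("mongodb");

// One client for the whole process; the driver manages a connection pool internally.
const client = new MongoClient("mongodb://db-host:27017", { maxPoolSize: 20 });

async function main() {
  await client.connect(); // connect once at startup
  // ... register request handlers that reuse `client` ...
}

// Example request handler: reuses the shared client instead of constructing a new one.
async function handleRequest(userId) {
  return client.db("app").collection("users").findOne({ _id: userId });
}

main().catch(console.error);
```

With a single shared client, lsof/ss should show a bounded number of sockets governed by the pool size rather than a steadily growing count.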
null
[ "database-tools", "monitoring" ]
[ { "code": "", "text": "I see a high response time on 27772 but in other I see * indication in insert, query, and others…what does it indicate?[root@09 ~]# mongostat --port 27772\ninsert query update delete getmore command flushes mapped vsize res faults qrw arw net_in net_out conn set repl time\n164 29 15 338 22 269|0 0 18.0G 6.11G 0 0|0 0|0 209k 317k 303 set72 PRI Mar 18 11:51:57.114\n176 27 21 286 21 265|0 0 18.0G 6.11G 0 0|0 0|0 209k 323k 302 set72 PRI Mar 18 11:51:58.114\n165 28 16 322 22 254|0 0 18.0G 6.10G 0 0|0 0|0 205k 310k 302 set72 PRI Mar 18 11:51:59.114\n162 26 28 297 22 260|0 0 18.0G 6.10G 0 0|0 0|0 205k 307k 303 set72 PRI Mar 18 11:52:00.114\n150 40 28 350 21 269|0 0 18.0G 6.10G 0 0|0 0|0 216k 320k 302 set72 PRI Mar 18 11:52:01.114\n202 31 19 304 22 257|0 0 18.0G 6.11G 0 0|0 0|0 222k 327k 302 set72 PRI Mar 18 11:52:02.114\n202 31 24 327 22 264|0 0 18.0G 6.10G 0 0|0 0|0 231k 366k 303 set72 PRI Mar 18 11:52:03.114\n^C2021-03-18T11:52:04.022+0000 signal ‘interrupt’ received; forcefully terminating[root@09~]# mongostat --port 27882\ninsert query update delete getmore command flushes mapped vsize res faults qrw arw net_in net_out conn set repl time\n*78 *0 *10 *85 22 122|0 0 14.7G 1.47G 0 0|0 0|0 59.0k 122k 55 set82 SEC Mar 18 11:52:10.113\n*87 *0 *5 *64 22 130|0 0 14.7G 1.47G 0 0|0 0|0 59.8k 125k 55 set82 SEC Mar 18 11:52:11.113\n*88 *0 *5 *73 21 116|0 0 14.7G 1.46G 0 0|0 0|0 58.7k 122k 55 set82 SEC Mar 18 11:52:12.113\n*72 *0 *3 *51 21 135|0 0 14.7G 1.47G 0 0|0 0|0 60.1k 148k 56 set82 SEC Mar 18 11:52:13.114\n*68 *0 *11 *77 22 134|0 0 14.7G 1.47G 0 0|0 0|0 60.1k 157k 56 set82 SEC Mar 18 11:52:14.113\n*72 *0 *6 *86 21 129|0 0 14.7G 1.47G 0 0|0 0|0 59.8k 128k 56 set82 SEC Mar 18 11:52:15.113\n*77 *0 *4 *76 22 122|0 0 14.7G 1.47G 0 0|0 0|0 59.5k 118k 55 set82 SEC Mar 18 11:52:16.113\n*80 *0 *6 *71 21 132|0 0 14.7G 1.47G 0 0|0 0|0 60.2k 126k 56 set82 SEC Mar 18 11:52:17.113\n*88 *0 *11 *78 22 130|0 0 14.7G 1.47G 0 0|0 0|0 59.8k 132k 55 set82 SEC Mar 18 11:52:18.113", "username": "Raj_Kumar" }, { "code": "", "text": "I think they are replicated operationsFrom mongo docinserts\nThe number of objects inserted into the database per second. If followed by an asterisk (e.g. *), the datum refers to a replicated operation.", "username": "Ramachandra_Tummala" } ]
What does * indicate in mongostat output
2021-03-18T11:57:40.707Z
What does * indicate in mongostat output
1,971
null
[]
[ { "code": "", "text": "Hello everyone,Happy to share that tradis.ai has joined MongoDB Startup Program.About tradis.ai:We built cryptocurrency trading platform powered by AI.It’s tradis mission to make it available for the masses!Thanks @Manuel_Meyer for the opportunity in MongoDB Startup Program.To the moon!", "username": "Mario_TradisAI" }, { "code": "", "text": "Welcome Tradis.ai to the MongoDB Community! ", "username": "ado" }, { "code": "", "text": "Welcome! Would love to stay in touch as you get plugged in.", "username": "Michael_Lynn" }, { "code": "", "text": "Will do!We are already using MongoDB Atlas! Will keep you posted regarding our Data initiatives.", "username": "Mario_TradisAI" } ]
Hello from Tradis.ai
2021-03-17T21:35:35.430Z
Hello from Tradis.ai
5,040
null
[ "cxx" ]
[ { "code": "", "text": "I successfully compiled and installed the mongo cxx driver from the official manual. Compiling the example projects/mongocxx/cmake/shared works as well. So I guess I have a correctly installed mongo cxx driver.Now I wrote a very simple main.cpp, including only mongocxx/instance.hpp and trying to create a mongocxx::instance only. It compiles but when run it gives this error:error while loading shared libraries: libbsoncxx.so._noabi: cannot open shared object file: No such file or directoryThen I tried to track down this error with the official example mentioned above. When I comment out line 39 and 40 this example gives the same error message.Can anyone help and explain this behaviour? Why is libbsoncxx needed when I do not use it and why is it not found? It is definitely there, in the same folder like libmongocxxThanks for any help!", "username": "Norbert_K" }, { "code": "main.cpp", "text": "Please provide your main.cpp, and the terminal output showing compilation, how you invoked the program, and the complete error text.", "username": "Roberto_Sanchez" }, { "code": "#include <mongocxx/instance.hpp>\nint main(int, char* [])\n{\n mongocxx::instance inst;\n return 0;\n}\nexport CMAKE_PREFIX_PATH=/home/gernot/Development/3rdparty/mongo-cxx-driver/3.5.0/installcmake ..\n-- The C compiler identification is GNU 9.3.0\n-- The CXX compiler identification is GNU 9.3.0\n-- Check for working C compiler: /usr/bin/cc\n-- Check for working C compiler: /usr/bin/cc -- works\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- Check for working CXX compiler: /usr/bin/c++\n-- Check for working CXX compiler: /usr/bin/c++ -- works\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- Detecting CXX compile features\n-- Detecting CXX compile features - done\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /home/gernot/Development/3rdparty/mongo-cxx-driver/3.5.0/examples/projects/mongocxx/cmake/shared/build\nmakeScanning dependencies of target hello_mongocxx\n[ 50%] Building CXX object CMakeFiles/hello_mongocxx.dir/home/gernot/Development/3rdparty/mongo-cxx-driver/3.5.0/examples/projects/mongocxx/hello_mongocxx.cpp.o\n[100%] Linking CXX executable hello_mongocxx\n[100%] Built target hello_mongocxx\n./hello_mongocxx./hello_mongocxx: error while loading shared libraries: libbsoncxx.so._noabi: cannot open shared object file: No such file or directory\nreadelf -d hello_mongo.cxxDynamic section at offset 0x2d98 contains 29 entries:\n Tag Typ Name/Wert\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libmongocxx.so._noabi]\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libc.so.6]\n 0x000000000000001d (RUNPATH) Bibliothek runpath: [/home/gernot/Development/3rdparty/mongo-cxx-driver/3.5.0/install/lib]\n 0x000000000000000c (INIT) 0x1000\n 0x000000000000000d (FINI) 0x1268\n 0x0000000000000019 (INIT_ARRAY) 0x3d88\n 0x000000000000001b (INIT_ARRAYSZ) 8 (Bytes)\n 0x000000000000001a (FINI_ARRAY) 0x3d90\n 0x000000000000001c (FINI_ARRAYSZ) 8 (Bytes)\n 0x000000006ffffef5 (GNU_HASH) 0x3a0\n 0x0000000000000005 (STRTAB) 0x5b8\n 0x0000000000000006 (SYMTAB) 0x3f0\n 0x000000000000000a (STRSZ) 395 (Bytes)\n 0x000000000000000b (SYMENT) 24 (Bytes)\n 0x0000000000000015 (DEBUG) 0x0\n 0x0000000000000003 (PLTGOT) 0x3fa8\n 0x0000000000000002 (PLTRELSZ) 72 (Bytes)\n 0x0000000000000014 (PLTREL) RELA\n 0x0000000000000017 (JMPREL) 0x860\n 0x0000000000000007 
(RELA) 0x7a0\n 0x0000000000000008 (RELASZ) 192 (Bytes)\n 0x0000000000000009 (RELAENT) 24 (Bytes)\n 0x000000000000001e (FLAGS) BIND_NOW\n 0x000000006ffffffb (FLAGS_1) Flags: NOW PIE\n 0x000000006ffffffe (VERNEED) 0x770\n 0x000000006fffffff (VERNEEDNUM) 1\n 0x000000006ffffff0 (VERSYM) 0x744\n 0x000000006ffffff9 (RELACOUNT) 3\n 0x0000000000000000 (NULL) 0x0\nreadelf -d /home/gernot/Development/3rdparty/mongo-cxx-driver/3.5.0/install/lib/libmongocxx.so._noabiDynamic section at offset 0x8ecc8 contains 31 entries:\n Tag Typ Name/Wert\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libmongoc-1.0.so.0]\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libbsoncxx.so._noabi]\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libbson-1.0.so.0]\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libstdc++.so.6]\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libgcc_s.so.1]\n 0x0000000000000001 (NEEDED) Gemeinsame Bibliothek [libc.so.6]\n 0x000000000000000e (SONAME) soname der Bibliothek: [libmongocxx.so._noabi]\n 0x000000000000001d (RUNPATH) Bibliothek runpath: [/home/gernot/Development/3rdparty/mongo-c-driver/install/lib]\n 0x000000000000000c (INIT) 0x28000\n 0x000000000000000d (FINI) 0x77218\n 0x0000000000000019 (INIT_ARRAY) 0x8f768\n 0x000000000000001b (INIT_ARRAYSZ) 312 (Bytes)\n 0x000000000000001a (FINI_ARRAY) 0x8f8a0\n 0x000000000000001c (FINI_ARRAYSZ) 8 (Bytes)\n 0x000000006ffffef5 (GNU_HASH) 0x2f0\n 0x0000000000000005 (STRTAB) 0xb268\n 0x0000000000000006 (SYMTAB) 0x27a8\n 0x000000000000000a (STRSZ) 91028 (Bytes)\n 0x000000000000000b (SYMENT) 24 (Bytes)\n 0x0000000000000003 (PLTGOT) 0x90000\n 0x0000000000000002 (PLTRELSZ) 18504 (Bytes)\n 0x0000000000000014 (PLTREL) RELA\n 0x0000000000000017 (JMPREL) 0x23488\n 0x0000000000000007 (RELA) 0x222a0\n 0x0000000000000008 (RELASZ) 4584 (Bytes)\n 0x0000000000000009 (RELAENT) 24 (Bytes)\n 0x000000006ffffffe (VERNEED) 0x22190\n 0x000000006fffffff (VERNEEDNUM) 3\n 0x000000006ffffff0 (VERSYM) 0x215fc\n 0x000000006ffffff9 (RELACOUNT) 82\n 0x0000000000000000 (NULL) 0x0\n", "text": "", "username": "Norbert_K" }, { "code": "ldd ./hello_mongocxxls -l /home/gernot/Development/3rdparty/mongo-cxx-driver/3.5.0/install/lib /home/gernot/Development/3rdparty/mongo-c-driver/install/lib", "text": "That is strange. 
What is the output of ldd ./hello_mongocxx and ls -l /home/gernot/Development/3rdparty/mongo-cxx-driver/3.5.0/install/lib /home/gernot/Development/3rdparty/mongo-c-driver/install/lib?", "username": "Roberto_Sanchez" }, { "code": "❯ ldd ./src/pilot/PMSPilot\n linux-vdso.so.1 (0x00007ffff2f47000)\n libdocopt.so.0 => /mnt/f/OpenSource_Projects/PMS/build/_deps/docopt-build/libdocopt.so.0 (0x00007fab53d24000)\n libDBHandle.so => /mnt/f/OpenSource_Projects/PMS/build/src/db/libDBHandle.so (0x00007fab53cb6000)\n libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fab53c84000)\n libmongocxx.so._noabi => /mnt/f/OpenSource_Projects/mongo-cxx-driver-install/lib/libmongocxx.so._noabi (0x00007fab53bd7000)\n libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fab539e0000)\n libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fab539c0000)\n libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fab537c0000)\n /lib64/ld-linux-x86-64.so.2 (0x00007fab53fed000)\n libmongoc-1.0.so.0 => /mnt/f/OpenSource_Projects/mongo-c-driver-install/lib/libmongoc-1.0.so.0 (0x00007fab5370a000)\n libbsoncxx.so._noabi => not found\n libbson-1.0.so.0 => /mnt/f/OpenSource_Projects/mongo-c-driver-install/lib/libbson-1.0.so.0 (0x00007fab536c1000)\n libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fab53572000)\n libssl.so.1.1 => /lib/x86_64-linux-gnu/libssl.so.1.1 (0x00007fab534d0000)\n libcrypto.so.1.1 => /lib/x86_64-linux-gnu/libcrypto.so.1.1 (0x00007fab531f0000)\n librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fab531e0000)\n libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007fab531c0000)\n libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fab531a0000)\n libicuuc.so.66 => /lib/x86_64-linux-gnu/libicuuc.so.66 (0x00007fab52fb0000)\n libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fab52fa0000)\n libicudata.so.66 => /lib/x86_64-linux-gnu/libicudata.so.66 (0x00007fab514d0000)\n❯ ls -l /mnt/f/OpenSource_Projects/mongo-cxx-driver-install/lib\ntotal 1140\ndrwxr-xr-x 1 vformato vformato 4096 Mar 17 11:48 cmake\nlrwxrwxrwx 1 vformato vformato 20 Mar 17 11:48 libbsoncxx.so -> libbsoncxx.so._noabi\n-rw-r--r-- 1 vformato vformato 222272 Mar 18 15:20 libbsoncxx.so.3.7.0-pre\nlrwxrwxrwx 1 vformato vformato 23 Mar 17 11:48 libbsoncxx.so._noabi -> libbsoncxx.so.3.7.0-pre\nlrwxrwxrwx 1 vformato vformato 21 Mar 17 11:48 libmongocxx.so -> libmongocxx.so._noabi\n-rw-r--r-- 1 vformato vformato 938144 Mar 18 15:20 libmongocxx.so.3.7.0-pre\nlrwxrwxrwx 1 vformato vformato 24 Mar 17 11:48 libmongocxx.so._noabi -> libmongocxx.so.3.7.0-pre\ndrwxr-xr-x 1 vformato vformato 4096 Mar 17 11:48 pkgconfig\n", "text": "I just run into the same issueand", "username": "Valerio_Formato" } ]
Cxx driver cannot find dependencies
2020-05-29T15:53:14.668Z
Cxx driver cannot find dependencies
3,631
null
[ "security" ]
[ { "code": "", "text": "All,\nI am totally new with Mongo DB. We are looking at dB that will support field/cell level security out of the box. I saw pipeline and $redact function within Mongo to do this. I want to know if this is best practice and use case to do this. Thanks in advance.", "username": "Tony_Tran" }, { "code": "", "text": "Hey Tony,MongoDB supports Client Side Field Level Encryption. You can learn more about it here: https://docs.mongodb.com/manual/core/security-client-side-encryption/And see a guide on how to use it here:https://www.mongodb.com/how-to/client-side-field-level-encryption-csfle-mongodb-node/ (for Node, but we have articles in other languages as well)Hope that helps!", "username": "ado" }, { "code": "", "text": "Hi Ado,\nThanks for the info. what if we have a lot of fields that we want to redact. I wonder if performance will be an issue. Not sure if we use MongoDB the correct way for this use case.thanks,\nTony", "username": "Tony_Tran" }, { "code": "", "text": "Hey Tony,From the GA announcement (Field Level Encryption is GA | MongoDB Blog) it mentions that the performance impact even on high read documents is fairly minimal.", "username": "ado" } ]
Cell/field level security
2021-03-17T20:01:05.416Z
Cell/field level security
2,236
null
[ "data-modeling", "crud" ]
[ { "code": "const UserSchema = new mongoose.Schema({\nInventory: {\nslot1img: {\ntype: String\n},\nslot1text: {\ntype: String\n},\nslot2img: {\ntype: String\n},\nslot2text: {\ntype: String\n},\nslot3img: {\ntype: String\n},\n}\n})\n", "text": "Lets say that I have this user schema that has a inventory with multiple slots. How could I nest those slots in to the inventory?For example like this:However if I try it like this it wont show up at all in the database, so how do i do this?", "username": "JasoO" }, { "code": "", "text": "I think that you would want to use the embedded document pattern to create a subset inside of Inventory.Below is the link to the Mongoose documenation on sub documents.\nMongogoose docs on sub documents", "username": "tapiocaPENGUIN" } ]
How to create nested fields in a schema?
2021-03-18T12:46:07.046Z
How to create nested fields in a schema?
8,901
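To make the embedded-document suggestion above concrete, here is a hedged Mongoose sketch: each slot becomes a sub-schema and the inventory holds an array of them, instead of flat slot1img/slot1text/slot2img fields. The schema and model names are illustrative.

```javascript
const mongoose = require("mongoose");

// Each slot is a sub-document with its own image and text.
const SlotSchema = new mongoose.Schema({
  img: { type: String },
  text: { type: String }
});

const UserSchema = new mongoose.Schema({
  Inventory: {
    slots: [SlotSchema]   // e.g. user.Inventory.slots[0].img
  }
});

module.exports = mongoose.model("User", UserSchema);
```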
https://www.mongodb.com/…7_2_1024x516.png
[ "dot-net" ]
[ { "code": "", "text": "The MongoDB.Driver 2.11.x BsonDocument.Parse(stringjson) not converting decimal types in json to decimal128. It always takes it as double.\ne.g. json string\n{\n“bigdecimal”:123456789123456789123456789.23\n}After applying BsonDocument.Parse() the result is as follows:\nHow can we specify that the certain value is of decimal128 type. Tried the same with BsonSerializer.Deserialize as well, same result.BsonDocument’s bigdecimal element:\nimage1359×685 24.1 KB", "username": "Shabbir_Lathsaheb" }, { "code": "decimal128var json = \"{ 'bigdecimal': 123456789123456789123456789.23 }\";\nvar bson = BsonDocument.Parse(json);\nConsole.WriteLine(bson.GetValue(\"bigdecimal\").BsonType); // prints Double\n\n// Convert the field to decimal128 type\nvar decNum = bson.GetValue(\"bigdecimal\").ToDecimal128();\nConsole.WriteLine(decNum.GetType()); // prints MongoDB.Bson.Decimal128\n\n// Set the element value\nbson.Set(\"bigdecimal\", decNum);", "text": "The MongoDB.Driver 2.11.x BsonDocument.Parse(stringjson) not converting decimal types in json to decimal128. It always takes it as double.Hello @Shabbir_Lathsaheb, welcome to the MongoDB Community forum!In MongoDB, by default a number is of type double. That is the reason when you parse the JSON string with a field’s value as a number it is showing as a double. That is the expected behavior. You need to explicitly convert the number to the decimal128 type.An example, using C# driver API:", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya : Thanks for the reply. However, from the json string we cant know which property is of which type. We don’t have schema defined. Like json parser has ability to accept a setting to consider all float as decimal. Can bson parser not have same user defined settings, since its a conflict?", "username": "Shabbir_Lathsaheb" }, { "code": "", "text": "@Shabbir_Lathsaheb, I don’t know of any feature which detects a data type and converts as per your requirement. I think, you may have to build the functionality as per your application needs.Generally, the MongoDB collection document is mapped to a user defined class in C#. The class can have some functionality to convert to and from the desired data type for specific fields. Applications generally have some structure to their data - that is how various components of the application use the data - from database layer to application layer to user interface layer. These layers have their own ways and methods of data representations and conversions are required, sometimes - as in this case.Having data without some schema specification is your application and then you need to improvise. I don’t have further suggestions regarding this.", "username": "Prasad_Saya" } ]
MongoDB C# driver BsonDocument.Parse(stringjson) doesn't convert decimal values with decimal128 type, it takes them as double
2021-03-12T15:18:08.383Z
MongoDB C# driver BsonDocument.Parse(stringjson) doesn't convert decimal values with decimal128 type, it takes them as double
17,119
null
[ "aggregation" ]
[ { "code": "{\n \"_id\" : 1,\n \"year\" : 2013,\n \"columns\" : [\n {\n \"name\" : \"abc tag \",\n \"tags\" : [\n \"PJ\",\n \"MM\"\n ]\n },\n {\n \"ssn\" : \"12345666a\",\n \"tags\" : [\n \"KKLT\"\n ]\n },\n {\n \"placeofbirth\" : \"virginia\",\n \"tags\" : [\n \"FS\"\n ]\n }\n ]\n}\n\n{\n \"_id\" : 2,\n \"year\" : 2013,\n \"columns\" : [\n {\n \"name\" : \"abc tag single\",\n \"tags\" : \"PJ\"\n },\n {\n \"ssn\" : \"12345666a\",\n \"tags\" : \"KKLT\"\n \n },\n {\n \"placeofbirth\" : \"virginia\",\n \"tags\" : \"FS\"\n \n }\n ]\n}\nvar userAccess = [ \"KKLT\", \"FS\" ];\ndb.book2.aggregate(\n [\n { $match: { year: 2013 } },\n { $redact: {\n $cond: {\n if: { $gt: [ { $size: { $setIntersection: [ \"$tags\", userAccess ] } }, 0 ] },\n then: \"$$DESCEND\",\n else: \"$$PRUNE\"\n }\n }\n }\n ]\n);\nUncaught exception: Error: command failed: {\n\n\"ok\" : 0,\n\n\"errmsg\" : \"The argument to $size must be an array, but was of type: null\",\n\n\"code\" : 17124,\n\n\"codeName\" : \"Location17124\"\n\n} : aggregate failed :\n\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\n\ndoassert@src/mongo/shell/assert.js:18:14\n\n_assertCommandWorked@src/mongo/shell/assert.js:583:17\n\nassert.commandWorked@src/mongo/shell/assert.js:673:16\n\nDB.prototype._runAggregate@src/mongo/shell/db.js:266:5\n\nDBCollection.prototype.aggregate@src/mongo/shell/collection.js:1012:12\n\nDBCollection.prototype.aggregate@:1:355\n\n@(shell):1:1\n", "text": "All,I am having similar using $redact example. I am trying to use $redact to do cell level security. Please let me know downside of using this feature to implement cell level security. Any help is greatly appreciated.Here is example of document I tried to doi want user to the field depending on the tags. I try to run the following query.", "username": "Tony_Tran" }, { "code": "", "text": "Hi @Tony_Tran,Welcome to MongoDB communityIt looks like the $size was tried on a null expression.You need to use a $cond and $ifNull to return an empty array if intersection yield null.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks Pavel, can you give me an example on how? I tried different syntax but keep getting error. Sorry I am so new to this.", "username": "Tony_Tran" } ]
Trying to use $redact to do cell level security
2021-03-17T19:50:03.688Z
Trying to use $redact to do cell level security
1,873
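A sketch of the $ifNull fix Pavel describes: wrap the $setIntersection in $ifNull so that $size always receives an array, even at levels (such as the document root) where $tags does not exist. Note this also assumes tags is consistently stored as an array; in the second sample document it is a plain string, and $setIntersection requires array operands.

```javascript
var userAccess = [ "KKLT", "FS" ];

db.book2.aggregate([
  { $match: { year: 2013 } },
  { $redact: {
      $cond: {
        if: {
          $gt: [
            { $size: {
                // $setIntersection yields null where $tags is missing (e.g. at the
                // document root), so fall back to an empty array before $size.
                $ifNull: [ { $setIntersection: [ "$tags", userAccess ] }, [] ]
            } },
            0
          ]
        },
        then: "$$DESCEND",
        else: "$$PRUNE"
      }
  } }
]);
```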
null
[]
[ { "code": "", "text": "Hello,We are having issues building a secondary from a sharded replica, it keeps getting to the same database and fails with the following error2021-03-16T21:03:06.756+0000 I REPL [repl writer worker 7] CollectionCloner::start called, on ns:customer.tmpdqhEZ.rename\n2021-03-16T21:03:06.758+0000 F STORAGE [repl writer worker 8] Attempted to create a new collection customer.tmpdqhEZ.rename without a UUID\n2021-03-16T21:03:06.765+0000 I REPL [repl writer worker 8] CollectionCloner ns:customer.tmpdqhEZ.rename finished cloning with status: InvalidOptions: Attempted to create a new collection customer.tmpdqhEZ.rename without a UUID\n2021-03-16T21:03:06.765+0000 W REPL [repl writer worker 8] collection clone for ‘customer.tmpdqhEZ.rename’ failed due to InvalidOptions: While cloning collection ‘customer.tmpdqhEZ.rename’ there was an error ‘Attempted to create a new collection customer.tmpdqhEZ.rename without a UUID’Is there a way to drop this dead collection from the replicaset? It is not showing up when looking for collections in customer db?Thank you", "username": "Jonathan_Stairs" }, { "code": "", "text": "Hi @Jonathan_Stairs,Can you confirm the specific version(s) of MongoDB server used in your cluster?Can you also share more context on the operations you are doing leading to this error?It sounds like you may be trying to perform initial sync of a new secondary in a shard replica set, in which case it would be helpful to know fi you are seeding this secondary by copying data files from another member or using the normal initial sync process.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you, this is a sharded cluster with each shard consisting of a primary and 2 replicas. The version currently running is 3.6.20. We found the remaining collection on the secondary and not the primary so it didn’t get the UUID with the Feature Compatibility change. Syncing the new secondary from the primary allowed the initial sync to complete.", "username": "Jonathan_Stairs" }, { "code": "", "text": "Hi @Jonathan_Stairs,Thanks for sharing the resolution to your issue.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB 3.6 build replica failing on temp table
2021-03-16T22:41:31.802Z
MongoDB 3.6 build replica failing on temp table
1,756
null
[ "replication", "devops" ]
[ { "code": "", "text": "Dear all,First post on these boards I’m rather new to MongoDB, so please bear with me.I have two questions regarding MongoDB clusters:1/ Are hybrid clusters possible (with nodes on both Linux and Windows servers) ?\n2/ Is it possible to add extra nodes to an existing cluster without breaking anything (say going from 2 nodes to 3) ?Best regards,Samuel", "username": "Samuel_VISCAPI" }, { "code": "", "text": "Hello @Samuel_VISCAPI, welcome to the MongoDB Community forum!In MongoDB, a cluster can mean a replica-set or a sharded cluster. Since, you didn’t mention which one it is I am posting for a replica-set.1/ Are hybrid clusters possible (with nodes on both Linux and Windows servers) ?I think it is possible.MongoDB uses statement based replication, and the write information is stored and replicated using an oplog. The requirement would be that all the nodes in the cluster must have the same MongoDB versions.2/ Is it possible to add extra nodes to an existing cluster without breaking anything (say going from 2 nodes to 3) ?It is possible - see Add Members to a Replica Set", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Samuel_VISCAPI,I hope you have got your doubt cleared.\nIf you still have any further questions feel free to mention them down here.Thank you @Prasad_Saya for helping out.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Prasad_Saya and @Kushagra_Kesav !Thank you very much for the quick reply and the kind words, really appreciated I was indeed referring to a replica set, sorry I forgot to mention it. The third node on the Windows server would act as an arbiter only.Best regards,Samuel", "username": "Samuel_VISCAPI" }, { "code": "", "text": "Hi @Samuel_VISCAPI,The third node on the Windows server would act as an arbiter only.Please feel free to reach out if you have any questions.Happy Learning Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "db.mydata.remove({expired: true})", "text": "Welcome to the MongoDB Community @Samuel_VISCAPI!Are hybrid clusters possible (with nodes on both Linux and Windows servers) ?To provide more certainty to @Prasad_Saya’s answer: as long as you are using compatible server versions (ideally, identical) the replication protocol is cross-platform and a replica set can have members using different operating systems.However, generally you would want data-bearing replica set members to have similar configurations so performance and failover is more predictable. Performance, security, and tuning will vary between operating systems on otherwise identical hardware.MongoDB uses statement based replicationUnfortunately this is inaccurate terminology. However I realise it is used in one of our MongoDB University courses and have raised an issue to review this usage.The replication oplog records operations affecting data as ordered sequences of idempotent changes based on the original command and the affected documents. Oplog operations produce the same results whether applied once or multiple times to the target dataset, which would not be the case if the original statement was used. 
For example, a command deleting a range of documents (eg db.mydata.remove({expired: true}) will result in an oplog entry for every document deleted on the primary to ensure all members of the replica set apply identical changes.I think it is important to call out this difference, as statement-based replication in the SQL world can lead to significant side effects (especially when stored procedures are involved). For some example caveats, see MySQL’s Advantages and Disadvantages of Statement-Based and Row-Based Replication.A better description for MongoDB’s approach might be document-based replication, or perhaps logical replication (as opposed to physical). We need to agree on that when revising the course material .The third node on the Windows server would act as an arbiter only.I recommend you avoid using an arbiter in a production environment with modern versions of MongoDB. Arbiters are not ideal (see my comment on Replica set with 3 DB Nodes and 1 Arbiter) and the operational consequences will be more disruptive with an active cluster.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Hybrid MongoDB cluster and number of nodes
2021-03-16T08:10:53.309Z
Hybrid MongoDB cluster and number of nodes
4,477
null
[ "node-js" ]
[ { "code": "const options = { upsert: false };\nif (user.energy == undefined) return \nif (user.endurance < 5 && user.energy > 40) {\nvar updateDoc = {\n$inc: {\nenergy: -40,\n},\n};\n}\nif (user.endurance >= 5 && user.energy > 38) {\nvar updateDoc = {\n$inc: {\nenergy: -38,\n},\n};\n}\nif (user.endurance >= 10 && user.energy > 36) {\nvar updateDoc = {\n$inc: {\nenergy: -36,\n},\n};\n}\nif (user.endurance >= 20 && user.energy > 34) {\nvar updateDoc = {\n$inc: {\nenergy: -34,\n},\n};\n}\nif (user.endurance >= 30 && user.energy > 32) {\nvar updateDoc = {\n$inc: {\nenergy: -32,\n},\n};\n}\nif (user.endurance >= 40 && user.energy > 30) {\nvar updateDoc = {\n$inc: {\nenergy: -30,\n},\n};\n}\nif (user.endurance >= 50 && user.energy > 28) {\nvar updateDoc = {\n$inc: {\nenergy: -28,\n}\n}\n} \n", "text": "How can fix this mongodb error?: TypeError: Cannot convert undefined or null to object.\nnode_modules\\mongodb\\lib\\utils.js:796:23)\nnode_modules\\mongodb\\lib\\operations\\update_one.js:11:10)\nnode_modules\\mongodb\\lib\\collection.js:758:5)I tried fixing it by saying if one of the conditions is undefined to return, but that won’t solve it.I seem to get the error if none of the below conditions is met:", "username": "JasoO" }, { "code": "", "text": "Hi @JasoO,Can you provide your query? Can you also provide an example of what is being passed to your query when the error is thrown?", "username": "Lauren_Schaefer" } ]
How can I fix MongoDB error: TypeError: Cannot convert undefined or null to object?
2021-02-26T09:22:13.076Z
How can I fix MongoDB error: TypeError: Cannot convert undefined or null to object?
6,295
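Without seeing the updateOne call it is hard to be certain, but the usual cause of this error is exactly what the poster observed: when none of the if branches match, updateDoc stays undefined and an undefined update document is passed to the driver. A hedged restructuring that avoids that is sketched below; the function name, the filter on _id, and the collection variable are assumptions based on the snippet.

```javascript
// Derive the cost from endurance once, then only call updateOne when there is
// actually an update to apply, so `updateDoc` can never be undefined.
function energyCost(endurance) {
  if (endurance >= 50) return 28;
  if (endurance >= 40) return 30;
  if (endurance >= 30) return 32;
  if (endurance >= 20) return 34;
  if (endurance >= 10) return 36;
  if (endurance >= 5)  return 38;
  return 40;
}

async function spendEnergy(collection, user) {
  if (user.energy == null) return;        // covers both undefined and null
  const cost = energyCost(user.endurance);
  if (user.energy <= cost) return;         // not enough energy, nothing to update
  const updateDoc = { $inc: { energy: -cost } };
  return collection.updateOne({ _id: user._id }, updateDoc, { upsert: false });
}
```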
null
[ "queries" ]
[ { "code": "{\n geometry: \n { $geoWithin: \n { $box: \n [ [-73.995762,40.764826], [-73.934034,40.802038] ] \n } \n } \n}\n", "text": "i am doing a boundary box query on a polygon shape stored as geo json object but it is not returning any result", "username": "anubhav_tarar" }, { "code": "", "text": "Hi @anubhav_tarar,Were you able to resolve this issue?If not, please provide some example documents that you are expecting to match this query.Thanks,\nStennie", "username": "Stennie_X" } ]
Bounding box query on polygon does not return any results
2021-02-26T10:55:16.626Z
Bounding box query on polygon does not return any results
1,596
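One likely culprit, assuming the geometry field holds GeoJSON polygons: $box is a legacy-coordinate shape, so GeoJSON geometries should instead be queried with a $geometry polygon. A hedged sketch follows; the collection name places is a placeholder, and the ring simply closes the same box from the question.

```javascript
// The polygon ring must be closed (first and last point identical).
db.places.find({
  geometry: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [[
          [-73.995762, 40.764826],
          [-73.934034, 40.764826],
          [-73.934034, 40.802038],
          [-73.995762, 40.802038],
          [-73.995762, 40.764826]
        ]]
      }
    }
  }
});
```

Also note that $geoWithin only matches geometries entirely inside the query polygon; if shapes that merely overlap the box should match too, $geoIntersects may be the better operator.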
null
[ "replication" ]
[ { "code": "", "text": "Hello All,I have a replica set with three nodes primary,secondary and arbiter nodes. I need to migrate this replica set to new data center, can you please advise me what is the best option to perform this migration with minimum downtime? The size of datastore is around 4TB.Thanks!", "username": "Kamal_Mahana" }, { "code": "", "text": "Welcome to the MongoDB Community @Kamal_Mahana!I expect you may have completed your migration by now, but here is a suggestion if you are still in the planning stage (or perhaps someone else has a similar question).You can migrate your replica set from the original data centre (DC1) to a new data centre (DC2) without downtime by temporarily adding more members to your replica set.As general steps I would:Add members in DC2 using initial sync or a file copy backup.Configure additional members in DC2 as hidden, priority 0 until you are ready to start sending application traffic to the new data centre.When ready to switch over, reconfigure the new members in DC2 as electable and hide the original members in DC1.Remove the hidden members in DC1 to finalise your migration.I also recommend taking this opportunity to replace your arbiter with a full data-bearing secondary. Arbiters are not ideal (see my comment on Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie) and the operational consequences will be more disruptive with an active cluster.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migration of Replica set to new datacenter
2021-02-26T10:55:40.477Z
Migration of Replica set to new datacenter
3,394
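A hedged mongo-shell sketch of steps 2 through 4 above; hostnames and member-array indexes are placeholders and depend on the existing configuration (with the original three members, the new DC2 members would typically land at indexes 3 and 4).

```javascript
// Step 2: add the DC2 members as hidden, priority 0, so they sync but take no traffic.
rs.add({ host: "dc2-node1:27017", priority: 0, hidden: true, votes: 0 })
rs.add({ host: "dc2-node2:27017", priority: 0, hidden: true, votes: 0 })

// Step 3: at cut-over, make the DC2 members electable and hide a DC1 member.
cfg = rs.conf()
cfg.members[3].priority = 1; cfg.members[3].hidden = false; cfg.members[3].votes = 1
cfg.members[4].priority = 1; cfg.members[4].hidden = false; cfg.members[4].votes = 1
cfg.members[0].priority = 0; cfg.members[0].hidden = true
rs.reconfig(cfg)

// Step 4: once the primary has moved to DC2, remove the old DC1 members.
rs.remove("dc1-node1:27017")
```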
null
[ "atlas-functions", "atlas-triggers" ]
[ { "code": "", "text": "I want to trigger a function once a day and store the values in Realm Values to use it for the day.\nIs there any way I can do it? Or Realm Values is a static content?\nThank you so much for any response.", "username": "Tuan_Nguyen1" }, { "code": "", "text": "I believe that Realm values are read-only from the app’s perspective.Instead, you can store the result in an Atlas document and then use that from your app/functions.", "username": "Andrew_Morgan" }, { "code": "", "text": "That’s the way! Thank you so much.", "username": "Tuan_Nguyen1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to store the Function Result in Realm Values
2021-03-18T08:31:30.939Z
How to store the Function Result in Realm Values
2,337
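A minimal sketch of what that could look like: a function run by a daily scheduled trigger that upserts the computed value into a small collection, which the app then reads instead of a read-only Realm Value. The computeDailyValue helper, database, and collection names are assumptions, not from the thread.

```javascript
exports = async function() {
  const db = context.services.get("mongodb-atlas").db("app");
  // Assumed helper that produces the value to cache for the day.
  const value = await context.functions.execute("computeDailyValue");

  // Keep a single document the app can read for the rest of the day.
  return db.collection("dailyValues").updateOne(
    { _id: "today" },
    { $set: { value: value, updatedAt: new Date() } },
    { upsert: true }
  );
};
```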
null
[ "atlas-functions" ]
[ { "code": "exports = () => {\n var res;\n res = context.functions.execute(\"listTodayBookings\");\n return res\nexports = () => {\n var res, compare;\n var today = new Date();\n var h = today.getHours() + 7; //GMT +7\n var m = today.getMinutes();\n var time = h + \":\" + m;\n res = context.functions.execute(\"listTodayBookings\");\n for (let k in res) {\n console.log(k, res[k])\n }\n", "text": "I try to get result from another function and use that return values. But context.functions.execute only works when I return the values immediately. It means that if I use this way, it works perfectly:But not this wayres will be undefined and print nothing. Any helps are appreciated.", "username": "Tuan_Nguyen1" }, { "code": "listTodayBookingsresawait.then", "text": "Hi @Tuan_Nguyen1 welcome to the community!What is listTodayBookings returning? If it’s an asynchronous function then it could be that res is an unresolved promise and you need to use the await or .then syntax.", "username": "Andrew_Morgan" }, { "code": "listTodayBookings", "text": "Thank you for your response. I also tried async and await but it didn’t work though. listTodayBookings returns a dictionary. If I call and return it, it worked well. But I need to do some works before returning.", "username": "Tuan_Nguyen1" }, { "code": "listTodayBookings", "text": "Are you able to share listTodayBookings (or a mocked-up version of it?)", "username": "Andrew_Morgan" }, { "code": "exports = function(){\n const mongodb = context.services.get(\"mongodb-atlas\");\n const db = mongodb.db(\"test\");\n const bookings = db.collection('bookings');\n var d = new Date();\n var i = d.getDay();\n var days = ['MON','TUE','WED','THU','FRI','SAT','SUN'];\n var query = {};\n var res = {};\n var time = days[i];\n query[days[i]] = {'$exists': true};\n var todayBookings = bookings.find(query, {\"_id\": 0});\n todayBookings.toArray().then(data => {\n data.forEach(doc => {\n res[doc['Full Name']] = doc[time];\n });\n });\n return res;\n};\nexports = () => {\n var res, compare;\n var arr = [];\n res = context.functions.execute(\"listTodayBookings\");\n var today = new Date();\n var h = today.getHours() + 7; //GMT +7\n var m = today.getMinutes();\n var time = h + \":\" + m;\n // Object.entries(res).forEach(([k,v]) => console.log(k,v))\n // console.log(time);\n // return Object.entries(res)\n return Object.keys(res).length\n // if (res) {\n // for (let k in res){\n // console.log(k, res[k])\n // }\n // }\n};\n", "text": "listTodayBookings function:checkCancel function:I use those functions to list Customers who have bookings the current day and check if they’re late or not. 
Please take a look and if you have any solutions for this, I’ll appreciate it very much.", "username": "Tuan_Nguyen1" }, { "code": "listTodayBookingsexports = function(){\n const mongodb = context.services.get(\"mongodb-atlas\");\n const db = mongodb.db(\"RChat\");\n const bookings = db.collection('ChatMessage');\n var res = {};\n var todayBookings = bookings.find({}, {\"_id\": 0});\n console.log(`About to fetch data`);\n todayBookings.toArray().then(data => {\n console.log(`About to process docs`);\n data.forEach(doc => {\n console.log(`Processing doc`);\n res[doc['Full Name']] = \"Fred\";\n });\n });\n console.log(`Returning`);\n return res;\n}; \nAbout to fetch data\nReturning\nAbout to process docs\nProcessing doc\nProcessing doc\n.thenreturn.nextexports = function(){\n const mongodb = context.services.get(\"mongodb-atlas\");\n const db = mongodb.db(\"RChat\");\n const bookings = db.collection('ChatMessage');\n var res = {};\n var todayBookings = bookings.find({}, {\"_id\": 0});\n console.log(`About to fetch data`);\n todayBookings.toArray().then(data => {\n console.log(`About to process docs`);\n data.forEach(doc => {\n console.log(`Processing doc`);\n res[doc['Full Name']] = \"Fred\";\n });\n console.log(`Returning`);\n return res;\n });\n };\nAbout to fetch data\nAbout to process docs\nProcessing doc\nProcessing doc", "text": "@Tuan_Nguyen1 I added some instrumentation to a cut-down version of listTodayBookings to show the order that instructions are executed:These are the results:As you can see, the results are returned before the .then block is executed.Moving the return to within the .next block means that things get executed in the correct order:Results:", "username": "Andrew_Morgan" }, { "code": "exports = function(){\n\t const mongodb = context.services.get(\"mongodb-atlas\");\n\t const db = mongodb.db(\"test\");\n\t const bookings = db.collection('bookings');\n\t var d = new Date();\n\t var i = d.getDay();\n\t var days = ['MON','TUE','WED','THU','FRI','SAT','SUN'];\n\t var query = {};\n\t var res = {};\n\t var time = days[i];\n\t query[days[i]] = {'$exists': true};\n\t var todayBookings = bookings.find(query, {\"_id\": 0});\n\t todayBookings.toArray().then(data => {\n\t\tdata.forEach(doc => {\n\t\t console.log(doc['Full Name'])\n\t\t res[doc['Full Name']] = doc[time];\n\t\t});\n\t\treturn res;\n\t });\n\t};", "text": "I use the return as you recommended but it seems not working though. The console.log inside forEach worked. But the return has no luck. 
(\"$undefined\": true).\nThe console.log inside forEach print:\n‘Nguyen Tuan’\n‘Nguyen A’\n‘Nguyen B’", "username": "Tuan_Nguyen1" }, { "code": "exports = () => {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const db = mongodb.db(\"test\");\n const bookings = db.collection('bookings');\n var d = new Date();\n var i = d.getDay();\n var days = ['SUN', 'MON','TUE','WED','THU','FRI','SAT'];\n var query = {};\n var res = {};\n var time = days[i];\n query[days[i]] = {'$exists': true};\n var todayBookings = bookings.find(query, {\"_id\": 0});\n return todayBookings.toArray().then(data => {\n\tdata.forEach(doc => {\n\t res[doc['Full Name']] = doc[time];\n\t});\n\treturn res;\n });\n}\nexports = async () => {\n var res, compare;\n var today = new Date();\n var h = today.getHours() + 7; //GMT +7\n var m = today.getMinutes();\n var time = h + \":\" + m;\n res = await context.functions.execute(\"listTodayBookings\");\n for (let k in res) {\n\tconsole.log(k, res[k])\n }\n}", "text": "Hello Andrew, I figured out how to handle this caselistTodayBookings function:the other one:", "username": "Tuan_Nguyen1" }, { "code": "", "text": "Ah yes, not returning the results from the handled promise – that’s caught out more people than I can count!Glad to hear that you’ve got it working.", "username": "Andrew_Morgan" }, { "code": "", "text": "Thank you so much for blazingly fast responses.", "username": "Tuan_Nguyen1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Functions doesn't return as expected
2021-03-16T06:25:30.113Z
MongoDB Functions doesn't return as expected
6,338
null
[ "crud" ]
[ { "code": "db.task.updateMany(\n { \"timestamp2\": { $exists: false } },\n { $set: { \"timestamp2\": $timestamp }}\n)\n", "text": "I have this document in mongodb:_id: “xxxx”, “timestamp”: ISODate(“2022-03-26T10:33:47.738Z”)I would like to create a migration that will copy over timestamp to timestamp2 field. Something like this:So if document 1 have 2022-03-26T10:33:47.738Z as timestamp, its timestamp2 will be the same (2022-03-26T10:33:47.738Z). If document 2 have 2021-03-26T10:33:47.738Z as timestamp, its timestamp2 will be the same (2021-03-26T10:33:47.738Z) How can I achieve this? How can I rollback this migration (whats the code for the migrate-down file?) Thanks", "username": "Ariel_Ariel" }, { "code": "", "text": "Hi,\nplease refer to\nhttps://www.mongodb.com/community/forums/t/mql-for-update-many-with-a-new-field-from-existing-document/99174", "username": "Imad_Bouteraa" } ]
UpdateMany in mongodb using value of other field
2021-03-18T09:51:22.978Z
UpdateMany in mongodb using value of other field
6,591
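The linked thread boils down to using an aggregation-pipeline update (available from MongoDB 4.2), since a plain update document cannot reference another field of the same document. A hedged sketch of both directions of the migration, reusing the task collection from the question:

```javascript
// Forward migration: the pipeline form of $set can copy one field into another.
db.task.updateMany(
  { timestamp2: { $exists: false } },
  [ { $set: { timestamp2: "$timestamp" } } ]
);

// Rollback (migrate-down): remove the copied field again.
db.task.updateMany(
  { timestamp2: { $exists: true } },
  { $unset: { timestamp2: "" } }
);
```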
null
[ "queries", "performance" ]
[ { "code": "/* 1 */\n{\n \"db\" : \"dbarc\",\n \"collections\" : 839,\n \"views\" : 0,\n \"objects\" : 34745518,\n \"avgObjSize\" : 3965.76271463272,\n \"dataSize\" : 137792479785.0,\n \"storageSize\" : 76157050880.0,\n \"numExtents\" : 0,\n \"indexes\" : 9287,\n \"indexSize\" : 17899974656.0,\n \"fsUsedSize\" : 276794646528.0,\n \"fsTotalSize\" : 540052742144.0,\n \"ok\" : 1.0\n}\n/* 1 */\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"dbarc.sps\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {},\n \"winningPlan\" : {\n \"stage\" : \"COLLSCAN\",\n \"direction\" : \"forward\"\n },\n \"rejectedPlans\" : []\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 1727172,\n \"executionTimeMillis\" : 141746,\n \"totalKeysExamined\" : 0,\n \"totalDocsExamined\" : 1727172,\n \"executionStages\" : {\n \"stage\" : \"COLLSCAN\",\n \"nReturned\" : 1727172,\n \"executionTimeMillisEstimate\" : 141095,\n \"works\" : 1727174,\n \"advanced\" : 1727172,\n \"needTime\" : 1,\n \"needYield\" : 0,\n \"saveState\" : 14246,\n \"restoreState\" : 14246,\n \"isEOF\" : 1,\n \"invalidates\" : 0,\n \"direction\" : \"forward\",\n \"docsExamined\" : 1727172\n }\n },\n \"serverInfo\" : {\n \"host\" : \"MONGO1\",\n \"port\" : 27017,\n \"version\" : \"4.0.2\",\n \"gitVersion\" : \"fc1573ba18aee42f97a3bb13b67af7d837826b47\"\n },\n \"ok\" : 1.0\n}\n", "text": "Hi everybody,The context:I need to build a report that shows 1 year worth of data (around 1.7 million of records).The database contains one collection per report, here the db stats:The problem:The response time of queries for that amount of data is too much. 200 seconds for 1.7 million of records. Here the explain plan:The collections has many indexes, one on a text field which can be large, i’m sure i’m missing something big, basic things. I’m new about NoSQL dbs, i think the problem could be the total indexes size but i have no clue at the moment.Thank you very much for the eventual help.", "username": "AlesRoma" }, { "code": "", "text": "Welcome to the forum!If you notice in your execution plan you are doing a COLLSCAN. With nReturned 1727172 and you are not examining any keys (totalKeysExamined: 0). This means that the query that you are using is not using an index. So I would take a look again at your indexes that you already have and the query that you are using and try to build a better index.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "That’s a very good point from tapiocaPENGUIN. This also means that you are shipping your 1.7 million records over the wire and if you look at dataSize it is a lot of data sent over.I would take a look at the aggregation framework, https://docs.mongodb.com/manual/aggregation/. See if you could reduce the amount of data sent over by having the computation of the report done on the server rather than doing it on the client with the complete data set.You might still have some issue if the whole 1.7 millions records are needed if you do not have enough RAM.I would also take a look at the computed pattern at Building With Patterns: The Computed Pattern | MongoDB Blog to see if some subset of the report can be done on historical data that will not changed.", "username": "steevej" }, { "code": "", "text": "Thank you, you are right, but i was figuring out the situation, even querying the _id, the response time is too much:Execution time in seconds : 99\ntotal_records: 1000000The problem is elsewhere not in index, Steevej i think pointed it out. 
Thanks again", "username": "AlesRoma" }, { "code": "", "text": "First of all thank you very much for the help.I do use aggregation:\nAggregateIterable resultAit = feedCollection.aggregate(aggregatePipeline).allowDiskUse(true)\n.useCursor(true);\nIterator resultit = resultAit.iterator();Then i iterate the documents and i create a list of BSON document that i pass to the application.The problem is in fact, that i go for out of memory error (GC out of memory, heap memory saturated).\nI need all those documents for building report, i could use some subset of data for some report but i could need that amount of data in the future so i don’t know what the solution may will be.\nUsing some search engine like ElasticSearch could be useful?p.s.: i’ve red the computed pattern (and the subset pattern), those approach could be helpful, or maybe i could try to read data locally (back-end on the same machine of the db), but i don’t think is a good solution…Thanks again", "username": "AlesRoma" }, { "code": "", "text": "AggregateIterable resultAit = feedCollection.aggregate(aggregatePipeline).allowDiskUse(true)\n.useCursor(true);\nIterator resultit = resultAit.iterator();Then i iterate the documents and i create a list of BSON document that i pass to the application.Any update/suggestion @AlesRoma as i too facing same issue i.e.\nAggregateIterable resultAit = feedCollection.aggregate(aggregatePipeline).allowDiskUse(true)\n.useCursor(true);\nIterator resultit = resultAit.iterator();when i iterate the documents and i create a list of BSON document mee too facing the same issue.", "username": "Srikanth_Kalakanti" }, { "code": "", "text": "AggregateIterable resultAit =one more question here is, when we get the AggregateIterable results after we do the aggregate pipeline on a collection and add $merge operation as last stage of aggregation pipeline… until we read/iterate the results the data is not updated is this the correct functionality with the $merge in aggregation pipeline?", "username": "Srikanth_Kalakanti" }, { "code": "", "text": "Hello @Srikanth_Kalakanti, welcome to the MongoDB Community forum!Please tell about your own query (actual one) with sample input data and what you are expecting as a result. Also, include the versions of MongoDB Server, the kind of deployment and the Java Driver you are working with.To discuss about performance issue, I’d like to know the size of the collection also. 
If you can, run the explain with “executionStats” mode and post the results.", "username": "Prasad_Saya" }, { "code": "", "text": "applicationSystem Details\nMongoDB4.4 version\nmongo-java-driver-3.12.7.jarConsider the Collection(Employee and it has a child Details(Array Object)\nEmployee:\nfirst-name,last_name etc\nDetails: Object array(In this array each element contains _id and some other data (Total size of the array size is 16 MB data)So transfering huge data is difficult using the MongoDB Kafka Connector, we chunked this data(with _id element in the array and aggragation pipeline) i.e.\nCreated a new collection(Employee_details) merging the elements of Employee collection and Details Collection i.e each array element - considered as a new object with the _id*So initially Employee has 5000+ record and after splitting into chunks the records(2,42,145) *Now on this new collection(Employee_details) wrote the MongoDb connector to load data to Kafka topic.On this new collection when doing the CDC changes using the Mongo Aggregation Pipeline using the Java programm.\nPipeline = match(and(exists(“Details”, true), ne(“Details”,\nnew BsonNull()))), match(ne(“Details”, Arrays.asList())), unwind(“$Details”), group(“$Details._id”,first(“DetailsId”, “$Details.Id”), first(“GroupType”, “$Details…GroupType”)… etc)\nAggregateIterable result = collection.aggregate(Pipeline, eq(“$merge”, and(eq(“into”, “Employee_details”), eq(“on”, “_id”), eq(“whenMatched”, “merge”), eq(“whenNotMatched”, “insert”));So here only, when we iterate this AggregateIterable result, the data is updating… for only few records of Employee it working\nIf we consider all records(5000+) of Employee we are getting below error:Exception in thread “main” com.mongodb.MongoCommandException: Command failed with error 292 (QueryExceededMemoryLimitNoDiskUseAllowed): ‘Exceeded memory limit for $group, but didn’t allow external sort. Pass allowDiskUse:true to opt in.’ on server xx-shard-00-01.mongodb.net:27017. The full response is {“operationTime”: {“$timestamp”: {“t”: 1615971176, “i”: 2}}, “ok”: 0.0, “errmsg”: “Exceeded memory limit for $group, but didn’t allow external sort. Pass allowDiskUse:true to opt in.”, “code”: 292, “codeName”: “QueryExceededMemoryLimitNoDiskUseAllowed”, “$clusterTime”: {“clusterTime”: {“$timestamp”: {“t”: 1615971176, “i”: 2}}, “signature”: {“hash”: {“$binary”: “XXXXXX=”, “$type”: “00”}, “keyId”: {“$numberLong”: “6880913173017264129”}}}}Tried adding allowDiskUse:true to the Aggregation Pipeline, still same issue.Question here:\nOnce we get the result object from the aggregation pipeline… do we need to compulsory iterate the result(AggregateIterable) to update the CDC changes?Memory issue code identified:\nArrayList results = new ArrayList();\nfor (Document document : result) {\nresults.add(document);\n}\nAs this above code is causing the memory issue, so is their any other alternative for processing result(AggregateIterable) to update the DB?Iteration of result(AggregateIterable) is mandatory? to update the DB?Observation: When we are not reading this result object, DB updates are not working.\nSo need what is the best option to read this result(AggregateIterable)", "username": "Srikanth_Kalakanti" }, { "code": "allowDiskUse:trueAggregateIterable result", "text": "Hello @Srikanth_Kalakanti, please clarify these:Tried adding allowDiskUse:true to the Aggregation Pipeline, still same issue.What is the issue - the error. 
Please post the error message from the aggregation after adding allowDiskUse:true to the pipeline.Question here:\nOnce we get the result object from the aggregation pipeline… do we need to compulsory iterate the result(AggregateIterable) to update the CDC changes?You are getting a result from the aggregation as AggregateIterable result. I am not able to understand what you mean by: “do we need to compulsory iterate the result(AggregateIterable) to update the CDC changes?”What is it you are updating? It is not clear what you are trying to say. Please explain clearly what is it you are trying and what is the problem.Also, please format the code and error messages properly.", "username": "Prasad_Saya" }, { "code": "allowDiskUse:true", "text": "Please find the error message from the aggregation after adding allowDiskUse:true to the pipeline.Exception in thread “main” java.lang.OutOfMemoryError: GC overhead limit exceededMar 18, 2021 7:15:14 AM com.mongodb.diagnostics.logging.JULLogger log\nINFO: Exception in monitor thread while connecting to server xxx-shard-00-04.xxxx.mongodb.net:27017\ncom.mongodb.MongoException: java.lang.OutOfMemoryError: GC overhead limit exceeded\nat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:138)\nat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117)\nat java.lang.Thread.run(Thread.java:748)\nCaused by: java.lang.OutOfMemoryError: GC overhead limit exceeded\nat java.util.Arrays.copyOfRange(Arrays.java:3664)My Question is :\nHere we are giving an input source collection and applying the aggregation pipe line and finally using $merge to a new collection i.e.\nAggregateIterable result = collection.aggregate(pipeline,$merge($destination collection));So if any changes are done in source collection and then run the above aggreagation pipeline and $merge code, then until we iterate the result object the destination collection data is not changing and for this I iterated the result(AggregateIterable) which is causing the GC/memory issue", "username": "Srikanth_Kalakanti" }, { "code": "", "text": "Hi,actually i could not resolve the problem yet.\nI tried to mitigate it raising ram size java parameters and scaling up the machine.\nAlso, i created methods for showing this records on a chart, reducing the projection (therefore the size of data).\nAlso we are considering using elasticSearch as search engine.Thank you for sharing your experiences", "username": "Alessio_Rotatori" }, { "code": "java.lang.OutOfMemoryErrorjava.lang.OutOfMemoryErrorjava.lang.OutOfMemoryError-XX:-UseGCOverheadLimit", "text": "com.mongodb.MongoException: java.lang.OutOfMemoryError: GC overhead limit exceededFrom the Java API documentation, java.lang.OutOfMemoryError is defined as:Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.From a topic on Oracle’s Java web pages - Understand the OutOfMemoryError Exception, there are various causes this error can occur. Each of those causes and possible solutions are explained in the article. Also, the article is related to Java SE 8 version. Your case is: Exception in thread thread_name: java.lang.OutOfMemoryError: GC Overhead limit exceeded:Cause: The detail message “GC overhead limit exceeded” indicates that the garbage collector is running all the time and Java program is making very slow progress. 
After a garbage collection, if the Java process is spending more than approximately 98% of its time doing garbage collection and if it is recovering less than 2% of the heap and has been doing so far the last 5 (compile time constant) consecutive garbage collections, then a java.lang.OutOfMemoryError is thrown. This exception is typically thrown because the amount of live data barely fits into the Java heap having little free space for new allocations.Action: Increase the heap size. The java.lang.OutOfMemoryError exception for GC Overhead limit exceeded can be turned off with the command line flag -XX:-UseGCOverheadLimit .", "username": "Prasad_Saya" } ]
Cause of poor performance - 1.7 million records
2020-11-12T07:44:55.604Z
Cause of poor performance - 1.7 million records
31,932
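Pulling the thread's advice together as a hedged mongo-shell sketch: index the field the report filters on so the $match no longer needs a COLLSCAN, and project and group on the server so far less than the full 1.7 million documents crosses the wire. The field names ts, status, and amount are placeholders for whatever the report actually uses; only the sps collection name comes from the explain output.

```javascript
// Index the filter field so the date-range $match can use an IXSCAN.
db.sps.createIndex({ ts: 1 });

db.sps.aggregate([
  { $match: { ts: { $gte: ISODate("2020-01-01"), $lt: ISODate("2021-01-01") } } },
  { $project: { _id: 0, ts: 1, status: 1, amount: 1 } },   // only the fields the report needs
  { $group: {
      _id: { $dateToString: { format: "%Y-%m", date: "$ts" } },
      total: { $sum: "$amount" },
      count: { $sum: 1 }
  } }
], { allowDiskUse: true });
```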
null
[ "ruby" ]
[ { "code": "", "text": "Hi\nI am creating Ruby on Rails API with mongotDB.\nNeed suggestion for serialize the object any gem for this.\nThanks in Advance.", "username": "hanish_jadala" }, { "code": "", "text": "Hi @hanish_jadala,Need suggestion for serialize the object any gem for this.Can you clarify what it is exactly you’re looking to accomplish? Are you looking to read a document from a MongoDB cluster then serialize the result as JSON and return it via a controller method?The more information you can share about your use case the easier it will be to respond ", "username": "alexbevi" }, { "code": "", "text": "Mongoid supports this via custom fields: https://docs.mongodb.com/mongoid/master/tutorials/mongoid-documents/#custom-fields", "username": "Oleg_Pudeyev" }, { "code": "", "text": "Yes, what you said is right. I want to read document from MongoDB and then serialize the result a JSON and return it via controller method.\nI was looking for any gem which help to accomplish this process. When I use PostgreSQL I use active_model_serializers and it helped in great deal. Same way looking for MongoDB.", "username": "hanish_jadala" }, { "code": "require 'bundler/inline'\n\ngemfile do\n source 'https://rubygems.org'\n gem 'mongoid'\n gem 'mongo'\nend\n\nMongoid.configure do |config|\n config.clients.default = { uri: \"mongodb+srv://...\" }\nend\n\nclass Person\n include Mongoid::Document\n field :name, type: String\nend\n\nPerson.create(name: \"Alex\")\nputs Person.last.to_json\n# => {\"_id\":{\"$oid\":\"605090f9ceb02323d23116e8\"},\"name\":\"Alex\"}\n", "text": "Mongoid includes ActiveModel, which by extension includes ActiveSupport. As a result, Mongoid documents can leverage Active Support Core Extensions including JSON support.For example:Note the result will be emitted as MongoDB Extended JSON (v2).", "username": "alexbevi" }, { "code": "", "text": "Thank for your time.Now Person have 15 columns and I need not return all the 15 column in that case how can I handle it.If I want to some changes on top of column\nEg: Attribute called amount which is float type and I want to return as round(2) on it.\nHow can I do it with mongo.", "username": "hanish_jadala" }, { "code": "to_jsonclass Person\n include Mongoid::Document\n field :name, type: String\n field :a, type: Integer, default: 1\n field :b, type: Integer, default: 2\nend\n\np = Person.create(name: \"Alex\")\nputs p.to_json\n# => {\"_id\":{\"$oid\":\"6050af50ceb023bf7eebe776\"},\"a\":1,\"b\":2,\"name\":\"Alex\"}\nputs p.to_json(only: :name)\n# => {\"name\":\"Alex\"}\nputs p.to_json(except: [:a, :b])\n# => {\"_id\":{\"$oid\":\"6050af50ceb023bf7eebe776\"},\"name\":\"Alex\"}\n", "text": "Now Person have 15 columns and I need not return all the 15 column in that case how can I handle it.Have a look at the documentation for the to_json method for details on how to format the results.For example:If I want to some changes on top of columnSee Oleg’s response regarding Custom Fields.Using a Custom Field you can define how the value is “mongoized” and “demongoized”, or stored and retrieved.", "username": "alexbevi" }, { "code": "", "text": "In this case I have to do the changes on every response. 
Do we have any chance like active-model-serializer, you create a serializer file for model and allow only specific column and also do the calculations on top of the specific column and also add extra keys and in response when we pass model object everything goes according to the serializer file\n…", "username": "hanish_jadala" }, { "code": "active_model_serializersGemfilegem \"active_model_serializers\"\nbundle installrails g serializer postapp/serializers/post_serializer.rbclass PostSerializer < ActiveModel::Serializer\n attributes :id, :title\n def id\n object.id.to_s\n end\nend\napp/controllers/posts_controller.rbclass PostsController < ApplicationController\n before_action :set_post, only: %i[ show edit update destroy ]\n # GET /posts or /posts.json\n def index\n @posts = Post.all\n respond_to do |format|\n # adding the renderer is needed here\n format.json { render json: @posts, root: false }\n end\n end\nhttp://localhost:3000/posts.json_id", "text": "You may be able to just use the active_model_serializers gem directly in your Rails project then.For example, if starting a new application based on the Mongoid Rails Tutorial try the following:The result should show the list of posts with only a string _id and the title.", "username": "alexbevi" }, { "code": "class FrequencyDay\n include Mongoid::Document\n include Mongoid::Timestamps\n field :title, type: String\n field :no_of_day, type: Integer\n field :type_of, type: Boolean\nend\n\nclass FrequencyDaySerializer < ActiveModel::Serializer\n attributes :id, :title, :no_of_day, :type_of, :dummy\n\n def dummy\n \"Dummy\"\n end\nend\n \"response\": {\n \"_id\": {\n \"$oid\": \"60530408203d565918dc80e7\"\n },\n \"created_at\": \"2021-03-18T13:10:56.558+05:30\",\n \"no_of_day\": 6,\n \"title\": null,\n \"type_of\": false,\n \"updated_at\": \"2021-03-18T13:10:56.558+05:30\"\n }\nroot: false\n", "text": "Model:Response:As you can see I have given only id, :title, :no_of_day, :type_of, :dummy\nI can see the time stamps and well can’t see the extra attribute which I have added in serializer.Am using Ruby 3.0.0 and Rails 6.1.3After addedin response things are fine.Thanks you for your time.", "username": "hanish_jadala" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Serializer with MongoDB
2021-03-08T13:00:44.548Z
Serializer with MongoDB
6,888
null
[ "api" ]
[ { "code": "Error: error getting Team information: GET https://cloud.mongodb.com/api/atlas/v1.0/orgs/{ORG_ID}/teams/{TEAM_ID}: 403 (request \"IP_ADDRESS_NOT_ON_ACCESS_LIST\") IP address 3.230.120.28 is not allowed to access this resource.\napinotificationssentinelvcs> curl \\\n --request GET \\\n -H \"If-Modified-Since: Tue, 26 May 2020 15:10:05 GMT\" \\\n https://app.terraform.io/api/meta/ip-ranges\n{\n\t\"api\": [\"75.2.98.97/32\", \"99.83.150.238/32\"],\n\t\"notifications\": [\"52.86.200.106/32\", \"52.86.201.227/32\", \"52.70.186.109/32\", \"44.236.246.186/32\", \"54.185.161.84/32\", \"44.238.78.236/32\"],\n\t\"sentinel\": [\"52.86.200.106/32\", \"52.86.201.227/32\", \"52.70.186.109/32\", \"44.236.246.186/32\", \"54.185.161.84/32\", \"44.238.78.236/32\"],\n\t\"vcs\": [\"52.86.200.106/32\", \"52.86.201.227/32\", \"52.70.186.109/32\", \"44.236.246.186/32\", \"54.185.161.84/32\", \"44.238.78.236/32\"]\n}\n403 (request \"IP_ADDRESS_NOT_ON_ACCESS_LIST\") ", "text": "Dear MongoDB community\nI am deploying an atlas mongo cluster from terraform, and recently I got this issue in a couple of our terraform cloud runs by interaction with atlas mongodb API:The first thing came up to my mind was that atlas API is not accepting incoming connections from terraform cloud hosted runners where I am running the pipelines to setup/update atlas mongodb infrastructure. This due to the IP address from terraform runner is not allowed on atlas API access list.That I tried is keeping in mind the API range list, I’ve added the range gotten from this curl request to my mongo API access list, without success, since the range is wide, sometimes the IP I got from terraform cloud run is not included on the api, notifications, sentinel or vcsNot sure if this is something related from Atlas Mongodb API side, the thing is until two days ago (and for long time) this process on terraform cloud was working well getting connections with Atlas API and I was not getting the 403 (request \"IP_ADDRESS_NOT_ON_ACCESS_LIST\") error until now.I know the terraform IP range list is variable, from time to time, and also I already post a question on terraform hashicorp community and people there says here if it was working and now it doesn’t, perhaps it has to do with some change from MongoDB sideThis seems like a change in your MongoDB organization or the Atlas MongoDB API then, if it used to work and now requires a specific IP allowlist. The error is returned in Terraform but is a response from that service, not Terraform itself.I am really confused about the origin of this error, it looks like Atlas API is not accepting incoming connections from terraform cloud runner (it is a public runner hosted on terraform side) and I have to whitelist the IP address or range of them in Atlas API Access list, but the thing is this range is variable and is not included on the ranges gotten with my curl command.I just wanted to come up here just for the record and know your thoughts about that.\nHas something changed on ATLAS API for incoming connections? 
Why it was working well and suddenly it doesn’t?I will appreciate your thoughts.", "username": "Bernardo_Garcia" }, { "code": "Error: error creating Team information: POST cloud.mongodb.com/api/atlas/v1.0/orgs/5e85b39c70ff62663a3df63a/teams: 403 (request \"IP_ADDRESS_NOT_ON_ACCESS_LIST\") IP address 40.65.198.72 is not allowed to access this resource.\ncurl \\\n -H \"Accept: application/vnd.github.v3+json\" \\\n https://api.github.com/meta\n", "text": "Update:The same is happening with Github action runnersHowever, for the Github actions case, the IP address range used is the same for Azure cloud.\nI can figure it out here for all Github ecosystem, and I can get a quite extensive IP addresses array list for Github actions by doing this requestThe IP address shown in the error is changing every time, I could add all the IP ranges on Atlas API access list via API (the CIDR blocks are 1639 items), but again, not sure why this situation comes up now and not before when it was working well.", "username": "Bernardo_Garcia" }, { "code": "{\n \"detail\" : \"IP address xyz.xy.xyz.xyz is not allowed to access this resource.\",\n \"error\" : 403,\n \"errorCode\" : \"IP_ADDRESS_NOT_ON_ACCESS_LIST\",\n \"parameters\" : [ \"xyz.xy.xyz.xyz\" ],\n \"reason\" : \"Forbidden\"\n}\n3.230.120.28", "text": "Hi @Bernardo_Garcia,I’ve managed to replicate a similar error response, although not from terraform directly.I have attempted a simple curl request to retrieve a list of database users from my IP address which is NOT on the API key access list in Atlas (I have redacted the IP address information in the below response):The error appears to be caused when the requesting client’s IP address is not on the API access list. The documentation linked should provide instructions on how to view the details of an API key including the access list. From your example, can you ensure the IP 3.230.120.28 is on the Access List of the API key being used to perform the request?You can perhaps add a IP range using CIDR notation to the Access List in Atlas for that specific API key that includes the possible IP’s of the client performing the request if you wish to as well.", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran, thanks for your reply.Indeed, is clear for me the IP ranges for terraform and Github hosted runners are not being whitelisted at the Atlas API access list, and in this case they should.\nI have enabled the API access list for my organization and even I added a bunch of those ranges arrays on it. And of course, I would have to add the entire ranges to make them works, but the thing is, I want to share with you the following particular situation:Until some days ago I didn’t need to whitelist the terraform and Github runners IP ranges on the Atlas API access list. 
My pipelines were working well for quite a time in that way, without atlas asking me for this.Why suddenly Atlas API is denying me access now to my terraform and Github runners, but before it was allowing my runners to interact with it in a normal way?I think it should be some internal stuff from Atlas API (?)I have a hypothesis that I would like to share with you:Last March 1, I accidentally delete my atlas cluster from an automatic process doing some tests from my local machine.\nMy atlas cluster is created via terraform, and I am running a terratest process to verify my infrastructure (atlas mongoDB cluster included) is being created in a proper way and after that, terratest delete the test environment created for this tests.\nMy mistake was to point from terratest to the current cluster in use and not to the test environment created in runtime, it was unfortunate indeed.Then my hypothesis is:Is Atlas MongoDB team/platform/process blacklisting the terraform hosted runners IP ranges and Github actions hosted runners IP ranges I have been using as a protection measure?\nI mean perhaps they could be thinking this unfortunate situation from my side was an attack and not something from me as an owner of the cluster?I am wondering this since from more than 1 year ago I never did need to whitelist terraform and GitHub runners IP ranges on the Atlas API access list for my organization. And after this situation, Atlas is simply asking me for doing it.I wanted to tell you this, perhaps you can help me to think about it.\nI mean for me is quite strange, why Atlas API does not demanded to whitelist the Ip ranges from the beginning and now it does it, just after this incident?If this situation (my hypothesis) ending up affirmative … is there a chance to go with the support team, explain the situation, and perhaps reverts this behavior?Actually, my atlas cluster is within an M10 plan, but the support for it is within the basic Free plan.\nJust in case, what could you suggest to me to proceed in this case?", "username": "Bernardo_Garcia" }, { "code": "3.230.120.28apinotificationssentinelvcs3.230.120.283.230.120.283.230.120.283.230.120.28", "text": "Hi @Bernardo_Garcia,The IP address shown in the error is changing every timeI was presuming due to the above statement that there was a chance that the outgoing IP’s the client was changing were not on your Atlas API access list. This would also line up with the 403 error you are getting as a response.I have enabled the API access list for my organization and even I added a bunch of those ranges arrays on it.I understand you have stated you have added a bunch of the ranges arrays onto the Atlas API access list. However, I cannot see 3.230.120.28 in the response within any of the arrays. Please note, I am checking the api , notifications , sentinel or vcs arrays but please let me know if I am looking at the wrong array.I wanted to tell you this, perhaps you can help me to think about it.\nI mean for me is quite strange, why Atlas API does not demanded to whitelist the Ip ranges from the beginning and now it does it, just after this incident?To better troubleshoot the issue, please provide the following information:Look forward to hearing from you.\nJason", "username": "Jason_Tran" }, { "code": "3.230.120.28apinotificationssentinelvcsapinotificationssentinelvcs3.230.120.283.230.120.283.230.120.283.230.120.283.230.120.28", "text": "Hi @Jason_Tran, Thanks for getting back to me. 
I will try to provide as much info I can.I understand you have stated you have added a bunch of the ranges arrays onto the Atlas API access list. However, I cannot see 3.230.120.28 in the response within any of the arrays. Please note, I am checking the api , notifications , sentinel or vcs arrays but please let me know if I am looking at the wrong array.Indeed the ip ranges terraform runners use are not within the request results on api , notifications, sentinel, or vcs arrays.\nTerraform guys also highlight this factTo better troubleshoot the issue, please provide the following information:I tried to add it but it does not work in that way, as we have been checking, terraform runners IPs range change every run to a different IP address. So does not make sense to add just that particular IP 3.230.120.28 in my Atlas API access list, since the next run the client will use another one with a different range even.Confirm the usage of server / client with outgoing 3.230.120.28 . Is the host with outgoing IP 3.230.120.28 a machine you’re currently managing on your own infrastructure? Is it an outgoing address from Terraform Cloud that may be performing the request to Atlas?It is an outgoing address from Terraform Cloud hosted runners, its perform the request to Atlas. Those runners are the public runners people use when people don’t pay Terraform selff-hosted runners in an enterprise planWhether or not the IP 3.230.120.28 in the response has changed each time you attempt to perform a request from the same client.The IP for the terraform runner change every attempt I run my checks from there.I wanted to tell you about the deletion situation because I am still wondering, why Atlas was not requiring me to whitelist my terraform runners before (almost along 1 year) and now it does?", "username": "Bernardo_Garcia" }, { "code": "", "text": "Thank you for providing me with that information @Bernardo_Garcia,One thing that I can think of that could be a possible reason for why it was working at one point (without requiring IP whitelisting against the IP) is that the Require IP Access List for Public API setting within your Organization Settings page was toggled to On (from Off).Now, we can check this by going to the Organization Activity Feed section of your Atlas Org since you have stated this is an Organization API key being used.You can filter for the following events:Check out the below example for the above events:\n\nimage816×350 30.3 KB\nThere should then be a column on the right hand side called Creation Info where you can see when these or if these settings were changed and by whom.Look forward to hearing from you.\nJason", "username": "Jason_Tran" }, { "code": "", "text": "One thing that I can think of that could be a possible reason for why it was working at one point (without requiring IP whitelisting against the IP) is that the Require IP Access List for Public API setting within your Organization Settings page was toggled to On (from Off).Indeed I had to enable IP access list for public API (it was off) because of this error. I mean by the time I got this error from terraform and github actions runners it was off. 
Are you telling me if I put it to off again, the public API will accept any incoming connections (including my terraform runners)?", "username": "Bernardo_Garcia" }, { "code": "\"error\" : 403, \"errorCode\" : \"IP_ADDRESS_NOT_ON_ACCESS_LIST\" \"error\" : 403, \"errorCode\" : \"ORG_REQUIRES_ACCESS_LIST\"API Key Whitelist Entry Added175.30.100.20 has been added to the access list for the API Key with public key ABCDEFGH.", "text": "Hi @Bernardo_Garcia,Indeed I had to enable IP access list for public API (it was off) because of this error. I mean by the time I got this error from terraform and github actions runners it was off.Thanks for getting back to me with that info. I have done some testing with an Organization API key. Please see my below test cases and results:With the Require IP Access List for Public API setting configured to OFF and having 0 entries in the API Access List, I am able to perform the request as per normal.With the Require IP Access List for Public API setting configured to OFF and having an entry in the API Access List where the IP does not include the outgoing IP of the client performing the request, I get a response with the following error code & error:\n\"error\" : 403,\n \"errorCode\" : \"IP_ADDRESS_NOT_ON_ACCESS_LIST\"With the Require IP Access List for Public API setting configured to ON and have 0 entries in the API Access List, I get a response with the following error code & error:\n \"error\" : 403,\n \"errorCode\" : \"ORG_REQUIRES_ACCESS_LIST\"Are you telling me if I put it to off again, the public API will accept any incoming connections (including my terraform runners)?From my number 2 test case, if it were to be turned off I would assume you currently have some entries against the Access List so you would still hit the same error. If you have a test organization and environment where you can have this setting off with 0 entries, it would probably be best to try it out with this before attempting to make any changes on production.Do you know if there were any entries in the Access List when the setting was OFF? If there were 0 entries an an entry was then added to the Access List with the Setting OFF, It would line up with 2. in my test case above.You can filter to see if the Access List for a particular key was added to by filtering for the API Key Whitelist Entry Added event in your Organization Activity Feed.An example of this entry in the feed would look like:\n175.30.100.20 has been added to the access list for the API Key with public key ABCDEFGH.Best Regards,\nJason", "username": "Jason_Tran" }, { "code": "curlcurl#!/bin/bash\n\nip_addresses=(\"13.64.0.0%2F16\" \"13.65.0.0%2F16\" \"...\")\n\nlen=${#ip_addresses[@]}\nfor ((i=0; i<=$len; i++))\ndo\n echo ${ip_addresses[$i]}\n curl --user \"{PUBLIC_KEY}:{PRIVATE_KEY}\" --digest --include \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --request DELETE \"https://cloud.mongodb.com/api/atlas/v1.0/orgs/{ORG_ID}/apiKeys/{API_KEY_ID}/accessList/${ip_addresses[$i]}\"\ndone\n", "text": "@Jason_Tran It makes sense that you’ve mentioned. 
I was playing around last March 9th with the API in order to create restore jobs via curl command from my local machine, (such as this interaction question from my side can proof it).\nWas in that time when I had to enable API access list (not because the error with terraform runners - I was wrong, sorry - ) in order to allow my home IP address to execute the curl request to create restore jobs.\nSo after that when I executed my pipelines from terraform and github actions, indeed according to the three cases you’ve mentioned, that is why the system is requiring from me to allow the IPs ranges, because from that time my API Access list has been activated and with entries.I am going to disable it, but now I have 1700 entries approx … Do you know how can I delete them in a bulk way via API?\nI just found the example request to delete one entryUPDATE:@Jason_Tran It works now.", "username": "Bernardo_Garcia" }, { "code": "", "text": "Really glad to hear that it works now @Bernardo_Garcia.Thanks for also posting your own solution to the removal of IP entries using the loop.", "username": "Jason_Tran" }, { "code": "", "text": "Thanks for your support @Jason_Tran, ", "username": "Bernardo_Garcia" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas API is not accepting request from terraform cloud IaC runners
2021-03-11T20:08:17.648Z
Atlas API is not accepting request from terraform cloud IaC runners
13,107
null
[ "java", "connecting" ]
[ { "code": "com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@4f3e65f4. Client view of cluster state is {type=REPLICA_SET, servers=[{address=peeringvpc-shard-00-01.hbtm8.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketTimeoutException: connect timed out}}, {address=peeringvpc-shard-00-02.hbtm8.mongodb.net:27017,.....exception={`com.mongodb.MongoSocketOpenException: Exception opening socket`}, caused by {java.net.SocketTimeoutException: connect timed out}}]\n\tat com.mongodb.internal.connection.BaseCluster.createTimeoutException(BaseCluster.java:407)\n\tat com.mongodb.internal.connection.BaseCluster.selectServer(BaseCluster.java:118)\n\tat com.mongodb.internal.connection.AbstractMultiServerCluster.selectServer(AbstractMultiServerCluster.java:52)\n\tat com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:137)\n\tat com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:95)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:266)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:170)\n\tat com.mongodb.client.internal.MongoIterableImpl.execute(MongoIterableImpl.java:135)\n\tat com.mongodb.client.internal.MongoIterableImpl.iterator(MongoIterableImpl.java:92)\n", "text": "JDK: openjdk version “11.0.10” 2021-01-19\nMongoclient version: 4.1.1\nMongo database used in Atlas: 4.0.23\nDeployed in: AWSPlease help in understanding what is causing the issue and resolving it.\nBelow is the exception seen in log as soon as the java application starts:", "username": "Melvin_George" }, { "code": "", "text": "This is resolved.\nThe route tables: Had to use route table with the proper subnet associated, not the one with nat.\nThe security groups: Had to update outbound rules with CIDR of atlas cluster.", "username": "Melvin_George" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoTimeoutException on Mongo 4.0 in Atlas with AWS EC2 using 4.1.1 mongo client
2021-03-16T16:50:17.066Z
MongoTimeoutException on Mongo 4.0 in Atlas with AWS EC2 using 4.1.1 mongo client
5,593
null
[ "aggregation" ]
[ { "code": " {\n \"title\": [\n {\"language\":\"en\",\"value\":\"Car\"},\n {\"language\":\"es\",\"value\":\"Coche\"}\n ]\n }\ndb.inventory.aggregate(\n [\n {\n $project: {\n title: { $arrayToObject: \"$title\" }\n }\n }\n ]\n)\n", "text": "I have documents with an structure similar to this (additional fields, but the structure of the title field is exactly the same)If I try to run this aggregation pipeline in atlas:I get an ‘unknown error’ message.Any idea? Is it broken, or what i’m trying to do is not supported?The documentation here: https://docs.mongodb.com/manual/reference/operator/aggregation/arrayToObject/shows a very similar case, so this is supposed to work.Any help would be appreciated.", "username": "CarlosDC" }, { "code": "$arrayToObject[ [ \"item\", \"abc123\"], [ \"qty\", 25 ] ]\nkvkv[ { \"k\": \"item\", \"v\": \"abc123\"}, { \"k\": \"qty\", \"v\": 25 } ]\n$arrayToObject{ $arrayToObject: <expression> }\n<expression>$arrayToObjectdb.inventory.aggregate([\n {\n $project: {\n title: {\n $arrayToObject: {\n $map: {\n input: \"$title\",\n in: [\"$$this.language\", \"$$this.value\"]\n }\n }\n }\n }\n }\n])\n", "text": "Hello @CarlosDC,The documentation here: https://docs.mongodb.com/manual/reference/operator/aggregation/arrayToObject/shows a very similar case, so this is supposed to work.You have to read the instruction provided on top of the documentation $arrayToObject,Converts an array into a single document; the array must be either:– OR - $arrayToObject has the following syntax:The <expression> can be any valid expression that resolves to an array of two-element arrays or array of documents that contains “k” and “v” fields.I am sure you are clear with above provided instruction in $arrayToObject.For your case you can use $map to reconstruct array to array of two-element arrays where the first element is the field name, and the second element is the field value and than convert to object,", "username": "turivishal" }, { "code": "", "text": "Thanks so much for your help Vishal! I would add that I didn’t suspect that ‘k’ and ‘v’ were like… fixed constants that one needs to use. My impression when reading the documentation was that any format of [ { “key1”:“value1”}, {“key2”:“value2”}… ] would work.", "username": "CarlosDC" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$arrayToObject not working for me in Atlas
2021-03-17T16:51:56.306Z
$arrayToObject not working for me in Atlas
2,809
null
[ "aggregation" ]
[ { "code": "$redact: {\n $cond: {\n if: { $gt: [ { $size: { $setIntersection: [ \"$tags\", userAccess ] } }, 0 ] },\n then: \"$$DESCEND\",\n else: \"$$PRUNE\"\n }\n }\n$redact: {\n $cond: {\n 'if': {\n $gt: [\n {\n $size: {\n $setIntersection: [\n '$Programmes.Security.AccessTeams',\n [\n 'Lorem1'\n ]\n ]\n }\n },\n 0\n ]\n },\n then: '$$DESCEND',\n 'else': '$$PRUNE'\n }\n}\n{\n \"_id\": {\n \"$oid\": \"\"\n },\n \"PXXXXXID\": \"Lorem\",\n \"PXXXXXDescription\": \"Lorem\",\n \"PXXXXXXXs\": [\n {\n \"PXXXXXXID\": \"Lorem\",\n \"Security\": {\n \"DXXXXXXX\": \"Lorem\",\n \"AccessTeams\": [\n \"Lorem1\"\n ]\n }\n },\n {\n \"PXXXXXXID\": \"Lorem\",\n \"Security\": {\n \"DXXXXXXX\": \"Lorem\",\n \"AccessTeams\": [\n \"Lorem2\"\n ]\n }\n }\n ],\n \"UXXXXXXXPXXXX\": {\n \"UXXXID\": \"Lorem\",\n \"Security\": {\n \"DXXXXXXX\": \"Lorem\",\n \"AccessTeams\": [\n \"Lorem1\"\n ]\n }\n },\n \"Security\": {\n \"DXXXXXXX\": \"Lorem\",\n \"OwningTeam\": \"Lorem\",\n \"AccessTeams\": [\n \"Lorem1\"\n ]\n }\n}\n", "text": "Hello,I´m using db version 4.2.12.I’m trying to use the $size and $setIntersection just like in the $redact example on mongo documentation:but it gives an error \"The argument to $size must be an array but was of type null.I have a Security Object at root level with a string array field inside (just like the example). The Security object is also inside an Embedded array of objects of which I would like to return only the objects that intersect with the Security Object string array. Can you help me please?My Code:Document Example:Thank you", "username": "Vasco_Pedro" }, { "code": "$PXXXXXXXs.Security.AccessTeams$Programmes.Security.AccessTeamsProgrammes.Security.AccessTeams : 1", "text": "Hi @Vasco_Pedro,Looking at the document example the AccessTeams are on level $PXXXXXXXs.Security.AccessTeams , is tgat the same as $Programmes.Security.AccessTeams.I would recommend to project just Programmes.Security.AccessTeams : 1 and see if some documents produce a null value. If this happens use $ifNull and switch to empty arrays.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "After $project test there are no null values on $Programmes.Security.AccessTeams.But if I try $ifNull like you said:{\n“Programmes.Security.AccessTeams”: { $ifNull:\n[\"$Programmes.Security.AccessTeams\" , ] }\n}Then AccessTeams shows as an array, of string arrays.Thank you", "username": "Vasco_Pedro" }, { "code": "", "text": "@Vasco_Pedro, thanks for the update.Hope it helped.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Still can’t replicate $redact “$tags” example on mongo documentation.Neither debug or similar approaches worked.", "username": "Vasco_Pedro" }, { "code": "", "text": "@Pavel_Duchovny thank you for your reply.\nI have one question related to that. If we want to always show one sub-document that doesn’t need the security. 
How we can do that?Thank you", "username": "Alexandre_Barreto" }, { "code": "", "text": "@Alexandre_Barreto,You mean that you don’t want it to be prune?I guess you can do a nested $cond in the else section to check if its the needed document and prune in case it’s not…Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n\t“Field1”: “P001”,\n\t“Field2”: “Lorem”,\n\t“Field3”: “Lorem”,\n\t“Field4”: [\n\t{\n\t\t“SubField1”: “Lorem”,\n\t\t“SubField2”: “Lorem”,\n\t\t“SubField3”: “Lorem”,\n\t\t“Security”:{\n\t\t\t“Access”:“Role3”\n\t\t\t“AccessArray”:[“Role1”,“Role2”]\n\t\t}\n\t},\n\t{\n\t\t“SubField1”: “Lorem”,\n\t\t“SubField2”: “Lorem”,\n\t\t“SubField3”: “Lorem”,\n\t\t“Security”:{\n\t\t\t“Access”:“Role3”\n\t\t\t“AccessArray”:[“Role1”,“Role2”]\n\t\t}\n\t}],\n\t“FieldWithoutSecurity”:{\n\t\t“Sub_Field1”: “Lorem”,\n\t\t“Sub_Field2”: “Lorem”\n\t}\n\t“Security”:{\n\t\t“Access”:“Role3”\n\t\t“AccessArray”:[“Role1”,“Role2”]\n\t}\n}\ndb.Collection.aggregate([\n{\n$redact:{\n\t$cond: {\n\t\tif: {\n\t\t\t$or: [\n\t\t\t{\n\t\t\t\t$gt: [\n\t\t\t\t{\n\t\t\t\t\t$size:\n\t\t\t\t\t{\n\t\t\t\t\t\t$setIntersection:\n\t\t\t\t\t\t[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t“$cond”: {\n\t\t\t\t\t\t\t\t“if”: {\n\t\t\t\t\t\t\t\t\t“$ifNull”: [\"$Security.AccessArray\",false]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t“then”: “$Security.AccessArray”,\n\t\t\t\t\t\t\t\t“else”: [\"\"]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t[“Role1”]\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}, 0\n\t\t\t\t]\n\t\t\t},\n\t\t\t{$in: [\"$Security.Access\", [“Role1”]]}\n\t\t\t]\n\t\t},\n\t\tthen: “$$DESCEND”,\n\t\telse: “$$PRUNE”\n\t}\n}}])\n", "text": "Let try to explain.I have this document with this structure:Then I have this redact:I want to always show the subDocument “FieldWithoutSecurity”, but with this redact that field it’s never returned.", "username": "Alexandre_Barreto" }, { "code": "", "text": "@Pavel_Duchovny\nAny suggestion?", "username": "Alexandre_Barreto" }, { "code": "“if”: {\n\t\t\t\t\t\t\t\t\t“$ifNull”: [\"$Security.AccessArray\",false]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t“then”: “$Security.AccessArray”,\n\t\t\t\t\t\t\t\t“else”: [\"\"]\n[{$redact: {\n $cond: {\n 'if': {\n $or: [\n {\n $gt: [\n {\n $size: {\n $setIntersection: [\n {\n $cond: {\n 'if': {\n $ifNull: [\n '$Security.AccessArray',\n false\n ]\n },\n then: '$Security.AccessArray',\n 'else': ['Role1']\n\n }\n },\n [\n 'Role1'\n ]\n ]\n }\n },\n 0\n ]\n },\n {\n $in: [\n '$Security.Access',\n [\n 'Role1'\n ]\n ]\n }\n ]\n },\n then: '$$DESCEND',\n 'else': '$$PRUNE'\n }\n}}]\n{ _id: \n { _bsontype: 'ObjectID',\n id: <Buffer 60 2e 41 d4 2e ac 2d bb 54 e2 c4 ab> },\n Field1: 'P001',\n Field2: 'Lorem',\n Field3: 'Lorem',\n Field4: \n [ { SubField1: 'Lorem',\n SubField2: 'Lorem',\n SubField3: 'Lorem',\n Security: { Access: 'Role3', AccessArray: [ 'Role1', 'Role2' ] } },\n { SubField1: 'Lorem',\n SubField2: 'Lorem',\n SubField3: 'Lorem',\n Security: { Access: 'Role3', AccessArray: [ 'Role1', 'Role2' ] } } ],\n FieldWithoutSecurity: { Sub_Field1: 'Lorem', Sub_Field2: 'Lorem' },\n Security: { Access: 'Role3', AccessArray: [ 'Role1', 'Role2' ] } }\n", "text": "Hi @Alexandre_Barreto,So I understand that fields without “Security” field will endup in “else” condition of:But in else you return an empty array which will get Prune. 
You should return the “filtered” value to allow them to be shown:This will result in the desired document I think:Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "2 posts were split to a new topic: Trying to use $redact to do cell level securty", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Evaluate Access at Every Document Level
2021-02-11T18:49:32.191Z
Evaluate Access at Every Document Level
3,161
https://www.mongodb.com/…d_2_1024x70.jpeg
[ "performance", "monitoring" ]
[ { "code": "", "text": "Hello,This is a really beginner question. During peak usage of around 1k users, our mongodb server CPU jumped to somewhere around 80% - 180%.\nWe’re running on a 6-core CPU and 8 GB of RAM.I checked the mongodb.log and it’s filled with this log:\nscreen1212×83 57.1 KBCould this be the cause of the high CPU usage? If not how do I about finding the cause of this?Thank you very much.", "username": "Thearith_Sa" }, { "code": "createdDate", "text": "Welcome to the MongoDB Community @Thearith_Sa!This query in your log example is performing a collection scan of 535,658 docs in order to do an in-memory sort by createdDate and return 10 documents. The in-memory sort is a likely cause for CPU load if there are many of these queries running concurrently.This is definitely a case where you want to Use Indexes to Sort Query Results and Explain Results to understand query execution.This may not be the sole cause of your CPU load, but once you’ve addressed common underperforming queries you’ll have a better view of what slow operations remain.Regards,\nStennie", "username": "Stennie_X" } ]
MongoDB CPU usage spiked to over 100%
2021-03-17T07:39:27.162Z
MongoDB CPU usage spiked to over 100%
7,890
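A sketch of the "use indexes to sort" advice from the reply above, assuming the collection behind the logged query is called events (the log excerpt does not name it):

    // descending index on createdDate so the sort + limit can be served from the index
    db.events.createIndex({ createdDate: -1 })
    // re-check the plan: the COLLSCAN and in-memory SORT stages should be gone
    db.events.find({}).sort({ createdDate: -1 }).limit(10).explain("executionStats")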
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Is it possible to make relationship b/w two collections which have different partition keys.If Yes how?What should be the approach if I want to sync two collections with different partition keys.", "username": "shawn_batra" }, { "code": "null_id", "text": "Cross posting the reply from the Github issue:Each partition value will end up in a different Realm instance. Any relationships that you set up between the collections that fall into different Realm instances will be null locally. If your app needs to keep references between different partitions, then you should not setup relationships and instead use the _ids of the objects and look them up manually. Be aware though that Realm will not impose any referential guarantees there, so it’ll always be possible to end up in a situation where an object from Realm 1 points to an id that is supposed to exist in Realm 2, but doesn’t.You can read up on how partitioning works in the docs: https://docs.mongodb.com/realm/sync/partitioning/.", "username": "nirinchev" }, { "code": "", "text": "@nirinchev one more question: What is the correct way to sync data in local instance of realm in electron js", "username": "shawn_batra" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm syncing collections with different partition keys
2021-03-17T19:19:31.421Z
Realm syncing collections with different partition keys
1,995
null
[ "aggregation", "queries" ]
[ { "code": "{\n field1:111,\n field2:222,\n nested:{\n field3:333,\n field4:444\n }\n}\n$group$avgnestednested.field3nested.field4", "text": "In my document, I have a couple of fields in a nested document that I want to aggregateI need the output of $group to be in the similar nested structure. I need their $avg values to be accumulated in the nested document itself.\nWhen I try to use the field name as nested.field3 and nested.field4, it throws error.The field name ‘nested.field3’ cannot contain ‘.’Is there any operator that will let me do this?PS: I dont want to add a project stage to do this.", "username": "Dushyant_Bangal" }, { "code": "", "text": "it throws errorIt would help us help you if you could share the error you are having.", "username": "steevej" }, { "code": "", "text": "Its the standard error:The field name ‘nested.field3’ cannot contain ‘.’Its definitely not allowed this way. So I need to know if there is any operator that does it, or its not possible in group stage at all.", "username": "Dushyant_Bangal" }, { "code": "", "text": "Can you post what you tried?", "username": "steevej" }, { "code": "nested.field3nested.field4db.col.aggregate([{\n $group:{\n _id:null,\n \"nested.field3\":{ $avg:\"$nested.field3\"}\n }\n}])\ndb.col.aggregate([{\n $group:{\n _id:null,\n nested:{field3:{ $avg:\"$nested.field3\"}}\n }\n}])", "text": "Sorry, I thought I was clear in the question.When I try to use the field name as nested.field3 and nested.field4 , it throws error.Here’s what I have tried:and", "username": "Dushyant_Bangal" }, { "code": "\"nested.field3\": { $avg: \"$nested.field3\" }\"The field name 'nested.field3' cannot contain '.'\"nested: { field3: { $avg: \"$nested.field3\" } }\"The field 'nested' must be an accumulator object\"db.col.aggregate([\n {\n $group:{\n _id: null,\n nested_field3: { $avg: \"$nested.field3\" },\n nested_field4: { $avg: \"$nested.field4\" }\n }\n },\n {\n $project: {\n nested: { field3: \"$nested_field3\", field4: \"$nested_field4\" }\n }\n }\n])", "text": "The usage of the field name representation for the computed value in group aggregation cannot be of both these forms, and there are errors as seen below:\"nested.field3\": { $avg: \"$nested.field3\" }\nError: \"The field name 'nested.field3' cannot contain '.'\"nested: { field3: { $avg: \"$nested.field3\" } }\nError: \"The field 'nested' must be an accumulator object\"You can work with this as follows, to get the desired result:", "username": "Prasad_Saya" } ]
Nested accumulator in Aggregation Grouping
2021-03-16T14:05:50.552Z
Nested accumulator in Aggregation Grouping
8,851
null
[ "mongodb-shell" ]
[ { "code": "", "text": "I am missing something very fundamental to working within the shell.The shell and server are 4.4.3. I don’t understand why I can declare a VAR for a DB and COLLECTION as one object and use it in a MQL. For exampleMongoDB Enterprise PRIMARY> var col=db.product;\nMongoDB Enterprise PRIMARY> col.find({}).count()\n14027623But I can’t define each component independently.MongoDB Enterprise PRIMARY> var d=db\nMongoDB Enterprise PRIMARY> var c=product\nuncaught exception: ReferenceError: product is not defined :Thanks\n-Dave", "username": "David_Lange" }, { "code": "producttypeof()> typeof(product)\nundefined\n\n> typeof('product')\nstring\nproduct> var product = 'widgets'\n> c=product\nwidgets\nproductdb.getCollection()> var c = 'product'\n> db.getCollection(c).find()\nmongomongo", "text": "var c=product\nuncaught exception: ReferenceError: product is not defined :Hi @David_Lange,Per the error message, the problem is that product in this assignment context is expected to be a variable. You can use the typeof() method to check the type of a value:If product happens to be a variable with a defined value, it can be used as the value for an assignment:To assign the literal string product as a value, you need to use quotes:var c = ‘product’To fetch a collection using a variable, use db.getCollection().For some general tips (and differences between scripted & interactive mongo), see: Write Scripts for the mongo Shell.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Define DB and collection as 2 variables
2021-03-18T02:09:14.030Z
Define DB and collection as 2 variables
3,599
null
[ "replication", "installation" ]
[ { "code": "mongod --bind_ip 0.0.0.0 --oplogSize 128 --replSet rs0 --storageEngine wiredTigerrs.initiate({ _id: 'rs0', members: [ { _id: 0, host: 'mongo0:27017' } ]})rs.status(){ \"operationTime\" : Timestamp(1615929974, 2), \"ok\" : 0, \"errmsg\" : \"Our replica set config is invalid or we are not a member of it\", \"code\" : 93, \"codeName\" : \"InvalidReplicaSetConfig\", \"$clusterTime\" : { \"clusterTime\" : Timestamp(1615929974, 2), \"signature\" : { \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"), \"keyId\" : NumberLong(0) } } }", "text": "Hello,I am trying to deploy a MongoDB replica set. On first deployment I run the command:\nmongod --bind_ip 0.0.0.0 --oplogSize 128 --replSet rs0 --storageEngine wiredTigerThis works fine and after my Render service launches I can initialize my replica set with:\nrs.initiate({ _id: 'rs0', members: [ { _id: 0, host: 'mongo0:27017' } ]})However, if I redeploy this mongo instance it will not rejoin the replica set and give this message:\nrs.status()\n{ \"operationTime\" : Timestamp(1615929974, 2), \"ok\" : 0, \"errmsg\" : \"Our replica set config is invalid or we are not a member of it\", \"code\" : 93, \"codeName\" : \"InvalidReplicaSetConfig\", \"$clusterTime\" : { \"clusterTime\" : Timestamp(1615929974, 2), \"signature\" : { \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"), \"keyId\" : NumberLong(0) } } }I am trying to figure out how to make sure the mongo instance will re join the replica set on restart.", "username": "Sean_Doughty" }, { "code": "", "text": "I am not sure I understand what you do, but it looks like you are trying to run rs.initiate() when you restart. If that is the case you should not. Simply restarting your mongod will make it rejoin its replica set.", "username": "steevej" }, { "code": "rs.initiate()mongod --bind_ip 0.0.0.0 --oplogSize 128 --replSet rs0 --storageEngine wiredTiger", "text": "Hello Steevej,Thanks for your response!\nNo, I’m only running the rs.initiate() on the first deployment and then:mongod --bind_ip 0.0.0.0 --oplogSize 128 --replSet rs0 --storageEngine wiredTiger, on subsequent restarts.", "username": "Sean_Doughty" }, { "code": "", "text": "@Sean_Doughty What is Render ?However, if I redeploy this mongo instance it will not rejoin the replica set and give this message:How is storage managed, is data persisted and remounted when you redeploy?", "username": "chris" }, { "code": "data/db", "text": "Render is a cloud platform: https://render.com/Storage is managed with a persistent disk mounted to data/db, data is maintained between redeploys.", "username": "Sean_Doughty" }, { "code": "", "text": "{ _id: ‘rs0’, members: [ { _id: 0, host: ‘mongo0:27017’ } ]}Using docker I replicate this error if I change the container name. When you redeploy it is not resolving mongo0 to itself like it did when you did the re.initiateHope this can help in your troubleshooting.", "username": "chris" } ]
Mongo Replica Sets on Render
2021-03-16T21:59:39.321Z
Mongo Replica Sets on Render
2,797
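One way to act on the last reply, assuming a single-member set whose stored host name no longer resolves to the node after a redeploy; the host value below is a placeholder for whatever name reliably resolves to the instance on Render:

    // inspect what host the replica set config currently records for this member
    rs.conf().members
    // repoint the member at a name that always resolves to this instance, then force the reconfig
    cfg = rs.conf()
    cfg.members[0].host = "mongo0:27017"
    rs.reconfig(cfg, { force: true })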
null
[ "spring-data-odm" ]
[ { "code": "", "text": "I am trying to save an object in Mongodb which has one property of type Map. The key for the map is my own class (something similar to say Employee having certain attributes). The issue is that when I try to save the object , I am getting an error\n“org.springframework.data.mapping.MappingException: Cannot use a complex object as a key value.”I read multiple details and it may be more of a Spring issue rather than Mongodb but wanted to post this question here just in case there is some solution already?", "username": "ragarwal1_N_A" }, { "code": "util.Map", "text": "Hi,\nIt is more of a spring issue,\nthe keys type must be a MongoSimpleTypes\nIt’s because the keys of a util.Map are mapped to String values.check this link:**[Alexander Bätz](https://jira.spring.io/secure/ViewProfile.jspa?name=laures)**… opened **[DATAMONGO-449](https://jira.spring.io/browse/DATAMONGO-449?redirect=false)** and commented\n\nWhile serializing properties/objects of the type `Map` the keys are serialized by using the `toString()` method.\nThis fails for complex classes even when there is a `Converter<MyComplexClass, String>` for it.\n\nDuring deserialization the string representing the id is deserialized by using the converters, so a workaround is to call the proper converter in the `toString()` method. But this only works for classes that can be converted from/into strings\n\n\n---\n\n**Affects:** 1.0.1\n\n**Issue Links:**\n- [DATAMONGO-242](https://jira.spring.io/browse/DATAMONGO-242) Allow using complex types as Map keys\n (_**\"duplicates\"**_)", "username": "Imad_Bouteraa" } ]
Cannot use a complex object as a key value
2021-03-17T20:33:07.272Z
Cannot use a complex object as a key value
5,003
null
[]
[ { "code": "", "text": "How to configure Mongodb server parameters like wiredTigerConcurrentReadTransactions and wiredTigerConcurrentWriteTransactions in MonGoDb Atlas version?What is the default value of these parameters in Atlas? Can we increase the count by increasing the cluster tier?We have big number of concurrent read/write transactions. Seems like some transactions are aborting. May be related to above parameters default count.", "username": "Basil_Abraham" }, { "code": "wiredTigerConcurrentReadTransactionswiredTigerConcurrentWriteTransactions", "text": "Hi @Basil_Abraham,Welcome to the MongoDB community forum!Unfortunately you will not be able to directly configure the wiredTigerConcurrentReadTransactions and wiredTigerConcurrentWriteTransactions parameters as setParameter is one of the unsupported commands in Atlas.However, if you have a dedicated Atlas cluster (M10+) it may be possible to tune some parameters for your use case:Please contact Atlas support if your use case requires access to a command not currently supported by the existing Atlas database user privileges.If you do choose to contact support, I would recommend providing further information regarding the issues you are attempting to address with the configuration of those parameters as they may provide alternate suggestions based on your use case or cluster metrics.Hope this helps,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @ Jason_Tran, Thanks for your reply.Did below parameter do the job for parameters wiredTigerConcurrentReadTransactions and `wiredTigerConcurrentWriteTransactions in atlas ?\nimage727×139 9.98 KB\n", "username": "Basil_Abraham" }, { "code": "wiredTigerConcurrentReadTransactionswiredTigerConcurrentWriteTransactions", "text": "Hi @Basil_Abraham,Did below parameter do the job for parameters wiredTigerConcurrentReadTransactions and `wiredTigerConcurrentWriteTransactions in atlas ?The Connection Limits for Atlas Clusters are from the net.maxIncomingConnections configuration option. Attempting to change wiredTigerConcurrentReadTransactions and wiredTigerConcurrentWriteTransactions server parameters won’t increase the Atlas connection limit. You’ll need to select the cluster tier that matches the connection limit requirement for your application(s).Hope this helps!\nJason", "username": "Jason_Tran" } ]
Configure MongoDb Server parameters in Atlas
2021-03-16T07:42:56.318Z
Configure MongoDb Server parameters in Atlas
2,845
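For context on those two parameters, the current concurrency-ticket usage can be read from serverStatus on a dedicated cluster member; the field layout shown is the usual 4.x output, and some serverStatus sections may be restricted on Atlas depending on the user's role.

    // read / write concurrency tickets currently out vs. available
    db.serverStatus().wiredTiger.concurrentTransactions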
null
[ "mongoose-odm" ]
[ { "code": "const serverInfoSchema = new Schema ({\n name : String,\n uid : number,\n serverIcon : String,\n owner : number,\n region : String,\n membersNumber : number,\n members : Array,\n roles : Array,\n channel : Array,\n region : String\n})\nconst serverInfoModel = mongoose.model('server-info', serverInfoSchema);\n", "text": "hey, I’m pretty new to MongoDB. So whenever I’m getting a document it keeps duplicating any idea why", "username": "TyFang_XV" }, { "code": "", "text": "Hi there!That’s an odd issue. Let’s see if we can get to the bottom of it. Can you tell me a little more about what is being duplicated? In your code do you call any functions that create a document? Does it happen only when you start up the server or every time you hit a specific endpoint?If you have a GitHub repo you can share, that would also help ", "username": "ado" }, { "code": "", "text": "edited:hey, I was just going through all my files and found a function that is causing this. My code was a mess as I rushed to finish it so didn’t see that. sorry inconvenience", "username": "TyFang_XV" }, { "code": "", "text": "Sounds good! Yeah I’ve been there many many times. Good luck with your project ", "username": "ado" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Document keeps duplicating every time I read the document
2021-03-14T17:45:52.042Z
Document keeps duplicating every time I read the document
2,009
null
[]
[ { "code": "", "text": "I currently have data on my VPS that is from mongodb v3.6.3. If I mongoexport this data and import it onto a server with mongodb on the latest release would it work fine?", "username": "mental_N_A" }, { "code": "", "text": "Try it out. It can work sometimes.The upgrade path from 3.6 to 4.4 is well documented and is the tried and true method.", "username": "chris" } ]
Is it safe for me to use data from mongodb v3.6.3 to latest release?
2021-03-17T20:21:31.069Z
Is it safe for me to use data from mongodb v3.6.3 to latest release?
1,853
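If the dump-and-restore route is preferred over the documented in-place upgrade path, a sketch with placeholder host and database names (mongodump preserves BSON types, which mongoexport's JSON output does not):

    # dump from the 3.6.3 deployment
    mongodump --host old-vps --port 27017 --db mydb --out /backup
    # restore into the deployment running the latest release
    mongorestore --host new-server --port 27017 --db mydb /backup/mydb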
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi everyone, newbie to mongo… I want to add a list of skills to my users. A skill is not only the name of the skill, but also an object with some more info about the skill - how rare is it, the source of the skill, skill rate for this user and so on…At the end - I’ll get a list of skills and I want to find all users with one or more of those skills. I might also want to do some calculation on the skill data in order to give it some grade (but this is not critical, I can do it in the code as well).My question - what is the best structure for keeping those skills - array of objects where the skill name is one of the attributes of the nested object, or an object where each key is the skill name and the value is the skills attributes?What search function should I use?It’s an offline process, so it doesn’t have to be super fast.\nI expect 5K-50K of users in the collection.Thanks! ", "username": "Boaz_Ron" }, { "code": "skills : [\n { skill_name : \"magic\" , strength : 5 } ,\n { skill_name : \"fire maker\" , strength : 10 } \n]\n", "text": "I would go with the attribute pattern:Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.something like:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
User's skills data structure
2021-03-17T16:52:55.208Z
User&rsquo;s skills data structure
2,163
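A sketch of how the attribute-pattern array suggested above can be queried, assuming the collection is named users and using the skill_name/strength field names from the reply:

    // users holding at least one of the requested skills
    db.users.find({ "skills.skill_name": { $in: ["magic", "fire maker"] } })
    // match name and rating together within the same array element
    db.users.find({ skills: { $elemMatch: { skill_name: "magic", strength: { $gte: 5 } } } })
    // multikey index so the offline lookups stay cheap as the collection grows
    db.users.createIndex({ "skills.skill_name": 1 })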
null
[ "app-services-cli" ]
[ { "code": "realm-cli export --app-id=<...> --for-source-controlcannot export within config directory \"...\"config.jsongit add .git commit -m \"...\"git push origin master", "text": "When I run this command from the root of my local repositoryrealm-cli export --app-id=<...> --for-source-controlI getcannot export within config directory \"...\"This is due (I presume) to the fact that this is the directory containing the top-level config.json for the realm app.This doesn’t make sense to me because this is the repository root and the exported files are supposed to be integrated into this repository (actually, mostly overwriting this directory’s contents). If I can’t pull these files into the git repo, how am I supposed to then subsequentlygit add .\ngit commit -m \"...\"\ngit push origin master\n^^^^ or whichever remotes I have configured for this repo which I want to keepThanks in advance,Eric", "username": "Eric_Lightfoot" }, { "code": "", "text": "Has anyone else come across this issue? Am I misusing the realm-cli tool somehow? Or am I misunderstanding it’s use/purpose?This is a key piece of my workflow and it is blocking me from moving forward on some significant work", "username": "Eric_Lightfoot" }, { "code": "", "text": "Did you try: realm-cli import", "username": "Mike_Notta" } ]
Realm-cli cannot export within config directory
2021-02-18T18:21:40.613Z
Realm-cli cannot export within config directory
3,029
https://www.mongodb.com/…_2_1024x294.jpeg
[ "connecting", "server", "installation" ]
[ { "code": "", "text": "I am new to database and have encountered some issues while connecting to mongoDB, I have checked on StackOverflow but still have no clue. This is the installation documentation I followed.\n( https://docs.mongodb.com/manual/tutorial/install-mongodb-on-os-x/ )OS: macOS Catalina version 10.15.7\nmongoDB: MongoDB 4.4 Community Edition\n\nimage1681×484 120 KB\nI got an error status after running the mongoDB server, I’ve tried to install MongoDB 4.2 but the error remains, would be grateful if someone could help me out.Thank you in advance!", "username": "Djdj_Djdj" }, { "code": "", "text": "can you please check logs under below directory/usr/local/var/log/mongodb/", "username": "ROHIT_KHURANA" }, { "code": "", "text": "Hi @Djdj_Djdj !Before you start a mongo session, what you need to do is starting the mongodb services.Try this command:\nbrew services start mongodb-communityTo stop the service you can use this command:\nbrew services stop mongodb-communitNote: Your mongodb-community status is error. You should have started status in order to run the mongo session.", "username": "Fahmi_Hidayat" }, { "code": "", "text": "I have found and resolved the problem, thank you so much!", "username": "Djdj_Djdj" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connect to MongoDB
2021-03-16T21:13:57.051Z
Connect to MongoDB
2,509
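Two commands that usually narrow down the "error" status discussed above; the log path is the default for an Intel-mac Homebrew install, matching the directory given in the first reply:

    # confirm whether the service is started, stopped, or in error
    brew services list
    # read the reason for the failed start from the server log
    tail -n 50 /usr/local/var/log/mongodb/mongo.log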
null
[ "data-modeling", "swift", "indexes" ]
[ { "code": "", "text": "For my iOS app I’ve created custom user data collection for my users. I want to ensure that some of the fields in that collection are unique - like username. My first idea was to call a Realm Function which would first call another function as system user to check if username is taken and if not then write username to custom user data of user that called that function (users can only read and write on partition matching their generated user id).The issue with this solution is that I assume that 2 users could call this function at the same time and it could pass for both of them. Maybe something like unique indexes could help? Is there a way to add unique index on Realm Object fields? Or maybe is there some other solution for ensuring uniqueness in MongoDB Realm?", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "A username property within a Realm Object is just a string. So why can’t you run a query for that string within the collection and if it doesn’t exist, create it?Leveraging a Transaction would be useful:A transaction is a list of read and write operations that Realm treats as a single indivisible operation.Essentially it either all passes or all fails. That would help create uniqueness of the selected user name", "username": "Jay" }, { "code": "setUsername", "text": "So why can’t you run a query for that string within the collection and if it doesn’t exist, create it?That was exactly my idea. But I cannot simply query it with local Realm, because I need to search for this username across all partitions and each user only have access to his own. This is why I’m calling Realm Function setUsername that checks if username is taken and if not then changes username. The only issue I see here is that I don’t know if 2 Realm Functions can run concurrently for 2 users which could cause racing conditions.", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "Hmm. Two questions.If each user has their own user space and user name and there’s no sharing of data, then why require unique user names?It’s very common to have basic user data, such as user names all stored within the same partition and accessible by all users. Have you considered adding that structure? It’s somewhat a denormalization of data but makes tasks like what you’re doing super simple.", "username": "Jay" }, { "code": "", "text": "As you’ve pointed out right now there is no real reason for username to even exist, but I’m considering adding community aspect to my app later in development and I think it’s better for users to create username during register, then make them do it sometime later after introducing said community features.Denormalization aspect aside - would that mean that each user would locally keep Realm with some basic data on all the other users? If that would be let’s say username and some low resolution avatar picture - wouldn’t that Realm file be huge and generate lot’s of sync traffic?", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "would that mean that each user would locally keep Realm with some basic data on all the other users?Yes, and that’s very common practice. This would be useful when you want to, as mentioned, use other users name or avatars to identify messages or post. Or when you have collaborative app where mulltiple users are working on a project. How about a chat app? 
You need to know who you’re chatting with and can look it up via that data.wouldn’t that Realm file be huge and generate lot’s of sync traffic?You’re mileage may vary but generally speaking, no.If you’re adding and removing hundreds of users on a daily basis, maybe. Thinking about the actual numbers, if you store a user name, a small avatar and a user id, that’s a tiny amount of mostly static data; a couple hundred bytes each. Using non-technical round numbers, 10,000 users would be about 5Mb of data.", "username": "Jay" }, { "code": "", "text": "For completeness, you are able to enforce uniqueness of attributes in your collection by adding unique indexes in Atlas: https://docs.atlas.mongodb.com/data-explorer/indexes/", "username": "Andrew_Morgan" }, { "code": "try! realm.write {\n realm.add(userWithDuplicateName)\n}\ndo {\n try realm.write {\n realm.add(userWithDuplicateName)\n }\n} catch let err as NSError {\n print(err.localizedDescription)\n}", "text": "@Andrew_MorganSuper great info! Thanks for that tip.From a MongoDB Realm Swift perspective, if an index is added to a collection and unique is set to true, what’s the expected behavior from the API when an object is added that has the same index?For example a User object that has the user_name property set as an index and unique in Atlas. What happens with this code when the user with that user_name already exists?perhaps it throws so could be captured do a do:catch?", "username": "Jay" }, { "code": "", "text": "Someone can correct me if I’m wrong, but I believe that putting indexes on atlas schemas will only prevent the data from syncing, not from writing it to a local realm. So there is still a need to implement additional logic preventing local writes.", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "Hi @Michael_Macutkiewicz The synchronization of the (duplicate) write to the backend happens asynchronously, and so the write will succeed. When Realm Sync attempts to write the new document to Atlas, it will fail with a duplicate key error and the Object is removed from Realm (and that deletion is synced back to the mobile app(s)) :\nimage994×880 65.7 KBIn this example, I added a unique index on the body of the text messages (you can see that the addition is accepted and synced but a fraction of a second later, the duplicate key is detected and the chat message is removed:\nezgif.com-gif-maker600×619 580 KB", "username": "Andrew_Morgan" }, { "code": "", "text": "Ok. What about scenario where document is updated, not inserted? Will the whole document be deleted? Or will the previous value of the field be restored?", "username": "Michael_Macutkiewicz" }, { "code": "", "text": "@Andrew_MorganBy your image, there appears there’s a notification that goes along with thisI am asking as if the app attempts to add a user that has a username that already exists (per the original question) the new user should be notified of that event so using the observer the proper path?", "username": "Jay" }, { "code": "Chatster", "text": "What you’re seeing is that sync removes the new object when the Atlas translator realises that the unique index constraint has been broken. That then automatically updates the realm in the iOS app. 
Because the live Realm result set is bound to the UI, the message is removed from the UI automatically. I realise that this isn’t a good fit for what you’re looking to do, but wanted to answer the query about unique indexes. Duplicating a small amount of data into a partition that’s accessible to all is definitely an option – see the Chatster Object/document in this article. There’s still the risk that 2 new users pick the same username at about the same time. A Realm function to set the username (combined with an Atlas unique index) should be a robust solution.", "username": "Andrew_Morgan" } ]
How to ensure uniqueness of certain fields in collection?
2021-03-13T14:01:01.093Z
How to ensure uniqueness of certain fields in collection?
4,433
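As a rough illustration of the "Realm function plus Atlas unique index" approach that closes the thread above, the sketch below shows the server-side half in PyMongo: the unique index makes the database itself reject a second document with the same username, so two concurrent callers cannot both succeed. The connection string, database, collection and field names are placeholders, not values from the thread.

```python
from pymongo import MongoClient, ASCENDING
from pymongo.errors import DuplicateKeyError

# Placeholder URI and namespace -- adjust for your cluster.
client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
users = client["app"]["custom_user_data"]

# The unique index is enforced by the server for every writer,
# which is what removes the race condition discussed above.
users.create_index([("username", ASCENDING)], unique=True)

def claim_username(user_id, username):
    """Try to reserve a username; return True on success, False if taken."""
    try:
        users.insert_one({"_id": user_id, "username": username})
        return True
    except DuplicateKeyError:
        return False
```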
null
[ "node-js" ]
[ { "code": "", "text": "Can anyone help me with my project it’s based on MEAN stack and i have some difficulty in mongodb.Actually i want to use one of the field in my document as a variable in a loop in nodejs i am not able find it anywhere can anyone help me with it please it would be great help.", "username": "Jatin_Kadam" }, { "code": "", "text": "Hello @Jatin_Kadam, welcome to the community!It would be helpful for others to read and respond to your question if you could describe your issue in more detail and elaborate on your doubts. With more information we can try to find an answer.Cheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Yes sure,So i have a field in my documents which has number so i want to use that as my variable in for loop in nodejs to get results accordingly.", "username": "Jatin_Kadam" }, { "code": "for (let step = 0; step < maxCount; step++) {\n // do something here\n}\n", "text": "Hi! If sounds like what you are asking is:Let’s say you have field in your collection called maxCount. And you want a for loop something like this:Is that what you mean? If not, can you provide an example so we can best tailor the answer to your need? Thanx!", "username": "Sheeri_Cabral" }, { "code": "", "text": "Thankyou for your response.But it’s not what i was looking for.I think i am not able to convey my problem properly ,i’ll explain it in detail it would be great if you can help.So we are trying to create a Question Paper generator project using MEAN stack and in that while accepting a question there is a field named “marks” associated with it.So what i was thinking is for eg. inorder to get a 20 marks question paper use this field in forloop to get questions of exactly 20 marks.Is this possible or should i consider a different approach for this.", "username": "Jatin_Kadam" }, { "code": "{ \n title: \"Question Paper 1\",\n marks: 10\n}\n{ \n title: \"Question Paper 2\",\n marks: 20\n}\n{ \n title: \"Question Paper 3\",\n marks: 15\n}\n", "text": "What’s the data type of the ‘marks’ field?Can you show us some sample records? e.g. is it something like:and you want to return only “Question Paper 2”?", "username": "Sheeri_Cabral" }, { "code": "", "text": "mongo850×459 26.9 KB .\nThis is a document in my collection which has a field maxMarks which i have to use so that the no. 
of documents i get has a total of 20 or any other marks that the user inputs.", "username": "Jatin_Kadam" }, { "code": "// let's assume the object is stored in an object called 'document'\nfor (let step = 0; step < document.maxMarks; step++) {\n // do something here\n}\n// let's assume the object is stored in an object called 'document'\nfor (let step = document.minMarks; step < document.maxMarks; step++) {\n // do something here\n}\n", "text": "@Jatin_Kadam Thank you, this is SUPER HELPFUL.So, something like this?or even:Or is it more complex than that?", "username": "Sheeri_Cabral" }, { "code": "", "text": "Thankyou so much that was a great help.I want one more help as i shared a ScreenShot earlier of my document in that there is a filed subject i want to change it to “Multimedia System” from “MSD”\nin all documents i tried aggregation and it shows the result but it does not commits to the change and is back to original can you please helpwith it.", "username": "Jatin_Kadam" }, { "code": "db.collectionName.updateMany( {}, { $rename: { \"Multimedia System\": \"MSD\" }});\n", "text": "Hi @Jatin_Kadam - to rename a field in mongkdb, you can use the $rename operator like this:", "username": "Sheeri_Cabral" }, { "code": "", "text": "Yes,that worked .Thankyou!!", "username": "Jatin_Kadam" } ]
Help for MEAN project
2020-10-16T17:11:48.809Z
Help for MEAN project
2,443
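One point worth separating out from the thread above: $rename changes a field's name, while changing a field's value (for example turning the subject "MSD" into "Multimedia System", as the poster describes) is done with an update and $set. A minimal PyMongo sketch, with the URI and collection name assumed rather than taken from the thread:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
questions = client["question_bank"]["questions"]    # assumed namespace

# $set changes the value of an existing field in every matching document.
result = questions.update_many(
    {"subject": "MSD"},
    {"$set": {"subject": "Multimedia System"}},
)
print(result.modified_count, "documents updated")

# By contrast, $rename changes the name of a field and keeps its value:
# questions.update_many({}, {"$rename": {"oldFieldName": "newFieldName"}})
```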
null
[ "swift", "atlas-device-sync" ]
[ { "code": "do {\n let config = Realm.Configuration()\n let result = try Realm.deleteFiles(for: config)\n\n if result == true {\n print(\"all files have been deleted\")\n } else {\n print(\"deleting of files failed\")\n }\n} catch let error as NSError {\n print(error.localizedDescription)\n}\n", "text": "Anyone know the correct implementation for deleting the local Realm files when using MongoDB Realm Sync in Swift? We want to use this to force a refresh.We can certainly do it at the finder level but since Realm offers that function, we would like to leverage it.The SwiftReference for .deleteFiles shows it’s available but not working for us. We’ve ensured Realm not be currently open on any thread or in another process.Just prints it failed.", "username": "Jay" }, { "code": "", "text": "Hi Jay. What error you have in the catch block?\nCould you check the access rights for the files? Are files actually existing?\nAny info about error’s domain, code and userInfo?", "username": "Pavel_Yakimenko" }, { "code": "deleting of files failed", "text": "Thanks for the response @Pavel_YakimenkoIt doesn’t throw an error. The result var is set to false and printsdeleting of files failedThe is a macOS app, no sandboxing and full access rights to files.My guess is that since the file names and paths are different for Sync vs local, it’s not finding them.So are the files existing? For sure, but again, it only fails for sync’d realms and works correctly for local only realms.", "username": "Jay" }, { "code": "", "text": "I can successfully delete the synced realm on mac (custom and default path) so there could be something else.\nWhat call you are using to open realm?\nAlso why you’re unhappy with the existing realm sync mechanism? Is there something not working for you?", "username": "Pavel_Yakimenko" }, { "code": "import RealmSwift\n\nclass ViewController: NSViewController {\n @IBAction func deleteLocalFilesAction(_ sender: Any) {\n self.deleteLocalRealm()\n }\n\n override func viewDidLoad() {\n super.viewDidLoad()\n }\n\n func deleteLocalRealm() {\n do {\n let config = Realm.Configuration()\n let result = try Realm.deleteFiles(for: config)\n\n if result == true {\n print(\"all files have been deleted\")\n } else {\n print(\"deleting of files failed\")\n }\n } catch let error as NSError {\n print(error.localizedDescription)\n }\n }\n", "text": "@Pavel_YakimenkoWhat call you are using to open realm?I am not opening realm at all as if realm is open, it cannot be deleted. The Realm.delete is a Class function call so is there some kind of initialization that needs to be done?Who said I was unhappy with Sync? We’ve been building on Realm for almost 5 years now and two years into development on our main project. We are heads deep into the project so being unhappy isn’t an option! lol.Here’s the entire projectThere’s no other code in the AppDelegate other than the default and the UI has one button action that calls deleteLocalRealm.", "username": "Jay" }, { "code": "Realm.deleteFiles(for: Realm.Configuration())", "text": "Realm.deleteFiles(for: Realm.Configuration()) will delete the default local Realm, and then it returns false because that file already doesn’t exist. 
To delete the local files for a specific sync Realm you need to pass in the config for that Realm instead.", "username": "Thomas_Goyne" }, { "code": "Realm.deleteFiles(for: Realm.Configuration()) do {\n let app = App(id: Constants.REALM_APP_ID)\n let user = app.currentUser\n let config = user?.configuration(partitionValue: Constants.REALM_PARTITION_VALUE)\n let result = try Realm.deleteFiles(for: config!)\n\n if result == true {\n print(\"all files have been deleted\")\n } else {\n print(\"deleting of files failed\")\n }\n} catch let error as NSError {\n print(error.localizedDescription)\n}\n", "text": "@Thomas_GoyneRealm.deleteFiles(for: Realm.Configuration()) will delete the default local Realm and then it returns false because that file already doesn’t existYes, that’s what that function does as shown and as mentioned, it works correctly for non-sync’d realms because the default.realm file exists. Our expectation was that it would remove the local realm files the same way for either a local only Realm or a Sync’d realm.So that actually lead to the solution, which is now clear but wasn’t before.The file structure between non-sync and sync realms is different with the sync’d realm files having separate files for each partitionRealm Structure1518×904 236 KBTo delete the local files for a specific sync Realm you need to pass in the config for that Realm instead.And there we go; Realm.delete will only delete the files for the partition specified within the passed config, not all files, to force a re-sync which was the original objective.With updated code, it’s working correctly, keeping in mind that if you want a full re-sync, you need to call Realm.delete for each partition, passing in a config.Thanks all!", "username": "Jay" }, { "code": "", "text": "3 posts were split to a new topic: .realm cannot be deleted because it is currently opened", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Realm.deleteFiles Implementation (Swift)
2020-12-05T15:20:51.077Z
Realm.deleteFiles Implementation (Swift)
4,699
null
[ "mongoid-odm" ]
[ { "code": "dependent: :destroy", "text": "using Mongoid and i got a model that has a bunch of associations defined with dependent: :destroy. and a specific primary_key defined (that is not the _id).Now my understanding was that when i call .destroy all of the associations will be destroyed as well (possibly chaining further down). Opposed to calling .delete where i would expect the associations to be untouched.Is that assumption correct?\nBecause i tried to re-create some records. Made a new one, set attributes from the old one (especially my primary_key field used to link the associations), then delete the old one.\nIntention was that the new records keeps every associated record of the former.\nBut something happened here and i can’t even begin to understand what. Lots of the associated data was cascading .destroy while i only called delete on the old record?!?", "username": "Ralf_Vitasek1" }, { "code": "", "text": "Per https://jira.mongodb.org/browse/MONGOID-5060 this is an open issue in Mongoid.", "username": "Oleg_Pudeyev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoid: Calling delete vs. destroy on a record with associations
2021-03-11T10:29:35.603Z
Mongoid: Calling delete vs. destroy on a record with associations
4,472
null
[]
[ { "code": "$meta$count$count$search$count", "text": "Hi,Is there any way to get the total hit count from Lucene’s TopDocs.totalHits? It would be wonderful if that information was available somewhere in the query result, maybe through the $meta operator, because using $count in the aggregation pipeline is extremely slow depending on the size of the result set. E.g. our collection has 2 million documents, and $count takes an average of 8 seconds on a result set of ~12k hits. The pipeline has only two steps: $search followed by $count. The search step is very fast.", "username": "Celio_Cidral" }, { "code": "", "text": "@Celio_Cidral We are working on exposing it. It should be available in three to nine months but it is in progress. Lots of moving parts to do it right.", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get total hit count from Lucene
2021-03-10T14:50:30.485Z
Get total hit count from Lucene
2,427
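For reference, the two-stage pipeline described in the question above looks roughly like this in PyMongo; the index name, field path and namespace are assumptions. Newer Atlas Search releases also expose result counts through a $searchMeta stage (with a count option), which may be a faster alternative worth checking in the current documentation.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
coll = client["mydb"]["mycoll"]  # placeholder namespace

pipeline = [
    # "default" is an assumed Atlas Search index name.
    {"$search": {"index": "default", "text": {"query": "some terms", "path": "body"}}},
    # Counting after $search iterates the whole result set, which is why the
    # thread reports it being slow when the hit count is large.
    {"$count": "total"},
]
for doc in coll.aggregate(pipeline):
    print(doc)  # e.g. {'total': 12034}
```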
null
[]
[ { "code": "users{\n $search: {\n text: {\n query: 'search text',\n path: ['samAccountName', 'userPrincipalName', 'displayName']\n }\n }\n}\nusers[\n {\n \"_id\": ObjectId(\"...\"),\n \"samAccountName\": \"adams\",\n \"userPrincipalName\": \"[email protected]\",\n \"displayName\": \"John Adams\"\n },\n {\n \"_id\": ObjectId(\"...\"),\n \"samAccountName\": \"john\",\n \"userPrincipalName\": \"[email protected]\",\n \"displayName\": \"John Snow\"\n }\n]\nsamAccountNameuserPrincipalNamedisplayNamesamAccountNamedisplayName", "text": "Hi!We are implementing a feature on our webpage where users can search for users stored in a collection in MongoDB (users).After setting up a Search Index (with dynamic mapping) in Atlas, we managed to create a search aggregation that is querying multiple fields. The only thing we need to get working now, is scoring/weighting different properties differently.Our current aggregation query;Example documents in the users collection:In our search we want the fields to be scored in the order it’s in the path array, so matches for samAccountName is scored higher than userPrincipalName and displayName, so if I search for “john”, I will get the user with samAccountName “john” higher in the results than the other users with displayName containing this value.How can we achieve this? Is it best to add this score boost to the index, or in the search query?Thanks in advance. \n\\\\ Mats Andreassen", "username": "vtfkmats" }, { "code": "{ \n \"$search\": { \n \"compound\": {\n \"must\": [{\n \"text\": {\n \"query\": \"Hunter S. Thompson\",\n \"path\": \"author_name\",\n \"score\": {\n \"boost\": {\n \"value\": 9\n }\n }\n }\n },\n {\n \"text\": {\n \"query\": \"Fear and Loathing in Las Vegas\",\n \"path\": \"title\",\n \"score\": {\n \"boost\": {\n \"value\": 5\n }\n }\n }\n }],\n \"should\": [{\n \"range\": {\n \"value\": \"0\",\n \"path\": \"qty_available\",\n \"score\": {\n \"boost\": {\n \"value\": 3\n }\n }\n }\n }],\n }\n }\n", "text": "@vtfkmats Thank you for your question. I hope my answer can be helpful.I suggest you explore the compound operator, must, and/or should depending on your use case. You can see this book search example below where I applying different scoring/weighting properties per field.Let me know if that’s helpful.", "username": "Marcus" } ]
$search in multiple fields with different score
2021-03-17T10:43:55.682Z
$search in multiple fields with different score
11,195
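Applied back to the user-search question that opened the thread, the compound/boost pattern from the answer might look like the sketch below in PyMongo. The field names come from the question; the boost values, index name and namespace are arbitrary placeholder choices, not a recommended weighting.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")  # placeholder
users = client["directory"]["users"]  # assumed namespace

query = "john"
pipeline = [
    {"$search": {
        "index": "default",  # assumed index name
        "compound": {
            # Each matching "should" clause adds to the score, so a
            # samAccountName match outranks a displayName-only match.
            "should": [
                {"text": {"query": query, "path": "samAccountName",
                          "score": {"boost": {"value": 9}}}},
                {"text": {"query": query, "path": "userPrincipalName",
                          "score": {"boost": {"value": 5}}}},
                {"text": {"query": query, "path": "displayName",
                          "score": {"boost": {"value": 2}}}},
            ],
            "minimumShouldMatch": 1,
        },
    }},
    {"$limit": 10},
]
for doc in users.aggregate(pipeline):
    print(doc["displayName"])
```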
null
[ "data-modeling", "atlas-device-sync" ]
[ { "code": "Projects > Tasks // i.e. tasks is a sub-collection in each project.\nTaskProject", "text": "I’m new to MongoDB as well as to MongoDB Realm Sync. I was following the Realm Sync tutorial and Realm data model docs, but I wanted to learn more so I tweaked the Atlas collection structure as follows.What I don’t know is how to come up with Realm Sync Schema which can support sub-collections. The best I came up with is a Schema where Task s are modelled as an array within the Project . But, I’m worried that this can hit the 4MB (although a lot!) document limit for projects with a lot of the tasks.", "username": "siddharth_kamaria" }, { "code": "TasksProjects", "text": "Hi @siddharth_kamaria, welcome to the community!First of all, could you clarify what you mean by “sub-collection” and what you’re trying to achieve with them?You’re correct that embedding Tasks within Projects could cause an issue if you had lots of large tasks.There are some design schema patterns that can be used (e.g. storing the 20 tasks with the closest due date in the Project document and then the rest each having their own document), but it really depends on your access patterns and what you’re trying to achieve.", "username": "Andrew_Morgan" }, { "code": "Projectuser=\\(user.id)Task_id_paritionproject=\\(project.id)ProjectChatsterProject", "text": "Hi Andrew,After posting the video I came across your RChat blog posts and video which clarified a lot of the doubts that I had. Kudos to the team and you for hosting these sessions and writing blog posts!By sub-collection I was referring to nested documents in MongoDB Atlas. I’ll enlist the route that I ended up taking.As for having the top N due tasks into the Project object is a good idea worth exploring. I would love to see more content on modelling Realm partitions, which IMHO is the most difficult and crucial part in getting the Realm Sync right.", "username": "siddharth_kamaria" } ]
Modeling sub-collections in MongoDB Realm Sync
2021-03-12T16:45:58.325Z
Modeling sub-collections in MongoDB Realm Sync
5,990
null
[ "dot-net", "transactions" ]
[ { "code": "", "text": "Hello all,\nI am relatively new to working with mongodb. Currently I am getting a little more familiar with the API and especially with C# drivers. I have a few understanding questions around bulk updates. As the C# driver offers a BulkWriteAsync method, I could read a lot about it in the mongo documentation. As I understand, it is possible to configure the BulkWrite not to stop in case of an error at any step. This can be done by use the unordered setting. What I did not found is, what happens to the data. Does the database do a rollback in case of an error? Or do I have to use a surrounding by myself? In case of an error: can I get details of which step was not successful? Think of a bulk with updates on 100 documents. Can I find out, which updates were not successfull? As the BulkWriteResult offers very little information, I am not sure if this operation is realy a good one for me.thanks in advance", "username": "user0234354" }, { "code": "", "text": "The result will contain any documents that were not written. There is an example here.To ensure you don’t lose writes you should use a properly configured replica set. . The easiest way to ensure you have a properly configured replica set and the associated write Concern is to use an Atlas cluster.", "username": "Joe_Drumgoole" } ]
C# BulkWriteAsync, Transactions and Results
2021-03-16T16:32:47.366Z
C# BulkWriteAsync, Transactions and Results
5,075
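The thread above is about the C#/.NET driver; purely for illustration, the same unordered-bulk-write semantics are sketched below in Python, since the behaviour is driver-independent: operations that already succeeded are not rolled back (unless you wrap the batch in a transaction yourself), and the failed operations are reported individually in the error details.

```python
from pymongo import MongoClient, InsertOne, UpdateOne
from pymongo.errors import BulkWriteError

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["test"]["items"]                     # placeholder namespace

ops = [
    InsertOne({"_id": 1, "qty": 5}),
    InsertOne({"_id": 1, "qty": 7}),                          # duplicate _id -> fails
    UpdateOne({"_id": 2}, {"$set": {"qty": 9}}, upsert=True),
]

try:
    # ordered=False keeps processing after a failure instead of stopping the batch.
    result = coll.bulk_write(ops, ordered=False)
    print("all ok:", result.bulk_api_result)
except BulkWriteError as exc:
    # Successful writes stay applied; each failure lists the index of the
    # offending operation in `ops`, plus an error code and message.
    for err in exc.details["writeErrors"]:
        print("op", err["index"], "failed:", err["errmsg"])
```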
null
[]
[ { "code": "", "text": "Hi, I’m Rohit Khurana from India. I’m working as SME for open source databases. I have being used community edition mongodb from last 5 years and manged 150+ data nodes in different replica sets and sharded clusters.\nPlease feel free to reach out for any DBA related things about mongodb.", "username": "ROHIT_KHURANA" }, { "code": "", "text": "Hello @ROHIT_KHURANA ! I’d like to connect with you. Thanks in advance ", "username": "Fahmi_Hidayat" } ]
Hi, I'm Rohit Khurana from India
2021-03-17T05:55:39.554Z
Hi, I&rsquo;m Rohit Khurana from India
3,079
null
[]
[ { "code": "{\n \"_id\": ObjectId(\"604c6e86081415566cbf7011\")\n \"parameters\": [\n {\n \"_id\": ObjectId(\"602b7455f4b4bf5b41662ec1\")\n \"name\": \"Purpose\",\n \"options\": [{\n \"id\": ObjectId(\"602b764ff4b4bf5b41662ec2\")\n \"name\": \"deb\",\n \"sel\": false,\n \"value\": null\n }, {\n \"id\": ObjectId(\"602b767df4b4bf5b41662ec3\")\n \"name\": \"perf\",\n \"sel\": false,\n \"value\": null\n }, {\n \"id\": ObjectId(\"602b764ff4b4bf5b41662ec4\")\n \"name\": \"security\",\n \"sel\": false,\n \"value\": null\n }, {\n \"id\": ObjectId(\"602b767df4b4bf5b41662ec5\")\n },\n \"name\": \"rel\",\n \"sel\": false,\n \"value\": null\n }],\n \"type\": \"multiple\",\n }, {\n \"_id\":ObjectId( \"602b79d35d4a1333b8b6e5ba\")\n \"name\": \"Struct\",\n \"options\": [{\n \"id\": ObjectId(\"602b79d353c89933b8238325\")\n \"name\": \"SW\",\n \"sel\": false,\n \"value\": null\n }, {\n \"id\":ObjectId(\"602b79d353c89933b8238326\")\n \"name\": \"HW\",\n \"sel\": false,\n \"value\": null\n }],\n \"type\": \"multiple\n }\n ]\n}\n.updateOne(\n {\n \"_id\":ObjectId(\"604c6e86081415566cbf7011\"), \n \"parameters._id:\"ObjectId( \"602b79d35d4a1333b8b6e5ba\")\n },\n {\n $set:{\n \"parameters.$.options\":newOptions\n }\n })\n", "text": "Hi all,\nI have a problem for update a sub-sub array…\nI have a Collection such as this:I need to upgrade one of sub options.I have try with this command (for example for update Struct options):But doesn’t works.Do you can suggest another method?", "username": "Francesco_Di_Battist" }, { "code": "$[].updateOne(\n {\n \"_id\":ObjectId(\"604c6e86081415566cbf7011\"), \n \"parameters._id:\"ObjectId( \"602b79d35d4a1333b8b6e5ba\")\n },\n {\n $set:{\n \"parameters.$[].options\":newOptions\n }\n })\n", "text": "Hi @Francesco_Di_Battist,Welcome to MongoDB community.I think what you are looking for is the $[] operator:https://docs.mongodb.com/manual/reference/operator/update/positional-all/#up._S_[]Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": " 602b79d35d4a1333b8b6e5ba$[<identifier>]arrayFilters", "text": "Hi @Pavel_Duchovny, thanks for help me.\nI have try it but after this command, newOptions are in all subdocument (not for subdocuments with id 602b79d35d4a1333b8b6e5baI suppose that I use $[<identifier>] and arrayFilters that match the paramters._id, but I don’t understand.", "username": "Francesco_Di_Battist" }, { "code": ".updateOne(\n {\n \"_id\":ObjectId(\"604c6e86081415566cbf7011\"), \n \"parameters._id:\"ObjectId( \"602b79d35d4a1333b8b6e5ba\")\n },\n {\n $set:{\n \"parameters.$[elem].options\":newOptions\n }\n },{arrayFilters : [ { \"elem._id\": ObjectId( \"602b79d35d4a1333b8b6e5ba\")} ]})\n", "text": "Oh I see,Ok this is how you should do it:Hope that I got the syntax right…", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,\nthank you it works.I doesn’t had understand arrayFilters (i had tried with parameters.elem._id).Thanks a lot", "username": "Francesco_Di_Battist" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update a sub-sub array of a collections
2021-03-13T18:12:27.984Z
Update a sub-sub array of a collections
1,483
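The arrayFilters solution that closes the thread above carries over directly to the drivers; a PyMongo sketch is below. The two ObjectIds are the ones quoted in the thread, while the URI, namespace and the new options payload are placeholders.

```python
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mydb"]["mycoll"]                    # placeholder namespace

new_options = [{"id": ObjectId(), "name": "SW", "sel": True, "value": 1}]  # example payload

coll.update_one(
    {"_id": ObjectId("604c6e86081415566cbf7011")},
    # $[elem] only touches array elements matched by the arrayFilters entry below.
    {"$set": {"parameters.$[elem].options": new_options}},
    array_filters=[{"elem._id": ObjectId("602b79d35d4a1333b8b6e5ba")}],
)
```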
null
[ "indexes" ]
[ { "code": "", "text": "I have a query like this db.getCollection(‘test’).aggregate([{$match:{or:[{\"CreatorId\":7},{\"Users.._id\":7}]}},{$sort:{“CreateTime”:-1}},{$limit:20}])As the documentation says, all of the $or clauses must be in the index, so I created three indexes: CreatorId, Users._id and CreateTime.\nHowever when I run the explain, it shows only CreateTime index was used, therefor the query time is very long.\nWhat is the correct way to make an index for this query?\nThanks!", "username": "yuda_zhang" }, { "code": "", "text": "In case I was not clear enough, I want to have an index that can hit on all “CreatorId”, “Users.$._id” and “CreateTime” fields", "username": "yuda_zhang" }, { "code": "$or$or{ CreateTime: -1 }$match$ordb.test.createIndex({ CreatorId: 1, CreateTime: -1 })\ndb.test.createIndex({ \"Users._id\": 1, CreateTime: -1 })\n", "text": "Hi @yuda_zhang,In case I was not clear enough, I want to have an index that can hit on all “CreatorId”, “Users.$._id” and “CreateTime” fieldsThe documentation on $or Clauses and Indexes indicates that when evaluating the clauses in the $or expression, MongoDB either performs a collection scan or, if all the clauses are supported by indexes, MongoDB performs index scans.The pipeline you shared however is not simply filtering, but also sorting. The optimizer in the case of your example is selecting a plan that will Use and to Sort the Query Results as this will prevent an in-memory (blocking) sort.The result is a full index scan of { CreateTime: -1 } then all documents are fetched and filtered for the conditions in the $match stage of the pipeline. This is the default behavior prior to MongoDB 4.4 (due to SERVER-7568) as the Aggregation Framework favoured non-blocking sorts.To address this each branch of the $or should also account for the sort criteria as follows:Note that the order of fields matters when creating a compound index. For more information on this see the blog post at Optimizing MongoDB Compound Indexes - The \"Equality - Sort - Range\" (ESR) Rule | ALEX BEVILACQUA.", "username": "alexbevi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Index with $or and $sort
2021-03-17T08:28:04.493Z
Index with $or and $sort
2,806
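The two compound indexes suggested in the answer above can be created from any driver; a PyMongo sketch follows (URI and namespace are placeholders). Each $or branch then has an index that covers both its equality filter and the CreateTime sort, which is the point of the ESR ordering mentioned in the reply.

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mydb"]["test"]                      # placeholder namespace

# One compound index per $or branch, each ending with the sort key.
coll.create_index([("CreatorId", ASCENDING), ("CreateTime", DESCENDING)])
coll.create_index([("Users._id", ASCENDING), ("CreateTime", DESCENDING)])

pipeline = [
    {"$match": {"$or": [{"CreatorId": 7}, {"Users._id": 7}]}},
    {"$sort": {"CreateTime": -1}},
    {"$limit": 20},
]
results = list(coll.aggregate(pipeline))
```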
null
[ "app-services-user-auth", "realm-web" ]
[ { "code": "realm-webapp.emailPasswordAuth.registerUser(email, password)app.login(credentials)", "text": "Hi, i am using the realm-web sdk to login or register my users. At the moment, i am using to forms: one for login and one for register new users. I want to merge the forms into one. If the user exists, he will be logged in, otherwise a user will be created automatically and a confirmation email will be sent.But that doesn’t seem to be possible via the email/password provider, does it? If i want to register a new user, i need to run app.emailPasswordAuth.registerUser(email, password) If i want to register a user, i need to run app.login(credentials)How can I unite this? Can I check already after entering the email address if a user with this address already exists or would I have to create an extra function that is then executed by the system user? (Personally I would find this a bit cumbersome)", "username": "Niklas_Grewe" }, { "code": "async function logInOrRegister(app, email, password) {\n const credentials = Realm.Credentials.emailPassword(email, password);\n try {\n // The await here is important to allow catching any error\n return await app.logIn(credentials);\n } catch (e) {\n await app.emailPasswordAuth.registerUser(email, password);\n return app.logIn(credentials);\n }\n}\n", "text": "I would probably attempt a logIn with the email / password and if that fails, register the user and perform the logIn again.Something along the lines of:", "username": "kraenhansen" }, { "code": "", "text": "Hi @kraenhansenthank you for this idea. It works, but i have some problems with this. Assume that a user types a wrong password, i get the error: “Invalid username/password” - that is fine, since this error does not reveal any sensitive information. However, it is not possible to deduce whether the email address has already been confirmed or not.After all, if the login fails, the register function is called immediately. From this I then get the error that the username already exists.It is difficult for me to find out whether a user exists or not, is verified or not, because the error messages do not clearly indicate this. Is there a way to improve this? 
I would not like to switch back to two forms (login/register)", "username": "Niklas_Grewe" }, { "code": "async function logInOrRegister(app, email, password) {\n if (app.currentUser) {\n await app.removeUser(app.currentUser);\n }\n const credentials = Realm.Credentials.emailPassword(email, password);\n try {\n // The await here is important to allow catching any error\n return await app.logIn(credentials);\n } catch (e) {\n console.log(\"Log in failed:\", e.error);\n if (e.error === \"confirmation required\") {\n await app.emailPasswordAuth.resendConfirmationEmail(email);\n throw new Error(\"Confirmation mail has been sent\");\n } else if (e.error === \"invalid username/password\") {\n try {\n await app.emailPasswordAuth.registerUser(email, password);\n return await app.logIn(credentials);\n } catch (e2) {\n console.log(\"Register followed by log in failed:\", e2.error);\n if (e2.error === \"confirmation required\") {\n throw new Error(\"Confirmation mail has been sent\");\n } else if (e2.error === \"name already in use\") {\n throw new Error(\"Invalid password!\");\n } else {\n throw e2;\n }\n }\n } else {\n throw e;\n }\n }\n}\n", "text": "It is difficult for me to find out whether a user exists or not, is verified or not, because the error messages do not clearly indicate this.I believe this is by design, to make it more difficult to probe the login endpoint for email addresses that already have an account on an app.The errors thrown have an error message from the server that you can use to build your logic around. I threw together a more complete example that includes requesting resending the confirmation email:I hope this helps.", "username": "kraenhansen" }, { "code": "", "text": "Hi @kraenhansenthanks for your update. I will try that in my app. One more question about the login flow. With Firebase, after about 10 failed login attempts, the account is temporarily locked. Can something like this also be implemented in Realm to increase security? Is it possible to set up functions with triggers that count how often a login failed? Could you show me an example of this so I can get an idea of how to implement something like this? Thank you", "username": "Niklas_Grewe" }, { "code": "", "text": "Can something like this also be implemented in Realm to increase security?I believe this is out of my expertise (I honestly don’t know - I’m working on the Realm JS SDK team).Perhaps someone else on the forum knows?", "username": "kraenhansen" } ]
Auto-Create Email/Password Users?
2021-03-14T17:34:00.120Z
Auto-Create Email/Password Users?
3,212
null
[ "atlas-functions" ]
[ { "code": "@magic-sdk/adminexports = async (loginPayload) => {\n const { Magic } = require('@magic-sdk/admin');\n const mAdmin = new Magic('<API_KEY>');\n \n try {\n const didToken = loginPayload;\n const metadata = await magic.users.getMetadataByToken(didToken);\n const userID = metadata.issuer;\n const userEmail = metadata.email;\n \n return { \"id\": userID, \"email\": userEmail };\n \n } catch(error) {\n console.log(error);\n }\n return;\n };\nfailed to execute source for 'node_modules/@magic-sdk/admin/dist/index.js'\n", "text": "Hi, i want to use Magic.link for my auth flow. I uploaded the @magic-sdk/admin as dependency and use it in my function like this:but when i try to run it, i get the following error:How can i solve it?", "username": "Niklas_Grewe" }, { "code": "magic-sdk", "text": "Hi @Niklas_Grewe,How did you upload the magic-sdk module to your realm application and how do you run the custom function auth?Please be prepare that the dependencies is in BETA and not all uploaded modules are supported currently on MongoDB Realm.Can you use an API call to an ADMIN interface instead of uploading code?Perhaps cosider using an alternative like Auth0 API for magic links.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "magic-sdknpm install --save @magic-sdk/admin\ntar -czf node_modules.tar.gz node_modules/\nnode_modules.tar.gzimport { Magic } from 'magic-sdk';\nconst m = new Magic('API_KEY');\n\nasync function loginWithMagic() {\n try {\n const didToken = await m.auth.loginWithMagicLink({ email: '[email protected]', showUI: false });\n const credentials = Realm.Credentials.function(didToken);\n const user = await app.login(credentials);\n return user;\n } catch {\n // Handle errors if required!\n }\n}\n", "text": "How did you upload the magic-sdk module to your realm application?in a empty directory, i runi uploaded the node_modules.tar.gz package to Realm over the UI. I see this under the Dependencies Tab:Bildschirmfoto 2021-03-16 um 15.03.582384×142 6.36 KBand how do you run the custom function auth?Can you use an API call to an ADMIN interface instead of uploading code?i will try thatPerhaps cosider using an alternative like Auth0 API for magic links.Magic.link use a blockchain to offer a dezentralised authentication flow. That’s higher secure as storing tokens, user data and so on in a single database. I don’t know, how secure is Auth0, but it is expensive as far as I know or what are your experiences in this regard?Passportjs is also a very popular auth library for Nodejs. Is this already supported by Realm Functions? Do you know of any other passwordless authentication alternatives that work well with Realm? it might be helpful if Realm itself offered such a login method ", "username": "Niklas_Grewe" }, { "code": "", "text": "Hi @Niklas_Grewe,I have reproduce the issue and it seems that this is a syntax error inside one of the inner dependencies, probably due to some modules been unsupported.So this Blockchain database is stored as part of magic.link offering?I am not aware of a solution that was POCd with Realm and maybe other people know.I will test Auth0 to see if its possible to use it. 
I know that oauth providers and JWT solutions work with realm.There is an interesting new articlehttps://www.mongodb.com/how-to/oauth-and-realm-serverless/Thanks.\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Niklas_Grewe,A wild idea might be to host this code somewhere like realm hosting or a lambda function and call it via rest or aws service from custom function…", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_DuchovnyI have reproduce the issue and it seems that this is a syntax error inside one of the inner dependencies, probably due to some modules been unsupported.ok thanks for the info. Is there any hope that these will be supported in the future?So this Blockchain database is stored as part of magic.link offering?No. Magic is based on the Ethereum decentralized blockchain network to authenticte and share user dataI will test Auth0 to see if its possible to use it. I know that oauth providers and JWT solutions work with realm.Thank you. Please let me know what you and what experience you have gained and what is possible. and what I would be particularly interested in. Auth0 offers the possibility to use external databases to store the created user accounts. Do we have a Chance to connect MongoDB realm also in this respect with Auth0? More Informations here: Database ConnectionsA wild idea might be to host this code somewhere like realm hostingIts possible to host a nodejs app with realm hosting? I thought only static or single page applications are supported", "username": "Niklas_Grewe" }, { "code": "", "text": "Hi @Niklas_GreweAuth0 offers the possibility to use external databases to store the created user accounts. Do we have a Chance to connect MongoDB realm also in this respect with Auth0? More Informations here: https://auth0.com/docs/connections/database#using-your-own-user-store Since you can connect it to MongoDB you can connect it to Atlas cluster I believe but this needs to be tested. Perhaps you can even use the wire protocol and its realm connection string to do a direct realm store.Its possible to host a nodejs app with realm hosting? I thought only static or single page applications are supportedWell depands on what you define as an app and its purpose.I think with webpack you should be able to create a deployable applicationI once blogged about stitch hosting and it has the same mechanism just rebrand to realm:A brief walkthrough of using the Static Hosting capability of MongoDB Stitch.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_DuchovnySince you can connect it to MongoDB you can connect it to Atlas cluster I believe but this needs to be tested. Perhaps you can even use the wire protocol and its realm connection string to do a direct realm store.Since you still wanted to test Auth0 with Realm, could you check that too? I am still a beginner when it comes to MongoDB and Realm. With this you would help me ", "username": "Niklas_Grewe" }, { "code": "", "text": "Hi @Pavel_Duchovnyi now find a way to use Magic.link with Realm Custom Functions. It works but, how can i add User Metadata like there email address? 
The Realm Docs say: Custom Function authentication users are not automatically associated with any data. How can I save user metadata without interrupting Realm’s automatic create/login process?", "username": "Niklas_Grewe" }, { "code": "", "text": "Hi @Niklas_Grewe, can you share how you made it work? Custom data is stored in a collection; you can query it via the database service in a function, or through context after the user is authenticated and runs a function. If you need to access it from the SDK after auth has completed, check the relevant SDK docs. Thanks", "username": "Pavel_Duchovny" } ]
Custom Function Authentication not working with nodejs package
2021-03-16T10:26:17.396Z
Custom Function Authentication not working with nodejs package
3,481
null
[ "atlas-device-sync" ]
[ { "code": "**Error:**\nfailed to start background translator failed: failed to start sessions during translator registration: error starting/resuming session: recoverable event subscription error encountered: error binding session during translator registration { appPartitionId:XXXXXXXX } : error sending bind message for session with ident 5: messenger closed\n\n**Source:**\nError syncing MongoDB write\n", "text": "I’m new to using Sync and I’ve been encountering the following issue:I enable Sync (dev mode). Everything looks fine in the logs. Then I make a change to a document in a synched collection (either from Compass of directly in Atlas: it’s the same) and the logs show the following error:It also does not stop showing “Enabling Sync…copying data” at the top of my screen. And as a result, my client app does not show the change I’be made to the collection.Any idea what may be going on? Thanks in advance!", "username": "Laekipia" }, { "code": "", "text": "A common cause of sync issues during development is schema changes (either on the Atlas or app side).My first step is always to remove the app from the mobile device (removes any existing synced data) and then remove the sync config through the Realm UI and then re-add it.Another thing to check is whether all of your current Atlas documents match the current schema – you can check this by clicking the “Validate” button on the Realm UI’s schema view.", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
TranslatorFatalError with Sync
2021-03-12T11:20:29.280Z
TranslatorFatalError with Sync
2,625
null
[ "python", "developer-hub" ]
[ { "code": "", "text": "Introducing FARM Stack - FastAPI, React, and MongoDB | MongoDBpython -m pip install --upgrade pipERROR: Command errored out with exit status 1:\ncommand: ‘d:\\programs\\python\\python.exe’ -c ‘import sys, setuptools, tokenize; sys.argv[0] = ‘\"’“‘C:\\Users\\SUBHAJ~1.PAT\\AppData\\Local\\Temp\\pip-install-5611n1gc\\uvloop_a974861d62fd487c8fe444b701a01170\\setup.py’”’“‘; file=’”‘“‘C:\\Users\\SUBHAJ~1.PAT\\AppData\\Local\\Temp\\pip-install-5611n1gc\\uvloop_a974861d62fd487c8fe444b701a01170\\setup.py’”’“';f=getattr(tokenize, '”‘“‘open’”’“‘, open)(file);code=f.read().replace(’”‘\"’\\r\\n’“'”‘, ‘\"’\"’\\n’“'”‘);f.close();exec(compile(code, file, ‘\"’“‘exec’”’\"‘))’ egg_info --egg-base ‘C:\\Users\\SUBHAJ~1.PAT\\AppData\\Local\\Temp\\pip-pip-egg-info-j6dfg82l’\ncwd: C:\\Users\\SUBHAJ~1.PAT\\AppData\\Local\\Temp\\pip-install-5611n1gc\\uvloop_a974861d62fd487c8fe444b701a01170\nComplete output (5 lines):\nTraceback (most recent call last):\nFile “”, line 1, in \nFile “C:\\Users\\SUBHAJ~1.PAT\\AppData\\Local\\Temp\\pip-install-5611n1gc\\uvloop_a974861d62fd487c8fe444b701a01170\\setup.py”, line 15, in \nraise RuntimeError(‘uvloop does not support Windows at the moment’)\nRuntimeError: uvloop does not support Windows at the moment\n----------------------------------------ERROR: Could not find a version that satisfies the requirement uvloop==0.14.0\nERROR: No matching distribution found for uvloop==0.14.0", "username": "Suvo_Pathak" }, { "code": "requirements.txtgit pullpip install -r requirements.txt", "text": "Hi @Suvo_Pathak,Thanks for the feedback! It looks like Aaron, the post’s author, has used pip-tools to pin all the dependencies - but they’ve been pinned based on his UNIX-based install, including uvloop, which doesn’t run on Windows.I’ve updated the requirements.txt, so if you could git pull the latest changes to the FARM-Intro project and run pip install -r requirements.txt again, I think it should work this time.Mark", "username": "Mark_Smith" } ]
FARM-FastAPI UVLOOP Windows Problem
2021-03-12T06:32:41.931Z
FARM-FastAPI UVLOOP Windows Problem
8,050
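The underlying problem in the thread above is that uvloop does not build on Windows, so it has to be excluded there. One common pattern (an assumption about how a project might handle it, not something taken from the FARM-Intro repository) is a PEP 508 environment marker in requirements.txt, e.g. uvloop==0.14.0; sys_platform != "win32", combined with a guarded import in the application code:

```python
import asyncio
import sys

# uvloop is a faster drop-in event loop for asyncio, but it is POSIX-only.
if sys.platform != "win32":
    import uvloop
    uvloop.install()  # route asyncio through uvloop's event loop policy

async def main():
    await asyncio.sleep(0)  # application code goes here

if __name__ == "__main__":
    asyncio.run(main())
```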
null
[ "swift" ]
[ { "code": "", "text": "I’ve been experimenting with the @ObservedRealmObject wrapper, and it’s been working fine for me in builds. However, it breaks down in the previews, either eventually crashing (when played) or not running at all.For now I’ve been commenting them out when I need to see a simple preview, but is there a better way to go about it? I didn’t see any previews on ListSwiftUI (except ContentView, which wasn’t working)", "username": "Corey_de_la_Cruz" }, { "code": "", "text": "Previews should work. Do you have any sample code? What’s the preview error?", "username": "Jason_Flax" }, { "code": "", "text": "Oh, ignore thelet realm = try ! Realm()line, was experimenting and forgot to remove it.", "username": "Corey_de_la_Cruz" }, { "code": "@objcMembers class Person: Object, ObjectKeyIdentifiable {\n\tvar name = \"no name\"\n}\n\nstruct PersonView: View {\n\n\t@ObservedRealmObject\n\tvar person: Person\n\tlet realm = try! Realm()\n\n\tvar body: some View {\n\t\tText(person.name)\n\t}\n}\n\nstruct PersonView_Previews: PreviewProvider {\n\tstatic var previews: some View {\n\t\tPersonView(person: Person())\n\t}\n}\nUncaughtExceptionError: Crashed due to an uncaught exception\n\nRealmTest crashed due to an uncaught exception `NSRangeException`. Reason: Cannot remove an observer <RealmSwift.SwiftUIKVO 0x60000370ed40> for the key path \"name\" from <RLM:Unmanaged Person 0x60000194ee80> because it is not registered as an observer..\n\n==================================\n\n| RemoteHumanReadableError: Failed to update preview.\n| \n| The preview process appears to have crashed.\n| \n| Error encountered when sending 'previewInstances' message to agent.\n| \n| ==================================\n| \n| | RemoteHumanReadableError: The operation couldn’t be completed. (BSServiceConnectionErrorDomain error 3.)\n| | \n| | BSServiceConnectionErrorDomain (3):\n| | ==BSErrorCodeDescription: OperationFailed", "text": "Right now I’m just passing in an object into the view (normally I change the values, but it isn’t necessary for this example). I’m sure there’s a better way, but this does work without the @ObservedRealmObject.Sample:Error:", "username": "Corey_de_la_Cruz" }, { "code": "", "text": "Hi @Jason_Flax – I’m seeing the same exception when I test this view (using Realm Cocoa 10.7.0)", "username": "Andrew_Morgan" }, { "code": "", "text": "There is a pull request open to fix this: Add guard to KVO subscription to prevent multiple observer removals to occur unnecessarily by jsflax · Pull Request #7132 · realm/realm-swift · GitHub.", "username": "Jason_Flax" }, { "code": "jf/fix-swiftui-invalidated-obj", "text": "Hi @Corey_de_la_Cruz, I just tested your view with that PR and the preview now works. If you want to try it for yourself (rather than waiting for it to be reviewed and released) then you can update SPM to use branch jf/fix-swiftui-invalidated-obj", "username": "Andrew_Morgan" }, { "code": "", "text": "Thanks for the update! I had to modify the source to get my project to run, and did notice it helped with the previews (didn’t realize it was the same issue).I’ll use that branch for now, thanks!", "username": "Corey_de_la_Cruz" }, { "code": "", "text": "@Corey_de_la_Cruz glad to hear that it’s working. To state the obvious, you can watch the PR to know when it gets merged.", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SwiftUI previews with ObservedRealmObject
2021-02-26T19:28:51.972Z
SwiftUI previews with ObservedRealmObject
4,267
null
[ "data-modeling", "python" ]
[ { "code": "", "text": "I am new to storing file in mongodb. I need to store text files containing 60 fractional values. These files must be associate with a patient id, a medical parameter like body temperature. I also need to associate the file with date and time. Number of files will be in Lakhs.\nWhat is the best approach of storing\nas file in collection, taking a lot of time to store using Python. Retrieve data I will try now.\ngridFS?\nWhat if I have pdf files containing ECG image.\nKindly suggest", "username": "sps" }, { "code": "", "text": "One option for storing files is to use gridFS. See https://docs.mongodb.com/manual/core/gridfs/ for more details.Another option is to store the files in something like a S3 bucket.", "username": "Lauren_Schaefer" }, { "code": "", "text": "Thank you, I will try s3 bucket and respond if there is any problem…", "username": "sps" }, { "code": "", "text": "You can also store small binary files (need to be smaller than 16mb) inside of the bin data type within your MongoDB documents.For text files I’d recommend parsing into a proper MongoDB document so you can query on the data. You could also look at Atlas Data Lake for querying data in text within S3", "username": "Andrew_Davidson" }, { "code": "", "text": "Great point on storing the data in MongoDB if you’ll need to query it!", "username": "Lauren_Schaefer" }, { "code": "", "text": "I have not got the chance to work with all the options. Till now, I have worked with storing file in a collection and gridFS.\nI need another suggestion in this respect.\nIf I want to store normal data fields along with 2.2MB sized files, is it OK if I keep these files in collection? May need to store multiple files (2/3) in one document.\nKindly advise.", "username": "sps" }, { "code": "", "text": "It depends on what the files are and how you want to query them. If the files aren’t text-based, you could put them in a S3 bucket and store links to them in your documents.If the files are text-based and you’ll want to query them, Andrew’s recommendations above are excellent. If you’re interested in learning more about Atlas Data Lake, check out MongoDB Atlas Data Federation - Available On AWS S3 | MongoDB | MongoDB. Also, https://www.mongodb.com/how-to/atlas-data-lake-setup has information on how to get setup and query the data.", "username": "Lauren_Schaefer" }, { "code": "", "text": "It’s totally fine to store your data fields along with the file data in the collection - in fact I’d say it was recommended - you’ll be able to more easily search on this data, for example.Providing your total MongoDB document size doesn’t exceed 16MB, you can store the data from several files in a single MongoDB document. It depends on your circumstances (how many files multiplied by how big each file is). One option would be to store the files in buckets that would allow you to store a group of files across several documents, along with the associated data fields.The other suggestions in this thread are all good, I’d check them all out and go with what suits your circumstances best.", "username": "Mark_Smith" }, { "code": "", "text": "Thanks to all of you.", "username": "sps" } ]
Inserting file in mongodb using Python
2021-03-11T09:48:31.657Z
Inserting file in mongodb using Python
11,025
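For the GridFS option discussed in the thread above, a minimal PyMongo sketch might look like the following. The database name, metadata fields and file path are assumptions based on the use case described (per-patient measurement files), not code from the thread; for files well under 16 MB, parsing the values into ordinary documents, as suggested in the replies, keeps them queryable.

```python
import datetime
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["medical"]                             # assumed database name
fs = gridfs.GridFS(db)  # uses the default fs.files / fs.chunks collections

# Store a file together with queryable metadata (patient id, parameter, time).
with open("readings.txt", "rb") as fh:
    file_id = fs.put(
        fh,
        filename="readings.txt",
        metadata={
            "patient_id": "P-00123",
            "parameter": "body_temperature",
            "recorded_at": datetime.datetime.utcnow(),
        },
    )

# Retrieve it later by its metadata.
stored = fs.find_one({"metadata.patient_id": "P-00123",
                      "metadata.parameter": "body_temperature"})
data = stored.read()
```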
null
[ "monitoring" ]
[ { "code": "", "text": "Hello,We would like to push Mongo Atlas into ElasticSearch to be searchable via Kibana. Are there any Mongo Atlas fluentd plugins that can download logs from Atlas and push to ElasticSearch?I realize there’s an API to download the logs but wanted to check if there’s another approach using fluentd plugins. Thanks.", "username": "Harun_Gadatia" }, { "code": "", "text": "Hi @Harun_Gadatia,Welcome to MongoDB communityI think the API is the way to go here. You can use Atlas triggers to automate it possibly.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Mongo Atlas logs in ELK stack
2021-03-16T20:33:45.236Z
Mongo Atlas logs in ELK stack
3,787
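For the API route suggested in the reply above, the sketch below pulls a compressed mongod log from the Atlas Admin API with HTTP digest authentication, so any forwarder (Filebeat, Fluentd, etc.) can then ship it to Elasticsearch. The endpoint path and parameters are written from memory of the v1.0 Atlas API and should be verified against the current documentation; the API keys, project ID and hostname are placeholders.

```python
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY = "pub-key"                                   # placeholder API key pair
PRIVATE_KEY = "private-key"
GROUP_ID = "5f0000000000000000000000"                    # placeholder project id
HOSTNAME = "cluster0-shard-00-00.abcde.mongodb.net"      # placeholder process hostname

url = (
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
    f"/clusters/{HOSTNAME}/logs/mongodb.gz"
)

resp = requests.get(
    url,
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    headers={"Accept": "application/gzip"},
    timeout=60,
)
resp.raise_for_status()

with open("mongodb.gz", "wb") as out:
    out.write(resp.content)  # hand this file to the log shipper for indexing
```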
null
[]
[ { "code": "", "text": "We built a nice dashboard using charts. We combined it with a custom chrome plugin which allows our support team to view real-time failure data and right-click on a specific failure to goto an item-specific page in our internal (not mongodb) dashboard. With a recent release of charts - the ability to right-click on content within a text table chart was removed. Therefore, our support workflow is hosed until we build an alternative. Do mongodb folks have any comments on why this functionality was removed? I chatted with support via the charts website, their answer was, “Sorry for the inconvenience, but this is no longer supported.” Wow - thanks for the warning regarding removing functionality. Very frustrating.", "username": "Chris_Edgington" }, { "code": "", "text": "Hey @Chris_Edgington, sorry to hear we broke your workflow and that you weren’t given a good explanation. We’ve suppressed the browser’s context menu on all chart types because we are implementing our own context menu with additional capabilities, most notably the ability to “drill down” to see the documents which contributed to a chart item.Is it an option for you to use our new click events functionality to create a similar experience with embedded charts, rather than through a browser plug-in? The current beta of click-events doesn’t support table charts, but the next update (due out in a couple of weeks) will.Tom", "username": "tomhollander" }, { "code": "", "text": "Honestly I would not want to proceed down that route and make our workflow more dependent on charts functionality. It would be weeks of work to do this and then we’d be more dependent on mongodb charts. Why not provide a chart configuration option to enable / disable the charts custom context menu? Then your customers can use yours or use their own.", "username": "Chris_Edgington" } ]
Charts right-click browser context menu disabled without warning
2021-03-17T01:36:27.549Z
Charts right-click browser context menu disabled without warning
1,906
null
[ "atlas-functions" ]
[ { "code": "exports = function(changeEvent) {\n\n const {fullDocument} = changeEvent;\n const nodemailer = require('nodemailer');\n\n let transporter = nodemailer.createTransport({\n host: 'smtp.gmail.com',\n port: 465,\n secure: true,\n auth: {\n user: 'my-gmail-address',\n pass: 'my-gmail-password'\n }\n });\n const mailOptions = {\n from: 'from-gmail-address',\n to: 'to-email-address',\n subject: 'New User',\n html: `<p>email content with new user info</p>` \n };\n\n\n return transporter.sendMail (mailOptions, (error, info) => {\n if (error) {\n console.log(error);\n return;\n }\n console.log('Sent Successfully!');\n });\n};\n", "text": "I have successfully uploaded the Nodemailer npm dependency in my Mongo Realm functions and now trying to send an email when a document in my Users collection is added. I use the following code:The function is triggered at document insert, but I keep getting an error that “hostname” is not a function. Any idea what the problem is?", "username": "Akbar_bakhshi1" }, { "code": "", "text": "I’m getting the exact same problem. Have you found a solution?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "No. Unfortunately I couldn’t find a solution. What I did was to write my nodemailer code in my backend API (Express/NodeJs) and use the Realm trigger to send a request to that API endpoint instead.", "username": "Akbar_bakhshi1" }, { "code": "", "text": "Same issue here. Has anybody a working solution?", "username": "rouuuge" }, { "code": "", "text": "I couldn’t make it work, I’ll probably end up doing the same as @Akbar_bakhshi1. Although it would be cool to be able to use this library in Realm functions.", "username": "Jean-Baptiste_Beau" }, { "code": "child_processclusterdomainpunycodereadlinev8vm", "text": "Hi All,Nodemailer has a dependency for Punycode which is currently an unsupported module in Realm Functions as mentioned in this article, and is likely the cause of the error.Realm functions do not support the following built-in modules:Our team is looking to provide Punycode support at some point in the future, however we do recommend using a service such as AWS SES for sending out emails. This will be a more reliable alternative especially if you are sending with high frequency/volume.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Just if for Info I created a feedback idea you can vote:Nodemailer is currently not supported because of the punycode dependency. Would be nice to send mails directly on the mongodb server.", "username": "rouuuge" }, { "code": "", "text": "I believe we have very recently added support for Punycode.\nThe documentation on this is yet to be updated.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "still have the error:{“message”:“The ‘hostname’ argument must be of type string”,“code”:“ECONNECTION”,“command”:“CONN”}kind regards", "username": "rouuuge" } ]
Nodemailer dependency with Realm functions
2020-12-29T20:16:53.129Z
Nodemailer dependency with Realm functions
4,425
null
[ "java" ]
[ { "code": "", "text": "Hi Team,We are using Mongo-java driver .311, would like to migrate to mongo legacy java driver 4.0.6.In mongo docs mentioned to use BasicDbObject.parse method as alternative to com.mongodb.util.JSON.parse, it is not working as expected. Please suggest other api.Thanks & Regards,\nVenkatakrishna Tolusuri", "username": "Venkatakrishna_Tolus" }, { "code": "", "text": "Hi @Venkatakrishna_Tolus,Can you be more explicit about what your expectations are, and in what way they are not being met? Sample JSON input would be useful as well.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Hi Jeffrey_Yemin,Thanks for quick response. Please find the issues.ex 1: Suppose if we need to parse the textString testString = “‘text’”\nWhen we use old api(Json.parse) result is text but with new Api (BasicDBObject.parse) it is resulting ‘text’.ex 2: Suppose if we pass array to parse\nOld Api : it is returning List as returning as a result.\nProposed api : It is return array as String.ex 3: com.mongodb.util.JSON class havin a method Json.parse(“String”, callback). In 4.0 haven’t find any alternative.Kindly suggest alternatives for above api.Regards,\nVenkata krishna Tolusuri", "username": "Venkatakrishna_Tolus" }, { "code": "String jsonFragment = \"\\\"text\\\"\";\nString fieldName = \"val\";\nString jsonString = String.format(\"{%s : %s}\", fieldName, jsonFragment);\n\nBasicDBObject res = BasicDBObject.parse(jsonString);\nObject value = res.get(fieldName);\nSystem.out.println(value.getClass().getSimpleName());\nSystem.out.println(value);\nString\ntext\n", "text": "OK, I see. The 4.0 driver no longer has a way to parse JSON that does not represent a full JSON object. To use BasicDBObject.parse. you’ll have to wrap those JSON object fragments (e.g. strings, arrays) inside a JSON object and then extract the value. For example:will print:Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Hi Jeffrey_Yemin,I followed your suggestions, now it is working fine. Can you please provide alternate for the below api call.\nex 3: com.mongodb.util.JSON class havin a method Json.parse(“String”, callback). We haven’t find any alternative in 4.0 versionIn our application we are taking query as a string and while parsing we are doing customization to query by above method.Please kindly provide alternate way for that .Thanks & Regards,\nVenkata krishna Tolusuri", "username": "Venkatakrishna_Tolus" }, { "code": "BasicDBObject.parse", "text": "Without seen an example of the callback that you’ve written, it’s hard to say. But I suspect anything you do in the callback can also be done subsequent to parsing by mutating the document that is returned from BasicDBObject.parse.Regards,\nJeff", "username": "Jeffrey_Yemin" } ]
Alternate API for overloaded parse methods com.mongodb.util.JSON in Mongo-legacy driver 4.06
2021-03-09T05:04:02.893Z
Alternate API for overloaded parse methods com.mongodb.util.JSON in Mongo-legacy driver 4.06
4,774
null
[ "replication", "security" ]
[ { "code": "", "text": "We have created CA, intermediate CA and then a signed certificate with all the necessary requirement in mongodb website.We want to use x509 authentication. Currently we can only work with TLS using allowInvalidCertificates optins, and we are not sure what is the implication.Enabling the CAFile option, also cause errors with connecting between replicas with errors complaining about using self sign.What areHopefully, this post can generate enough official replies for different errors return so we can better set up tls connections between replicas.", "username": "Dave_Teu" }, { "code": "", "text": "We want to use x509 authentication. Currently we can only work with TLS using allowInvalidCertificates optins, and we are not sure what is the implication.You’re ignoring the errors and connecting anyway, if another server with TLS is put in between your client will happily connect and that server can decrypt and inspect anything that is sent to it.Enabling the CAFile option, also cause errors with connecting between replicas with errors complaining about using self sign.This error sound like the certificate is signed incorrectly or that the wrong file is being used for the CAFile parameter.Valid certifications - Do they have to be “paid”? We do not need third party to verify since we are only connected between our own servers. What constitude “invalid”?A valid certificate is one whose claims match what you are asking for, Subject/Subject Alternate Name matches hostname, the startDate and endDates are vaild among other things.But most importantly it is one where the chain of trust is established. The root of this is the Certificate Authority. This CA must be installed on your system for any issued intermediate or leaf certificate to be trusted, alternatively the CA can be set in configuration or command line.unable to get local issuer certificate error - please provide clearer explanationThe issuing certificate is not trusted by your system. If your certificates are issued directly by the CA then the CA is not installed. If your certificate is issue by an intermediate CA then it is likely the server certificate has not been prepared correctly, you will need to append the intermediate CA to the server CA.No SSL certificate provided by peer error - please provide clearer explanationThe server is expecting a TLS client certificate and the connecting client is not sending oneSSL peer certificate validation failed: unsupported certificate purpose error - please provide clearer explanationThe certificate is using the wrong extended attributes for it’s role. Using clientAuth when it is server and vice-versa.The security appendix has a good set of instructions for correctly configuring certificates for cluster x509 member authentication, Transport encryption (TLS) and Client certificates for x509 authentication.", "username": "chris" }, { "code": "COPY ./ca.crt /usr/share/ca-certificates/my_root_ca.crt\n\nRUN echo my_root_ca.crt >> /etc/ca-certificates.conf\n\nRUN update-ca-certificates\n", "text": "Thank you very much.How aboutWith your recommendations I have done the following to all my replica dockers, which installs the root certificate on my mongo dockersThe security appendix has a good set of instructions for correctly configuring certificates for cluster x509 member authentication, Transport encryption (TLS) and Client certificates for x509 authentication.I created my certs base on this website you recommended.", "username": "Dave_Teu" } ]
CAFile and self sign certificate for Replicas
2021-03-16T09:16:44.280Z
CAFile and self sign certificate for Replicas
4,620
null
[ "atlas-device-sync" ]
[ { "code": "SyncManager.sharedSyncManager.sharedRealm.deleteFiles(for: config)", "text": "Hi,I was facing a few Realm Sync error, notably 208 and 211. I was looking for some documentation on what each sync error code means, something similar to the old Realm Sync document here – https://docs.realm.io/sync/v/3.x/using-synced-realms/troubleshoot/errors.My open questions -Thank you in advance ", "username": "siddharth_kamaria" }, { "code": "//\n// ViewController.swift\n// RealmPractice\n//\n// Created by Paolo Manna on 21/01/2021.\n//\n\nimport RealmSwift\nimport UIKit\n\n// Constants\nlet partitionValue\t= \"<Partition Value>\"\nlet appId\t\t\t= \"<Realm App ID>\"\nlet realmFolder\t\t\t= \"mongodb-realm\"\nlet username\t\t\t= \"\"\nlet password\t\t\t= \"\"\nlet userAPIKey\t\t\t= \"\"\nlet customJWT\t\t\t= \"\"\nlet asyncRealmNames\t\t= \"AsyncRealmNames\"\n\n", "text": "Hi Siddharth,Regarding what the error codes mean, please refer to this post.As for how to do a client reset in iOS, our team is still working on documenting this in the Advanced Guides sections of our iOS SDK documentation in a similar format to other SDKs such as Android. For now the only documentation we have on this for iOS is the doc you already found here.Having said that, we do have a sample app which utilises an example of a Client Reset in iOS that might help you to reference. Please see link below.Hope that helps!Regards\nManny", "username": "Mansoor_Omar" }, { "code": "app.syncManager.errorHandler", "text": "mongodb/realm-practice/blob/main/swift/RealmPractice/Classes/ViewController.swiftThank you for the example code and quick reply! The way to listen for sync errors now is to use app.syncManager.errorHandler.", "username": "siddharth_kamaria" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Sync Error Codes Documentation & Client Reset Best Practices
2021-03-15T18:26:09.390Z
Realm Sync Error Codes Documentation & Client Reset Best Practices
4,028
https://www.mongodb.com/…c_2_914x1024.png
[ "backup", "ops-manager" ]
[ { "code": "", "text": "Dear Mongo community\nI want to use MongoDB Ops manager, initially a temporary proof of concept on our prod cluster by installing a simple test Ops Manager setup\nMy goal is to move atlas cluster snapshots backups from the atlas to AWS s3 buckets of an automate way by taking the daily, weekly, and monthly frequency I have on cloud backups on my cluster. According to the architecture it is possible, right?\nimage1185×1327 160 KB\nReviewing the Installation checklist for a test installation I want to ask about this con:If you lose the server, you lose everything: users and projects, metadata, backups, automation configurations, stored monitoring metrics, etc.I am new to this ops manager architecture workflow and the way it works.\nThey say If I lose the server, I lose everything including backups, so my question is:\nAs long the snapshots are stored on my AWS S3 bucket, if I lose the server, (the ops manager) will I lose the snapshots present either on my atlas cluster (that ones I see on the dashboard) or my AWS s3 bucket?I want to deep dive into this kind of disadvantaged behavior. Just for the proof of concept and also justify go for a production setup with replicasCan someone with experience baking databases up via MongoDB Ops tell me about how is the backup process and specifically with this host loss?", "username": "Bernardo_Garcia" }, { "code": "", "text": "Hi @Bernardo_GarciaJust to clarify, MongoDB Atlas and MongoDB Ops Manager are two separate products.In your first question, I can see mention of only MongoDB Atlas however the image pasted below is for Ops Manager:My goal is to move atlas cluster snapshots backups from the atlas to AWS s3 buckets of an automate way by taking the daily, weekly, and monthly frequency I have on cloud backups on my cluster. According to the architecture it is possible, right?Specific to Atlas, there currently isn’t a way to directly export Atlas cluster backups to your own S3 bucket. However, there are currently some feedback posts for Atlas under review which may be useful to read.\nIf you are wishing to link up Ops Manager to Atlas for S3 backups (from the Ops Manager deployment), then this isn’t possible.Reviewing the Installation checklist for a test installation I want to ask about this con:If you lose the server, you lose everything: users and projects, metadata, backups, automation configurations, stored monitoring metrics, etc.The con you have stated here is in specific reference to the “Test Install” which consists only of a single server where everything is installed. Production environments should use highly available deployments.Hope this helps.\nJason", "username": "Jason_Tran" }, { "code": "mongodumpmongorestoremongodumpmongodumpmongorestore", "text": "Dear @Jason_Tran, thanks for the update.\nWhen I saw the architecture picture for OpsManager, I thought ops manager deployment was able to interact with external MongoDB clusters/deployments like we have in Atlas MongoDB service. Something like an intercommunication between existing clusters to maintain them, monitor them, and backing up them, without matter if those clusters already do exist previously to the ops manager deployment or are outside of its scope. 
I was wrong then.With your information, and looking at the architecture picture, It seems then, MongoDB Ops Manager involves their own MongoDB database deployments with MongoDB agents beside them to allow this communication with Ops Manager to get their features, including backup daemon.So if we want to involve an automated backup solution for mongo clusters (aka, mongo clusters intended as MongoDB deployments), then … do those clusters should be created within MongoOps Manager deployment context?If so, then those mongo deployments will not have to do with atlas service, they will be indeed part of the MongoOps service deploymentI ask this because I am interacting with the Atlas MongoAPI right now to create restore jobs to allow me to download snapshots.\nOne solution I had in mind is to download these snapshots via API and store them somewhere, then upload them to an external storage like AWS S3 or Azure storage accounts.\nThis is an approach that I will need to be script it indeed to automated, and under a cloud-native solution perspective, I will have to think about doing it within a VM instance to store the snapshots and upload them after. Or perhaps a couple of containers to download them and store them in a PVC inside k8s and restore them if neededThe thing is I really need to do this via API and not using a mongodump / mongorestore approach since my data on atlas is growing and at some point, and I heard we can experience some performance problems since all data dumped via mongodump has to be read into memory by the MongoDB server and it backus the data and index definitions.When connected to a MongoDB instance, mongodump can adversely affect mongod performance. If your data is larger than system memory, the queries will push the working set out of memory, causing page faults.I found this on the mongodb docsAnother thing is since my atlas cluster has three replicaset nodes (1 primary and two secondaries) perhaps a mongodump / mongorestore approach against the secondary nodes could be something sustainable? I am not sure, since the data in the short term will be GB for every snapshot, and the memory ram is just 2gb. I wouldl have to scale the cluster (M10 plan actually)So to sum up, Is mongo ops manager just useful when we raise a mongo deployment databases using MongoOps manager from the beginning right?", "username": "Bernardo_Garcia" }, { "code": "mongodump", "text": "Hi @Bernardo_Garcia,So if we want to involve an automated backup solution for mongo clusters (aka, mongo clusters intended as MongoDB deployments), then … do those clusters should be created within MongoOps Manager deployment context?If so, then those mongo deployments will not have to do with atlas service, they will be indeed part of the MongoOps service deploymentIf you have concerns or requirements around backup retention for your use case, it would be worth reviewing the Atlas - Snapshot Scheduling and Retention Policy documentation.MongoDB Atlas has an integrated Cloud Backup feature for dedicated clusters (M10+). It sounds like your goal is to get Atlas backup snapshots regularly saved to storage in your own S3 buckets (per your earlier discussion on Moving existing atlas mongo snapshots to external storage.There currently isn’t a feature to directly export cloud snapshots from Atlas to S3 at this stage. 
I expect you can work out your own custom solution using the Atlas API, but I also recommend sharing your use case as a feature suggestion on the MongoDB Feedback Engine so others can upvote, comment, and follow any updates.Another thing is since my atlas cluster has three replicaset nodes (1 primary and two secondaries) perhaps a mongodump / mongorestore approach against the secondary nodes could be something sustainable? I am not sure, since the data in the short term will be GB for every snapshot, and the memory ram is just 2gb. I wouldl have to scale the cluster (M10 plan actually)You could possibly perform the mongodump with the –host and --port options where the --host would be a secondary node in your cluster. Of course this does void the fact that the mongodump can still adversely affect mongod performance. It would, in the case of specifying a secondary node in the --host option, focus the possible adverse effects to that particular node.So to sum up, Is mongo ops manager just useful when we raise a mongo deployment databases using MongoOps manager from the beginning right?Ops Manager can be useful but the important thing in the context of this discussion is that it is to be used for self hosted MongoDB Deployments and cannot interact with MongoDB Atlas clusters / servers.Hope this helps!\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran, thanks for your clarification and ideas. Indeed it should be a custom solution from my side.", "username": "Bernardo_Garcia" } ]
Installing Ops Manager for move backups to S3
2021-03-10T15:49:18.641Z
Installing Ops Manager for move backups to S3
3,624
null
[ "aggregation", "indexes" ]
[ { "code": "{\n \"fieldA\" : a,\n \"fieldB\" : b,\n \"CreateTime\" : timeStamp\n}\ndb.getCollection('test').aggregate(\n [\n {\n $match:{\"fieldA\":\"a\",\"fieldB\":\"b\"}\n },\n {\n $sort:{\"CreateTime\":-1}\n }\n ]\n)\ndb.getCollection('test').find(\n {\n \"fieldA\":\"a\",\n \"fieldB\":\"b\"\n }\n).sort({\"CreateTime\":-1})\n", "text": "I am having troubles with aggregation pipeline with $sortThe object I got is like this:and I created all three single-field indexthen I have the query like this:When I check the explain(), It shows it only hit the “CreateTime” index, which doesn’t really help when the data volume is very large.However if I try to do the find query like this:it shows all the indexes are hit and it does help with the performance a lot(finish query within 10 ms with 20 m data volume)Is there anyway I can make my aggregation use more than one index while there is a “sort” in it?I know that compound index is a choice, but I’d like to avoid using it since it would be a lot less reusable.Thanks", "username": "yuda_zhang" }, { "code": "{a:1, b:1, c:1}{a:1}\n{a:-1}\n{a:1, b:1}\n{a:-1, b:-1}\n{a:-1, b:-1, c:-1}\n{b:1}{b:1, a:1, c:1}{a:1, b:1, c:1}{b:1, c:1}a", "text": "Hi @yuda_zhang and welcome in the MongoDB Community !The compound index is the right way to do this. They are preferred to index intersections and more optimized.\nCompound indexes are also reusable in the sense where if you have the index {a:1, b:1, c:1}, you also have “for free” all the following indexes:Also, because your 2 first fields are equality matches, they are probably interchangeable (unless there is a big cardinality difference between these 2 fields and the first one is the most selective entry).So for example, if in another query you need the index {b:1}, then you could consider creating {b:1, a:1, c:1} instead to “reuse” this index even more.It would be also completely fine to have {a:1, b:1, c:1} and {b:1, c:1} if the selectivity of the field a is important and you have enough RAM to support these 2 indexes.Indexes are worth it. Use the best one possible to achieve the best performances :-).Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "fieldAFieldBcreateTime", "text": "Hi Yuda,Welcome to the MongoDB Forums. Maxime has already give you some good advice regarding compound indexes. I would also like to mention the ESR rule in case you are not familiar with this.ESR stands for Equality Sort Range.The basic premise of this is that when creating your compound indexes, you should create them in the following order:In this case the best index to support this query would be one that contains fieldA and FieldB followed by createTime.Following this rule will reduce the number of documents scanned and help you to avoid blocking sorts.Ronan", "username": "Ronan_Merrick" }, { "code": "", "text": "Thanks a lot for the answers above! The motivation of this post is that I want to be “lazy” and just create multiple single-field indexes and then not to worry about too much about the additional queries(just like Beugnet mentioned, sometimes the order in where clause matters). But seem like I should stick with the compound index. ", "username": "yuda_zhang" } ]
How to use multiple single-field indexes in an aggregation pipeline with a $sort?
2021-03-12T10:49:36.131Z
How to use multiple single-field indexes in an aggregation pipeline with a $sort?
2,745
null
[]
[ { "code": "", "text": "1)Can I dockerize Mongo DB using Docker? If yes, please provide steps of instructions\ntool names within mongo -??2)Say if the MongoDB server goes down , while user submitting the data from the website. How to achieve a way to avoid data leaks and recover the submitted data?\ntool names within mongo -??3)What are the best practices for backing up and restoring of Mongo DB?\ntool names within mongo -??", "username": "Slow-Steady_English" }, { "code": "", "text": "Hello @Slow-Steady_English, welcome to the MongoDB Community forum!Most of the information you are looking for is in the MongoDB Manual (a.k.a. documentation).See Install with DockerThe user submits data and the data is written to the database, and if the server has crashed the data is not written to the database - in general. You may want to be clear about what is your application scenario in detail to discuss this.To enable application access the database data all the time data can be replicated - using replica-sets. MongoDB replica-set allows data is stored on more than one server. A replica-set has a primary and secondary nodes. All the application data writes happen on the primary only. As the data is written to it, the same data is replicated to the secondary nodes. This way, if any node goes down, the application can keep running. In case a primary crashes, another node becomes a primary and accepts writes from the application.See ReplicationSee:And it is recommended that:Schedule periodic tests of your back up and restore process to have time estimates on hand, and to verify its functionality.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I am new to MongoDB - help with below questions
2021-03-16T07:17:41.570Z
I am new to MongoDB - help with below questions
1,243
null
[]
[ { "code": "", "text": "Is there any way to change the deployment region of an existing Realm app?I have the Atlas cluster in Frankfurt and I’m told it would be better/faster to have the Realm app also in Frankfurt or maybe Ireland. Right know my Realm app is in Virginia.Would this improve both read and writes or only writes?Alternatively, if I create a new Realm app, I would need to move the existing users to the new app and I don’t know how to do that.Thanks!\nThomas", "username": "Thomas_Hansen" }, { "code": "", "text": "Hi Thomas,At this time it is not possible to change the deployment model after the app has already been created. You would need to create a new app and choose the change in deployment model there.Would this improve both read and writes or only writes?Sync writes are sensitive to latency between the server and the cluster. If you are using a Global deployment then this will process user traffic in the region closest to the user which could potentially be far from the cluster. Choosing a Local deployment in the same region where your cluster is located would help with performance in this regard.Alternatively, if I create a new Realm app, I would need to move the existing users to the new app and I don’t know how to do that.Unfortunately it is currently not possible to move users from one app to another as per this thread.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Thanks @Mansoor_Omar for the response.It’s a major concern though. Not being able to backup users or change Realm region. First of all this means I’m stuck in the region (Virginia) I originally chose for my Realm app. And also if anything happens with my Realm app it takes the whole business down. I can just sit back and pray that nothing happens. It’s like having no backup and restore in Atlas - that would also not be an option.I hope you can raise this internally at MongoDB.Best,\nThomas", "username": "Thomas_Hansen" } ]
Change Realm deployment region?
2021-03-12T08:12:33.341Z
Change Realm deployment region?
3,987
null
[ "data-modeling", "devops" ]
[ { "code": "", "text": "When would I create a new database in MongoDB - does creating a separate database offer any advantage? From my understanding all the databases in a cluster share the same hardware resources so there’s no advantage there. Same for collections.\nWhen would I create a new collection? Only when the data looks different from other collections? or is there some sort of configuration, isolation etc that creating a new collection vs using the same collection provides?", "username": "Shruthi_s1" }, { "code": "", "text": "Hi @Shruthi_s1,Welcome to MongoDB community.Its true that collection and databases share the same resources and potentially connection string from your drivers.Moreover database is a logical context and does not necessarily influence number of files or eventually data size.Database seperation is usually done as same collection have different context or for security measures where I have a user per database and it can read/write data to that database only.However, seperating collections is a data design consideration.Its important to remember that we want to access as less documents as possible to fetch our information while maintaining a good tradeoff to our index sizes, write performance and concurrency patterns.In general a completely separate business logic entity should store its data in. A sperate collection and have its own indexes and access pattern.There are other considerations like relationship, compression and future scale out (sharding) that can influence a good choice of schema and collection seperation.I recommend reading the following:https://www.mongodb.com/article/schema-design-anti-pattern-summary/https://www.mongodb.com/article/mongodb-schema-design-best-practices/A summary of all the patterns we've looked at in this seriesThanks.\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hello @Shruthi_s1, welcome to the MongoDB Community forum!In general, an application has some data associated with it. A typical web application has a database where the application’s data is stored. For example, a financial accounting application has various modules like accounts receivables, payables and general ledger - this is a categorization of an application at a very high level. As such each of these modules can be an application by itself. The data is also categorized by its functionality.If you are building such an application, it is likely the application’s data is stored in three databases - one for each module. And, within the accounts receivables module there are various functions like, customer management, invoice management, etc. Each of these data is different and is stored in different collections within the accounts receivables database.Another example is a blogging application. There are users, blog posts and reviews. The data is stored in different collections - users, blogs and reviews (maybe users and blogs plus reviews). It will be impractical to store user and blog information in a same collection. Because, user data is different, it has different fields and structure - user name, password, email, etc. A post’s data is a title, content, the user who wrote it, reviews, etc. These cannot be put together in same collection. The data is inserted, updated, and queried from the collection. To get user data you go to user collection.So, you can think about collections are a grouping of similar data. And a database is a grouping of similar collections, i.e., data serving a larger functionality or a module. 
You would not like storing customer and invoice information in a same collection - it is impractical to store and use. Analogically, it is like putting salt and pepper in different containers - different containers for different ingredients serving different purposes.MongoDB has standalone, replica-set and sharded clusters. These configurations serve different purposes.A standalone is a single server where all the databases (and their collections) are stored. In case the server goes down, your application and its users will wait until the server is again up and running.A replica-set has the feature that the data is replicated on multiple databases servers. So, the advantage is if one of the servers die, other servers with their replicated data will go on serving the application and its users.A sharded cluster has multiple shards - each shard is a replica-set - and the application’s data is distributed among these shards. For example, the customer data is stored on multiple shards. If there are five shards, and there are one hundred customers, you can think that each shard stores about twenty customer data (actual distribution is done based on criteria like shard key).MongoDB has sharding at collection level, and a sharded cluster can have sharded and un-sharded data.How do you determine what cluster, database or collection? It is a broad subject. In fact it’s a combinations of various subjects like data modeling (or database design), then there is application design, etc. And, these are also based upon the requirements of an application.Some useful references:", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
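A small illustrative sketch of the blogging example described above, with one collection per entity; every name and field value below is made up for illustration.

// Users, posts and reviews each live in their own collection.
db.users.insertOne({ name: 'alice', email: '[email protected]' })
db.posts.insertOne({ title: 'First post', authorName: 'alice', content: 'Hello world' })
db.reviews.insertOne({ postTitle: 'First post', reviewer: 'bob', rating: 4 })

// Each query goes to the collection that owns that kind of data.
db.posts.find({ authorName: 'alice' })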
Creating a new database vs a new collection vs a new cluster
2021-03-16T00:13:12.705Z
Creating a new database vs a new collection vs a new cluster
17,339
null
[ "atlas-cluster", "atlas", "upgrading" ]
[ { "code": "", "text": "I have two clusters in my MongoDB Atlas account, one is M0 and is in Free tier. Other one is, M10 and is in Dedicated Cluster tier. Both these clusters reside in the same region AWS / Frankfurt (eu-central-1). The need is, I have to upgrade my M0 cluster for growing connections and the data storage. But instead of upgrading the cluster, I just want it to be pointed to the M10 cluster that I already have in my account. Is there a way of doing this through MongoDB Atlas? or I will have to upgrade the cluster?Reason for not upgrading the same cluster (M0) is, when I tried upgrading it, there appears a warning that saysNote: Upgrading your cluster to a dedicated tier is irreversible. You can make other cluster changes at any time.and I don’t want to have two dedicated clusters as it will add to cost.", "username": "Avani_Khabiya" }, { "code": "mongodumpmongorestore", "text": "Hi @Avani_Khabiya,But instead of upgrading the cluster, I just want it to be pointed to the M10 cluster that I already have in my account. Is there a way of doing this through MongoDB Atlas? or I will have to upgrade the cluster?Unfortunately you cannot point the M0 cluster to the M10 cluster if you are wishing to join these up.You would have to upgrade the M0. However, if not required, you do not have to jump to M10 straight away. There are the M2 and M5 shared tiers as well.Reason for not upgrading the same cluster (M0) is, when I tried upgrading it, there appears a warning that saysNote: Upgrading your cluster to a dedicated tier is irreversible. You can make other cluster changes at any time.Yes, this is correct in that you cannot downgrade an M10+ tier cluster back down to a M0,M2 or M5. You would need to perform a mongodump and mongorestore if you wish to transfer your data back from an M10+ tier cluster down to the M0,M2 or M5 tiers.and I don’t want to have two dedicated clusters as it will add to cost.As advised above, there is the option of M2 and M5 shared tier clusters as well.Best Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Point M0 cluster to existing M10 cluster that is in the same region and account
2021-03-15T10:45:03.090Z
Point M0 cluster to existing M10 cluster that is in the same region and account
4,090
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi,I have taken look at the MongoDB Shell source code in Github:\nhttps://github.com/mongodb/mongo/blob/c575750f73b7a490a60919777dc49c45ec4f2e0c/src/mongo/shell/linenoise.cppI have found out that MongoDB Shell have built in terminal device on which the user enters a stream of data and it manage cursor position etc ( using linenoise that is a readline replacement ) . It read a keystroke from the keyboard and translate it using the function readUnicodeCharacter to get Unicode (UChar32) character.Why usage of simple code with scanf instead of custom console terminal is not good enough?Example:\nchar text[256];\nprintf(“Enter somthing: “);\nscanf(”%[^\\n]”,text);\nprintf(“text = %s”,text);", "username": "sakdkjkj_jjsdjds" }, { "code": "mongomongomongoshmongomongoshmongodb-js/mongosh", "text": "Hi @sakdkjkj_jjsdjds,Line editing libraries like Readline and Linenoise provides extra functionality over just reading raw character input. Linenoise aims to provide common line editing features in a much more concise package than Readline.Core Linenoise features include support for:The mongo shell version of Linenoise has also evolved with additional keyboard shortcuts and cross-platform behaviour.However, Linenoise is only used in the legacy mongo shell.Last year MongoDB introduced a new MongoDB shell (mongosh) with a more modern user experience. The new shell is available as a standalone tool and is also integrated into recent versions of MongoDB Compass.Unlike the legacy mongo shell, mongosh is in active development and the team is looking for feedback on how the experience can be improved.For more information see:MongoDB ShellFeedback: How can we improve the MongoDB Shell?GitHub source: mongodb-js/mongoshRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why MongoDB Shell use custom console terminal
2021-03-15T16:36:57.091Z
Why MongoDB Shell use custom console terminal
2,417
null
[ "spring-data-odm" ]
[ { "code": "2021-03-10 22:37:07.304 INFO 8467 --- [ntLoopGroup-3-7] org.mongodb.driver.connection : Opened connection [connectionId{localValue:7, serverValue:456818}] to server.link.usually.here:27017\n2021-03-10 22:37:47.379 INFO 8467 --- [ntLoopGroup-3-8] org.mongodb.driver.connection : Opened connection [connectionId{localValue:9, serverValue:456838}] to server.link.usually.here:27017\n2021-03-10 22:37:47.396 INFO 8467 --- [tLoopGroup-3-10] org.mongodb.driver.connection : Opened connection [connectionId{localValue:8, serverValue:456819}] to server.link.usually.here:27017\n2021-03-10 22:37:47.396 INFO 8467 --- [tLoopGroup-3-11] org.mongodb.driver.connection : Opened connection [connectionId{localValue:11, serverValue:456819}] to server.link.usually.here:27017\n2021-03-10 22:37:47.406 INFO 8467 --- [ntLoopGroup-3-9] org.mongodb.driver.connection : Opened connection [connectionId{localValue:10, serverValue:456819}] to server.link.usually.here:27017\n public Mono<Guideline> addGuideline(Guideline guideline, String keycloakUserId) {\n Mono<Guideline> guidelineMono = userRepository.findByKeycloakUserId(keycloakUserId)\n .flatMap(user -> {\n return teamRepository.findUserInTeams(user.get_id());\n }).zipWith(instructionRepository.findById(guideline.getInstructionId()))\n .zipWith(userRepository.findByKeycloakUserId(keycloakUserId))\n .flatMap(objects -> {\n User user = objects.getT2();\n Instruction instruction = objects.getT1().getT2();\n Team team = objects.getT1().getT1();\n if (instruction.getTeamId().equals(team.get_id())) {\n guideline.setAddedByUser(user.get_id());\n guideline.setTeamId(team.get_id());\n guideline.setDateAdded(new Date());\n guideline.setGuidelineStatus(GuidelineStatus.ACTIVE);\n guideline.setGuidelineSteps(Arrays.asList());\n return guidelineRepository.save(guideline);\n } else {\n return Mono.error(new InstructionDoesntBelongOrExistException(\"Unable to add, since this Instruction does not belong to you or doesn't exist anymore!\"));\n }\n });\n return guidelineMono;\n }\n", "text": "Hello,For a service (which is in turn called by a front-end application) with reactive API’s, using Spring Webflux and ReactiveMongoRepository for CRUD actions and ReactiveMongoTemplate for ChangeStreams, I am seeing a lot of connections being opened, for just some minor read actions.In a short amount of time, I see this (and many more) amount of connections being opened. The overview below is just a few, but I am seeing many more:For example: I use the code below to retrieve the Guideline after adding it. Since I am programming user reactive approach, I have to open multiple connections to perform certain actions.My question basically is: am I doing this the right way? And how can I limit the amount of connections, since this looks like there is just to many connections are being opened for minor and simple actions.Since I am new to Spring Webflux in combination with ReactiveMongoRepository and ReactiveMongoTemplate, any help is welcome. Thanks in advance.", "username": "vv001" }, { "code": "", "text": "From my experience working with Mongo and reactive programming, initially Mongo opens connection for the number of connection pool set. Any idea what is connection pool set as?Edit: Based on your code, there are 3 calls to DB and you see 5 connections…so are you concerned there are 2 extra connections?", "username": "major1mong" }, { "code": "MongoClient", "text": "With MongoDB Java Driver, the default connection pool is set at 100 (and it is configurable). 
With Spring Data MongoDB the, typically, the MongoClient is registered in a configuration bean.With Java driver it is recommended that the open connections are closed with the application to cleanup the open resources.", "username": "Prasad_Saya" }, { "code": "@Autowired\nReactiveMongoTemplate reactiveMongoTemplate;\n", "text": "I have not really really setup a MongoClient. Is that required? In order to be able to close connections in the application,@Repository\npublic interface SettingsRepository extends ReactiveMongoRepository<Settings, String> {\n}The above is how I use the ReactiveMongoTemplate and ReactiveMongoRepository in my code. Do I still need to add the MongoClient in order to close the connections with the application?", "username": "vv001" }, { "code": "testlocalhost27017", "text": "If you haven’t done any connection configuration, the default takes effect - like, the database connected is test on default localhost and port 27017 .I have not really really setup a MongoClient. Is that required?If you are writing sample code and data, then the setup may not be required. Configuration allows tuning the connection and its attributes. Spring Data MongoDB’s Reference has more information.", "username": "Prasad_Saya" }, { "code": "", "text": "Correct me if I am wrong: where I see the risk in that approach is the following. I have end points in Spring Webflux that are connected to MongoDB changestreams, which should be open as long as the user is listening to them. If I limit the connections in the pool or the idle time, that might cause issues with these changestreams.Is that a correct assumption?", "username": "vv001" } ]
Using Spring ReactiveMongoTemplate and ReactiveMongoRepository vs. amount of MongoDB Atlas connections
2021-03-10T22:00:33.525Z
Using Spring ReactiveMongoTemplate and ReactiveMongoRepository vs. amount of MongoDB Atlas connections
10,091
null
[ "aggregation", "queries", "python" ]
[ { "code": "cursor=execute(\"select date_trunc('day',timestamp1) as day,avg(id13) from timestamppsql where timestamp1 >='2010-01-01 00:05:00' and timestamp1<='2011-01-01 00:05:00' group by day\")\n cursor=execute(\"select date_trunc('hour',timestamp1) as hour,avg(id13) from timestamppsql where timestamp1 >='2010-01-01 00:05:00' and timestamp1<='2010-01-01 10:05:00' group by hour order by hour desc\")\n", "text": "Hi guys.\nI have 2 queries where i find very hard to convert them to mongodb type language.\nThe first one is this:And the second one is this:I cant find out whats the equivalent date_trunc in mongodb.Any help would be very much appreciated.Thanks in advance!", "username": "harris" }, { "code": "", "text": "Hey Harris,Take a look at the $dateToString aggregation function Specifically, if you want just the hour, you could use the %d for day and %H for hour do something like:{ $dateToString: { format: “%d”, date: “$date” } } // for day of the month\n{ $dateToString: { format: “%H”, date: “$date” } } // for hourMore on the options can be found here: https://docs.mongodb.com/manual/reference/operator/aggregation/dateToString/#format-specifiersLet me know if that helps ", "username": "ado" }, { "code": "cursor = mydb1.mongodbtime.aggregate(\n [\n {\n \"$match\": {\n \"timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2011-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n {\n \"$group\": {\n\n \"_id\": \"null\",\n \"day\":{\"$dateToString\": { format: \"%d\", \"timestamp1\": \"$timestamp1\" }},\n \"avg_id13\": {\n \"$avg\": \"$id13\"\n }\n }\n }\n ]\n)\ndocuments must have only string keys, key was <built-in function format>\n", "text": "{ $dateToString: { format: “%d”, date: “$date” } } // for day of the monthI am sorry for possible mistakes i am still new in mongodb:\nThis is what i have wrote:\n(timestamp1 is a datetime object)The output is this:", "username": "harris" }, { "code": "documents must have only string keys, key was <built-in function format>formatcursor = mydb1.mongodbtime.aggregate(\n [\n {\n \"$match\": {\n \"timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2011-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n {\n \"$group\": {\n\n \"_id\": {\"$dateToString\": { \"format\": \"%d-%m-%Y\", \"date\": \"$timestamp1\" }},\n \"avg_id13\": {\n \"$avg\": \"$id13\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\":0,\n \"day\":\"$_id\",\n \"avg_id13\":1\n }\n }\n ]\n)\n", "text": "Hey Harris,You are receiving the error documents must have only string keys, key was <built-in function format> because you are missing quotes around the format field.Also you don’t need to have the “null” id. You only need that if you are creating a total for all documents.Try this:", "username": "Ronan_Merrick" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert sql date query to mongodb type query
2021-03-11T15:40:46.722Z
Convert sql date query to mongodb type query
4,553
null
[ "java" ]
[ { "code": "org.mongodb:mongodb-driver-syncConnectionPoolListenerMongoMetricsConnectionPoolListenerConnectionPoolWaitQueueEnteredEventConnectionPoolWaitQueueExitedEventConcurrentPoolgetCountgetInUseCountgetAvailableCount", "text": "Micrometer is a metrics library that provides some support for MongoDB. I’m migrating Micrometer to use the latest java driver (from 3.x to 4.x, org.mongodb:mongodb-driver-sync).One of the metrics that Micrometer provides for MongoDB is the size of the wait queue (this information seems to be useful for the users). The instrumentation is implemented through the ConnectionPoolListener, see MongoMetricsConnectionPoolListener. Since the ConnectionPoolWaitQueueEnteredEvent and the ConnectionPoolWaitQueueExitedEvent were removed in 4.x, we are not able to provide this metric that easily with 4.x.ConcurrentPool has public methods to track this (getCount, getInUseCount, getAvailableCount) but its interface does not and getting a reference could also be tricky.Is there a recommended way to track the wait queue size using the latest java driver (4.x)?", "username": "Jonatan_Ivanov" }, { "code": "ConnectionCheckOutStartedEventConnectionCheckedOutEventConnectionCheckOutFailedEvent", "text": "The closest approximation would be to consider ConnectionCheckOutStartedEvent as entering the wait queue and either ConnectionCheckedOutEvent or ConnectionCheckOutFailedEvent as leaving the wait queue. The only difference is that the time between those events will include, in the event that a pooled connection is not available, the time spent opening a new connection and completing the connection handshake.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Thank you very much, this makes sense, we implemented the queue size tracking the way you suggested.", "username": "Jonatan_Ivanov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Java] Monitoring WaitQueue size
2021-03-12T22:23:31.399Z
[Java] Monitoring WaitQueue size
2,780
null
[ "graphql" ]
[ { "code": "graphql-operations.tsexport function useGetAllLessonsQuery(\n baseOptions?: ApolloReactHooks.QueryHookOptions<\n Types.GetAllLessonsQuery,\n Types.GetAllLessonsQueryVariables\n >\n) {\n return ApolloReactHooks.useQuery<\n Types.GetAllLessonsQuery,\n Types.GetAllLessonsQueryVariables\n >(GetAllLessonsDocument, baseOptions);\n}\nError getting lessons: Error: GraphQL error: reason=\"could not validate document: \\n\\twords.25: Invalid type. Expected: undefined, given: null\\n\\twords.31: Invalid type. Expected: undefined, given: null\\n\\twords.33: Invalid type. Expected: undefined, given: null\\n\\twords.34: Invalid type. Expected: undefined, given: null\\n\\twords.37: Invalid type. Expected: undefined, given: null\\n\\twords.38: Invalid type. Expected: undefined, given: null\\n\\twords.39: Invalid type. Expected: undefined, given: null\\n\\twords.40: Invalid type. Expected: undefined, given: null\\n\\twords.42: Invalid type. Expected: undefined, given: null\\n\\twords.45: Invalid type. Expected: undefined, given: null\\n\\twords.46: Invalid type. Expected: undefined, given: null\\n\\twords.48: Invalid type. Expected: undefined, given: null\\n\\twords.49: Invalid type. Expected: undefined, given: null\\n\\twords.55: Invalid type. Expected: undefined, given: null\\n\\twords.58: Invalid type. Expected: undefined, given: null\"; code=\"SchemaValidationFailedRead\"; untrusted=\"read not permitted\"; details=map[]export type GetAllLessonsQuery = (\n { __typename?: 'Query' }\n & { lessons: Array<Maybe<(\n { __typename?: 'Lesson' }\n & Pick<Lesson, '_id' | 'lesson_id'>\n & { words?: Maybe<Array<Maybe<(\n { __typename?: 'Word' }\n & Pick<Word, '_id' | 'word'>\n )>>> }\n )>> }\n);\nexport type GetAllLessonsQueryVariables = Exact<{ [key: string]: never; }>;export const GetAllLessonsDocument = gql`\n query getAllLessons {\n lessons {\n _id\n lesson_id\n words {\n _id\n word\n }\n }\n }\n`;\n", "text": "Hello,I’m building a web app in React using MongoDB Atlas and MongoDB Realm. I was able to successfully authenticate but am struggling to query my data. I followed this tutorial: https://docs.mongodb.com/realm/tutorial/web-graphql/ but am using JS instead of Typsecript (not sure if that matters).When I try to call this function in graphql-operations.ts:I get the following error:Error getting lessons: Error: GraphQL error: reason=\"could not validate document: \\n\\twords.25: Invalid type. Expected: undefined, given: null\\n\\twords.31: Invalid type. Expected: undefined, given: null\\n\\twords.33: Invalid type. Expected: undefined, given: null\\n\\twords.34: Invalid type. Expected: undefined, given: null\\n\\twords.37: Invalid type. Expected: undefined, given: null\\n\\twords.38: Invalid type. Expected: undefined, given: null\\n\\twords.39: Invalid type. Expected: undefined, given: null\\n\\twords.40: Invalid type. Expected: undefined, given: null\\n\\twords.42: Invalid type. Expected: undefined, given: null\\n\\twords.45: Invalid type. Expected: undefined, given: null\\n\\twords.46: Invalid type. Expected: undefined, given: null\\n\\twords.48: Invalid type. Expected: undefined, given: null\\n\\twords.49: Invalid type. Expected: undefined, given: null\\n\\twords.55: Invalid type. Expected: undefined, given: null\\n\\twords.58: Invalid type. 
Expected: undefined, given: null\"; code=\"SchemaValidationFailedRead\"; untrusted=\"read not permitted\"; details=map[]The schema validation passes in the graphQL database and it’s just a read operation so I don’t really understand how it can fail schema validation? I’ve set read & write permissions to true. Maybe somebody out here is more familiar with GraphQL querying and can point me in the right direction?Types.GetAllLessonsQuery:Types.GetAllLessonsQueryVariables:export type GetAllLessonsQueryVariables = Exact<{ [key: string]: never; }>;GetAllLessonsDocument (this passes GraphiQL validation):Thank you!", "username": "Mark_Rogers" }, { "code": "untrusted=\"read not permitted\"", "text": "The root problem is the untrusted=\"read not permitted\" If you go to your realm console and select rules on the left hand column, there’s a permissions section that will appear. Go ahead and edit your permissions there. Once you have the permissions set up to enable that particular user (or an anonymous user) to read data, your error should go away.", "username": "Brian_Christensen" }, { "code": "", "text": "In my case, this happens because GraphQL > Validation Action is “Error” for “Reads”.Workaround: Set Validation Action to “Warn”.", "username": "Hendy_Irawan" } ]
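One common cause of the SchemaValidationFailedRead error above is literal null entries stored in the words array of the lesson documents. A hedged mongo shell sketch for locating and cleaning them follows; the collection name lessons is an assumption about how the Lesson GraphQL type is backed. If words instead holds references, look for ids that point at deleted Word documents.

// Find lesson documents whose words array contains null elements.
db.lessons.find({ words: { $elemMatch: { $type: 'null' } } })

// Remove the null elements (only after confirming they are not needed).
db.lessons.updateMany({}, { $pull: { words: null } })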
SchemaValidationFailedRead on GraphQL
2020-07-15T02:23:52.554Z
SchemaValidationFailedRead on GraphQL
5,229
null
[]
[ { "code": "", "text": "Hi there. I am currently using MongoDB atlas for my database on my react application and when I run a command to activate the database for local use it will return the info that’s on it. But with my application and deploying it online using AWS amplify to a domain the info won’t show for anyone else on my website. A friend recommended using realm but want to be sure this is the right step. Thanks for any help", "username": "Jack_Haugh" }, { "code": "", "text": "Hello @Jack_Haugh, welcome to the MongoDB Community forum!The react application should retrieve the data even when hosted on the AWS Amplify - as it is doing so from the application on the local machine. It is the configuration and setups that you may want to make sure are complete - likely from the instructions provided by the hosting provider.I see you are connecting to MongoDB database on Atlas.A friend recommended using realm but want to be sure this is the right step.I don’'t know if you should use Realm or not - is it part of your application and its design?", "username": "Prasad_Saya" }, { "code": "", "text": "Hi there @Prasad_Saya. Thanks for the reply. I may have said it wrong. What I’m trying to achieve is that a user of any sort comes to our website and they can see the info that should be outputted on the page but it doesn’t show globally and I’m unsure on what. I will send code into here soon to help with he case. If you want to see my problem then the link to the page is https://www.student-mania.com/forumPage . If you see the query page that’s were you type the info and then it sends to the forum page.", "username": "Jack_Haugh" }, { "code": "", "text": "Server file", "username": "Jack_Haugh" }, { "code": "", "text": "Hello @Jack_Haugh, I see some code you have posted. Please format the code properly - see Formatting code and log snippets in posts. And, which part of the code you had posted is not working as you wanted?", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya i just put a hyperlink to the server file for review", "username": "Jack_Haugh" }, { "code": "Mongoose#connectconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n\nclient.connect(err => {\n const collection = client.db(\"test\").collection(\"devices\");\n // perform actions on the collection object\n client.close();\n });\n\n// admin:studentmania modulesDB\n// Connecting to the MongoDB\nmongoose.connect(mongoDB,{useNewUrlParser:true});", "text": "Hello @Jack_Haugh,Tell me what are these code snippets doing? You are connecting to MongoDB NodeJS driver and closing the connection. Then the Mongoose#connect - is it connecting?", "username": "Prasad_Saya" }, { "code": "", "text": "Hi again.\nThe server code was giving to us by a lecturer before and it worked locally by running a command called node server.js that ran the server file and took info from our server on MongoDB. Right now if i ran this command it will show the info locally no problem but if i searched the website and looked at forum page on my phone it wont show and thats where im trying to get it to run globally for use by everyone", "username": "Jack_Haugh" }, { "code": "mongoDB", "text": "@Jack_Haugh, Id like to know where is this field coming from: mongoDB. This is meant to be the connection URI to the database.// Connecting to the MongoDB\nmongoose.connect( mongoDB, { useNewUrlParser:true } );", "username": "Prasad_Saya" }, { "code": "", "text": "I’m unsure buddy on what you mean. 
To be honest this was given to us a year or two ago and we never questioned it", "username": "Jack_Haugh" } ]
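A minimal sketch of what the mongoDB variable in the server file is expected to hold: the Atlas connection string, ideally read from an environment variable so the same server.js works both locally and on the hosting platform. The user, password and cluster host below are placeholders, not the real values.

const mongoose = require('mongoose');

// Prefer an environment variable over hard-coding credentials in the repo.
const mongoDB = process.env.MONGODB_URI ||
  'mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/modulesDB?retryWrites=true&w=majority';

mongoose.connect(mongoDB, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('Connected to MongoDB Atlas'))
  .catch(err => console.error('Connection failed:', err));

If the deployed site still shows no data, the Atlas Network Access IP allowlist is another common culprit, since a server running on a new host connects from an IP address that may not be allowed yet.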
Using MongoDB to show data online
2021-03-15T14:13:13.886Z
Using MongoDB to show data online
5,779
null
[ "aggregation", "queries" ]
[ { "code": "{ \n \"_id\" : ObjectId(\"604b67f16cebca011964cee6\"), \n \"sym\" : \"COST\", \n \"DateText\" : \"2008-01-02T12:00:00\", \n \"AdjClose\" : NumberDecimal(\"67.22\"), \n \"PeriodChange\" : NumberDecimal(\"0\"), \n}\n{ \n \"_id\" : ObjectId(\"604b67f16cebca011964cee7\"), \n \"sym\" : \"COV\", \n \"DateText\" : \"2008-01-02T12:00:00\", \n \"AdjClose\" : NumberDecimal(\"42.4\"), \n \"PeriodChange\" : NumberDecimal(\"0\"), \n}\n{\"sym\" : \n { \"$in\" : \n [\n {\n \"distinct\" : \"StocksTimeSeries\",\n \"key\" : \"sym\",\n \"query\" : { \"AdjClose\" : { \"$lt\" : \"50\"}}\n }\n ]\n }\n}\ndb.getCollection(\"StocksTimeSeries\").find({\"sym\" :{\"$in\" : db.getCollection('StocksTimeSeries').distinct(\"sym\",{\"AdjClose\":{ \"$lt\" : 50 }}) }})\n{ \n \"_id\" : ObjectId(\"604b67f16cebca011964cee7\"), \n \"sym\" : \"COV\", \n \"DateText\" : \"2008-01-02T12:00:00\", \n \"AdjClose\" : NumberDecimal(\"42.4\"), \n \"PeriodChange\" : NumberDecimal(\"0\"), \n}\n", "text": "Hi All,\nI am new to MongoDB. I want to see all records from a collection where symbol is in the list of all distinct symbols which have AdjClose value less than 60.\nMy data looks likeI am trying this query.But it isn’t returning any record.On the other hand, if I try to use shell command, it works.Considering these two records, I am expecting the result to be second record.Could you guys please help me.Thanks", "username": "Roopesh_Kumar" }, { "code": "", "text": "Please take a look at the following and edit your code accordingly.It would also help if you could publish some sample documents from your collection and from the expected results.It works in the shell because the expression db.getCollection(‘StocksTimeSeries’).distinct(“sym”,{“AdjClose”:{ “$lt” : 50 }}) get executed first and the the result is used as the value of the $in operator.You might need to use the aggregation framework to accomplish what you want.", "username": "steevej" }, { "code": "", "text": "Hi Steevaj,\nThanks for the code formatting information. I have updated the post now.\nAlso, could you please give me some idea for for how to write a query with aggregation framework? 
Would the same query run if executed with collection.aggregate?\nSorry if I sound idiotic.Thanks", "username": "Roopesh_Kumar" }, { "code": "db.getCollection(\"StocksTimeSeries\").find({\"sym\" :{\"$in\" : db.getCollection('StocksTimeSeries').distinct(\"sym\",{\"AdjClose\":{ \"$lt\" : 50 }}) }}){\n\t\"_id\" : ObjectId(\"604f5846756789b5fd660e34\"),\n\t\"sym\" : \"COV\",\n\t\"DateText\" : \"2008-01-03T12:00:00\",\n\t\"AdjClose\" : NumberDecimal(\"52.4\"),\n\t\"PeriodChange\" : NumberDecimal(\"0\")\n}\n{$lt:50}c = db.getCollection( \"StocksTimeSeries\" ) ;\nc.find( {\"AdjClose\":{ \"$lt\" : 50 }} ) ;\npipeline = [\n\t{\n\t\t\"$match\" : {\n\t\t\t\"AdjClose\" : {\n\t\t\t\t\"$lt\" : 50\n\t\t\t}\n\t\t}\n\t},\n\t{\n\t\t\"$group\" : {\n\t\t\t\"_id\" : \"$sym\"\n\t\t}\n\t},\n\t{\n\t\t\"$lookup\" : {\n\t\t\t\"from\" : \"StocksTimeSeries\",\n\t\t\t\"localField\" : \"_id\",\n\t\t\t\"foreignField\" : \"sym\",\n\t\t\t\"as\" : \"list\"\n\t\t}\n\t},\n\t{\n\t\t\"$unwind\" : \"$list\"\n\t},\n\t{\n\t\t\"$replaceRoot\" : {\n\t\t\t\"newRoot\" : \"$list\"\n\t\t}\n\t}\n]\ndb.getCollection(\"StocksTimeSeries\").aggregate( pipeline ) ;\n", "text": "First, I added a extra document from the list you supplied to illustrate an issue I found with the query:db.getCollection(\"StocksTimeSeries\").find({\"sym\" :{\"$in\" : db.getCollection('StocksTimeSeries').distinct(\"sym\",{\"AdjClose\":{ \"$lt\" : 50 }}) }})The extra document is:DateText and AdjClose were changed. The latter so that {$lt:50} is false.You query results in all documents with sym equals to COV, including the one with AdjClose equals to 52.4. Depending of what you want, only the one with AdjClose less than 50 or both records.If the requirement is to only have the one with less than 50 then your query can be simplified to:If you really want what your original query will give with the additional document I added. With the aggregation framework you will need:If you are unfamiliar with the aggregation framework, I suggest that you take M121 from https://university.mongodb.com.", "username": "steevej" } ]
How to get data from filtered data
2021-03-14T14:25:22.477Z
How to get data from filtered data
4,611
https://www.mongodb.com/…9_2_1024x550.png
[]
[ { "code": "", "text": "when I copy paste link given : mongo “mongodb+srv://sandbox.d790l.mongodb.net/sample_airbnb” --username m001-student, it shows the following error even though I have created the cluster and username as mentioned in the course.Also am able to connect from NoSQLBooster by using the following “mongodb+srv://m001-student:[email protected]/sample_airbnb”.image1333×716 21.9 KB", "username": "shanif_hussain" }, { "code": "", "text": "Welcome to the community!You have entered the command in wrong area of IDE\nEnter it in terminal\nYour connect string looks fine", "username": "Ramachandra_Tummala" } ]
Cannot Connect to mongodb Atlas
2021-03-15T05:28:52.956Z
Cannot Connect to mongodb Atlas
2,076
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hello,I have a mobile app which uses realm sync and I wish I could modify data through web app and rest api made with node js.I have 2 solutions to update the data with Node JS : Realm and Realm-Web\nI have tried both and both work but with Realm-Web data is not updated in real time in the mobile app (which uses sync). Is it normal ? So should I be using Realm? The problem is, with Realm, I’m going to have to store all the .realm files from all the partitions where my Node JS app is hosted.So which one is better ? Thanks for your help ! ", "username": "Arnaud_Combes" }, { "code": "", "text": "[…] but with Realm-Web data is not updated in real time in the mobile app (which uses sync). Is it normal?Short answer is no. Documents updated via Realm Web should trigger Sync to a mobile device if the document updated is matching the partition of the synced Realm. Perhaps the sync logs, viewable via the MongoDB Realm UI can reveal the cause.If you don’t need the data access rules that MongoDB Realm provide when updating data from your Node.js rest API, you could consider using the MongoDB Node.js Driver and connecting directly to your Atlas cluster.", "username": "kraenhansen" }, { "code": "", "text": "the data access rules that MongoDB Realm provideThanks a lot, it works perfectly ", "username": "Arnaud_Combes" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm or realm-web for Rest Api
2021-03-11T17:01:30.113Z
Realm or realm-web for Rest Api
2,466
null
[ "queries" ]
[ { "code": "jobs:[\n{\n type:'Dev',\n nbRequired:2,\n collabs:['name1','name2','name3']\n},\n{\n type:'Front',\n nbRequired:2,\n collabs:['name1','name2','name3']\n},\n]\ncollab:[\n {type:'dev',name:'name1'},\n {type:'dev',name:'name2'},\n {type:'dev',name:'name3'},\n {type:'Front',name:'name1'},\n {type:'Front',name:'name2'},\n {type:'Front',name:'name3'}\n]\n", "text": "Hi everyone !!There is a query I’v been on for a long time now and I need help, if you’d be pleased !In my documents there is a field like this :At the end I’d like to get an array like this to be used in my js script:Does anybody have an idea of what the best query would be ?Thanks a lot !!", "username": "Preney_Valere" }, { "code": "", "text": "Hello @Preney_Valere, welcome to the MongoDB Community forum!To get the required result you need to write an Aggregation query. I see that you can use the aggregation stages $unwind and $group. In addition, take a look at the Array Expression Operators used in aggregation queries.", "username": "Prasad_Saya" } ]
Help with mongodb query on an array field
2021-03-14T19:00:42.388Z
Help with mongodb query on an array field
1,342
null
[ "schema-validation" ]
[ { "code": "db.createCollection( \"test_val\" , { \n validator: { $jsonSchema: { \n bsonType: \"object\" ,\n properties: { \n PartId: { bsonType: \"int\",\n description :\"int required\"\n }\n } \n \n }) \ndb.test_val.insertOne({\"PartId\" : 123})\n\nWriteError({\n\t\"index\" : 0,\n\t\"code\" : 121,\n\t\"errmsg\" : \"Document failed validation\",\n\t\"op\" : {\n\t\t\"_id\" : ObjectId(\"604c3a16c569f2096c735bed\"),\n\t\t\"PartId\" : 123\n\t}\n})\ndb.test_val.insertOne({\"PartId\" : NumberInt(123)})\n\n{\n\t\"acknowledged\" : true,\n\t\"insertedId\" : ObjectId(\"604c3a54c569f2096c735bee\")\n}\n", "text": "I am testing a $jsonSchema. I don’t understand why the bsonType “int” requires the numberInt() function.Simple validation, where PartId is type intHere I insert an integer (I think?) but it fails.It works with the NumberInt() function.", "username": "David_Lange" }, { "code": "", "text": "Hi @David_Lange,I think by default the shell turn a number into a decimal.If you need explicitly int you have to convert it like you haveThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "doubleNumberInt()mongodoublenumbernumberint32doubledecimal128", "text": "Hi @David_Lange,MongoDB’s internal document representation uses the BSON binary serialisation format which supports standard JSON data types as well as extended types which are not native to JavaScript (for example, 32-bit and 64-bit integers).JavaScript’s built-in Number type is a double-precision 64-bit binary value which maps to the BSON type of double.To align your data model with validation you could:Use the NumberInt() wrapper to pass a value as a 32-bit integer in the mongo shell. Drivers also support extended MongoDB type representations but will use naming consistent with the driver API.Change your validator to use the double type to match JavaScript’s default Number type.Change your validator to use the number BSON alias which will match against any numeric type (32-bit integer, 64-bit integer, double, or decimal128).The number alias provides some flexibility for numeric fields with a range of values that may require different numeric precision.For example, a 32-bit integer (int32) uses fewer bytes in an uncompressed document than a double or decimal128 value:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$jsonschema integer
2021-03-13T04:15:04.653Z
$jsonschema integer
4,299
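A small follow-up sketch of the "number" alias option mentioned in the thread above (hypothetical collection name). The alias matches any numeric BSON type — int32, int64, double, or decimal128 — so both insert statements from the thread would pass validation:

    db.createCollection("test_val_num", {
      validator: {
        $jsonSchema: {
          bsonType: "object",
          properties: {
            PartId: {
              bsonType: "number",               // any numeric BSON type
              description: "numeric value required"
            }
          }
        }
      }
    })

    db.test_val_num.insertOne({ PartId: 123 })             // stored as a double, accepted
    db.test_val_num.insertOne({ PartId: NumberInt(123) })  // stored as an int32, accepted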
null
[ "mongodb-shell" ]
[ { "code": "db.getCollection('inversionista').find({broker: id}).forEach(function(scope) { \n db.getCollection('puja').find({inversionista: scope._id}).forEach(function(scope) { \n const subasta = db.getCollection('subasta').findOne({_id: scope.subasta})\n const factura = db.getCollection('factura').findOne({_id: subasta.factura})\n const pyme = db.getCollection('pyme').findOne({_id: factura.proveedor})\n const inversionista = db.getCollection('inversionista').findOne({_id: scope.inversionista})\n const dias = factura.dias\n var percetange = 0;\n var item;\n var monto;\n if(pyme.broker && pyme.broker == id){\n if (dias <= 45) {\n percetange = 0.003;\n } else if (dias > 45 && dias <= 75) {\n percetange = 0.004;\n } else if (dias > 75) {\n percetange = 0.005;\n }\n monto = factura.montoSubastar;\n item = {\n valueId: pyme.razon_social,\n tipoId: 'FACTURA',\n subastaId: scope.subasta,\n moneda: factura.moneda,\n fechaRegistro: scope.informacionPagoGarantia.fechaPagoRealCavaliBLP,\n monto: percetange * monto\n }\n db.getCollection('broker').findOneAndUpdate({\n _id: inversionista.broker,\n $push: { transacciones: {$each: [item] } } ,\n })\n }\n if (dias <= 45) {\n percetange = 0.0035;\n } else if (dias > 45 && dias <= 75) {\n percetange = 0.0065;\n } else if (dias > 75) {\n percetange = 0.0085;\n }\n monto = factura.monto_adjudicado;\n item = {\n valueId: inversionista.nombres + ' ' + inversionista.apellidos,\n subastaId: scope.subasta,\n tipoId: 'INVERSIONISTA',\n moneda: factura.moneda,\n fechaRegistro: scope.informacionPagoGarantia.fechaPagoRealCavaliBLP,\n monto: percetange * monto\n };\n db.getCollection('broker').findOneAndUpdate({\n _id: inversionista.broker,\n $push: { transacciones: { $each: [item] } },\n });\n });\n});\n", "text": "I have been with this problem for two days, I want to see if you can help me, the problem is this error.when I try to do this query.error that I have that this.can’t convert undefined to object :\nDBCollection.prototype.findOneAndUpdate", "username": "Arthur_Charly_Quezad" }, { "code": "", "text": "The way your code is presented is very hard to understand. Please take a look at following and edit your post accordingly. It will help us help you better.", "username": "steevej" }, { "code": "db.getCollection('broker').findOneAndUpdate({ _id: inversionista.broker } ,\n { $push: { transacciones: {$each: [item] } } })\n", "text": "Thanks for formatting the code.Since you have 2 calls to findOneAndUpdate(), I would start by identify which one is failing and for which data. May be the full stack trace indicates which of the call is failing. Catching the error and printing the current data will help you pinpoint the problem.One thing I not is that you might be using the wrong syntax. If you look at the examples given in https://docs.mongodb.com/manual/reference/method/db.collection.findOneAndUpdate/, you will see that it is …findOneAndUpdate( query , update ). Your $push is part of the first parameter so it is part of the query. You then end up with no update parameter, most likely the undefined. I will try the following call instead.I do not think you need { $each : [ item ] }. Just item should work.Lastly, I will look at the aggregation framework, in particular, $lookup and $graphLookup because you are making 4 database access within 2 loops that are also making database access. I pretty sure you can get your data with a single aggregation pipeline.", "username": "steevej" } ]
Error - can't convert undefined to object : DBCollection.prototype.findOneAndUpdate
2021-03-12T16:01:18.634Z
Error - can’t convert undefined to object : DBCollection.prototype.findOneAndUpdate
7,448
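As a rough illustration of the $lookup suggestion in the last reply of the thread above, here is a hedged sketch (collection and field names taken from the original script, the `id` variable assumed to be defined as in that script, untested against the real schema) that joins the first two collections server-side instead of nesting find() loops:

    db.getCollection('inversionista').aggregate([
      { $match: { broker: id } },              // same starting filter as the script
      { $lookup: {
          from: 'puja',
          localField: '_id',
          foreignField: 'inversionista',
          as: 'pujas'
      } },
      { $unwind: '$pujas' }
      // subasta, factura and pyme could be joined with further $lookup stages,
      // leaving only the findOneAndUpdate calls in the client-side loop
    ])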
null
[ "kafka-connector" ]
[ { "code": "1579", "text": "Hi, I’m investigating approaches to store my IOT data from multiple ‘things’. The measurements from things are stared in Kafka a topic per thing, and within the thing the key is the measurement name (string) and the value is the value of the measurement either INT or Float.If I use a file sink I get a file of values no problem, but get issues with the MongoDB SinkERROR Unable to process record SinkRecord{kafkaOffset=14217, timestampType=CreateTime} ConnectRecord{topic=‘MySensors-81’, kafkaPartition=0, key=81-0-17, keySchema=Schema{STRING}, value=1579, valueSchema=Schema{STRING}, timestamp=1614000439773, headers=ConnectHeaders(headers=)} (com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData:110)\norg.apache.kafka.connect.errors.DataException: Could not convert value 1579 into a BsonDocument.How do I define to the connector that the valueSchema is INT32 or Float (some readings have decimal)?", "username": "Clive_Richards" }, { "code": "\"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n\"value.converter.schemas.enable\": \"false\",\n\"value.converter\": org.apache.kafka.connect.storage.StringConverter\"\n\"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n\"value.converter.schemas.enable\": \"true\",\n", "text": "Can you send your configuration for the sink minus any authentication credentials? Instead of writing just the integer/float value, wrap the value in a json document and set the value.converter in the a sink something like:Alternatively, to get it working you could set the converter to stringBy default the sink connector is looking for a BSON document you need to tell it to take whatever is on the kafka topic and interpret it as a string, a JSON document, etc…Note: You can write the message with an Avro or JSON schema as well, then use", "username": "Robert_Walters" }, { "code": "\"transforms\": \"HoistField,Cast\",\n\"transforms.HoistField.type\": \"org.apache.kafka.connect.transforms.HoistField$Value\",\n\"transforms.HoistField.field\": \"temp\",\n\"transforms.Cast.type\": \"org.apache.kafka.connect.transforms.Cast$Value\",\n\"transforms.Cast.spec\": \"temp:float64\",\n", "text": "ok, I had another thought on this one. 
You could do a single message transform on the sink to takes that integer value and hoist it into a document with a key/value pair as follows:This should take that integer and create something that is like\n{ “temp”:1712}", "username": "Robert_Walters" }, { "code": "", "text": "The example data below is a reading from a power meter sensor (thing) calculating watts via a led pulse count … I have a topic per “thing” and about 50 things atm want to grow .The data is being created in the kafka topic through a webthings.io adapter, so I dont have too much control, but was contemplating asking the developer if he could toggle JSON in the valueI have tried every combination of converters - even figuring out how to populate kafka schema to entertain arvo converter that resulted in NPE Your second thought is interesting - I just have no clue how to implement it - or are these parameters for the properties file?Given programming isnt anywhere near the top of my skils list … it would be good to guide the form of the document somehow - although it is IOT I was have expecting a pattern was available … perhaps that pattern is json in the value field show the data type of the reading - if so, I’ll put a request into the git for the webthings kafka adapter - I have been assisting testing the codeI’m using kafka-connect-standalone usingbin/connect-standalone.sh config/connect-standalone.properties config/MongoSinkConnector.propertiesThe data in the topic looks like this:\n~kafka/kafka/bin/kafka-console-consumer.sh --topic MySensors-81 --property print.key=true --property key.separator=: --from-beginning --bootstrap-server xxxxx.local:9092\n81-0-17:1579\n81-1-18:628.08\n81-0-17:1650\n81-1-18:628.09\n81-0-17:1579The connect-standolone.properties arebootstrap.servers=xxxxx.local:9092\nkey.converter=org.apache.kafka.connect.storage.StringConverter\nkey.converter.schemas.enable=false\nvalue.converter=org.apache.kafka.connect.storage.StringConverter\nvalue.converter.schemas.enable=false\noffset.storage.file.filename=/tmp/connect.offsets\noffset.flush.interval.ms=10000\nplugin.path=/usr/local/share/kafka/pluginsThe MongoSinkConnector.propertiesname=mongo-sink\ntopics=MySensors-81\nconnector.class=com.mongodb.kafka.connect.MongoSinkConnector\ntasks.max=1\ndebug=true\nkey.converter=org.apache.kafka.connect.storage.StringConverter\nkey.converter.schemas.enable=false\nvalue.converter=org.apache.kafka.connect.storage.StringConverter\nvalue.converter.schemas.enable=false\nconnection.uri=mongodb://localhost/\ndatabase=Sensors\ncollection=sink\nmax.num.retries=1\nretries.defer.timeout=5000\nconfluent.topic.security.protocol=“PLAINTEXT”\nkey.projection.type=none\nkey.projection.list=\nvalue.projection.type=none\nvalue.projection.list=\nfield.renamer.mapping=\nfield.renamer.regex=\ndocument.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.BsonOidStrategy\npost.processor.chain=com.mongodb.kafka.connect.sink.processor.DocumentIdAdderdelete.on.null.values=false\nwritemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneDefaultStrategy\nmax.batch.size = 0\nrate.limiting.timeout=0\nrate.limiting.every.n=0", "username": "Clive_Richards" }, { "code": "", "text": "What would you like the data to look like when it is in MongoDB? 
To confirm, your data in the topic are strings and looks like this, correct?81-0-17:1579\n81-1-18:628.08\n81-0-17:1650\n81-1-18:628.09\n81-0-17:1579", "username": "Robert_Walters" }, { "code": "", "text": "Yes that is the data - although the developer of the Producer has agreed to write as JSON - I’ll see what it looks like in the next couple of days …Errors on adapter startup · Issue #6 · tim-hellhake/kafka-bridge · GitHubThe current structure is :\ntopic: device id\nmessage key: property name\nmessage value: {[property name]: [property value]}.The plan is to use MongoDB as the persistence for all telemetry - I’ll then be wanting to visualise the data and perform analytics - haven’t chosen any toolsets as yet for this.", "username": "Clive_Richards" }, { "code": "", "text": "The change has been made to the Producer - the data now formatted as per below using this consumer command … does this help? Would I then use the JSON converter?~kafka/kafka/bin/kafka-console-consumer.sh --topic MySensors-81 --property print.key=true --property key.separator=: --from-beginning --bootstrap-server xxxxx.local:909281-0-17:{“81-0-17”:1304}\n81-1-18:{“81-1-18”:0}\n81-0-17:{“81-0-17”:5016}\n81-1-18:{“81-1-18”:958.96}\n81-0-17:{“81-0-17”:5039}\n81-1-18:{“81-1-18”:959.01}\n81-0-17:{“81-0-17”:4920}\n81-1-18:{“81-1-18”:959.05}\n81-0-17:{“81-0-17”:2595}\n81-1-18:{“81-1-18”:959.1}\n81-0-17:{“81-0-17”:5173}\n81-1-18:{“81-1-18”:959.15}\n81-0-17:{“81-0-17”:5358}\n81-1-18:{“81-1-18”:959.19}\n81-0-17:{“81-0-17”:4989}\n81-1-18:{“81-1-18”:959.24}\n81-0-17:{“81-0-17”:4941}\n81-1-18:{“81-1-18”:959.28}", "username": "Clive_Richards" }, { "code": "", "text": "Perhaps it might be easiest if the developer could represent the document like{ “name”:“81-0-17,” “value”:1304},\n{ “name”:“81-1-18”, “value”:0},\netcIf this can’t be done you have many other options. You could use use transforms to parse the message and put it in a JSON formatLearn how to use Single Message Transforms with Kafka connectors you manage yourself.you can also write your own transform, or use kSQL to create a persistent query and transform the data into a new topic that contains a JSON document structure. 
You could also just take the existing message and insert into MongoDB as a string and have your application do the manipulation of the data.", "username": "Robert_Walters" }, { "code": "", "text": "Success using value.converter=org.apache.kafka.connect.json.JsonConverteruse Sensors\nswitched to db Sensors\nshow collections\nsink\ndb.sink.find();\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da14”), “81-0-17” : NumberLong(808) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da15”), “81-1-18” : NumberLong(0) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da16”), “81-0-17” : NumberLong(1043) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da17”), “81-1-18” : 1056.89 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da18”), “81-0-17” : NumberLong(1048) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da19”), “81-1-18” : 1056.9 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da1a”), “81-0-17” : NumberLong(1066) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da1b”), “81-1-18” : 1056.91 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da1c”), “81-0-17” : NumberLong(1024) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da1d”), “81-1-18” : 1056.93 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da1e”), “81-0-17” : NumberLong(1003) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da1f”), “81-1-18” : 1056.94 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da20”), “81-0-17” : NumberLong(1020) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da21”), “81-1-18” : 1056.95 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da22”), “81-0-17” : NumberLong(936) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da23”), “81-1-18” : 1056.96 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da24”), “81-0-17” : NumberLong(914) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da25”), “81-1-18” : 1056.97 }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da26”), “81-0-17” : NumberLong(1065) }\n{ “_id” : ObjectId(“604c54c6868c0042fcc9da27”), “81-1-18” : 1056.98 }Now I have this working I can make schema design decisions about how I store the data. The key is inherently storing a lot of information - “-”<“metric number”>-<“metric reading type e.g watts, or temp in celcius, humidity in % etc”>. The metric reading type is held in a schema store defined by webthings.io so I could keep a copy as a lookup, but I would have to do it every time … not so easy if trying to use a simple visualisation app … this is this the whole database debate I guess…I think I’m leaning toward being readable … its not as if its someones name which can change and duplicates seem to be ok or worried about saving bytes on disk … a temperature reading for that date and time will always be that temperature reading for that moment in time …Anyway I like your idea of transformations - it makes sense, so I’ll be researching that more…Speaking of time - how do I get the reading date/time stamp into the document - just occurred to me - its not in the values -next problem to solve… anyway I’ll close this - thankyou for your help Robert", "username": "Clive_Richards" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ERROR Unable to process record SinkRecord
2021-03-02T11:18:14.246Z
ERROR Unable to process record SinkRecord
8,561
https://www.mongodb.com/…_2_1024x537.jpeg
[ "careers" ]
[ { "code": "", "text": "Hi all Check this fantastic MongoDB talk to CUNY Tech Prep students on how to ace a technical interview.This talk can be of great help if you’re currently looking for a job or preparing yourself for upcoming technical interviews. You can find the video here.Google Drive file.In this video, you’ll hear about:You can find the slides here.Let us know what you think and good luck with your interview(s)!", "username": "Lieke_Boon" }, { "code": "", "text": "", "username": "Lieke_Boon" } ]
MongoDB Best Practices: How to ACE a Tech Interview
2021-03-07T15:28:18.694Z
MongoDB Best Practices: How to ACE a Tech Interview
5,851
https://www.mongodb.com/…4_2_1024x512.png
[ "licensing" ]
[ { "code": "", "text": "Truthfully I am getting burned out by trying to figure out open source licenses. It has been a on going 4 day adventure just to see if all the work I have put into build a SaaS will pay off.However now I come to hear about MongoDBs SSPL license.As this University it about preparing people to become users of MongoDB, I think it would be wise to have a course on how to understand the SSPL licesnse of MongoDB. So people know the ins and outs of use in production.My plan was to use MongoDB as the db to my SaaS webapp. However does this mean I have to open source all my code!! How can I make any money? I cannot believe how complicated this all is.I really hope I do not have to abandon mongodb in this project due to this license. Or do I have to buy a commercial license? Again something that would be great to cover in a course.However I would barely even be considered a start up. And now I am looking at a Commercial license with an UNKNOWN cost. Any time a price is not listed is because it is ridiculously high. Looking forward to hearing it… Plus all the common “go speak to a lawyer”, lol yeah at $400+/hour great lolIt has been a long 4 days…EDITDoes this help me?This is discouraging to say the least. The SSPL was rejected as an Open source license.Red Hat won't use MongoDB in Red Hat Enterprise Linux or Fedora thanks to MongoDB's new Server Side Public License.", "username": "Natac13" }, { "code": "", "text": "I doubt that the University will create a course about it but the least they can do is include a topic around it with some real life use cases.", "username": "007_jb" }, { "code": "", "text": "Anything to clear it up a bit more. lol I completely understand why they have the license they do. The faqs may have cleared it up but I guess I will be running it by my lawyer.", "username": "Natac13" }, { "code": "", "text": "Hi @Natac13,I appreciate this is an older question, but licensing should not require a course to understand.Hopefully you also were able to find the Server Side Public License FAQ which should address your specific concerns.My plan was to use MongoDB as the db to my SaaS webapp. However does this mean I have to open source all my code!!The SSPL clause in regards to SaaS is aimed at providers offering MongoDB as a public service, not developers who are using MongoDB as the data store for their own applications.Two relevant quotes from the FAQ:The copyleft condition of Section 13 of the SSPL applies only when you are offering the functionality of MongoDB, or modified versions of MongoDB, to third parties as a service. There is no copyleft condition for other SaaS applications that use MongoDB as a database.No. We do not consider providing MongoDB as a service internally or to subsidiary companies to be making it available to a third party.Regards,\nStennie", "username": "Stennie_X" } ]
Course on MongoDB SSPL License
2019-09-29T11:42:57.832Z
Course on MongoDB SSPL License
3,753
null
[]
[ { "code": "", "text": "Hello\nI might have overseen it, … I would like to be able to set an initial view for the starting page.\nE.g. set a filter on certain categories and/or tag, combined whit the latest sort (newest on top)\nManually this is already possible to do, I am missing the option to persist my choice.\nAll the best\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoeller,Thanks for your question. I know you can select your default view in the Interface options in your Preferences, but that only lets you select between category, top, unread, latest, etc. I am looking into whether there’s an option/plugin/etc. that might allow what you are asking for (user-level preferences on categories that appear on the home page). Will update here when I find out more.Cheers,Jamie", "username": "Jamie" }, { "code": "", "text": "Hi Michael,Closing out this older discussion as we’ve worked out what is available through current site features.As Jamie noted you can change the Default Home Page view via Interface Preferences in your user profile; the category filtering aspect was explored via recent discussion on Filter for messages.Thanks again for your continued feedback and ideas!Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" } ]
User default initial view?
2020-02-06T10:40:47.350Z
User default initial view?
4,732
null
[ "schema-validation" ]
[ { "code": "", "text": "I am running mongodb 4.4. We would like to create a collection with defined data types before any data is inserted, to be used by a tool. Is this possible?\nThanks\n-Dave", "username": "David_Lange" }, { "code": "", "text": "Hopefully the following link will be useful.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can you create a collection with specified data types
2021-03-12T19:44:21.175Z
Can you create a collection with specified data types
2,112
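To make the schema-validation approach linked in the thread above concrete, here is a hedged sketch of creating an empty collection with field types enforced before any data is inserted — the collection and field names are made up for illustration:

    db.createCollection("parts", {
      validator: {
        $jsonSchema: {
          bsonType: "object",
          required: ["partId", "name"],
          properties: {
            partId: { bsonType: "int",    description: "32-bit integer required" },
            name:   { bsonType: "string", description: "string required" },
            price:  { bsonType: "double", description: "optional floating-point price" },
            tags:   { bsonType: "array",  items: { bsonType: "string" } }
          }
        }
      },
      validationAction: "error"   // reject documents that do not match the schema
    })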
null
[ "student-spotlight" ]
[ { "code": "", "text": "Hi students,While you’re looking at our offers for students, you might wonder what kind application or website you can build with MongoDB. There are tons of possibilities, and we are here to help! Are you looking for inspiration and do you want to know what other students created? Check out our Student projects on DevCenter. We’re showcasing projects that students are building with MongoDB. Created by students, for students.Do you have MongoDB project that you’re proud of? Please share your work with us by sharing it below, and we might add your project to our DevCenter! Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!", "username": "Lieke_Boon" }, { "code": "", "text": "", "username": "Lieke_Boon" } ]
MongoDB Student Spotlights: submit your project or read inspiring stories of others
2021-02-25T19:45:36.119Z
MongoDB Student Spotlights: submit your project or read inspiring stories of others
5,408
null
[ "aggregation", "queries" ]
[ { "code": "```\ndb.ProductReport.explain().find({ coordination: { $geoWithin:{ $box:[ [-180,-90],[40,90]]}} } , {coordination:1,_id:0} )\n{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"Projections.ProductReport\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"coordination\" : {\n \"$geoWithin\" : {\n \"$box\" : [\n [\n -180,\n -90\n ],\n [\n 40,\n 90\n ]\n ]\n }\n }\n },\n \"queryHash\" : \"AB8EF93D\",\n \"planCacheKey\" : \"43E7EF83\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_SIMPLE\",\n \"transformBy\" : {\n \"coordination\" : 1,\n \"_id\" : 0\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"coordination\" : {\n \"$geoWithin\" : {\n \"$box\" : [\n [\n -180,\n -90\n ],\n [\n 40,\n 90\n ]\n ]\n }\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"coordination\" : \"2d\"\n },\n \"indexName\" : \"coordination_2d\",\n \"isMultiKey\" : false,\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"coordination\" : [\n \"[BinData(128, 0400000000000000), BinData(128, 07FFFFFFFFFFFFFF)]\",\n \"[BinData(128, 0C00000000000000), BinData(128, 0FFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 1000000000000000), BinData(128, 1FFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 2000000000000000), BinData(128, 2FFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 3000000000000000), BinData(128, 3FFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 4000000000000000), BinData(128, 4FFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 5000000000000000), BinData(128, 53FFFFFFFFFFFFFF)]\",\n \"[BinData(128, 5800000000000000), BinData(128, 5BFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 6000000000000000), BinData(128, 6FFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 7000000000000000), BinData(128, 73FFFFFFFFFFFFFF)]\",\n \"[BinData(128, 7800000000000000), BinData(128, 7BFFFFFFFFFFFFFF)]\",\n \"[BinData(128, 8400000000000000), BinData(128, 87FFFFFFFFFFFFFF)]\",\n \"[BinData(128, 9000000000000000), BinData(128, 9FFFFFFFFFFFFFFF)]\",\n \"[BinData(128, C000000000000000), BinData(128, C3FFFFFFFFFFFFFF)]\",\n \"[BinData(128, C400000000000000), BinData(128, C7FFFFFFFFFFFFFF)]\",\n \"[BinData(128, D000000000000000), BinData(128, D3FFFFFFFFFFFFFF)]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [ ]\n },\n \"serverInfo\" : {\n \"host\" : \"masoudy-ubuntu\",\n \"port\" : 27017,\n \"version\" : \"4.4.4\",\n \"gitVersion\" : \"8db30a63db1a9d84bdcad0c83369623f708e0397\"\n },\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1615554402, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1615554402, 1)\n}\n``` \ndb.ProductReport.explain().aggregate([ {$project:{coordination:1,_id:0}}, {$match:{ coordination: { $geoWithin:{ $box:[ [-180,-90],[40,90]]}}}} ] )", "text": "i have a simple collection with 60000 docs and each doc containing 100 fields in which one of them is “coordination” which is an array containing longitude and latitude of each document.without a 2d index on coordination field, querying geoWithin box takes 100ms.but with the index, it takes 400ms because not only it fetches index keys, but also it fetches the actual documents, even though i have projected on the coordination fieldi’m confused.this must be a bug or sth, because we have a covering index ignored!this is the explain output :in my understanding, because we have a 2d index on coordination field and because we are projecting 
only coordination field and excluding _id field, it must only return the result off of the index making it a covering one.doing a count command is event worse and slower.what am i missing here?even if i try do this with aggregation and first do the projection and then the matching , it will do the same and after using index, it will pass the fetch stage as input to the projection, so it seems it will first fetch all fields of documents and then do the projections part!!db.ProductReport.explain().aggregate([ {$project:{coordination:1,_id:0}}, {$match:{ coordination: { $geoWithin:{ $box:[ [-180,-90],[40,90]]}}}} ] )", "username": "Masoud_Naghizade" }, { "code": "", "text": "there is section in docs that simply says, GeoSpatial indexes don not cover queries!", "username": "Masoud_Naghizade" } ]
Mongo 2d index very very slow
2021-03-12T14:39:29.577Z
Mongo 2d index very very slow
2,445
null
[]
[ { "code": "saida_cadastro != null AND data_obito == ${cMonth}", "text": "I am trying to “filter” to resume only the data during the current month. Has anyone been through this issueex.:obterTodasSaidasCadastro = () => {\nconst currentMonth = new Date();\nconst cMonth = currentMonth.getMonth();\n…\n.filtered(\nsaida_cadastro != null AND data_obito == ${cMonth},\n),\n…\n});};\neven testing manually like this: data_obito = 2021-03-03T00: 00: 00.000Z, did not work\nor\ndata_obito = 2021-03-03, did not workand in this example, even using template string, it doesn’t work, who knows, could post the solution here,Thank you very much in advance", "username": "humberto_junior" }, { "code": "AND BEGINSWITH data_obito == ${firstDay} AND ENDSWITH data_obito == ${lastDay}saida_cadastro != null", "text": "got the solution const firstDay = moment().startOf(‘month’).format(‘YYYY-MM-DD@hh:mm:ss’);\nconst lastDay = moment().endOf(‘month’).format(‘YYYY-MM-DD@hh:mm:ss’);let currentMonth = AND BEGINSWITH data_obito == ${firstDay} AND ENDSWITH data_obito == ${lastDay};\n……\nlet project = Array.from(\nrealm\n.objects(‘Project’)\n.filtered(saida_cadastro != null, currentMonth),\n);\n…", "username": "humberto_junior" } ]
Filter current month only
2021-03-08T17:19:15.417Z
Filter current month only
2,209
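The BEGINSWITH/ENDSWITH workaround posted above treats the date as a string; if data_obito is stored as a Realm date property, a plain range filter is usually simpler. A minimal sketch, assuming Realm JS and a date-typed field:

    const now = new Date();
    const firstDay = new Date(now.getFullYear(), now.getMonth(), 1);      // start of this month
    const nextMonth = new Date(now.getFullYear(), now.getMonth() + 1, 1); // start of next month

    const results = realm
      .objects('Project')
      .filtered('saida_cadastro != null AND data_obito >= $0 AND data_obito < $1',
                firstDay, nextMonth);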
https://www.mongodb.com/…053f20f524ab.png
[ "aggregation" ]
[ { "code": "db.spinHistory.explain().aggregate([\n {\n \"$match\": {\n \"gameRef\": \"6047a10c58ed573e490b8f54\"\n }\n },\n {\n \"$project\": {\n \"platformRef\": 1,\n \"gameRef\": 1,\n \"currency\": 1,\n \"win\": 1,\n \"bet\": 1,\n \"bonusWin\": \"$data.bonusWin\",\n \"_id\": 0\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"platformRef\": \"$platformRef\",\n \"gameRef\": \"$gameRef\",\n \"currency\": \"$currency\"\n },\n \"bet\": {\n \"$sum\": \"$bet\"\n },\n \"win\": {\n \"$sum\": \"$win\"\n },\n \"bonus\": {\n \"$sum\": \"$data.bonusWin\"\n },\n \"count\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$project\": {\n \"platformRef\": \"$_id.platformRef\",\n \"gameRef\": \"$_id.gameRef\",\n \"currency\": \"$_id.currency\",\n \"bet\": 1,\n \"win\": 1,\n \"bonus\": 1,\n \"count\": 1\n }\n }\n])\n{\n \"v\": 2,\n \"key\": {\n \"gameRef\": 1,\n \"platformRef\": 1,\n \"currency\": 1,\n \"bet\": 1,\n \"win\": 1,\n \"data.bonusWin\": 1\n },\n \"name\": \"idx_spin_history_main_fields\",\n \"background\": false\n}\n{\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"plannerVersion\": 1,\n \"namespace\": \"oak9e_rgs_temp.spinHistory\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"gameRef\": {\n \"$eq\": \"6047a10c58ed573e490b8f54\"\n }\n },\n \"queryHash\": \"27C08187\",\n \"planCacheKey\": \"E204EC8C\",\n \"winningPlan\": {\n \"stage\": \"PROJECTION_DEFAULT\",\n \"transformBy\": {\n \"bet\": true,\n \"platformRef\": true,\n \"win\": true,\n \"currency\": true,\n \"gameRef\": true,\n \"bonusWin\": \"$data.bonusWin\",\n \"_id\": false\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"gameRef\": 1,\n \"platformRef\": 1,\n \"currency\": 1,\n \"bet\": 1,\n \"win\": 1,\n \"data.bonusWin\": 1\n },\n \"indexName\": \"idx_spin_history_main_fields\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"gameRef\": [],\n \"platformRef\": [],\n \"currency\": [],\n \"bet\": [],\n \"win\": [],\n \"data.bonusWin\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"gameRef\": [\n \"[\\\"6047a10c58ed573e490b8f54\\\", \\\"6047a10c58ed573e490b8f54\\\"]\"\n ],\n \"platformRef\": [\n \"[MinKey, MaxKey]\"\n ],\n \"currency\": [\n \"[MinKey, MaxKey]\"\n ],\n \"bet\": [\n \"[MinKey, MaxKey]\"\n ],\n \"win\": [\n \"[MinKey, MaxKey]\"\n ],\n \"data.bonusWin\": [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n \"rejectedPlans\": []\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"platformRef\": \"$platformRef\",\n \"gameRef\": \"$gameRef\",\n \"currency\": \"$currency\"\n },\n \"bet\": {\n \"$sum\": \"$bet\"\n },\n \"win\": {\n \"$sum\": \"$win\"\n },\n \"bonus\": {\n \"$sum\": \"$data.bonusWin\"\n },\n \"count\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n }\n },\n {\n \"$project\": {\n \"_id\": true,\n \"bet\": true,\n \"bonus\": true,\n \"count\": true,\n \"win\": true,\n \"platformRef\": \"$_id.platformRef\",\n \"gameRef\": \"$_id.gameRef\",\n \"currency\": \"$_id.currency\"\n }\n }\n ],\n \"serverInfo\": {\n \"host\": \"DESKTOP-V3NTFPM\",\n \"port\": 27017,\n \"version\": \"4.4.3\",\n \"gitVersion\": \"913d6b62acfbb344dde1b116f4161360acd8fd13\"\n },\n \"ok\": 1\n}\n", "text": "I have an application in which I am using MongoDB. I have 1.1M documents in a single collection and I am trying to do some aggregations, my document structure looks like this:My aggregation query looks like this:This query takes 5 seconds to execute (in 1.1M documents). 
I wonder if there is any way to optimize it?I have this index set up:and the explain plan gives me this:Please can you let me know if there is anything I can do to make this work faster?", "username": "Paulius_Matulionis" }, { "code": "data.bonusWinbonusWindata.bonusWin[\n {\n '$match': {\n 'gameRef': '8bdbef7c0ad31c5805d258ca'\n }\n }, {\n '$group': {\n '_id': {\n 'platformRef': '$platformRef', \n 'gameRef': '$gameRef', \n 'currency': '$currency'\n }, \n 'bet': {\n '$sum': '$bet'\n }, \n 'win': {\n '$sum': '$win'\n }, \n 'bonus': {\n '$sum': '$data.bonusWin'\n }, \n 'count': {\n '$sum': 1\n }\n }\n }, {\n '$project': {\n 'platformRef': '$_id.platformRef', \n 'gameRef': '$_id.gameRef', \n 'currency': '$_id.currency', \n 'bet': 1, \n 'win': 1, \n 'bonus': 1, \n 'count': 1, \n '_id': 0\n }\n }\n]\n{\n \"v\": 2,\n \"key\": {\n \"gameRef\": 1,\n \"platformRef\": 1,\n \"currency\": 1,\n \"bet\": 1,\n \"win\": 1,\n \"data.bonusWin\": 1\n },\n \"name\": \"gameRef_1_platformRef_1_currency_1_bet_1_win_1_data.bonusWin_1\"\n}\n{\n \"winningPlan\": {\n \"stage\": \"PROJECTION_DEFAULT\",\n \"transformBy\": {\n \"bet\": 1,\n \"currency\": 1,\n \"data.bonusWin\": 1,\n \"gameRef\": 1,\n \"platformRef\": 1,\n \"win\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"gameRef\": 1,\n \"platformRef\": 1,\n \"currency\": 1,\n \"bet\": 1,\n \"win\": 1,\n \"data.bonusWin\": 1\n },\n \"indexName\": \"gameRef_1_platformRef_1_currency_1_bet_1_win_1_data.bonusWin_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"gameRef\": [],\n \"platformRef\": [],\n \"currency\": [],\n \"bet\": [],\n \"win\": [],\n \"data.bonusWin\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"gameRef\": [\n \"[\\\"8bdbef7c0ad31c5805d258ca\\\", \\\"8bdbef7c0ad31c5805d258ca\\\"]\"\n ],\n \"platformRef\": [\n \"[MinKey, MaxKey]\"\n ],\n \"currency\": [\n \"[MinKey, MaxKey]\"\n ],\n \"bet\": [\n \"[MinKey, MaxKey]\"\n ],\n \"win\": [\n \"[MinKey, MaxKey]\"\n ],\n \"data.bonusWin\": [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n}\nbet{\n \"winningPlan\": {\n \"stage\": \"PROJECTION_DEFAULT\",\n \"transformBy\": {\n \"bet\": 1,\n \"currency\": 1,\n \"data.bonusWin\": 1,\n \"gameRef\": 1,\n \"platformRef\": 1,\n \"win\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"gameRef\": 1,\n \"platformRef\": 1,\n \"currency\": 1,\n \"win\": 1,\n \"data.bonusWin\": 1\n },\n \"indexName\": \"gameRef_1_platformRef_1_currency_1_win_1_data.bonusWin_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"gameRef\": [],\n \"platformRef\": [],\n \"currency\": [],\n \"win\": [],\n \"data.bonusWin\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"gameRef\": [\n \"[\\\"8bdbef7c0ad31c5805d258ca\\\", \\\"8bdbef7c0ad31c5805d258ca\\\"]\"\n ],\n \"platformRef\": [\n \"[MinKey, MaxKey]\"\n ],\n \"currency\": [\n \"[MinKey, MaxKey]\"\n ],\n \"win\": [\n \"[MinKey, MaxKey]\"\n ],\n \"data.bonusWin\": [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }\n}\nFETCHbet", "text": "Hi @Paulius_Matulionis and welcome in the MongoDB community !Firstly, your explain plan looks pretty good. There is no “FETCH” stage which means that only the index is used to answer your query which is pretty neat. 
We call this a “covered query”.Secondly, I found an error in your pipeline and also the first $project stage is useless.Errors/optimizations:Here is what your final pipeline should look like:Here is my index:And my winning plan:We have a way to verify that our query is covered. If I remove the bet field from my index, here is my winning plan now:As you can see, I have an additional stage FETCH now that is required because MongoDB needs to fetch the entire document on disk to retrieve this bet field that we are now missing.The pipeline I shared above along with the index is the optimal way to run this query.\nThe only way left we could use to save a few milliseconds is to remove the final project stage that is just making our life more convenient in the application layer to map the results.Make sure that ALL your indexes fit in RAM & that you have enough RAM left to run this query and it should be lightning fast. The only limitation at this point is the hardware & the configuration of your cluster & OS.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Mongo DB aggregation optimization
2021-03-12T09:08:30.877Z
Mongo DB aggregation optimization
4,610
null
[ "python" ]
[ { "code": "", "text": "Please could someone tell me where I can find Python examples of MongoDB Atlas API?TIA", "username": "Shaun_McCullagh" }, { "code": "mongodbatlas", "text": "Hi Shaun,Welcome to the MongoDB Community!For the simplest example of using the Atlas API from Python, see Calling the MongoDB Atlas API - How to do it from Node, Python, and Ruby.Once you have authenticated, the Atlas API should be straightforward to work with.However, if you are looking for a more Pythonic interface or examples check out @Joe_Drumgoole’s mongodbatlas package: Install from PyPi or View source on GitHub.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X,\nDo you have an example of using Atlas API from .NET?\nI am planning to use the api for creating fts index.Thanks,\nSupriya", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi Supriya,We don’t currently have a per-language example, but it’s something we’d love to develop in future. I hear you in particular on how valuable this would be for search indexes!-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Python examples of MongoDB Atlas API
2020-05-12T13:14:21.369Z
Python examples of MongoDB Atlas API
2,386
https://www.mongodb.com/…a_2_1024x439.png
[ "ops-manager" ]
[ { "code": "", "text": "I have MongoDB Community Version and Ops Manager for testing purposes.\nCan’t you connect these?\nCan Ops Manager be used only with MongoDB in Enterprise Version?And why does DATA SIZE come out as N/A?\nIs the agent connection incorrect?\nimage1586×681 37.9 KB", "username": "Kim_Hakseon" }, { "code": "", "text": "The following table lists which versions of MongoDB Community and Enterprise are compatible with each major version of Ops Manager:So it appears that it can be.Did you deploy this shard with ops manager or is it existing and you are importing it ?", "username": "chris" }, { "code": "", "text": "I divided the shared cluster into 3 servers and placed Ops Manager on another separate server.\nOps Manager has confirmed the placement of the Sharded Cluster, but it does not seem to be able to retrieve data continuously.Both Ops Manager and MongoDB use the latest versions.", "username": "Kim_Hakseon" }, { "code": "", "text": "I checked the Log. Log messages like this appeared on all agents.[2021-03-11T15:56:20.841+900] [.error] [metrics/collector/realcollector.go:mainLoop:130] [15:56:20.841] Error sending hardware metrics to MMS : Error POSTing to http://<.hostname>:8080/agents/api/automation/metrics/batch/v1/6048a339d74eb16855e31190?ah=SQL_1&ahs=sql_1&av=10.14.20.6466&aos=linux&aa=x86_64&ab=64&at=1615445780840: Post “http://<.hostname>:8080/agents/api/automation/metrics/batch/v1/6048a339d74eb16855e31190?ah=SQL_1&ahs=sql_1&av=10.14.20.6466&aos=linux&aa=x86_64&ab=64&at=1615445780840”: dial tcp 10.64.40.40:8080: connect: connection refusedWhat’s the solution?The mongo.mongoUri=<.SetToValidUri> in Ops Manager Config File was determined to be OK.\n(Compass confirmed normal connection.)", "username": "Kim_Hakseon" }, { "code": "", "text": "Did you deploy this shard with ops manager or is it existing and you are importing it ??I checked the Log. Log messages like this appeared on all agents.Looks like agents cannot connect to ops manager. Check the firewall of the opsmanager host.", "username": "chris" } ]
Ops Manager connected with MongoDB Community Version
2021-03-11T06:54:14.812Z
Ops Manager connected with MongoDB Community Version
3,354
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "when i tried : db.inventory.find({item:“postcard”}, {_id:0, instock:{$slice:1}})returned: \nimage1210×45 8 KB\ni think it should return instock element 0 only", "username": "kuku_super" }, { "code": "", "text": "Thanks for finding and reporting it! You’re making MongoDB better for everyone Here are the steps for reporting bug for MongoDB projects: Submit Bug Reports · mongodb/mongo Wiki · GitHub", "username": "JoeKarlsson" }, { "code": "db.inventory.insertMany( [\n { item: \"journal\", status: \"A\", size: { h: 14, w: 21, uom: \"cm\" }, instock: [ { warehouse: \"A\", qty: 5 } ] },\n { item: \"notebook\", status: \"A\", size: { h: 8.5, w: 11, uom: \"in\" }, instock: [ { warehouse: \"C\", qty: 5 } ] },\n { item: \"paper\", status: \"D\", size: { h: 8.5, w: 11, uom: \"in\" }, instock: [ { warehouse: \"A\", qty: 60 } ] },\n { item: \"planner\", status: \"D\", size: { h: 22.85, w: 30, uom: \"cm\" }, instock: [ { warehouse: \"A\", qty: 40 } ] },\n { item: \"postcard\", status: \"A\", size: { h: 10, w: 15.25, uom: \"cm\" }, instock: [ { warehouse: \"B\", qty: 15 }, { warehouse: \"C\", qty: 35 } ] }\n]);\ninstock{ warehouse: \"B\", qty: 15 }> db.inventory.find({\"item\":\"postcard\"}, {_id:0, instock:{$slice:1}})\n{ \"item\" : \"postcard\", \"status\" : \"A\", \"size\" : { \"h\" : 10, \"w\" : 15.25, \"uom\" : \"cm\" }, \"instock\" : [ { \"warehouse\" : \"B\", \"qty\" : 15 } ] }\n_id> db.inventory.find({\"item\":\"postcard\"}, {_id:1, instock:{$slice:1}})\n{ \"_id\" : ObjectId(\"6048f80bb8352c2198e23bb8\"), \"instock\" : [ { \"warehouse\" : \"B\", \"qty\" : 15 } ] }\n_id_idinstockinstock> db.inventory.find({\"item\":\"postcard\"}, {_id:0, status:0, size:0, item:0, instock:{$slice:1}})\n{ \"instock\" : [ { \"warehouse\" : \"B\", \"qty\" : 15 } ] }\ninstock:{$slice:1}_id", "text": "Hi @kuku_super,I don’t see a bug here.Collection:The first element of the array instock is { warehouse: \"B\", qty: 15 } for the postcard.When you use this query:You get that same first element of the array so the result looks good to me. You also get status and size because you never specified anything for them. The _id has a special treatment in the projection so you never chose “inclusion” or “exclusion” anywhere in the projection here so the fields are included by default.But if you run:Then it’s different because you have explicitly said that you wanted _id and not the other fields not mentioned so you only get _id and instock.If you only want instock, you will have to do:I’m surprised that the instock:{$slice:1} doesn’t trigger the “inclusion” though and removes automatically all the other fields except _id. That’s what I would have expected to be honest.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "thx, @MaBeuLux88if instock:{$slice:1} followed the inclusion would make the inclusion and exclusion mechanism much better.that gave any other field a 0 to get a array element inclusion was a ugly way.so i suggest to fix it", "username": "kuku_super" }, { "code": "", "text": "I agree that this would be more logical. But maybe there is a deeper reason that I ignore.\nPlease open a ticket and link it back here so we will have an official answer from the query team.", "username": "MaBeuLux88" }, { "code": "", "text": "i report bug here : https://jira.mongodb.org/browse/MONGOSH-636", "username": "kuku_super" } ]
Where to report the bug?
2021-03-10T13:46:49.817Z
Where to report the bug?
3,339
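If only the sliced array is wanted without listing every other field for exclusion, the aggregation $slice expression keeps the normal inclusion semantics discussed above — a small sketch against the same sample collection from the thread:

    db.inventory.aggregate([
      { $match: { item: "postcard" } },
      { $project: { _id: 0, instock: { $slice: ["$instock", 1] } } }   // first element only
    ])
    // => { "instock" : [ { "warehouse" : "B", "qty" : 15 } ] }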
https://www.mongodb.com/…_2_1024x252.jpeg
[ "vscode" ]
[ { "code": "", "text": "@mmarconSo, I just had a look and wondered if this is what I am meant to seeing?\nv0.5a1045×258 75.6 KB\nFrom the #267, I wasn’t sure if I was expected to see support for depreciated commands, in the same vein as how JSDocs comments can flag depreciated functions, within VSCode?I like the link to each commands webpage, that’s useful. Not sure if it was there before.Thanks all the same.", "username": "NeilM" }, { "code": "mongosh", "text": "Thank you for checking out the new version of the extension and sharing your feedback.The ultimate goal is to indicate that those commands are deprecated – or exclude them completely – but since the backend that provides the runtime for playgrounds is mongosh, the MongoDB Shell and those commands are available but deprecated at the moment they are still listed.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "@Massimiliano_MarconSorry, I was wondering if I should have seen the copyDatabase appear with a strike through the command text?Regarding what is show as depreciated - Doesn’t the server version that the user is connecting to dictate the commands that are available?Therefore, if I am connecting to a 3.6 server and I was using a command that only came available in 4.2 … mmmmm. So VSC playground would allow it, but the MongoDB server would bounce the command?Does the mongosh know what commands available in which version, and if so, it could flag commands which are unavailable based upon the server the user is connected to.However, if something is depreciated, it maybe useful to highlight the version it was depreciated in e.g.db.copyDatabase “Depreciated 3.6”.I think to exclude depreciated commands, when you might be using a script which is going to run against an older version of the server, might be a bit of own goal. Don’t assume that on premise servers will get upgraded that often.It does get hairy very quickly.", "username": "NeilM" } ]
VSC extension 0.5
2021-03-11T15:08:21.904Z
VSC extension 0.5
3,198
null
[ "compass" ]
[ { "code": "", "text": "I’m using Cloudflare access to authenticate my MongoDB server(using an SSH tunnel) but Robo3T not invoking browser authentication while using Cloudflare DNS with Robo3T SSH configuration(creating a tunnel through a bastion host to access the MongoDB server).But if I’m using bastion server IP instead of DNS i.e. bypassing Cloudflare access authentication, it works perfectly fine.I’m facing the same issue with the MongoDB-compass as well.Any help is really appreciated.", "username": "abhishek021" }, { "code": "mongoshmongo", "text": "Does it work if you use the shell (either mongosh or mongo)?", "username": "Massimiliano_Marcon" }, { "code": "", "text": "This authentication is done whenever I want to connect with Mongo Server using Cloudflare DNS instead of IP address.\nIf I’m creating the same SSH tunnel using the terminal, then its working as expected i.e. popping up the browser for the authentication.Creating a seperate SSH tunnel everytime from the terminal for port forwarding Mongo Server 27017 to my local is not a feasible idea in my case, that’s why I was trying to use robo3T SSh tunnel feature.", "username": "abhishek021" } ]
robo3t/Compass MongoDB not invoking browser authentication for cloudflare
2021-03-10T17:58:34.009Z
robo3t/Compass MongoDB not invoking browser authentication for cloudflare
2,444
null
[]
[ { "code": "localizedDescription", "text": "I’d like to get the error code when a login attempt fails, to display the appropriate message to the user. In the doc, the only information about errors is the localizedDescription property, which is in fact, not localized.How can I get the error codes, or some information to know what error it is? I could always check the localizedDescription, but this is not very stable since the MongoDB team might change the error message at some point.", "username": "Jean-Baptiste_Beau" }, { "code": "ErrorNSError(error as! NSError).code", "text": "@Jean-Baptiste_Beau the error returned from an API call will contain the domain and code. You’ll need to cast the Error to NSError with (error as! NSError).code", "username": "Lee_Maguire" }, { "code": "", "text": "When doing this, and testing logging in with wrong credentials to get the “invalid user/password” error, the code is -1, is that normal? It doesn’t look like a normal error code", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "@Lee_Maguire is there a list of possible error codes related to authentication somewhere? I can’t find it in the doc, and this makes me wonder if I can safely rely on the error codes I see when testing or if those might be changed in the future.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hi @Jean-Baptiste_Beau we don’t have error codes in our documentation, but here is the enum that supplies the error codes to the SDK’s realm-core/generic_network_transport.hpp at fd78e200dffe24a101ecdcacd691ace9607f63f5 · realm/realm-core · GitHub", "username": "Lee_Maguire" }, { "code": "", "text": "Thank you for the link! There seem to be no error code for invalid login credentials, it would be nice to have one to display an appropriate message to the user, as this may be one of the most common errors — and maybe the most important one to display to the user.Currently the error code for “invalid email/password” is -1 (unknown).", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[MongoDB Realm] Get Authentication error code on iOS (Swift)
2021-02-08T10:16:48.646Z
[MongoDB Realm] Get Authentication error code on iOS (Swift)
3,037
null
[ "queries" ]
[ { "code": "var db_pim = db.getSiblingDB('pim_read_store')\nvar db_ods = db.getSiblingDB('pricing_ods')\n\ndb_ods.stg_c1.drop()\n\ndb_pim.item_collection.aggregate([\n { \"$project\": {\n \"PartId\": \"$windecsPartId\"\n }},\n { $out: { db: db_ods, coll: \"stg_c1\" }}\n]);\n", "text": "I would like to use a java variable in the $out stage.\nThis fails unless i use $out : “pricing_ods”\nI don’t understand why the varabile db_ods can be used on the drop() but not in the aggregation.\nthanks\n-Dave", "username": "David_Lange" }, { "code": "var db_ods = ‘pricing_ods’;var db_ods = db.getSiblingDB(‘pricing_ods’)db_odstypeof db_odsvar db_ods = ‘pricing_ods’;typeof{ $out: {db: db_ods, coll: “stg_c1” }}dbcoll", "text": "Hello @David_Lange,var db_ods = db.getSiblingDB(‘pricing_ods’)Instead try this:var db_ods = ‘pricing_ods’;In the shell you can see that from var db_ods = db.getSiblingDB(‘pricing_ods’), the variable db_ods is of type object. You can verify this:typeof db_odsBut, for var db_ods = ‘pricing_ods’; the typeof returns a string. The { $out: {db: db_ods, coll: “stg_c1” }} is expecting string values for db and coll fields.", "username": "Prasad_Saya" }, { "code": "uncaught exception: Error: command failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"wrong type for field (db) object != string\",\n\t\"code\" : 13111,\n\t\"codeName\" : \"Location13111\"\n}\nmongoprint()object.tojson()mongoobject.toString()db_odstoString()db_pim.item_collection.aggregate([\n { \"$project\": {\n \"PartId\": \"$windecsPartId\"\n }},\n { $out: { db: db_ods.toString(), coll: \"stg_c1\" }}\n]);\nmongotypeof> typeof(db_ods)\nobject\n> typeof(db_ods.toString())\nstring\n", "text": "I don’t understand why the varabile db_ods can be used on the drop() but not in the aggregation.Hi @David_Lange,It would be helpful to include the error message and your specific version of MongoDB server for context, but I expect the error you are getting is similar to:JavaScript objects may have different representation depending on the usage context. For example, in the mongo shell, calling print() on an object will implicitly call object.tojson() (which is a feature of the mongo shell). You can also invoke standard JavaScript methods like object.toString().I think you want to keep your db_ods variable as a DB object so you can call collection methods, but should explicitly call toString() to get the expected representation in the context of your aggregation query:Note: all of the JavaScript in this example is being evaluated in the mongo shell before the aggregation query is sent to the MongoDB server. As @Prasad_Saya mentioned, you can check the result type of a statement using the typeof JavaScript operator:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "super helpful thanks.", "username": "David_Lange" }, { "code": "", "text": "great info appreciate the help!", "username": "David_Lange" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Where to use java script var
2021-03-11T04:18:35.392Z
Where to use java script var
3,020
null
[ "dot-net", "atlas-device-sync" ]
[ { "code": "", "text": "Hi, Hopefully a quick question…\nI have a developing dot-net synced project with no data.\nI need to rework one collection to change field names and types.\nCan I delete the collection from Realm and recreate using Development Mode?\nDo I delete the Collection from the schema or from the Atlas page?\nThanks", "username": "Richard_Fairall" }, { "code": "", "text": "I believe you’ll need to terminate and reenable sync to be able to do that. But yes - you can go to your JSON Schema tab in the server UI and delete the schema for that collection.", "username": "nirinchev" }, { "code": "", "text": "Thanks Nikola,It worked with a few hiccups, such as another class holding a list of the changed object, but I got there in the end. Not for very complicated collections though.", "username": "Richard_Fairall" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Delete collection in order to recreate with changed fields
2021-03-11T12:54:59.991Z
Delete collection in order to recreate with changed fields
1,898
null
[ "aggregation", "queries", "python" ]
[ { "code": "cursor=execute(\"select avg(id13)::numeric(10,2) from timestamppsql where timestamp1<='2011-01-01 00:05:00'\")\ncursor=mydb1.mongodbtime.aggregate({\n '$group': {\n \"timestamp1\":{ \"$lte\" : datetime.strptime(\"2011-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\") },\n\n \"avg_id13\": {\"$avg\": \"$id13\"}\n }\n})\npipeline must be a list\n", "text": "i have a query in sql that i want to convert in mongodb with python pymongo.This is the code i tested in mongodb:The output is this:How can i fix that?Any help would be appreciated.", "username": "harris" }, { "code": "", "text": "The pipeline must be a list. So it must starts with [ and ends with ]. Each object in the list is a stage. Simply add then around your object $group.", "username": "steevej" }, { "code": "cursor = mydb1.mongodbtime.aggregate(\n [\n {\n \"$match\": {\n \"timestamp1\": {\"$lte\": datetime.strptime(\"2011-01-01 00:05:00\", \"%Y-%m-%d %H:%M:%S\")}\n }\n },\n {\n \"$group\": {\n \"_id\": \"null\",\n \"avg_id13\": {\n \"$avg\": \"$id13\"\n }\n }\n }\n ]\n)", "text": "Hi,To apply a filter, you need to use a $match stage. That’s like your WHERE clause in SQL.You also need to wrap the pipeline inside square brackets.Try this:", "username": "Ronan_Merrick" }, { "code": "aggregate$match$project$lookup$group$matchWHERE$group", "text": "Hi Harris,@steevej and @Ronan_Merrick have both provided useful answers, so I won’t repeat anything they’ve said. I just wanted to drop in a maybe helpful suggestion that you check out the high-level documentation on the aggregation framework.Conceptually a call to aggregate starts with all the documents in a collection, and then passes them through a series of aggregation stages which will do things like filter out documents (with $match for example), modify them (with stages like $project and $lookup or transform them into something else (with stages like $group). The documents output from each stage are passed to the next stage for more modification, until the end of the pipeline, where whatever documents are emitted from the final stage are then provided back to the client.That’s why the stages are provided as a list - it’s a sequence of operations to be applied to the documents. It’s also why you need to do a $match to filter the documents that match your WHERE criteria, before doing a $group on the documents that remain, to get the average applied to all the documents, as @Ronan_Merrick suggested.I hope this was helpful!", "username": "Mark_Smith" }, { "code": "", "text": "Thank you @Mark_smith , @steevej and @Ronan_Merrick you were all realy helpful!!", "username": "harris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert Sql query to Mongodb query
2021-03-10T19:47:32.673Z
Convert Sql query to Mongodb query
8,083
null
[ "swift" ]
[ { "code": "", "text": "Save the date!! January 27th 10am CST - SwiftUI Best Practices with Realm.In this event, Jason Flax, the engineering lead for the Realm iOS team, will explain what SwiftUI is, why it’s important, how it will change mobile app development, and demonstrate how Realm’s integration with SwiftUI makes it easy for iOS developers to leverage this framework. Walk through the code and get all your questions answered -SwiftUI Overview & BenefitsSwiftUI Key Concepts and ArchitectureRealm Integration with SwiftUIRealm Best Practices with SwiftUIQ&APlease register on our Live platform HERE. We look forward to seeing you - oh, and please share too - the more the merrier!Also, if you have any particular questions you’d like to see @Jason_Flax cover, please add as a comment and we’ll either answer here if we can, or add them to the agenda itself on the day.", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "Is there a recording of the session available somewhere?", "username": "fitAI" }, { "code": "", "text": "Sure is -", "username": "Ian_Ward" }, { "code": "", "text": "Will you please add a link for the source files on GitHub? Thank you.", "username": "Paul_Simmons" }, { "code": "", "text": "Hey Paul, here’s the repo: GitHub - realm/RChat", "username": "Kurt_Libby1" }, { "code": "", "text": "Kurt Libby, I don’t believe that was the project that was presented. I think Jason presented a Reminders style app.", "username": "Paul_Simmons" }, { "code": "", "text": "@Paul_Simmons, ah, I was thinking of the meetup that Andrew hosted.The one that Jason demoed is at realm-swift/examples/ios/swift/ListSwiftUI at master · realm/realm-swift · GitHubHope that helps!–Kurt", "username": "Kurt_Libby1" }, { "code": "", "text": "Thank you Kurt Libby!", "username": "Paul_Simmons" }, { "code": "ListSwiftUI", "text": "Hi @Paul_Simmons – I’m not sure about the code from that specific session, but the Realm Cocoa SDK repo comes with some evergreen samples – ListSwiftUI from here realm-swift/examples/ios/swift at master · realm/realm-swift · GitHub is probably the most relevant.", "username": "Andrew_Morgan" } ]
Realm Event - SwiftUI Best Practices with Realm (Jan 27, 2021)
2021-01-11T13:02:29.158Z
Realm Event - SwiftUI Best Practices with Realm (Jan 27, 2021)
2,852
null
[ "queries", "text-search" ]
[ { "code": "", "text": "Hello all,I want to search documents by the text, and I may search with one or more words. For example, I may search by “coffee”, or I may search by “coffee tea”. In my database some documents contains “coffee” only, some documents contains “tea” only, and there are some documents contains both “coffee” and “tea”. I want to make sure for the documents have both “coffee” and “tea”, the text score is higher than the documents have “coffee” only or have “tea” only. I read the guideline at https://docs.mongodb.com/manual/reference/operator/query/text/#search-for-a-phrase, but here I don’t want to search the “coffee tea” as a whole phrase (in my document the coffee and tea may not connected in sequence, i.e. it may be “coffee and tea”). How can I implement this scenario by the text search statement?Thanks a lot for the help,James", "username": "Zhihong_GUO" }, { "code": "", "text": "Hi @Zhihong_GUO,You should consider using Atlas search which has all those built in operator and compound expressions.However, for regular search with $text if you fo not specify a phrase the query auto presume an OR between words and thus scores it.So just searching “tea coffee” should work.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hello Pavel,Very clear. Thanks for the information.Regards,James", "username": "Zhihong_GUO" } ]
Text search and the score
2021-03-10T11:51:25.086Z
Text search and the score
2,857
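The behaviour described in the answer above — an unquoted $text search is treated as an OR of the terms, and documents matching more terms score higher — can be checked with a few lines of PyMongo. This is a sketch only; the connection URI, collection name and sample documents are assumptions for illustration.

```python
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")  # assumed local server
coll = client["test"]["drinks"]                     # hypothetical collection

coll.drop()
coll.create_index([("description", TEXT)])
coll.insert_many([
    {"description": "coffee only here"},
    {"description": "tea only here"},
    {"description": "coffee and tea together"},
])

# Unquoted search terms are ORed; the document containing both terms scores highest.
cursor = coll.find(
    {"$text": {"$search": "coffee tea"}},
    {"description": 1, "score": {"$meta": "textScore"}},
).sort([("score", {"$meta": "textScore"})])

for doc in cursor:
    print(doc["description"], doc["score"])
```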
https://www.mongodb.com/…_2_1024x115.jpeg
[ "queries", "indexes" ]
[ { "code": "fromtodb.getCollection('akka_persistence_journal_CategoryTree').find({\"pid\":\"CategoryTree-46\", \"from\" : {$gte: 24291}, \"to\":{$lte: 24299}}).sort({\"to\":1})\n\t\"key\" : {\n\t\t\t\"pid\" : 1,\n\t\t\t\"to\" : -1,\n\t\t\t\"from\" : -1\n\t\t}\n{\n \"docsExamined\": 9,\n \"nreturned\": 9,\n \"keysExamined\": 24300\n}\n\nAny suggestion in how to tune our index or our query in order to improve that ratio?\n\nThanks in advance", "text": "Hi all!Our application uses Mongo as journal (event sourcing), on which each document has an aggregate id field, a from and to fieldsScreen Shot 2021-03-10 at 00.41.181984×224 45.4 KBOur main query is, for example:We’ve already created an index as followingBut, when doing an explain over that query we’re not getting the best ratio on totalKeysExamined:totalDocsExamined:nReturned. For example, he explain for the above query is:", "username": "julian_zelayeta" }, { "code": "$gteexplain", "text": "Hello @julian_zelayeta, welcome to the MongoDB Community forum!The documenation topic Sort and Non-prefix Subset of an Index says that, the keys in a compound index are used in a query’s sort only if the query conditions before the sort have equality condition (not range conditions, like $gte, as in your query).That said, please post the explain’s output.", "username": "Prasad_Saya" }, { "code": "{\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"catalog.akka_persistence_journal_CategoryTree\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"pid\" : {\n \"$eq\" : \"CategoryTree-46\"\n }\n }, \n {\n \"to\" : {\n \"$lte\" : 27.0\n }\n }, \n {\n \"from\" : {\n \"$gte\" : 21.0\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"pid\" : 1,\n \"to\" : -1,\n \"from\" : -1\n },\n \"indexName\" : \"pid_1_to_-1_from_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"pid\" : [],\n \"to\" : [],\n \"from\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"backward\",\n \"indexBounds\" : {\n \"pid\" : [ \n \"[\\\"CategoryTree-46\\\", \\\"CategoryTree-46\\\"]\"\n ],\n \"to\" : [ \n \"[-inf.0, 27.0]\"\n ],\n \"from\" : [ \n \"[21.0, inf.0]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"from\" : {\n \"$gte\" : 21.0\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"pid\" : 1,\n \"to\" : -1\n },\n \"indexName\" : \"pid_seq\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"pid\" : [],\n \"to\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"backward\",\n \"indexBounds\" : {\n \"pid\" : [ \n \"[\\\"CategoryTree-46\\\", \\\"CategoryTree-46\\\"]\"\n ],\n \"to\" : [ \n \"[-inf.0, 27.0]\"\n ]\n }\n }\n }\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 7,\n \"executionTimeMillis\" : 1,\n \"totalKeysExamined\" : 28,\n \"totalDocsExamined\" : 7,\n \"executionStages\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 7,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 29,\n \"advanced\" : 7,\n \"needTime\" : 20,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"docsExamined\" : 7,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 7,\n \"executionTimeMillisEstimate\" : 0,\n \"works\" : 28,\n 
\"advanced\" : 7,\n \"needTime\" : 20,\n \"needYield\" : 0,\n \"saveState\" : 0,\n \"restoreState\" : 0,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"pid\" : 1,\n \"to\" : -1,\n \"from\" : -1\n },\n \"indexName\" : \"pid_1_to_-1_from_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"pid\" : [],\n \"to\" : [],\n \"from\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"backward\",\n \"indexBounds\" : {\n \"pid\" : [ \n \"[\\\"CategoryTree-46\\\", \\\"CategoryTree-46\\\"]\"\n ],\n \"to\" : [ \n \"[-inf.0, 27.0]\"\n ],\n \"from\" : [ \n \"[21.0, inf.0]\"\n ]\n },\n \"keysExamined\" : 28,\n \"seeks\" : 21,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n },\n \"serverInfo\" : {\n \"host\" : \"e2a568b9a111\",\n \"port\" : 27017,\n \"version\" : \"4.4.4\",\n \"gitVersion\" : \"8db30a63db1a9d84bdcad0c83369623f708e0397\"\n },\n \"ok\" : 1.0\n}\n", "text": "Hi @Prasad_Saya, thank for that explanation.Here is the explain output:", "username": "julian_zelayeta" }, { "code": "\"direction\" : \"backward\"{to:1}{to:-1}{pid:1, to:1, from:1}", "text": "Hi @julian_zelayeta,Looks like you have the best possible index for this query because you don’t have a SORT stage in your winning plan so no in-memory sort which is usually way more costly than scanning index keys.The only comment I can make is that you are reading the index backwards (see \"direction\" : \"backward\") instead of forward because of the sort {to:1} and the index {to:-1} but that really shouldn’t make any difference at this point.You should get similar results with the index {pid:1, to:1, from:1}. It would just read forward instead of backward.If you want to improve the performances on this query, you just need more RAM I guess. But it already executes in 1ms according to your explain plan so it will be hard to beat.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hello @julian_zelayeta,As @MaBeuLux88 mentioned the lack of SORT stage shows that the index is being used for the sort operation.I would like to know, how many documents are there in the collection and also after the query how many are being returned, for a typical query run. In addition, I want you find out how the data is organized in your collection; see Create Queries that Ensure Selectivity. Selectivity can affect how efficiently the index is used for a given query and tells if there are possibilities of further optimization.", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya in that collection there is near 6 million documents at this moment. As we use event sourcing with snapshots, a typical query run returns always between 1 and 10 documents.\nFirst our application scans for a snapshot document on our snapshot collections, and then it retrieves from our journal collection the N latest documents, that as I said, are between 1 and 10 documents.", "username": "julian_zelayeta" }, { "code": "", "text": "A few things that just popped in my mind:Using a reference to improve performancehttps://www.mongodb.com/how-to/subset-pattern/", "username": "MaBeuLux88" }, { "code": "pidfromtopid", "text": "@MaBeuLux88 we cant achieve your suggestion, since we are using Akka. Akka forces us to implement those queries separately, one for snapshot collection, and other, for our journal collection. 
And it forces us to use pid (the aggregate id), from and to as bounds to seek along all the documents for a given pid.", "username": "julian_zelayeta" }, { "code": "", "text": "If that is true then I guess Akka is clearly a “suboptimal” solution to say the least… ", "username": "MaBeuLux88" } ]
Range query with sort optimization
2021-03-10T03:49:13.618Z
Range query with sort optimization
4,310
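The index and query discussed in the thread above can be reproduced with a minimal PyMongo sketch. The connection details are assumptions; the database, collection and field names follow the thread. Checking the explain output for the absence of a SORT stage confirms the index also covers the sort, as noted in the answers.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # assumed local server
db = client["catalog"]
coll = db["akka_persistence_journal_CategoryTree"]  # names from the thread

# Forward-reading equivalent of the suggested index: {pid: 1, to: 1, from: 1}
coll.create_index([("pid", ASCENDING), ("to", ASCENDING), ("from", ASCENDING)])

query = {"pid": "CategoryTree-46", "from": {"$gte": 24291}, "to": {"$lte": 24299}}

# Run the query the way the application does.
docs = list(coll.find(query).sort("to", ASCENDING))

# Ask the server for the plan; no SORT stage means the index covered the sort.
plan = db.command(
    "explain",
    {"find": coll.name, "filter": query, "sort": {"to": 1}},
    verbosity="executionStats",
)
stats = plan["executionStats"]
print(stats["totalKeysExamined"], stats["totalDocsExamined"], stats["nReturned"])
```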
null
[ "ruby" ]
[ { "code": " queue = Queue.new\n producer = []\n producer << Thread.new do\n unique_ipaddr.find.each do |ipaddr|\n if geoip_infoDB.count(\"ip\" => ipaddr.to_h[\"_id\"]) == 0\n queue.push(ipaddr.to_h[\"_id\"])\n end # end if test\n end # end unique_ipaddr find\n end\nconsumer = []\n48.times do \n consumer << Thread.new do\n while qipaddr = queue.pop\n insert_to_db(get_geoip_results(qipaddr), geoip_infoDB, geoip_errorsDB)\n end # end while\n end # end new thread\nend # end 48 times\nconsumer.each { |t| t.join }\ndef insert_to_db(response, geoip_infoDB, geoip_errorsDB)\n if response.code == 200 && !response.body.nil?\n geoip_infoDB.insert_one(response.parsed_response)\n else\n geoip_errorsDB.insert_one(response.parsed_response)\n end\nend\n/var/lib/gems/2.7.0/gems/mongo-2.14.0/lib/mongo/operation/result.rb:343:in raise_operation_failure: cursor id 2074666401145332617 not found (43) (on databaseserver) (Mongo::Error::OperationFailure)", "text": "Hi,I have a ruby script which is looking up ip addresses from a collection (50m documents) and querying an API for data around the IP address. It then takes the API response, which is native JSON, and inserts it into another collection.I use ruby queues to thread these two operations. The producer thread is taking each ip from the collection, looking it up in the target collection, if not exist, add ipaddr to the queue.In ruby code, as follows:The consumer thread waits until there is something in the queue, pulls the ipaddr from the queue, does the lookup from the API, and inserts the result into the destination collection. The API can handle 50 concurrent connections, but we limit it to 48 to be conservative. Like this:The insert_to_db function is taking the response from the API, making sure it’s valid, and inserting the result:Consistently at 45,000 documents on queue, we hit a cursor id not fond (43) error:/var/lib/gems/2.7.0/gems/mongo-2.14.0/lib/mongo/operation/result.rb:343:in raise_operation_failure: cursor id 2074666401145332617 not found (43) (on databaseserver) (Mongo::Error::OperationFailure)What am I doing wrong?Thank you for reading this far!", "username": "Sam_Pope" }, { "code": "mongodcursorTimeoutMillisgetMorebatch_sizegetMoreno_cursor_timeout", "text": "Hi @Sam_Pope,With default options on the mongod, an idle cursor will time out after 10 minutes (see cursorTimeoutMillis).When performing a read operation, if the cursor is open and not iterated for some time the server will time it out and a subsequent read attempt (via getMore) will result in the error you’re seeing.Three options to consider:Option 3 may not seem the most helpful, but if there is a chance the producer logic can idle for long periods of time a new strategy would eliminate the likelihood of idle cursors being reaped.", "username": "alexbevi" }, { "code": "", "text": "Thank you for the great response, @alexbevi. I spent some time this weekend debugging the issue. It appears the producer thread is the culprit. It’s from a large aggregation (50m docs) and during this read, it does timeout. The consumer is quick insert_one, up to 50m times.I need to debug why a simple find.each is taking more than 10 minutes between reads.I ended up contacting the API provider and asking for a data export, so I can avoid doing 50m lookups every day. This follows your third suggestion. Now I have a new problem of importing 5 crazy csvs of 50m rows, but that’s unrelated to this question. 
", "username": "Sam_Pope" }, { "code": "queue = Queue.new\nproducer = []\nproducer << Thread.new do\n unique_ipaddr.find(:no_cursor_timeout => 1).each do |ipaddr|\n if geoip_infoDB.count(\"ip\" => ipaddr.to_h[\"_id\"]) == 0\n queue.push(ipaddr.to_h[\"_id\"])\n end\n end\nend\nfind(:no_cursor_timeout => 1).each", "text": "Here’s how I changed the producer thread:it seems the find(:no_cursor_timeout => 1).each has greatly slowed down the find.It might actually be faster to great 2 Sets, 1 of unique_ipaddr and 1 of ipaddr in geoip_infoDB, and just use ruby to (unique_ipaddr - geoip_infoDB) = to_do_set and just lookup the to_do_set.", "username": "Sam_Pope" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help with Cursor id not found in ruby
2021-03-05T19:04:04.985Z
Help with Cursor id not found in ruby
7,058
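The thread above is about the Ruby driver, but the cursor options named in the accepted answer (no_cursor_timeout and batch_size) exist across drivers. Purely as an illustration of those two options, here is a PyMongo sketch; the database and collection names are assumptions.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local server
coll = client["geo"]["unique_ipaddr"]               # hypothetical names

# no_cursor_timeout stops the server reaping an idle cursor after ~10 minutes;
# batch_size makes each getMore return more documents, so getMores happen less often.
cursor = coll.find({}, no_cursor_timeout=True, batch_size=1000)
try:
    for doc in cursor:
        pass  # slow per-document work (e.g. an external API call) would go here
finally:
    cursor.close()  # cursors opened with no_cursor_timeout must be closed explicitly
```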
null
[ "aggregation" ]
[ { "code": "recordIDrecordIDcount{ \n \"_id\" : ObjectId(\"5f8f52168\"), \n \"recordID\" : \"11989\", \n \"count\" : NumberInt(5)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52148\"), \n \"recordID\" : \"2561\", \n \"count\" : NumberInt(10)\n }\n{ \n \"_id\" : ObjectId(\"5f8f52038\"), \n \"recordID\" : \"57546\", \n \"count\" : NumberInt(30)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52138\"), \n \"recordID\" : \"12623\", \n \"count\" : NumberInt(40)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52188\"), \n \"recordID\" : \"199429\", \n \"count\" : NumberInt(50) \n}\n{ \n \"_id\" : ObjectId(\"5f8f52148a\"), \n \"recordID\" : \"12793\", \n \"count\" : NumberInt(60), \n}\n{ \n \"_id\" : ObjectId(\"5f8f52168\"), \n \"recordID\" : \"11989\", \n \"count\" : NumberInt(5)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52148\"), \n \"recordID\" : \"2561\", \n \"count\" : NumberInt(10)\n }\n{ \n \"_id\" : ObjectId(\"5f8f52038\"), \n \"recordID\" : \"57546\", \n \"count\" : NumberInt(30)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52038\"), \n \"recordID\" : \"57546\", \n \"count\" : NumberInt(30)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52138\"), \n \"recordID\" : \"12623\", \n \"count\" : NumberInt(35)\n}\ndb.records.aggregate({ $match: {} },\n{ $group: { _id : null, sum : { $sum: \"$count\" } \n --where sum<=50\n} });\n", "text": "I have requirement to update the data based on the inputs using recordID , each recordID has count of associated records. since associated records has huge volume of data , I wanted to control the number of update based on the associated records count via configuration. So in the query, i want to sum the associated document count and should not cross limit configured in property. I wanted to fetch the records using $sum (aggregation) operation in the aggregate query but not sure how to add the criteria.Example1: Let’s say when totalcount<=50) I need to fetch the documents, in which total count sum less than or equal 50 (totalsum<=50), it should return below documents.count <=50 ( count\" = 5 + 10 + 30 )Example2 : when totalSum<=70, query should return below documents.count <=70 ( count : 30 + 35 = 65 )Mongo query some thing", "username": "Imran_khan" }, { "code": "", "text": "Hi @Imran_khan,I am a bit confused by the requirements and the provided example. 
On one hand you do not have documents with count 35 bit in the example you do?Additionally, why documents 5, 10 are not included in under 70 update?In general I expect that together with count stage you will need to $push the individual objects and unwind them following by $matchI can help more once I understand the use case.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "{ \n \"_id\" : ObjectId(\"5f8f52168\"), \n \"recordID\" : \"11989\", \n \"count\" : NumberInt(5)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52148\"), \n \"recordID\" : \"2561\", \n \"count\" : NumberInt(10)\n }\n{ \n \"_id\" : ObjectId(\"5f8f52038\"), \n \"recordID\" : \"57546\", \n \"count\" : NumberInt(30)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52138\"), \n \"recordID\" : \"12623\", \n \"count\" : NumberInt(40)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52188\"), \n \"recordID\" : \"199429\", \n \"count\" : NumberInt(50) \n}\n{ \n \"_id\" : ObjectId(\"5f8f52148a\"), \n \"recordID\" : \"12793\", \n \"count\" : NumberInt(60), \n}\ndb.records.aggregate({ $match: {} },\n{ $group: { _id : null, sum : { $sum: \"$count\" } \n --where sum<=50\n} });\n{ \n \"_id\" : ObjectId(\"5f8f52168\"), \n \"recordID\" : \"11989\", \n \"count\" : NumberInt(5)\n}\n{ \n \"_id\" : ObjectId(\"5f8f52148\"), \n \"recordID\" : \"2561\", \n \"count\" : NumberInt(10)\n }\n{ \n \"_id\" : ObjectId(\"5f8f52038\"), \n \"recordID\" : \"57546\", \n \"count\" : NumberInt(30)\n}\n\n\"count\" : NumberInt(5) + NumberInt(10) + NumberInt(30) = 45 ( should not cross the upper limit)\n", "text": "@Pavel_Duchovny : I have requirement to sum the count value but with in the limit configured .Example: Let’s say , i have below documents.Now : my CountLimit 50 , Now many mongo query should pick all the records , so total sum of count should be less or equal to countLimint.it should return the result where total sum of count <=50", "username": "Imran_khan" }, { "code": "", "text": "Hi @Imran_khan,So you need to do kind of a rollup count… Where you count all previous documents and exit once you got to the limit point.It sounds like this is more suitable to a mapReduce operation rather than aggregation.In 4.4 this will be possible with a $accumulator function:https://docs.mongodb.com/manual/reference/operator/aggregation/accumulator/#grp._S_accumulatorHowever, not sure what MongoDB version you on and if map reduce is a possibility for you?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny : Thanks for the information. Currently we are using : 4.2.8 mongo version.", "username": "Imran_khan" }, { "code": "", "text": "Hi did you resolve this i’m facing a similar predicament and i can’t figure out the pipeline", "username": "David_Meck" }, { "code": "sort by the field `count` ascending\ngroup the documents into an array\niterate the array using the `$reduce` array operator\n for each iteration, \n keep a running total of the `count` field\n compare the running total with the `count_limit`\n if less\n add the document to an `array_of_matching_documents`\n next iteration\n else \n do nothing\n next iteration\narray_of_matching_documentscount_limit", "text": "Hello @David_Meck, Welcome to the MongoDB Community forum!If using the MongoDB v4.2 or lesser, you can try this logic and code an aggregation query:The array_of_matching_documents has the documents which are within the count_limit - use this for further processing.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Aggregate $sum with condition on sum limit in mongo query
2020-10-22T00:00:14.081Z
Aggregate $sum with condition on sum limit in mongo query
18,631
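The pseudocode in the last answer above (sort, gather into an array, then keep a running total until the limit is reached) can be expressed as an aggregation pipeline that runs on MongoDB 4.2, i.e. without $accumulator, using $reduce. The sketch below is one possible implementation; the connection details and collection name are assumptions, and pushing all documents into a single array is only practical for modestly sized groups.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local server
coll = client["test"]["records"]                    # hypothetical collection
count_limit = 50                                    # the configured limit

pipeline = [
    {"$sort": {"count": 1}},                                   # ascending by count
    {"$group": {"_id": None, "docs": {"$push": "$$ROOT"}}},    # gather into one array
    {"$project": {
        "selected": {
            "$reduce": {
                "input": "$docs",
                "initialValue": {"running": 0, "picked": []},
                "in": {
                    "$cond": [
                        {"$lte": [{"$add": ["$$value.running", "$$this.count"]}, count_limit]},
                        {  # still under the limit: add this document
                            "running": {"$add": ["$$value.running", "$$this.count"]},
                            "picked": {"$concatArrays": ["$$value.picked", ["$$this"]]},
                        },
                        "$$value",  # over the limit: keep the accumulator as-is
                    ]
                },
            }
        }
    }},
]

result = list(coll.aggregate(pipeline))
if result:
    print(result[0]["selected"]["picked"])  # documents whose running total stays <= 50
```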
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 3.6.23-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 3.6.22. The next stable release 3.6.23 will be a recommended upgrade for all 3.6 users.\nFixed in this release:3.6 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 3.6.23-rc0 is released
2021-03-10T16:19:23.610Z
MongoDB 3.6.23-rc0 is released
2,422
null
[ "replication" ]
[ { "code": "", "text": "In the 3 Nodes Replica Set, two nodes were shut down. Then the remaining one node will be secondary.\nBut is there any way to force this node to be primary?", "username": "Kim_Hakseon" }, { "code": "", "text": "No. To have a primary you need a majority of the nodes from the replica set to elect a primary. If you have less nodes that the majority online, there will be no election.", "username": "steevej" }, { "code": "", "text": "Thank You ", "username": "Kim_Hakseon" }, { "code": "", "text": "When operating 2 nodes in operation and 1 node in DR, if 2 nodes fail in operation, we are looking for a way to make it serviceable with 1 node in DR for a while.Is there a way?", "username": "Kim_Hakseon" }, { "code": "", "text": "Hi @Kim_HakseonFor this to be automatic you need to run your replicaSet with majority. A 3 node cluster will survive 1 node failure, a 5 node can survive 2. Build the replicaSet to survive the failure modes you want to.In my experience losing two of three nodes in any replicated system is extremely rare(looking at you IBM Softlayer networking screw-ups).With your current configuration you can manually force reconfigure your remaining node to a single node replicaset. This brings additional challenges and questions you have to address as when the other two nodes are operation they will form a functional replicaSet. What would your clients connect to next time they open a database connection? So not an approach I’d recommend.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Making Primary in Replica Set
2021-03-09T14:26:37.714Z
Making Primary in Replica Set
2,310
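The last substantive answer above mentions, with a warning, that the surviving node can be manually force-reconfigured into a single-node replica set. Purely as an illustration of that escape hatch, the sketch below shows the shape of the relevant admin commands from PyMongo. The host name is an assumption, and the same caveats described in the thread apply: when the other two members come back they will still hold the old configuration.

```python
from pymongo import MongoClient

# Connect directly to the surviving member, bypassing replica-set discovery.
client = MongoClient("mongodb://surviving-node:27017/?directConnection=true")  # assumed host

# Inspect the current member states.
status = client.admin.command("replSetGetStatus")
print([(m["name"], m["stateStr"]) for m in status["members"]])

# Build a config containing only the surviving member, bump the version,
# and force-apply it. Use with extreme care, for the reasons given above.
cfg = client.admin.command("replSetGetConfig")["config"]
cfg["members"] = [m for m in cfg["members"] if m["host"] == "surviving-node:27017"]
cfg["version"] += 1
client.admin.command({"replSetReconfig": cfg, "force": True})
```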